CN112465984A - Monocular camera sequence image three-dimensional reconstruction method based on double-layer filtering - Google Patents

Monocular camera sequence image three-dimensional reconstruction method based on double-layer filtering

Info

Publication number
CN112465984A
CN112465984A
Authority
CN
China
Prior art keywords
reconstruction
dimensional
point cloud
dimensional reconstruction
filtering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011263554.2A
Other languages
Chinese (zh)
Inventor
杨宁
李东臣
郭雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202011263554.2A priority Critical patent/CN112465984A/en
Publication of CN112465984A publication Critical patent/CN112465984A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 - Finite element generation, e.g. wire-frame surface description, tesselation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20024 - Filtering details
    • G06T2207/20032 - Median filtering

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a monocular-camera sequence-image three-dimensional reconstruction method based on double-layer filtering, which improves the traditional sequence-image three-dimensional reconstruction pipeline by adding filtering steps. In the initial stage the traditional method is still used for sparse and dense point-cloud reconstruction; after dense point-cloud reconstruction and before Poisson surface reconstruction, point-cloud filtering removes part of the outliers, reducing their influence on the subsequent Poisson surface reconstruction; after Poisson surface reconstruction and before texture mapping, triangular-patch filtering removes the Poisson surface patches built from the remaining outliers, reducing their influence on the texture-mapping operation. The results show that the three-dimensional reconstruction of sequence images is superior to the traditional sequence-image three-dimensional reconstruction algorithm, and the quality of the reconstruction result is effectively improved. The invention has considerable practical effect and application significance.

Description

Monocular camera sequence image three-dimensional reconstruction method based on double-layer filtering
Technical Field
The invention belongs to the field of stereoscopic vision and computer vision technology, and relates to a monocular-camera sequence-image three-dimensional reconstruction method based on double-layer filtering.
Background
Three-dimensional reconstruction technology based on sequence images is an important component of stereoscopic vision and computer vision, and one of the core technologies of three-dimensional reconstruction. Three-dimensional reconstruction from monocular-camera sequence images obtains a series of images of a target by shooting around it with a calibrated monocular camera, then processes the obtained sequence images to obtain a three-dimensional model of the target; it is an important component of three-dimensional reconstruction technology. During three-dimensional reconstruction, many interferences arise from the images themselves, including image noise, the illumination of the shooting environment, the surface properties of the shooting target, and background clutter other than the target in the image. Removing the outliers of the reconstructed scene that do not belong to the target object is therefore very important for the three-dimensional reconstruction of sequence images.
How to reconstruct a realistic three-dimensional scene to serve applications in fields such as digital cities and virtual reality has become a hot concern in computer vision and related fields. Three-dimensional reconstruction techniques generally include reconstruction by laser detection and measurement and reconstruction from images; the latter is more economical and convenient than the former. Because it is convenient to use and low in cost, three-dimensional reconstruction based on monocular-camera sequence images has wider usage scenarios than a professional three-dimensional scanner. Image-based three-dimensional modelling recovers a three-dimensional model of an object from an image sequence shot of the object or scene, which makes the modelling process more automatic, reduces the labour intensity of workers, and lowers the modelling cost. When an image sequence is used for three-dimensional reconstruction of a scene, the shot scene is usually complex, and it is rare that the images contain only the target object. Outliers other than the target object are therefore generated during point-cloud reconstruction, and point-cloud filtering is needed after reconstruction to remove them, reducing their influence on the target reconstruction and making the subsequent reconstruction work more accurate. Poisson reconstruction can construct a closed surface over the scattered point cloud to generate a three-dimensional mesh model of the target object, but it also produces a lot of redundant data; so, if the three-dimensional model of the standalone target object is to be extracted, the redundant data of the Poisson reconstruction must be filtered again to cut out the target object.
The three-dimensional model of the target object constructed in the way can be free from the influence of the surrounding environment and can be applied to more scenes. Regarding the surface texture processing of the three-dimensional model, in the field of game production, three-dimensional animation rendering and production software such as 3D Studio Max and Autodesk Maya is generally used for manually attaching textures, and such manual texture recovery causes unnecessary consumption of manpower and financial resources. In contrast, automatic texture mapping greatly reduces the production cycle time.
When an image sequence is used for three-dimensional reconstruction of a scene, the shot scene is usually complex, and it is rare that the images contain only the target object. This produces outliers other than the target object during point-cloud reconstruction. These outliers have several effects, such as mismatches during matching, mapping errors during texture and surface reconstruction, and a severe reduction in algorithm speed when a large number of outliers is present. To reduce the influence of outliers they are usually handled by manual elimination, which is time-consuming and labour-intensive. Moreover, the operation depends on the operator: if the operator is inexperienced or unfamiliar with the model, removal errors often occur, so that correct points are mistakenly removed while erroneous points remain. Therefore point-cloud filtering is needed after point-cloud reconstruction to remove the outliers, reduce their influence on the target reconstruction, and make the subsequent reconstruction more accurate; by using an automatic method, the interference introduced by manual work is eliminated as far as possible.
The invention proposes a new method that removes the outliers of a reconstructed scene or object during three-dimensional reconstruction, namely a monocular-camera sequence-image three-dimensional reconstruction method based on double-layer filtering.
Disclosure of Invention
Technical problem to be solved
In order to avoid the defects of the prior art, the invention provides a monocular camera sequence image three-dimensional reconstruction method based on double-layer filtering.
Technical scheme
A monocular camera sequence image three-dimensional reconstruction method based on double-layer filtering is characterized by comprising the following steps:
step 1: acquiring sequence image data of a target by adopting a monocular camera, and denoising the sequence image data by adopting median filtering; then calibrating the monocular camera, and then reconstructing sparse point cloud:
step 2: for pose information and sparse point clouds corresponding to different images obtained in the step 1, clustering by using a CMVS algorithm to accelerate the speed of processing dense point cloud reconstruction, and then obtaining dense point cloud of a scene by using a PMVS algorithm;
Step 3: filtering the dense point cloud generated by the PMVS in step 2 to obtain the main three-dimensional point-cloud information of the object to be reconstructed, and triangulating to obtain the three-dimensional surface of the object, Poisson reconstruction being used for this operation;
after redundant curved surface points in the Poisson reconstruction generation result are filtered and deleted, triangular surface information related to the deleted outliers is deleted, and a visible shell of the three-dimensional model is obtained;
Step 4: performing texture mapping on the Poisson-reconstructed triangular mesh model, first selecting a texture for each triangular face of the model, and then mapping the textures in batches to the surface of the model through the mapping relation to obtain the three-dimensional model.
The three-dimensional reconstruction of the sparse point cloud based on the sequence image comprises the following steps: 1) carrying out feature extraction and matching on the sequence image set by using SIFT features; 2) performing binocular three-dimensional reconstruction by using the internal parameters and epipolar geometry of the camera; 3) expanding the binocular three-dimensional reconstruction result to multi-eye three-dimensional reconstruction; 4) and (5) bundling optimization.
The expansion and filtering operations of step 2 are cycled three times.
In each view the surface triangular faces of the model are either visible or invisible, visible faces being fully visible or partially visible, and only the fully visible and partially visible triangular faces in each view are given candidate textures.
Advantageous effects
The invention provides a monocular-camera sequence-image three-dimensional reconstruction method based on double-layer filtering, which improves the traditional sequence-image three-dimensional reconstruction pipeline by adding filtering steps. In the initial stage the traditional method is still used for sparse and dense point-cloud reconstruction; after dense point-cloud reconstruction and before Poisson surface reconstruction, point-cloud filtering removes part of the outliers, reducing their influence on the subsequent Poisson surface reconstruction; after Poisson surface reconstruction and before texture mapping, triangular-patch filtering removes the Poisson surface patches built from the remaining outliers, reducing their influence on the texture-mapping operation. The results show that the three-dimensional reconstruction of sequence images is superior to the traditional sequence-image three-dimensional reconstruction algorithm, and the quality of the reconstruction result is effectively improved. The invention has considerable practical effect and application significance.
Compared with traditional three-dimensional reconstruction methods, the invention has the following advantages: the reconstruction of the target object can be completed with a single imaging camera, without the multiple imaging devices (such as a three-dimensional scanner) required by traditional stereoscopic vision imaging, so a target object or scene can be reconstructed well from images shot by a mobile phone or an unmanned aerial vehicle; the automatic texture-mapping method reduces unnecessary labour; and the double-layer filtering mechanism effectively filters out outlier point clouds far from the target object, ensuring the reconstruction quality of the target object and its subsequent application.
Drawings
FIG. 1: double-layer filtering flow chart, algorithm flow chart and overall scheme design chart
FIG. 2: results of algorithmic experiments
Detailed Description
The invention will now be further described with reference to the following examples and drawings:
the method comprises the following steps:
Step 1: Using a monocular camera, acquire the sequence image data of the target and denoise it using median filtering. Then calibrate the monocular camera and reconstruct the sparse point cloud. The three-dimensional reconstruction of the sparse point cloud from the sequence images is roughly divided into 4 steps: 1) feature extraction and matching on the sequence image set; 2) binocular three-dimensional reconstruction using the internal parameters and epipolar geometry of the camera; 3) extension of the binocular three-dimensional reconstruction result to multi-view three-dimensional reconstruction; 4) bundle optimization.
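The median-filter denoising in step 1 can be sketched as follows. This is a minimal pure-NumPy illustration; the window size `k` and the helper name are assumptions for the example, not part of the patent.

```python
import numpy as np

def median_filter(img, k=3):
    """Median-filter a 2-D grayscale image with a k x k window.

    The image is edge-padded so the output keeps the same shape;
    k is assumed odd.
    """
    r = k // 2
    padded = np.pad(img, r, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single salt-noise pixel in a flat patch is removed entirely.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0  # impulse noise
print(median_filter(img)[2, 2])  # 10.0
```

Because the median is insensitive to isolated extreme values, impulse noise disappears while flat regions and edges are largely preserved, which is why it is used before feature extraction here.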
The method uses SIFT features for feature extraction, estimates the essential matrix E with the five-point method, and then decomposes E by SVD to obtain the rotation matrix and displacement vector, completing the calibration of the relative binocular pose. A linear triangulation algorithm then reconstructs the matched feature-point pairs to recover their three-dimensional information. Finally, a Perspective-n-Point (PnP) method handles the problem of reconstructing a three-dimensional scene from multiple views:
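The algebraic core of the step above, decomposing the essential matrix E by SVD into a rotation matrix and a displacement vector, can be sketched as follows. This illustrative NumPy sketch returns only one of the four (R, t) candidates and omits the cheirality check normally used to pick the physically correct one.

```python
import numpy as np

def decompose_essential(E):
    """Decompose an essential matrix into one (R, t) candidate.

    E = [t]_x R up to scale; SVD gives E = U diag(1,1,0) V^T, and
    R = U W V^T, t = third column of U. The full four-fold
    ambiguity (two rotations, +-t) is resolved elsewhere by
    checking that triangulated points lie in front of both cameras.
    """
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations (det = +1).
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R = U @ W @ Vt
    t = U[:, 2]
    return R, t

def skew(v):
    """Cross-product matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Build E from a known pose and check the recovered R is a rotation.
R_true = np.eye(3)
t_true = np.array([1.0, 0.0, 0.0])
E = skew(t_true) @ R_true
R, t = decompose_essential(E)
print(np.allclose(R @ R.T, np.eye(3)))  # True: R is orthonormal
```

The recovered t spans the left null space of E, so it matches the true baseline direction up to sign; scale is inherently unobservable from E alone.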
1) Perform binocular three-dimensional reconstruction on the first two views of the sequence to obtain the three-dimensional information of some space points.
2) Add a third view and match it against the first two images, finding matching points (at least 4) common to the first and second images among the matching results.
3) Solve the PnP problem: the position and attitude of the third image are obtained from the three-dimensional point coordinates computed from the first two images and the pixel coordinates of those points in the third image. This is extended in turn to more views.
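The pose-from-correspondences idea behind the steps above can be illustrated with a linear DLT sketch that estimates a full 3x4 projection matrix from 3D-2D correspondences. Note this uncalibrated DLT needs at least six points, whereas the calibrated PnP solvers referred to in the text can work with fewer; the function and test data here are illustrative, not the patent's implementation.

```python
import numpy as np

def dlt_projection(X, x):
    """Estimate a 3x4 projection matrix P with x ~ P X (linear DLT).

    X: (n,3) world points, x: (n,2) pixel points, n >= 6.
    Each correspondence contributes two homogeneous rows of A;
    P is the right null vector of A (smallest singular value).
    """
    A = []
    for i in range(X.shape[0]):
        Xh = np.append(X[i], 1.0)
        u, v = x[i]
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        A.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 4)

# Synthetic check: project points with a known P, then recover it.
rng = np.random.default_rng(0)
P_true = np.hstack([np.eye(3), np.array([[0.1], [0.2], [2.0]])])
X = rng.uniform(-1, 1, (8, 3)) + np.array([0.0, 0.0, 5.0])
Xh = np.hstack([X, np.ones((8, 1))])
proj = (P_true @ Xh.T).T
x = proj[:, :2] / proj[:, 2:3]
P = dlt_projection(X, x)
P = P / P[2, 3] * P_true[2, 3]  # fix the overall scale ambiguity
print(np.allclose(P, P_true, atol=1e-6))  # True
```

With noisy real matches this linear estimate is only an initial guess; it is exactly what the subsequent bundle adjustment refines.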
A nonlinear optimization algorithm, the bundle adjustment method, is introduced to solve the problem of accumulated image-pose estimation errors. It uses least squares to reduce the error between the observed and the predicted image-point coordinates; that is, its prototype is expressed by the following formula:
x* = argmin_x Σ_i ρ_i(||f_i(x)||^2) (1)
x is the parameter to be optimized, f is the cost function, and ρ is the loss function. Since the return value of f is a vector, the squared 2-norm of the returned vector is taken as the overall cost. For the optimization adjustment in three-dimensional reconstruction, the cost function is the back-projection error, and the camera internal reference matrices, external reference matrices, and three-dimensional point-cloud data are the quantities to be optimized. Let the internal reference matrix of image i be K_i, its external reference matrix be R_i and T_i, the coordinate of point j of the point cloud be P_j, and the pixel coordinate of that point in image i be p_i^j. The back-projection error is then the following:
e = Σ_i Σ_j ρ(||π(K_i(R_i P_j + T_i)) - p_i^j||^2) (2)
P_j and p_i^j are both homogeneous coordinates, and π is the projection function, with
π(p) = (p_x/p_z, p_y/p_z, 1) (3)
The loss function ρ uses the Huber function.
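The robustified back-projection cost of equations (1)-(3), with the Huber function as ρ, can be sketched as follows. This is an illustrative NumPy evaluation of the cost only; the Levenberg-Marquardt-style optimizer of bundle adjustment is omitted.

```python
import numpy as np

def huber(s, delta=1.0):
    """Huber loss applied to a squared residual s = ||r||^2:
    quadratic for small residuals, linear for large ones, so a
    few gross outliers cannot dominate the total cost."""
    r = np.sqrt(s)
    return np.where(r <= delta, s, 2 * delta * r - delta ** 2)

def reprojection_cost(K, R, T, points3d, pixels, delta=1.0):
    """Sum over points of rho(||pi(K (R P_j + T)) - p_j||^2),
    where pi is the perspective division of equation (3)."""
    cam = (R @ points3d.T).T + T          # camera-frame coordinates
    proj = (K @ cam.T).T
    uv = proj[:, :2] / proj[:, 2:3]       # pi(.): divide by depth
    sq = np.sum((uv - pixels) ** 2, axis=1)
    return np.sum(huber(sq, delta))

K = np.diag([500.0, 500.0, 1.0])
R, T = np.eye(3), np.zeros(3)
P = np.array([[0.0, 0.0, 5.0], [0.2, -0.1, 4.0]])
# Pixels taken exactly at the projections give zero cost.
proj = (K @ P.T).T
pix = proj[:, :2] / proj[:, 2:3]
print(reprojection_cost(K, R, T, P, pix))  # 0.0
```

In a full bundle adjustment this scalar is minimized jointly over all K_i, R_i, T_i and P_j; the Huber loss keeps mismatched feature tracks from corrupting the solution.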
Step 2: obtaining pose information corresponding to different images and sparse point cloud through the step 1, then adopting CMVS to combine with PMVS2 to carry out dense point cloud reconstruction, firstly clustering by using a CMVS algorithm to accelerate the speed of processing dense point cloud reconstruction, and then obtaining dense point cloud of a scene by using a PMVS algorithm. The rough flow of the PMVS algorithm is feature matching, expansion and filtering, where the expansion and filtering operations loop three times.
Point-cloud filtering is the first step of point-cloud preprocessing. The method uses statistical analysis to filter outliers from the resulting point-cloud data set. The algorithm traverses the entire input point cloud twice: in the first pass it computes the average distance from each point to its m nearest neighbors; the mean and standard deviation stddev of all these distances are then computed to determine the distance threshold η.
η = mean + stddev_mult * stddev (4)
stddev_mult is a multiple of the standard deviation and controls the size of the distance threshold. In the second pass, points whose average neighbor distance is above the threshold are classified as outliers and removed. The number of neighbors m analyzed for each point is set to 200: if the average distance from a point to its 200 nearest neighbors exceeds the global mean distance by more than stddev_mult standard deviations, the point is considered an outlier and removed.
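The statistical outlier-removal filter of equation (4) can be sketched in NumPy as follows. The brute-force O(n²) neighbor search is for clarity only (production implementations use a k-d tree), and the small m in the demo stands in for the m = 200 of the text.

```python
import numpy as np

def statistical_outlier_removal(points, m=10, stddev_mult=1.0):
    """Remove points whose mean distance to their m nearest
    neighbors exceeds eta = mean + stddev_mult * stddev
    (equation (4)). points: (n,3). Returns (inliers, keep mask)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)           # a point is not its own neighbor
    knn = np.sort(d, axis=1)[:, :m]       # m smallest distances per point
    mean_dist = knn.mean(axis=1)
    eta = mean_dist.mean() + stddev_mult * mean_dist.std()
    mask = mean_dist <= eta
    return points[mask], mask

# A tight cluster plus one far-away point: the far point is removed.
rng = np.random.default_rng(1)
cloud = rng.normal(0.0, 0.01, (50, 3))
cloud = np.vstack([cloud, [[10.0, 10.0, 10.0]]])
inliers, mask = statistical_outlier_removal(cloud, m=5, stddev_mult=2.0)
print(mask[-1])  # False: the isolated point is classified as an outlier
```

Larger stddev_mult keeps more points; larger m smooths the per-point statistic, which is why the text pairs m = 200 with a tuned multiplier.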
Step 3: Filter the dense point cloud generated by the PMVS in step 2 to obtain the main three-dimensional point-cloud information of the object to be reconstructed, then triangulate to obtain the three-dimensional surface of the object; Poisson reconstruction is used in this step.
Suitable values of the neighbor number m and the standard-deviation multiple stddev_mult are set in the point-cloud filtering stage.
Step 4: Perform texture mapping on the Poisson-reconstructed triangular mesh model obtained in step 3. An optimal texture is selected for each triangular face of the model and attached to the model surface through the mapping relation. As far as possible, one main view is selected for the main texture mapping of the model (if no main view is manually selected, the first input image is used by default), and the remaining views enrich the texture in order of their distance from the main view, so that the generated color model is colored accurately.
Let the normal vector of a triangular face of the model be N, and let the shooting orientation of the i-th image be the direction vector v_i. The angle between N and v_i is θ_i, and the closer θ_i is to 180°, the less the texture is deformed by the mapping. Therefore, only when 120° < θ_i < 180° is the i-th view qualified to texture that face; which image is actually used as the texture depends on which image serves as the main texture map and on the shooting direction of the image sequence.
In the reconstructed three-dimensional scene model, the surface triangular faces are either visible or invisible in each view, and visible faces are fully visible or partially visible. A visibility analysis of the camera views is performed on the triangular faces of the three-dimensional mesh; only fully visible and partially visible triangular faces are given candidate textures in each view. After a specific image has been selected to texture a triangular face, the pinhole camera model gives, with the internal reference matrix K and the external reference matrix [R_i|T_i] of the i-th image, the following relation between a three-dimensional point-cloud coordinate X and the corresponding image-point coordinate p_i:
s_i p_i = K [R_i|T_i] X (5)
The pixel coordinate of the texture in image i corresponding to a three-dimensional point X of the model is obtained from the above formula. Only when p_i lies within the image range is the point visible on the i-th image, and the pixel value at p_i then provides the texture information for the three-dimensional point.
The specific embodiment is as follows:
the embodiment selects a hardware environment: inter (R) core (TM) i5-4590 CPU 3.30GHz, 8GB memory, 4G video memory computer;
the operating system includes a Windows10 system.
The general scheme design of the invention is shown in Figure 1, and the specific implementation is as follows:
Step 1: Using a monocular camera, acquire the sequence image data of the target and denoise it using median filtering. Then calibrate the monocular camera and reconstruct the sparse point cloud. The three-dimensional reconstruction of the sparse point cloud from the sequence images is roughly divided into 4 steps:
1) extracting and matching the characteristics of the sequence image set;
2) performing binocular three-dimensional reconstruction by using the internal parameters and epipolar geometry of the camera;
3) expanding the binocular three-dimensional reconstruction result to multi-eye three-dimensional reconstruction;
4) bundle optimization
For feature extraction, SIFT features are used. The essential matrix E is estimated with the five-point method and then decomposed by SVD to obtain the rotation matrix and displacement vector, completing the calibration of the relative binocular pose. A linear triangulation algorithm then reconstructs the matched feature-point pairs to recover their three-dimensional information. A Perspective-n-Point (PnP) method then handles the problem of reconstructing a three-dimensional scene from multiple views: 1) Perform binocular three-dimensional reconstruction on the first two views of the sequence to obtain the three-dimensional information of some space points. 2) Add a third view and match it against the first two images, finding matching points (at least 4) common to the first and second images among the matching results. 3) Solve the PnP problem: the position and attitude of the third image are obtained from the three-dimensional point coordinates computed from the first two images and the pixel coordinates of those points in the third image. This is extended in turn to more views.
As the number of reconstructed images grows, the accumulated error of the camera pose information and of the reconstructed three-dimensional point cloud becomes larger and larger and affects the final reconstruction accuracy; to address this, a nonlinear optimization algorithm, the bundle adjustment method, is used to minimize the error between the observed and the predicted image-point coordinates.
Step 2: For the sparse point cloud obtained in step 1, perform dense point-cloud reconstruction with CMVS combined with PMVS2: the CMVS algorithm first clusters the images to accelerate dense point-cloud reconstruction, and the PMVS algorithm then obtains the dense point cloud of the scene.
Step 3: Filter the point cloud generated by PMVS to obtain the main three-dimensional point-cloud information of the object to be reconstructed, then triangulate to obtain the three-dimensional surface of the object by Poisson reconstruction.
In order to delete redundant surface information, the result generated by Poisson reconstruction is clipped. After the redundant surface points are filtered out and deleted, the triangular-face information that references the deleted outliers is also deleted, yielding a fairly ideal visible shell of the three-dimensional model.
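The clipping step above, dropping outlier surface points and every triangular face that references them, can be sketched as follows (an illustrative NumPy sketch on a toy mesh; the function name is an assumption):

```python
import numpy as np

def prune_mesh(vertices, faces, keep_mask):
    """Drop vertices flagged as outliers and every triangle that
    references a dropped vertex, then reindex the surviving faces.

    vertices: (n,3) floats; faces: (m,3) int indices; keep_mask: (n,) bool.
    """
    new_index = -np.ones(len(vertices), dtype=int)
    new_index[keep_mask] = np.arange(keep_mask.sum())
    face_ok = keep_mask[faces].all(axis=1)   # all three corners kept
    return vertices[keep_mask], new_index[faces[face_ok]]

verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [9.0, 9.0, 9.0]])          # outlier surface point
faces = np.array([[0, 1, 2], [1, 2, 3]])     # second face uses the outlier
keep = np.array([True, True, True, False])
v2, f2 = prune_mesh(verts, faces, keep)
print(f2)  # [[0 1 2]]: only the face made of kept vertices survives
```

Dangling faces are removed rather than re-stitched; this matches the text, where the triangular-face information related to deleted outliers is simply discarded.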
Step 4: Perform texture mapping on the Poisson-reconstructed triangular mesh model obtained in step 3 to obtain color texture: an optimal texture is selected for each triangular face of the model and attached to the model surface through the mapping relation.
Through the steps, the three-dimensional model is obtained through a monocular camera sequence image three-dimensional reconstruction algorithm based on double-layer filtering.
To evaluate the performance of the algorithm of the present invention, we performed tests and obtained the results, as shown in fig. 2 and tables 1 and 2.
Table 1: results before and after point cloud filtering
Table 2: Results before and after triangular-face filtering

Claims (4)

1. A monocular camera sequence image three-dimensional reconstruction method based on double-layer filtering is characterized by comprising the following steps:
step 1: acquiring sequence image data of a target by adopting a monocular camera, and denoising the sequence image data by adopting median filtering; then calibrating the monocular camera, and then reconstructing sparse point cloud:
step 2: for pose information and sparse point clouds corresponding to different images obtained in the step 1, clustering by using a CMVS algorithm to accelerate the speed of processing dense point cloud reconstruction, and then obtaining dense point cloud of a scene by using a PMVS algorithm;
Step 3: filtering the dense point cloud generated by the PMVS in step 2 to obtain the main three-dimensional point-cloud information of the object to be reconstructed, and triangulating to obtain the three-dimensional surface of the object, Poisson reconstruction being used for this operation;
after redundant curved surface points in the Poisson reconstruction generation result are filtered and deleted, triangular surface information related to the deleted outliers is deleted, and a visible shell of the three-dimensional model is obtained;
Step 4: performing texture mapping on the Poisson-reconstructed triangular mesh model, first selecting a texture for each triangular face of the model, and then mapping the textures in batches to the surface of the model through the mapping relation to obtain the three-dimensional model.
2. The method for three-dimensional reconstruction of monocular camera sequence images based on two-layer filtering according to claim 1, wherein: the three-dimensional reconstruction of the sparse point cloud based on the sequence image comprises the following steps: 1) carrying out feature extraction and matching on the sequence image set by using SIFT features; 2) performing binocular three-dimensional reconstruction by using the internal parameters and epipolar geometry of the camera; 3) expanding the binocular three-dimensional reconstruction result to multi-eye three-dimensional reconstruction; 4) and (5) bundling optimization.
3. The method for three-dimensional reconstruction of monocular camera sequence images based on two-layer filtering according to claim 1, wherein: the expansion and filtering operations of step 2 are cycled three times.
4. The method for three-dimensional reconstruction of monocular camera sequence images based on two-layer filtering according to claim 1, wherein: in each view the surface triangular faces of the model are either visible or invisible, visible faces being fully visible or partially visible, and only the fully visible and partially visible triangular faces in each view are given candidate textures.
CN202011263554.2A 2020-11-12 2020-11-12 Monocular camera sequence image three-dimensional reconstruction method based on double-layer filtering Pending CN112465984A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011263554.2A CN112465984A (en) 2020-11-12 2020-11-12 Monocular camera sequence image three-dimensional reconstruction method based on double-layer filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011263554.2A CN112465984A (en) 2020-11-12 2020-11-12 Monocular camera sequence image three-dimensional reconstruction method based on double-layer filtering

Publications (1)

Publication Number Publication Date
CN112465984A true CN112465984A (en) 2021-03-09

Family

ID=74825680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011263554.2A Pending CN112465984A (en) 2020-11-12 2020-11-12 Monocular camera sequence image three-dimensional reconstruction method based on double-layer filtering

Country Status (1)

Country Link
CN (1) CN112465984A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114355977A (en) * 2022-01-04 2022-04-15 浙江大学 Tower type photo-thermal power station mirror field inspection method and device based on multi-rotor unmanned aerial vehicle
CN116580074A (en) * 2023-07-12 2023-08-11 爱维未来科技无锡有限公司 Three-dimensional reconstruction method based on multi-sensor fusion

Citations (7)

Publication number Priority date Publication date Assignee Title
CN103021017A (en) * 2012-12-04 2013-04-03 上海交通大学 Three-dimensional scene rebuilding method based on GPU acceleration
CN105184863A (en) * 2015-07-23 2015-12-23 同济大学 Unmanned aerial vehicle aerial photography sequence image-based slope three-dimension reconstruction method
CN108090960A (en) * 2017-12-25 2018-05-29 北京航空航天大学 A kind of Object reconstruction method based on geometrical constraint
WO2020019245A1 (en) * 2018-07-26 2020-01-30 深圳大学 Three-dimensional reconstruction method and apparatus for transparent object, computer device, and storage medium
US20200273190A1 (en) * 2018-03-14 2020-08-27 Dalian University Of Technology Method for 3d scene dense reconstruction based on monocular visual slam
CN111815757A (en) * 2019-06-29 2020-10-23 浙江大学山东工业技术研究院 Three-dimensional reconstruction method for large component based on image sequence
CN111882668A (en) * 2020-07-30 2020-11-03 清华大学 Multi-view three-dimensional object reconstruction method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZENG YUSHENG: "Research on 3D Curved Surface Reconstruction Based on Surface Structured Light", China Master's Theses Full-text Database, Information Science and Technology, no. 7, pages 1 - 6 *
LONG YUHANG; WU DESHENG: "3D Virtual Reconstruction Simulation of Spatial Feature Information in High-Altitude Remote Sensing Images", Computer Simulation, no. 12 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114355977A (en) * 2022-01-04 2022-04-15 浙江大学 Tower type photo-thermal power station mirror field inspection method and device based on multi-rotor unmanned aerial vehicle
CN114355977B (en) * 2022-01-04 2023-09-22 浙江大学 Tower type photo-thermal power station mirror field inspection method and device based on multi-rotor unmanned aerial vehicle
CN116580074A (en) * 2023-07-12 2023-08-11 爱维未来科技无锡有限公司 Three-dimensional reconstruction method based on multi-sensor fusion
CN116580074B (en) * 2023-07-12 2023-10-13 爱维未来科技无锡有限公司 Three-dimensional reconstruction method based on multi-sensor fusion

Similar Documents

Publication Publication Date Title
CN112270249B (en) Target pose estimation method integrating RGB-D visual characteristics
CN106803267B (en) Kinect-based indoor scene three-dimensional reconstruction method
CN109242954B (en) Multi-view three-dimensional human body reconstruction method based on template deformation
CN108335352B (en) Texture mapping method for multi-view large-scale three-dimensional reconstruction scene
WO2016082797A1 (en) Method for modeling and registering three-dimensional scene structure based on single image
CN111882668B (en) Multi-view three-dimensional object reconstruction method and system
CN113192179B (en) Three-dimensional reconstruction method based on binocular stereo vision
CN113178009A (en) Indoor three-dimensional reconstruction method utilizing point cloud segmentation and grid repair
EP3756163B1 (en) Methods, devices, and computer program products for gradient based depth reconstructions with robust statistics
Holzmann et al. Semantically aware urban 3d reconstruction with plane-based regularization
CN112630469B (en) Three-dimensional detection method based on structured light and multiple light field cameras
CN112465984A (en) Monocular camera sequence image three-dimensional reconstruction method based on double-layer filtering
CN111914913B (en) Novel stereo matching optimization method
CN110766782A (en) Large-scale construction scene real-time reconstruction method based on multi-unmanned aerial vehicle visual cooperation
CN110070608B (en) Method for automatically deleting three-dimensional reconstruction redundant points based on images
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
JP6285686B2 (en) Parallax image generation device
CN117501313A (en) Hair rendering system based on deep neural network
JP6901885B2 (en) Foreground extractor and program
Nouduri et al. Deep realistic novel view generation for city-scale aerial images
CN117218192A (en) Weak texture object pose estimation method based on deep learning and synthetic data
CN109712230B (en) Three-dimensional model supplementing method and device, storage medium and processor
CN115063485B (en) Three-dimensional reconstruction method, device and computer-readable storage medium
CN113129348B (en) Monocular vision-based three-dimensional reconstruction method for vehicle target in road scene
da Silva Vieira et al. Stereo vision methods: from development to the evaluation of disparity maps

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination