CN108010123B - Three-dimensional point cloud obtaining method capable of retaining topology information


Info

Publication number
CN108010123B
Authority
CN
China
Prior art keywords
image
dimensional
point cloud
dimensional point
matching
Prior art date
Legal status
Active
Application number
CN201711178471.1A
Other languages
Chinese (zh)
Other versions
CN108010123A (en)
Inventor
张小国
王小虎
郭恩惠
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201711178471.1A
Publication of CN108010123A
Application granted
Publication of CN108010123B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/08 - Projecting images onto non-planar surfaces, e.g. geodetic screens

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a three-dimensional point cloud acquisition method that retains topological information. First, images are acquired by surround or low-altitude aerial photography with a camera and preprocessed by graying, Gaussian denoising, and photo alignment. Second, feature points that retain topological information are extracted and matched. Finally, the three-dimensional point cloud is resolved and the two-dimensional topological relations are mapped into three-dimensional space; the resulting point cloud data can be used to construct a three-dimensional model. Compared with conventional three-dimensional point cloud acquisition methods based on sequence images, the method yields a uniformly distributed point cloud that carries three-dimensional topological information, and can significantly improve the accuracy of the constructed three-dimensional model.

Description

Three-dimensional point cloud obtaining method capable of retaining topology information
Technical Field
The invention relates to image processing in the field of computer vision and to three-dimensional point cloud reconstruction from sequence images, and in particular to a three-dimensional point cloud acquisition method that retains topological information.
Background
In computer vision, three-dimensional reconstruction is the process of recovering three-dimensional information from single-view or multi-view images. Because a single view carries incomplete information, single-view reconstruction must draw on empirical knowledge. Multi-view reconstruction instead first calibrates the camera to obtain its internal parameters, then computes the camera motion parameters from matched feature point pairs, combines the internal and motion parameters to establish the relation between the camera image coordinate system and the world coordinate system, and finally reconstructs three-dimensional information from the information in multiple two-dimensional images.
Three-dimensional point cloud acquisition is a key technique, and a major difficulty, of multi-view three-dimensional reconstruction, and the quality of the point cloud determines the accuracy of the subsequently constructed model. The existing acquisition pipeline comprises image preprocessing, feature point extraction and matching, and three-dimensional point cloud computation, of which feature point extraction and matching consumes the most resources and is the focus of optimization efforts by researchers. Existing extraction algorithms such as SIFT, SURF, and ORB achieve good results against changes in image scale and rotation, illumination, and image deformation. However, the feature points they extract are redundant, unevenly distributed, and carry no two-dimensional topological information, which affects both the success and the accuracy of subsequent three-dimensional model construction.
Disclosure of Invention
The purpose of the invention is as follows: in view of the limitations of the prior art, the invention provides a three-dimensional point cloud acquisition method that retains contour texture topological information, overcoming the defects of point cloud redundancy, uneven distribution, and absence of contour texture topology, so that a topology-constrained three-dimensional model can subsequently be constructed with improved accuracy.
The technical scheme is as follows: a three-dimensional point cloud acquisition method retaining topological information proceeds in three stages. First, images are acquired by surround or low-altitude aerial photography with a camera and preprocessed by graying, Gaussian denoising, and photo alignment. Second, feature points that retain topological information are extracted and matched. Finally, the three-dimensional point cloud is resolved and the two-dimensional topological relations are mapped into three-dimensional space; the resulting point cloud data can be used to construct a three-dimensional model. For acquiring a three-dimensional point cloud from sequence images, the method of the invention comprises the following steps:
1. Calibrate the camera, acquire its internal parameters, and store them in matrix form;
2. Acquire image data of the target area by surround or low-altitude aerial photography;
3. Preprocess the acquired images by graying, Gaussian denoising, and photo alignment, the photo alignment steps being:
3.1. Extract feature points with the FAST operator and compute their descriptors;
3.2. Match feature points efficiently with FLANN, reducing mismatches by bidirectional matching;
3.3. Filter mismatches with RANSAC;
3.4. Compute the fundamental matrix from the feature point pairs obtained in step 3.3 by the 8-point method, and obtain the essential matrix of the image pair by combining it with the camera calibration matrix from step 1;
3.5. Estimate the matching image of every image using the essential matrices obtained in step 3.4:
3.5a. Decompose the essential matrix of an image pair into a rotation part and a translation part to obtain the relative pose between the two cameras, and by analogy obtain the relative pose of any pair of cameras;
3.5b. Select the first image and, from its relative pose to the other images, choose as its matching image the one with the smallest rotation and translation;
3.5c. Take the matching image of the first image as the second image to be matched, execute 3.5a and 3.5b, and determine the matching images of all images in the same way;
3.6. Assume the camera matrix of the first image is fixed and canonical, use the inter-camera pose obtained in step 3.5a to derive the camera matrix of the other image in each matched pair, and thus obtain the camera matrices of all images;
4. Extract feature points that retain the contour texture topological relations:
4.1. Extract all contour texture features of the target ground object in each image with the Canny edge detector; the detected contours establish no hierarchical relationship, and the contour texture data of each image are stored in a two-dimensional container, each contour stored in a point data format;
4.2. Simplify the contour texture point data of each image with the Douglas-Peucker algorithm:
4.2a. Keep the simplified contour texture points as the feature points to be matched, label each point with the image and contour number it belongs to, and store the result of each image separately in a two-dimensional container;
4.2b. Keep the unsimplified contour texture points as the feature point library against which the feature points are matched.
5. Match the feature points and filter mismatches:
5.1. Describe the feature points to be matched with the SIFT operator;
5.2. Feature point matching:
5.2a. Select the first image and determine its matching image from the image pairs to be matched obtained in step 3.5;
5.2b. Within a matched image pair, narrow the candidate set between the feature points to be matched on one image and the feature point library on the matching image using the epipolar constraint determined by the essential matrix obtained in step 3.4;
5.2c. Match efficiently with FLANN;
5.3. Filter mismatches with the RANSAC algorithm.
6. Resolve the three-dimensional point cloud by triangulation from the matched feature point pairs obtained in step 5:
6.1. Approximate a three-dimensional point from its two-dimensional points in a matched image pair, i.e. resolve the spatial point's three-dimensional coordinates from the constraints between the camera matrices obtained in step 3.6 and the matched feature point pairs obtained in step 5.3;
6.2. Loop the operation of step 6.1 over all matched point pairs to complete the triangulation, obtaining the three-dimensional point cloud reconstructed from two images as the initial structure of the sequence-image reconstruction;
6.3. Add the remaining images to the initial structure one by one, i.e. find among the remaining images the one matching the second image as the third reconstructed image, and repeat step 6.2 to obtain the three-dimensional point cloud of the whole image sequence.
7. Map the two-dimensional contour texture topological information carried by the feature points obtained in step 4 onto the three-dimensional point cloud, converting the unorganized point cloud into a classifiable point cloud with contour texture topological information:
7.1. During the triangulation of step 6.1, when a three-dimensional point is solved from the two-dimensional matching points in two images, attach the image and contour information of the two feature points to the solved three-dimensional point, completing the mapping from two-dimensional to three-dimensional topological information;
7.2. In the point cloud obtained in step 7.1, each three-dimensional point records which two images it comes from and its contour numbers on those images, so the point cloud can be classified:
7.2a. Classify the point cloud first by the image numbers of the three-dimensional points;
7.2b. Classify the result of the primary classification further by the contour numbers the points correspond to on those images.
Beneficial effects: compared with conventional three-dimensional point cloud acquisition techniques based on sequence images, the method supports surround and tiled, ordered and unordered photographing modes; the extracted feature points are uniformly distributed; feature point matching efficiency is improved; and, most importantly, two-dimensional topological information is retained and mapped onto the three-dimensional point cloud, so that a topology-constrained three-dimensional model can subsequently be constructed with improved accuracy.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a three-dimensional point cloud of a small house reconstructed by a conventional three-dimensional point cloud acquisition method;
FIG. 3 is a three-dimensional point cloud of a small house reconstructed by the three-dimensional point cloud obtaining method for retaining topology information according to the present invention;
FIG. 4(a) is a point cloud network without constraints;
FIG. 4(b) is a representation of contour texture constraint information;
fig. 4(c) is a point cloud network with contour texture constraints.
Detailed Description
Fig. 1 shows the main flow of the three-dimensional point cloud acquisition method retaining topological information according to the invention. The point cloud data obtained with this method can be used to subsequently construct a topology-constrained three-dimensional model, improving model accuracy. Taking the acquisition of three-dimensional point cloud data of a house as an example, the steps are described in detail with reference to FIG. 1:
1. Calibrate the camera, acquire its internal parameters, and store them as the matrix K (a calibration sketch follows step 2);
2. Acquire image data of the target area by surround photography;
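As a minimal illustration of step 1, the intrinsic matrix K can be estimated with OpenCV's standard checkerboard calibration. This is a sketch under stated assumptions: the pattern size, image folder, and output file name below are not specified by the patent.

```python
import glob
import cv2
import numpy as np

# Inner-corner count of the checkerboard target (assumed; not specified in the patent)
PATTERN = (9, 6)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.jpg"):           # calibration photos (assumed path)
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the 3x3 internal parameter matrix required by steps 3.4 and 3.6
_, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
np.save("K.npy", K)                             # stored in matrix form (step 1)
```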
3. Preprocess the acquired images by graying, Gaussian denoising, and photo alignment; the photo alignment steps are as follows (a code sketch follows step 3.6):
3.1. Extract feature points with the FAST operator (threshold set to 20, at most 1000 points retained) and compute their descriptors;
3.2. Match feature points efficiently with FLANN, reducing mismatches by bidirectional matching;
3.3. Filter mismatches with RANSAC;
3.4. Compute the fundamental matrix F from the feature point pairs obtained in step 3.3 by the 8-point method, and obtain the essential matrix E of the image pair by combining it with the camera calibration matrix K from step 1;
3.5. Estimate the matching image of every image using the essential matrices E obtained in step 3.4:
3.5a. Decompose the essential matrix E of an image pair into a rotation R and a translation t, obtaining the relative pose between the two cameras, and by analogy the relative pose of any pair of cameras;
3.5b. Select the first image and, from its relative pose to the other images, choose as its matching image the one with the smallest rotation and translation;
3.5c. Take the matching image of the first image as the second image to be matched, execute 3.5a and 3.5b, and determine the matching images of all images in the same way;
3.6. Assume the camera matrix P0 of the first image is fixed and canonical; use the inter-camera pose obtained in step 3.5a to derive the camera matrix P1 of the other image in each matched pair, and thus obtain the camera matrices of all images;
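The following sketch covers steps 3.1 to 3.5a for a single image pair using OpenCV. The patent does not name the descriptor computed for the FAST keypoints, so ORB's binary descriptor is assumed here, and the FLANN LSH index parameters are likewise assumptions; the RANSAC-filtered inlier pairs would then feed the 8-point estimation of F in step 3.4.

```python
import cv2
import numpy as np

def align_pair(img1, img2, K):
    """Steps 3.1-3.5a for one image pair: FAST + FLANN + RANSAC + E = K^T F K."""
    fast = cv2.FastFeatureDetector_create(threshold=20)       # threshold from step 3.1
    orb = cv2.ORB_create()                                    # descriptor (assumption)
    kp1 = sorted(fast.detect(img1, None), key=lambda k: -k.response)[:1000]
    kp2 = sorted(fast.detect(img2, None), key=lambda k: -k.response)[:1000]
    kp1, des1 = orb.compute(img1, kp1)
    kp2, des2 = orb.compute(img2, kp2)

    # FLANN with an LSH index, suitable for binary descriptors (step 3.2)
    flann = cv2.FlannBasedMatcher(
        dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1), {})
    m12, m21 = flann.match(des1, des2), flann.match(des2, des1)
    back = {m.queryIdx: m.trainIdx for m in m21}
    good = [m for m in m12 if back.get(m.trainIdx) == m.queryIdx]  # bidirectional check

    p1 = np.float32([kp1[m.queryIdx].pt for m in good])
    p2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # RANSAC filtering (3.3); F could then be re-estimated on the inliers
    # with cv2.FM_8POINT to follow the 8-point method of step 3.4 literally
    F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 3.0, 0.99)
    E = K.T @ F @ K                                           # essential matrix (3.4)
    R1, R2, t = cv2.decomposeEssentialMat(E)                  # rotation/translation (3.5a)
    return F, E, (R1, R2, t), p1[mask.ravel() == 1], p2[mask.ravel() == 1]
```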
4. Extract feature points that retain the contour texture topological relations (a code sketch follows step 4.2b):
4.1. Extract all contour texture features of the target ground object in each image with the Canny edge detector; the detected contours establish no hierarchical relationship, and the contour texture data of each image are stored in a two-dimensional container, each contour stored in a point data format;
4.2. Simplify the contour texture point data of each image with the Douglas-Peucker algorithm (suggested threshold: 5):
4.2a. Keep the simplified contour texture points as the feature points to be matched, label each point with the image and contour number it belongs to, and store the result of each image separately in a two-dimensional container;
4.2b. Keep the unsimplified contour texture points as the feature point library against which the feature points are matched.
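A sketch of step 4 for one grayscale image. cv2.RETR_LIST matches the requirement that no contour hierarchy be established, and cv2.approxPolyDP implements the Douglas-Peucker simplification with the suggested threshold of 5; the Canny thresholds are assumptions.

```python
import cv2

def contour_features(gray, img_id, eps=5.0):
    """Step 4: Canny contours, Douglas-Peucker simplification, labelled feature points."""
    edges = cv2.Canny(gray, 50, 150)                 # thresholds assumed
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    to_match, library = [], []                       # the two 2-D containers (4.2a, 4.2b)
    for cid, c in enumerate(contours):
        simplified = cv2.approxPolyDP(c, eps, False)
        # each retained point is tagged with its image and contour number (4.2a)
        to_match.append([(img_id, cid, tuple(p[0])) for p in simplified])
        library.append([tuple(p[0]) for p in c])     # unsimplified library (4.2b)
    return to_match, library
```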
5. Match the feature points and filter mismatches (a code sketch follows step 5.3):
5.1. Describe the feature points to be matched with the SIFT operator;
5.2. Feature point matching:
5.2a. Select the first image and determine its matching image from the image pairs to be matched obtained in step 3.5;
5.2b. Within a matched image pair, narrow the candidate set between the feature points to be matched on one image and the feature point library on the matching image using the epipolar constraint determined by the essential matrix obtained in step 3.4;
5.2c. Match efficiently with FLANN;
5.3. Filter mismatches with the RANSAC algorithm.
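A sketch of steps 5.1 to 5.2. The patent narrows the candidate set with the epipolar constraint before matching; for brevity this sketch matches with FLANN first and then rejects pairs that fall outside an epipolar band, an equivalent filter. The band width and the fixed keypoint size are assumptions.

```python
import cv2
import numpy as np

def match_contour_points(img1, pts1, img2, pts2, F, band=3.0):
    """Steps 5.1-5.2: SIFT descriptors at contour points, FLANN match, epipolar filter."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.compute(img1, [cv2.KeyPoint(float(x), float(y), 8) for x, y in pts1])
    kp2, des2 = sift.compute(img2, [cv2.KeyPoint(float(x), float(y), 8) for x, y in pts2])

    flann = cv2.FlannBasedMatcher()                  # KD-tree defaults fit SIFT floats
    good = []
    for m in flann.match(des1, des2):                # efficient matching (5.2c)
        x1, y1 = kp1[m.queryIdx].pt
        x2, y2 = kp2[m.trainIdx].pt
        l = F @ np.array([x1, y1, 1.0])              # epipolar line of p1 in image 2
        dist = abs(l @ np.array([x2, y2, 1.0])) / np.hypot(l[0], l[1])
        if dist < band:                              # epipolar constraint (5.2b)
            good.append(m)
    return kp1, kp2, good
```

The surviving matches would then be passed to RANSAC (step 5.3), for example via cv2.findFundamentalMat, to filter the remaining mismatches.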
6. Resolve the three-dimensional point cloud by triangulation from the matched feature point pairs obtained in step 5 (a code sketch follows step 6.3):
6.1. Approximate a three-dimensional point from its two-dimensional points in a matched image pair, i.e. resolve the spatial point's three-dimensional coordinates from the constraints between the camera matrices P obtained in step 3.6 and the matched feature point pairs obtained in step 5.3;
6.2. Loop the operation of step 6.1 over all matched point pairs to complete the triangulation, obtaining the three-dimensional point cloud reconstructed from two images as the initial structure of the sequence-image reconstruction;
6.3. Add the remaining images to the initial structure one by one, i.e. find among the remaining images the one matching the second image as the third reconstructed image, and repeat step 6.2 to obtain the three-dimensional point cloud of the whole image sequence.
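A triangulation sketch for steps 6.1 and 6.2, under the step 3.6 convention that the first camera is fixed and canonical, so P0 = K[I | 0] and P1 = K[R | t] with R and t taken from the essential-matrix decomposition. cv2.triangulatePoints performs the linear (DLT) solve for all matched pairs at once, which realizes the loop of step 6.2.

```python
import cv2
import numpy as np

def triangulate(K, R, t, pts1, pts2):
    """Steps 6.1-6.2: triangulate all matched point pairs between two views."""
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera, fixed (step 3.6)
    P1 = K @ np.hstack([R, t.reshape(3, 1)])            # second camera from R, t
    X = cv2.triangulatePoints(P0, P1, pts1.T, pts2.T)   # 4 x N homogeneous coordinates
    X /= X[3]                                           # dehomogenize
    return X[:3].T                                      # N x 3 initial structure (6.2)
```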
7. Map the two-dimensional contour texture topological information carried by the feature points obtained in step 4 onto the three-dimensional point cloud, converting the unorganized point cloud into a classifiable point cloud with contour texture topological information (a code sketch follows step 7.2b):
7.1. During the triangulation of step 6.1, when a three-dimensional point is solved from the two-dimensional matching points in two images, attach the image and contour information of the two feature points to the solved three-dimensional point, completing the mapping from two-dimensional to three-dimensional topological information;
7.2. In the point cloud obtained in step 7.1, each three-dimensional point records which two images it comes from and its contour numbers on those images, so the point cloud can be classified:
7.2a. Classify the point cloud first by the image numbers of the three-dimensional points;
7.2b. Classify the result of the primary classification further by the contour numbers the points correspond to on those images.
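A sketch of the two-level classification of step 7, assuming each triangulated point carries the (image number, contour number) tag attached to its source feature points in step 4.2a. The nested dictionary realizes the primary classification by image (7.2a) and the secondary classification by contour (7.2b).

```python
from collections import defaultdict

def classify_cloud(points3d, tags):
    """Step 7: group labelled 3-D points first by image, then by contour."""
    classified = defaultdict(lambda: defaultdict(list))
    for X, (img_id, contour_id) in zip(points3d, tags):
        classified[img_id][contour_id].append(X)     # 7.2a outer key, 7.2b inner key
    return classified

# Points in classified[i][c] all lie on contour c of image i, so a surface mesh
# built from them can respect the contour texture constraints shown in FIG. 4.
```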
Comparing FIG. 2 with FIG. 3 shows that, compared with the conventional method, the point cloud obtained by the proposed method is more uniformly distributed and captures richer detail around door frames and other features. FIG. 4(a) shows a point cloud mesh without constraints, FIG. 4(b) the contour texture constraint information, and FIG. 4(c) the point cloud mesh with contour texture constraints; together they illustrate the advantage of the point cloud data acquired by the proposed method when constructing a model surface mesh: the model is closer to the real scene and its accuracy is improved.

Claims (7)

1. A three-dimensional point cloud obtaining method for retaining topology information is characterized by comprising the following steps:
step 1, calibrating a camera, acquiring internal parameters of the camera, and storing the internal parameters in a matrix form;
step 2, acquiring image data of a target area in a surrounding or low-altitude aerial photographing mode;
step 3, preprocessing the acquired image data;
step 4, extracting feature points that retain the contour texture topological relations;
step 5, matching the feature points and filtering mismatches;
step 6, resolving the three-dimensional point cloud by triangulation using the matched feature point pairs obtained in step 5;
and step 7, mapping the two-dimensional contour texture topological information carried by the feature points obtained in step 4 onto the three-dimensional point cloud, converting the unorganized three-dimensional point cloud into a classifiable three-dimensional point cloud with contour texture topological information.
2. The three-dimensional point cloud obtaining method for retaining topology information according to claim 1, characterized in that the data preprocessing in step 3 comprises image graying, Gaussian denoising, and photo alignment.
3. The three-dimensional point cloud obtaining method for retaining topology information according to claim 2, wherein the photo alignment comprises the following steps:
step 3.1, extracting feature points with a FAST operator and computing descriptors;
step 3.2, matching feature points efficiently with FLANN and reducing mismatches by bidirectional matching;
step 3.3, filtering mismatches with RANSAC;
step 3.4, computing a fundamental matrix from the feature point pairs obtained in step 3.3 by the 8-point method, and obtaining the essential matrix of the image pair by combining it with the camera calibration matrix from step 1;
step 3.5, estimating the matching images of all images using the essential matrices obtained in step 3.4:
3.5a, decomposing the essential matrix of an image pair into a rotation part and a translation part to obtain the relative pose between the two cameras, and by analogy obtaining the relative pose of any pair of cameras;
3.5b, selecting a first image and, from its relative pose to the other images, choosing as its matching image the one with the smallest rotation and translation;
3.5c, taking the matching image of the first image as the second image to be matched, executing 3.5a and 3.5b, and determining the matching images of all images in the same way;
and step 3.6, assuming the camera matrix of the first image is fixed and canonical, using the inter-camera pose obtained in step 3.5a to derive the camera matrix of the other image in each matched pair, thereby obtaining the camera matrices of all images.
4. The three-dimensional point cloud obtaining method for retaining topology information according to claim 1, wherein step 4 comprises the following steps:
step 4.1, extracting all contour texture features of the target ground object in each image with a Canny edge detector, wherein the detected contours establish no hierarchical relationship, and the contour texture data of each image are stored in a two-dimensional container, each contour stored in a point data format;
step 4.2, simplifying the contour texture point data of each image with the Douglas-Peucker algorithm:
4.2a, keeping the simplified contour texture points as the feature points to be matched, labelling each point with the image and contour number it belongs to, and storing the result of each image separately in a two-dimensional container;
and 4.2b, keeping the unsimplified contour texture points as the feature point library against which the feature points are matched.
5. The three-dimensional point cloud obtaining method for retaining topology information according to claim 3, wherein step 5 comprises the following steps:
step 5.1, describing the feature points to be matched of each image with the SIFT operator;
step 5.2, feature point matching:
5.2a, selecting a first image and determining its matching image from the image pairs to be matched obtained in step 3.5;
5.2b, within a matched image pair, narrowing the candidate set between the feature points to be matched on one image and the feature point library on the matching image using the epipolar constraint determined by the essential matrix obtained in step 3.4;
5.2c, matching efficiently with FLANN;
and step 5.3, filtering mismatches with the RANSAC algorithm.
6. The three-dimensional point cloud obtaining method for retaining topology information according to claim 5, wherein step 6 specifically comprises:
step 6.1, approximating a three-dimensional point from its two-dimensional points in a matched image pair, namely resolving the spatial point's three-dimensional coordinates from the constraints between the camera matrices obtained in step 3.6 and the matched feature point pairs obtained in step 5.3;
6.2, looping the operation of step 6.1 over all matched point pairs to complete the triangulation, and obtaining the three-dimensional point cloud reconstructed from two images as the initial structure of the sequence-image reconstruction;
and 6.3, adding the remaining images to the initial structure one by one, namely finding among the remaining images the one matching the second image as the third reconstructed image, and repeating step 6.2 to obtain the three-dimensional point cloud of the image sequence.
7. The three-dimensional point cloud obtaining method for retaining topology information according to claim 6, wherein step 7 specifically comprises:
7.1, during the triangulation of step 6.1, when a three-dimensional point coordinate is solved from the two-dimensional matching points in two images, attaching the image and contour information of the two feature points to the solved three-dimensional point, thereby completing the mapping from two-dimensional to three-dimensional topological information;
7.2, in the three-dimensional point cloud obtained in step 7.1, recording for each three-dimensional point which two images it comes from and its specific contour numbers on those images, so that the point cloud can be classified:
7.2a, classifying the obtained point cloud first by the image numbers of the three-dimensional points;
and 7.2b, classifying the result of the primary classification further by the contour numbers the three-dimensional points correspond to on those images.
CN201711178471.1A 2017-11-23 2017-11-23 Three-dimensional point cloud obtaining method capable of retaining topology information Active CN108010123B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201711178471.1A (CN108010123B) | 2017-11-23 | 2017-11-23 | Three-dimensional point cloud obtaining method capable of retaining topology information

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201711178471.1A (CN108010123B) | 2017-11-23 | 2017-11-23 | Three-dimensional point cloud obtaining method capable of retaining topology information

Publications (2)

Publication Number | Publication Date
CN108010123A (en) | 2018-05-08
CN108010123B (en) | 2021-02-09

Family

ID=62053322

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201711178471.1A (CN108010123B, Active) | 2017-11-23 | 2017-11-23 | Three-dimensional point cloud obtaining method capable of retaining topology information

Country Status (1)

Country Link
CN (1) CN108010123B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734654A (en) * 2018-05-28 2018-11-02 Mapping and localization method, system and computer readable storage medium
CN108765574A (en) * 2018-06-19 2018-11-06 3D scene simulation method and system and computer readable storage medium
CN109598783A (en) * 2018-11-20 2019-04-09 Room 3D modeling method and furniture 3D preview system
CN109472802B (en) * 2018-11-26 2021-10-19 东南大学 Surface mesh model construction method based on edge feature self-constraint
CN109816771B (en) * 2018-11-30 2022-11-22 西北大学 Cultural relic fragment automatic recombination method combining feature point topology and geometric constraint
CN111325854B (en) * 2018-12-17 2023-10-24 三菱重工业株式会社 Shape model correction device, shape model correction method, and storage medium
CN109951342B (en) * 2019-04-02 2021-05-11 上海交通大学 Three-dimensional matrix topology representation and route traversal optimization realization method of spatial information network
CN110443785A (en) * 2019-07-18 2019-11-12 Feature extraction method for three-dimensional point clouds under persistent homology
WO2021160071A1 (en) * 2020-02-11 2021-08-19 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Feature spatial distribution management for simultaneous localization and mapping
CN118154460A (en) * 2024-05-11 2024-06-07 成都大学 Processing method of three-dimensional point cloud data of asphalt pavement

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102074015A (en) * 2011-02-24 2011-05-25 哈尔滨工业大学 Two-dimensional image sequence based three-dimensional reconstruction method of target
CN104952075A (en) * 2015-06-16 2015-09-30 浙江大学 Laser scanning three-dimensional model-oriented multi-image automatic texture mapping method
CN106651942A (en) * 2016-09-29 2017-05-10 苏州中科广视文化科技有限公司 Three-dimensional rotation and motion detecting and rotation axis positioning method based on feature points

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7856125B2 (en) * 2006-01-31 2010-12-21 University Of Southern California 3D face reconstruction from 2D images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102074015A (en) * 2011-02-24 2011-05-25 哈尔滨工业大学 Two-dimensional image sequence based three-dimensional reconstruction method of target
CN104952075A (en) * 2015-06-16 2015-09-30 浙江大学 Laser scanning three-dimensional model-oriented multi-image automatic texture mapping method
CN106651942A (en) * 2016-09-29 2017-05-10 苏州中科广视文化科技有限公司 Three-dimensional rotation and motion detecting and rotation axis positioning method based on feature points

Also Published As

Publication number Publication date
CN108010123A (en) 2018-05-08

Similar Documents

Publication Publication Date Title
CN108010123B (en) Three-dimensional point cloud obtaining method capable of retaining topology information
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
CN110264416B (en) Sparse point cloud segmentation method and device
KR102319177B1 (en) Method and apparatus, equipment, and storage medium for determining object pose in an image
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
CN107578436B (en) Monocular image depth estimation method based on full convolution neural network FCN
CN110135455A (en) Image matching method, device and computer readable storage medium
WO2015188684A1 (en) Three-dimensional model reconstruction method and system
CN115205489A (en) Three-dimensional reconstruction method, system and device in large scene
KR20180054487A (en) Method and device for processing dvs events
CN111107337B (en) Depth information complementing method and device, monitoring system and storage medium
CN107369204B (en) Method for recovering basic three-dimensional structure of scene from single photo
CN113192200B (en) Method for constructing urban real scene three-dimensional model based on space-three parallel computing algorithm
WO2021035627A1 (en) Depth map acquisition method and device, and computer storage medium
CN116883588A (en) Method and system for quickly reconstructing three-dimensional point cloud under large scene
CN110443228B (en) Pedestrian matching method and device, electronic equipment and storage medium
CN111080685A (en) Airplane sheet metal part three-dimensional reconstruction method and system based on multi-view stereoscopic vision
CN113724329A (en) Object attitude estimation method, system and medium fusing plane and stereo information
Berjón et al. Fast feature matching for detailed point cloud generation
Jisen A study on target recognition algorithm based on 3D point cloud and feature fusion
WO2016058359A1 (en) Method and device for generating three-dimensional image
CN113487741B (en) Dense three-dimensional map updating method and device
CN113066163A (en) Human body three-dimensional reconstruction method based on two-dimensional image
CN112200850A (en) ORB extraction method based on mature characteristic points
CN107451540B (en) Compressible 3D identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant