CN108010123A - A three-dimensional point cloud acquisition method that preserves topology information - Google Patents
A three-dimensional point cloud acquisition method that preserves topology information
- Publication number
- CN108010123A (application number CN201711178471.1A)
- Authority
- CN
- China
- Legal status: Granted (the legal status is an assumption by Google, not a legal conclusion; Google has not performed a legal analysis)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/08—Projecting images onto non-planar surfaces, e.g. geodetic screens
Abstract
The present invention provides a three-dimensional point cloud acquisition method that preserves topology information. First, images are captured with a camera by orbital or low-altitude aerial photography and preprocessed by grayscale conversion, Gaussian denoising, and image registration. Second, feature points that preserve topology information are extracted and matched. Then the three-dimensional point cloud is solved and the two-dimensional topological relations are mapped into three-dimensional space; the resulting point cloud data can be used to build three-dimensional models. Compared with current mainstream three-dimensional point cloud acquisition methods based on image sequences, the invention yields a point cloud that is evenly distributed and carries three-dimensional topology information, which markedly improves the accuracy of the constructed three-dimensional model.
Description
Technical field
The present invention relates to the field of computer vision, to image processing, and to three-dimensional point cloud reconstruction from image sequences, and in particular to a three-dimensional point cloud acquisition method that preserves topology information.
Background technology
In computer vision, three-dimensional reconstruction is the process of recovering three-dimensional information from single-view or multi-view images. Because a single view carries incomplete information, single-view reconstruction must rely on prior knowledge. Multi-view reconstruction (analogous to human binocular vision) is comparatively easy: the camera is first calibrated to obtain its intrinsic parameters, and the camera's motion parameters are then computed from matched feature points; combining the two yields the relation between the camera's image coordinate system and the world coordinate system, and finally the three-dimensional information is reconstructed from the information contained in multiple two-dimensional images.
Three-dimensional point cloud acquisition is the key technology and the main difficulty of multi-view three-dimensional reconstruction, and the quality of the point cloud determines the accuracy of the subsequently built three-dimensional model. The existing acquisition pipeline consists of image preprocessing, feature point extraction and matching, and point cloud computation, of which feature extraction and matching consumes the most resources and is the focus of ongoing optimization and improvement research. Existing feature extraction algorithms such as SIFT, SURF, and ORB achieve good results in coping with changes in image scale and rotation, illumination variation, and image deformation. However, the feature points extracted by all current methods suffer from redundancy, uneven distribution, and the absence of two-dimensional topology information, which affects the success and the accuracy of subsequently built three-dimensional models.
Summary of the invention
Object of the invention: in view of the limitations of the prior art described above, the object of the present invention is to provide a three-dimensional point cloud acquisition method that preserves contour-texture topology information, remedying the defects of point cloud data that are redundant, unevenly distributed, and devoid of contour-texture topology information, so that three-dimensional models with topological constraints can subsequently be built with improved accuracy.
Technical solution: a three-dimensional point cloud acquisition method that preserves topology information, whose main workflow is as follows. First, images are captured with a camera by orbital or low-altitude aerial photography and preprocessed by grayscale conversion, Gaussian denoising, and image registration; second, feature points that preserve topology information are extracted and matched; then the three-dimensional point cloud is solved and the two-dimensional topological relations are mapped into three-dimensional space, and the resulting point cloud data can be used to build three-dimensional models. Within the process of acquiring a three-dimensional point cloud from an image sequence, the method provided by the present invention comprises the following steps:
1. Calibrate the camera and obtain its intrinsic parameters, saved in matrix form;
2. Acquire image data of the target area by orbital or low-altitude aerial photography;
3. Preprocess the acquired image data by grayscale conversion, Gaussian denoising, and image registration; the registration proceeds as follows:
3.1. Extract feature points with the FAST operator and compute their descriptors;
3.2. Match the feature points efficiently with FLANN, using bidirectional matching to reduce mismatches;
3.3. Filter out mismatches with RANSAC;
3.4. Using the feature correspondences from step 3.3, solve the fundamental matrix with the eight-point algorithm, and combine it with the camera calibration matrix from step 1 to obtain the essential matrix of the image pair;
3.5. Using the essential matrices of the image pairs obtained in step 3.4, estimate the matching image of every image:
3.5a. Decompose the essential matrix of an image pair into a rotation and a translation to obtain the relative pose between the two cameras; in the same way, the relative pose of any pair of cameras can be obtained;
3.5b. Select the first image and, from its relative pose with respect to every other image, choose the image with the smallest rotation and translation magnitudes as its matching image;
3.5c. Take the matching image of the first image as the second image to be matched and repeat 3.5a and 3.5b; continue in this way to determine the matching image of every image;
3.6. Assume the camera matrix of the first image is fixed in canonical form; from the relative camera poses obtained in step 3.5a, derive the camera matrix of the other image of each matching pair, and thereby obtain the camera matrices of all images;
4. Extract feature points that preserve the contour-texture topological relations:
4.1. Extract all contour-texture features of the target object in each image with the Canny edge detector; no hierarchy is established among the detected contours, and the contour-texture data of each image are stored in a two-dimensional container, each contour being saved as a sequence of points;
4.2. Simplify the contour points of each image with the Douglas-Peucker algorithm:
4.2a. Keep the simplified contour points as the feature points to be matched, labeling each point with the image and contour number it belongs to; the result for each image is stored in its own two-dimensional container;
4.2b. Keep the contour points from before simplification as the feature-point library for matching.
5. Match the feature points and filter the mismatches:
5.1. Describe the feature points to be matched with the SIFT operator;
5.2. Feature point matching:
5.2a. Select the first image and, from the image pairs to be matched obtained in step 3.5, determine its matching image;
5.2b. Within a matching image pair, the feature points to be matched in one image and the feature points of the matching image use the epipolar constraint determined by the essential matrix obtained in step 3.4 to narrow the search range;
5.2c. Perform the matching efficiently with FLANN;
5.3. Filter out mismatches with the RANSAC algorithm.
6. Using the matched feature point pairs obtained in step 5, solve the three-dimensional point cloud by triangulation:
6.1. Within a matching image pair, approximate one three-dimensional point from two two-dimensional points: using the camera matrices obtained in step 3.6 and the constraints between the matched feature point pairs from step 5.3, solve the three-dimensional coordinates of the spatial point;
6.2. Apply the operation of step 6.1 to every matched point pair in a loop to complete the triangulation, obtaining the three-dimensional point cloud reconstructed from the two images; this serves as the initial structure of the sequence-image reconstruction;
6.3. Add the remaining images to the initial structure one by one: among the remaining images, find the one that matches the second image and use it as the third image for reconstruction, then repeat step 6.2 until the three-dimensional point cloud of the whole image sequence is obtained.
7. Map the two-dimensional contour-texture topology information carried by the feature points of step 4 onto the three-dimensional point cloud, turning the unorganized point cloud into a classifiable point cloud with contour-texture topology information:
7.1. During the triangulation of step 6.1, whenever a three-dimensional point is solved from two two-dimensional matched points in two images, retain the image and contour information of the two feature points in the computed three-dimensional point, thereby completing the mapping from two-dimensional topology information to three-dimensional topology information;
7.2. In the point cloud obtained in step 7.1, every three-dimensional point records which two images it comes from and its contour number on each of those images, so the point cloud can be classified:
7.2a. Perform a first-level classification of the point cloud by the image numbers of each three-dimensional point;
7.2b. Perform a second-level classification of the first-level classes by the contour numbers of each point on its images.
Beneficial effects: compared with traditional three-dimensional point cloud acquisition from image sequences, this method supports orbital and level-flight, ordered and unordered shooting styles; the feature points obtained are evenly distributed and feature matching efficiency is improved; most importantly, the method preserves the two-dimensional topology information and maps it onto the three-dimensional point cloud, which can be used to subsequently build three-dimensional models with topological constraints, improving model accuracy.
Brief description of the drawings
Fig. 1 is a flow diagram of the method of the present invention;
Fig. 2 is the three-dimensional point cloud of a house reconstructed by a conventional three-dimensional point cloud acquisition method;
Fig. 3 is the three-dimensional point cloud of the same house reconstructed by the topology-preserving three-dimensional point cloud acquisition method provided by the invention;
Fig. 4(a) is an unconstrained point cloud meshing;
Fig. 4(b) illustrates the contour-texture constraint information;
Fig. 4(c) is a point cloud meshing with contour-texture constraints.
Embodiment
Fig. 1 shows the main workflow of the topology-preserving three-dimensional point cloud acquisition method of the present invention. The three-dimensional point cloud data obtained with this method can be used to subsequently build three-dimensional models with topological constraints, improving model accuracy. Taking the acquisition of the three-dimensional point cloud of a house as an example, each of the following steps is described in detail with reference to Fig. 1:
1. Calibrate the camera and obtain its intrinsic parameters, saved as the matrix K;
2. Acquire image data of the target area by orbital aerial photography;
3. Preprocess the acquired image data by grayscale conversion, Gaussian denoising, and image registration; the registration proceeds as follows:
3.1. Extract feature points with the FAST operator (suggested settings: FAST threshold 20, at most 1000 points extracted) and compute their descriptors;
3.2. Match the feature points efficiently with FLANN, using bidirectional matching to reduce mismatches;
3.3. Filter out mismatches with RANSAC;
3.4. Using the feature correspondences from step 3.3, solve the fundamental matrix F with the eight-point algorithm, and combine it with the camera calibration matrix K from step 1 to obtain the essential matrix E of the image pair;
3.5. Using the essential matrices E of the image pairs obtained in step 3.4, estimate the matching image of every image:
3.5a. Decompose the essential matrix E of an image pair into a rotation R and a translation t to obtain the relative pose between the two cameras; in the same way, the relative pose of any pair of cameras can be obtained;
3.5b. Select the first image and, from its relative pose with respect to every other image, choose the image with the smallest rotation and translation magnitudes as its matching image;
3.5c. Take the matching image of the first image as the second image to be matched and repeat 3.5a and 3.5b; continue in this way to determine the matching image of every image;
3.6. Assume the camera matrix P0 of the first image is fixed in canonical form; from the relative camera poses obtained in step 3.5a, derive the camera matrix P1 of the other image of each matching pair, and thereby obtain the camera matrices of all images;
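The decomposition of step 3.5a is the classical SVD-based splitting of an essential matrix E = [t]×R into four candidate poses; a numpy sketch of this standard technique (an illustration, not code from the patent) follows. The physically correct candidate is selected afterwards by checking that triangulated points lie in front of both cameras:

```python
import numpy as np

def decompose_essential(E):
    """Decompose an essential matrix E = [t]x R into its four candidate
    (R, t) relative poses (step 3.5a)."""
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    # E is defined only up to sign, so flip improper "rotations"
    if np.linalg.det(R1) < 0:
        R1 = -R1
    if np.linalg.det(R2) < 0:
        R2 = -R2
    t = U[:, 2]  # translation direction; the scale is unrecoverable from E
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

Round-tripping a known pose through E = [t]×R and back recovers R exactly and t up to sign, which is the ambiguity the cheirality check resolves.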
4. Extract feature points that preserve the contour-texture topological relations:
4.1. Extract all contour-texture features of the target object in each image with the Canny edge detector; no hierarchy is established among the detected contours, and the contour-texture data of each image are stored in a two-dimensional container, each contour being saved as a sequence of points;
4.2. Simplify the contour points of each image with the Douglas-Peucker algorithm (suggested threshold: 5):
4.2a. Keep the simplified contour points as the feature points to be matched, labeling each point with the image and contour number it belongs to; the result for each image is stored in its own two-dimensional container;
4.2b. Keep the contour points from before simplification as the feature-point library for matching.
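The Douglas-Peucker simplification of step 4.2 keeps a contour's endpoints and recursively retains only the points that deviate from the current chord by more than the threshold (the text suggests 5 pixels). A self-contained sketch of the algorithm is shown below; it is illustrative only (OpenCV users would typically call `cv2.approxPolyDP` instead):

```python
import numpy as np

def douglas_peucker(points, epsilon):
    """Simplify a contour polyline (step 4.2): keep the two endpoints, find the
    point farthest from the chord joining them, and recurse only if its
    perpendicular distance exceeds epsilon."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts
    start, end = pts[0], pts[-1]
    dx, dy = end - start
    norm = np.hypot(dx, dy)
    if norm == 0.0:                      # degenerate chord: use radial distance
        dist = np.linalg.norm(pts - start, axis=1)
    else:                                # perpendicular distance to the chord
        dist = np.abs(dx * (pts[:, 1] - start[1]) - dy * (pts[:, 0] - start[0])) / norm
    i = int(np.argmax(dist))
    if dist[i] <= epsilon:
        return np.array([start, end])
    left = douglas_peucker(pts[:i + 1], epsilon)
    right = douglas_peucker(pts[i:], epsilon)
    return np.vstack([left[:-1], right])  # drop the duplicated split point
```

With threshold 1, a nearly straight contour collapses to its endpoints while a 5-pixel spike survives, which is exactly the behavior step 4.2a relies on to keep salient contour points as match candidates.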
5. Match the feature points and filter the mismatches:
5.1. Describe the feature points to be matched with the SIFT operator;
5.2. Feature point matching:
5.2a. Select the first image and, from the image pairs to be matched obtained in step 3.5, determine its matching image;
5.2b. Within a matching image pair, the feature points to be matched in one image and the feature points of the matching image use the epipolar constraint determined by the essential matrix obtained in step 3.4 to narrow the search range;
5.2c. Perform the matching efficiently with FLANN;
5.3. Filter out mismatches with the RANSAC algorithm.
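Step 5.2b narrows the candidate set by keeping only points near the epipolar line l = F·x. A small numpy sketch of that filter follows (illustrative; the function name and the 2-pixel tolerance are assumptions). Here `F` is the fundamental matrix in pixel coordinates; the essential matrix can be used instead if the points are first normalized with the calibration matrix K:

```python
import numpy as np

def epipolar_candidates(pt, F, candidates, tol=2.0):
    """Step 5.2b sketch: keep only candidate points in the second image lying
    within tol pixels of the epipolar line of a point pt in the first image."""
    x = np.array([pt[0], pt[1], 1.0])
    a, b, c = F @ x                      # line: a*u + b*v + c = 0
    cand = np.asarray(candidates, dtype=float)
    dist = np.abs(a * cand[:, 0] + b * cand[:, 1] + c) / np.hypot(a, b)
    return cand[dist <= tol]
```

Only the surviving candidates need SIFT-descriptor comparison in step 5.2c, which is what makes the constrained search cheaper than brute-force matching.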
6. Using the matched feature point pairs obtained in step 5, solve the three-dimensional point cloud by triangulation:
6.1. Within a matching image pair, approximate one three-dimensional point from two two-dimensional points: using the camera matrices P obtained in step 3.6 and the constraints between the matched feature point pairs from step 5.3, solve the three-dimensional coordinates of the spatial point;
6.2. Apply the operation of step 6.1 to every matched point pair in a loop to complete the triangulation, obtaining the three-dimensional point cloud reconstructed from the two images; this serves as the initial structure of the sequence-image reconstruction;
6.3. Add the remaining images to the initial structure one by one: among the remaining images, find the one that matches the second image and use it as the third image for reconstruction, then repeat step 6.2 until the three-dimensional point cloud of the whole image sequence is obtained.
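The per-point solve of step 6.1 is commonly carried out by linear (DLT) triangulation: each observation contributes two rows to a homogeneous system whose least-squares solution is the 3-D point. The sketch below shows this standard technique for illustration; the patent does not prescribe this exact formulation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Step 6.1 sketch: linear (DLT) triangulation of one 3-D point from two
    2-D observations x1, x2 under 3x4 camera matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],           # u1 * p3 - p1 = 0
        x1[1] * P1[2] - P1[1],           # v1 * p3 - p2 = 0
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # null-space vector minimizes ||A X||
    X = Vt[-1]
    return X[:3] / X[3]                  # de-homogenize
```

Looping this over all matched pairs of an image pair implements step 6.2, and the same routine is reused as each remaining image is added in step 6.3.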
7. Map the two-dimensional contour-texture topology information carried by the feature points of step 4 onto the three-dimensional point cloud, turning the unorganized point cloud into a classifiable point cloud with contour-texture topology information:
7.1. During the triangulation of step 6.1, whenever a three-dimensional point is solved from two two-dimensional matched points in two images, retain the image and contour information of the two feature points in the computed three-dimensional point, thereby completing the mapping from two-dimensional topology information to three-dimensional topology information;
7.2. In the point cloud obtained in step 7.1, every three-dimensional point records which two images it comes from and its contour number on each of those images, so the point cloud can be classified:
7.2a. Perform a first-level classification of the point cloud by the image numbers of each three-dimensional point;
7.2b. Perform a second-level classification of the first-level classes by the contour numbers of each point on its images.
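The two-level classification of steps 7.2a-7.2b amounts to grouping each three-dimensional point first by its pair of source image numbers and then by the contour numbers it carries. A plain-Python sketch follows; the record layout is an assumption, since the patent only fixes what each point must remember (its two images and its contour number on each):

```python
from collections import defaultdict

def classify_point_cloud(points):
    """Steps 7.2a-7.2b sketch: two-level grouping of triangulated points.

    Each point record is (xyz, img_a, contour_a, img_b, contour_b), i.e. the
    coordinates plus the image/contour labels retained in step 7.1.
    """
    classes = defaultdict(lambda: defaultdict(list))
    for xyz, img_a, c_a, img_b, c_b in points:
        level1 = tuple(sorted((img_a, img_b)))  # 7.2a: by source-image pair
        level2 = (c_a, c_b)                     # 7.2b: by contour numbers
        classes[level1][level2].append(xyz)
    return classes
```

The resulting nested mapping is one possible concrete form of the "classifiable point cloud with contour-texture topology information" that feeds the constrained meshing of Fig. 4(c).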
The contrast between Fig. 2 and Fig. 3 shows that, compared with conventional methods, the point cloud obtained by the present invention is more evenly distributed, with richer data at details such as door frames. Fig. 4(a) is an unconstrained point cloud meshing, Fig. 4(b) illustrates the contour-texture constraint information, and Fig. 4(c) is the point cloud meshing with contour-texture constraints; together they demonstrate the advantage of the point cloud data obtained by the topology-preserving acquisition method provided by the present invention when building the model's surface mesh: the model is closer to the real scene, and model accuracy is improved.
Claims (7)
1. A three-dimensional point cloud acquisition method that preserves topology information, characterized by comprising the following steps:
Step 1: calibrating the camera and obtaining its intrinsic parameters, saved in matrix form;
Step 2: acquiring image data of the target area by orbital or low-altitude aerial photography;
Step 3: preprocessing the acquired image data;
Step 4: extracting feature points that preserve contour-texture topological relations;
Step 5: matching the feature points and filtering mismatches;
Step 6: using the matched feature point pairs obtained in step 5, solving the three-dimensional point cloud by triangulation;
Step 7: mapping the two-dimensional contour-texture topology information carried by the feature points of step 4 onto the three-dimensional point cloud, turning the unorganized point cloud into a classifiable point cloud with contour-texture topology information.
2. The three-dimensional point cloud acquisition method preserving topology information according to claim 1, characterized in that the data preprocessing in step 3 comprises: grayscale conversion, Gaussian denoising, and image registration.
3. The three-dimensional point cloud acquisition method preserving topology information according to claim 2, characterized in that the image registration comprises the following steps:
Step 3.1: extracting feature points with the FAST operator and computing their descriptors;
Step 3.2: matching the feature points efficiently with FLANN, using bidirectional matching to reduce mismatches;
Step 3.3: filtering out mismatches with RANSAC;
Step 3.4: using the feature correspondences from step 3.3, solving the fundamental matrix with the eight-point algorithm, and combining it with the camera calibration matrix from step 1 to obtain the essential matrix of the image pair;
Step 3.5: using the essential matrices of the image pairs obtained in step 3.4, estimating the matching image of every image:
3.5a: decomposing the essential matrix of an image pair into a rotation and a translation to obtain the relative pose between the two cameras, whereby the relative pose of any pair of cameras can be obtained;
3.5b: selecting the first image and, from its relative pose with respect to every other image, choosing the image with the smallest rotation and translation magnitudes as its matching image;
3.5c: taking the matching image of the first image as the second image to be matched and repeating 3.5a and 3.5b, thereby determining the matching image of every image;
Step 3.6: assuming the camera matrix of the first image is fixed in canonical form, deriving the camera matrix of the other image of each matching pair from the relative camera poses obtained in step 3.5a, and thereby obtaining the camera matrices of all images.
4. The three-dimensional point cloud acquisition method preserving topology information according to claim 1, characterized in that step 4 comprises the following steps:
Step 4.1: extracting all contour-texture features of the target object in each image with the Canny edge detector, wherein no hierarchy is established among the detected contours, the contour-texture data of each image are stored in a two-dimensional container, and each contour is saved as a sequence of points;
Step 4.2: simplifying the contour points of each image with the Douglas-Peucker algorithm:
4.2a: keeping the simplified contour points as the feature points to be matched, labeling each point with the image and contour number it belongs to, the result for each image being stored in its own two-dimensional container;
4.2b: keeping the contour points from before simplification as the feature-point library for matching.
5. The three-dimensional point cloud acquisition method preserving topology information according to claim 1, characterized in that step 5 comprises the following steps:
Step 5.1: describing the feature points to be matched of each image with the SIFT operator;
Step 5.2: feature point matching:
5.2a: selecting the first image and, from the image pairs to be matched obtained in step 3.5, determining its matching image;
5.2b: within a matching image pair, using, for the feature points to be matched in one image and the feature points of the matching image, the epipolar constraint determined by the essential matrix obtained in step 3.4 to narrow the search range;
5.2c: performing the matching efficiently with FLANN;
Step 5.3: filtering out mismatches with the RANSAC algorithm.
6. The three-dimensional point cloud acquisition method preserving topology information according to claim 3, characterized in that step 6 specifically comprises:
Step 6.1: within a matching image pair, approximating one three-dimensional point from two two-dimensional points, i.e. using the camera matrices obtained in step 3.6 and the constraints between the matched feature point pairs obtained in step 5.3 to solve the three-dimensional coordinates of the spatial point;
Step 6.2: applying the operation of step 6.1 to every matched point pair in a loop to complete the triangulation, obtaining the three-dimensional point cloud reconstructed from the two images as the initial structure of the sequence-image reconstruction;
Step 6.3: adding the remaining images to the initial structure one by one, i.e. finding among the remaining images the image that matches the second image, using it as the third image for reconstruction, and repeating step 6.2 to obtain the three-dimensional point cloud of the image sequence.
7. The three-dimensional point cloud acquisition method preserving topology information according to claim 6, characterized in that step 7 specifically comprises:
7.1: during the triangulation of step 6.1, whenever a three-dimensional point is solved from two two-dimensional matched points in two images, retaining the image and contour information of the two feature points in the computed three-dimensional point, thereby completing the mapping from two-dimensional topology information to three-dimensional topology information;
7.2: in the point cloud obtained in step 7.1, every three-dimensional point records which two images it comes from and its contour number on each of those images, whereby the point cloud can be classified:
7.2a: performing a first-level classification of the point cloud by the image numbers of each three-dimensional point;
7.2b: performing a second-level classification of the first-level classes by the contour numbers of each point on its images.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711178471.1A CN108010123B (en) | 2017-11-23 | 2017-11-23 | Three-dimensional point cloud obtaining method capable of retaining topology information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108010123A true CN108010123A (en) | 2018-05-08 |
CN108010123B CN108010123B (en) | 2021-02-09 |
Family
ID=62053322
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711178471.1A Active CN108010123B (en) | 2017-11-23 | 2017-11-23 | Three-dimensional point cloud obtaining method capable of retaining topology information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108010123B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN108734654A (en) * | 2018-05-28 | 2018-11-02 | 深圳市易成自动驾驶技术有限公司 | Mapping and localization method, system, and computer-readable storage medium
CN108765574A (en) * | 2018-06-19 | 2018-11-06 | 北京智明星通科技股份有限公司 | 3D scene simulation method and system and computer-readable storage medium
CN109472802A (en) * | 2018-11-26 | 2019-03-15 | 东南大学 | Surface mesh model construction method based on edge-feature self-constraint
CN109598783A (en) * | 2018-11-20 | 2019-04-09 | 西南石油大学 | Room 3D modelling method and furniture 3D pre-browsing system
CN109816771A (en) * | 2018-11-30 | 2019-05-28 | 西北大学 | Automatic reassembly method for cultural-relic fragments combining feature-point topology and geometric constraints
CN109951342A (en) * | 2019-04-02 | 2019-06-28 | 上海交通大学 | Three-dimensional matrix topological representation and routing-traversal optimization method for space information networks
CN110443785A (en) * | 2019-07-18 | 2019-11-12 | 太原师范学院 | Feature extraction method for three-dimensional point clouds under persistent homology
CN111325854A (en) * | 2018-12-17 | 2020-06-23 | 三菱重工业株式会社 | Shape model correction device, shape model correction method, and storage medium
WO2021160071A1 (en) * | 2020-02-11 | 2021-08-19 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Feature spatial distribution management for simultaneous localization and mapping
CN118154460A (en) * | 2024-05-11 | 2024-06-07 | 成都大学 | Processing method for three-dimensional point-cloud data of asphalt pavement
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
US20070183653A1 (en) * | 2006-01-31 | 2007-08-09 | Gerard Medioni | 3D Face Reconstruction from 2D Images
CN102074015A (en) * | 2011-02-24 | 2011-05-25 | 哈尔滨工业大学 | Target three-dimensional reconstruction method based on two-dimensional image sequences
CN104952075A (en) * | 2015-06-16 | 2015-09-30 | 浙江大学 | Multi-image automatic texture mapping method for laser-scanned three-dimensional models
CN106651942A (en) * | 2016-09-29 | 2017-05-10 | 苏州中科广视文化科技有限公司 | Feature-point-based three-dimensional rotation and motion detection and rotation-axis localization method
Application filed 2017-11-23 in China as CN201711178471.1A; granted as CN108010123B (status: active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070183653A1 (en) * | 2006-01-31 | 2007-08-09 | Gerard Medioni | 3D Face Reconstruction from 2D Images |
CN102074015A (en) * | 2011-02-24 | 2011-05-25 | 哈尔滨工业大学 | Target three-dimensional reconstruction method based on two-dimensional image sequences |
CN104952075A (en) * | 2015-06-16 | 2015-09-30 | 浙江大学 | Automatic multi-image texture mapping method for laser-scanned three-dimensional models |
CN106651942A (en) * | 2016-09-29 | 2017-05-10 | 苏州中科广视文化科技有限公司 | Feature-point-based three-dimensional rotational motion detection and rotation axis positioning method |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108734654A (en) * | 2018-05-28 | 2018-11-02 | 深圳市易成自动驾驶技术有限公司 | Mapping and localization method, system and computer-readable storage medium |
CN108765574A (en) * | 2018-06-19 | 2018-11-06 | 北京智明星通科技股份有限公司 | 3D scene simulation method and system and computer-readable storage medium |
CN109598783A (en) * | 2018-11-20 | 2019-04-09 | 西南石油大学 | Room 3D modeling method and furniture 3D preview system |
CN109472802A (en) * | 2018-11-26 | 2019-03-15 | 东南大学 | Surface mesh model construction method based on edge feature self-constraint |
CN109472802B (en) * | 2018-11-26 | 2021-10-19 | 东南大学 | Surface mesh model construction method based on edge feature self-constraint |
CN109816771B (en) * | 2018-11-30 | 2022-11-22 | 西北大学 | Cultural relic fragment automatic recombination method combining feature point topology and geometric constraint |
CN109816771A (en) * | 2018-11-30 | 2019-05-28 | 西北大学 | Cultural relic fragment automatic recombination method combining feature point topology and geometric constraint |
CN111325854A (en) * | 2018-12-17 | 2020-06-23 | 三菱重工业株式会社 | Shape model correction device, shape model correction method, and storage medium |
CN111325854B (en) * | 2018-12-17 | 2023-10-24 | 三菱重工业株式会社 | Shape model correction device, shape model correction method, and storage medium |
CN109951342A (en) * | 2019-04-02 | 2019-06-28 | 上海交通大学 | Three-dimensional matrix topology representation and route traversal optimization realization method of spatial information network |
CN109951342B (en) * | 2019-04-02 | 2021-05-11 | 上海交通大学 | Three-dimensional matrix topology representation and route traversal optimization realization method of spatial information network |
CN110443785A (en) * | 2019-07-18 | 2019-11-12 | 太原师范学院 | Feature extraction method for three-dimensional point clouds based on persistent homology |
WO2021160071A1 (en) * | 2020-02-11 | 2021-08-19 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Feature spatial distribution management for simultaneous localization and mapping |
CN118154460A (en) * | 2024-05-11 | 2024-06-07 | 成都大学 | Processing method of three-dimensional point cloud data of asphalt pavement |
Also Published As
Publication number | Publication date |
---|---|
CN108010123B (en) | 2021-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108010123A (en) | A kind of three-dimensional point cloud acquisition methods for retaining topology information | |
US10217293B2 (en) | Depth camera-based human-body model acquisition method and network virtual fitting system | |
CN110135455A (en) | Image matching method, device and computer readable storage medium | |
Bevan et al. | Computer vision, archaeological classification and China's terracotta warriors | |
Dall'Asta et al. | A comparison of semiglobal and local dense matching algorithms for surface reconstruction | |
CN103745498B (en) | Image-based rapid positioning method | |
CN107169475B (en) | Face three-dimensional point cloud optimization processing method based on Kinect camera | |
CN104616345B (en) | Three-dimensional voxel access method based on octree forest compression | |
CN109461180A (en) | Three-dimensional scene reconstruction method based on deep learning | |
CN106780751A (en) | Three-dimensional point cloud reconstruction method based on improved screened Poisson algorithm | |
CN104599314A (en) | Three-dimensional model reconstruction method and system | |
CN107833250B (en) | Semantic space map construction method and device | |
CN105493078B (en) | Color sketch image search | |
CN110070567B (en) | Ground laser point cloud registration method | |
CN107610131A (en) | Image cropping method and device | |
CN110490917A (en) | Three-dimensional reconstruction method and device | |
CN104050666B (en) | Segmentation-based brain MR image registration method | |
CN110349247A (en) | Indoor scene CAD 3D reconstruction method based on semantic understanding | |
CN113409384A (en) | Pose estimation method and system of target object and robot | |
CN108921895A (en) | Sensor relative pose estimation method | |
CN105574527A (en) | Quick object detection method based on local feature learning | |
CN106023147B (en) | GPU-based method and device for rapidly extracting DSM from linear-array remote sensing images | |
CN110246181A (en) | Anchor-point-based pose estimation model training method, pose estimation method and system | |
CN113838005B (en) | Intelligent identification and three-dimensional reconstruction method and system for rock mass fracture based on dimension conversion | |
CN111145338B (en) | Chair model reconstruction method and system based on single-view RGB image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||