CN114332510B - Hierarchical image matching method - Google Patents

Hierarchical image matching method

Info

Publication number
CN114332510B
CN114332510B
Authority
CN
China
Prior art keywords
feature
matching
points
image
local
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210001464.9A
Other languages
Chinese (zh)
Other versions
CN114332510A (en)
Inventor
曹明伟
赵海峰
付燕平
曹瑞芬
孙登第
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN202210001464.9A priority Critical patent/CN114332510B/en
Publication of CN114332510A publication Critical patent/CN114332510A/en
Application granted granted Critical
Publication of CN114332510B publication Critical patent/CN114332510B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a hierarchical image matching method. The method obtains feature descriptors for the feature points of a query image and a reference image, searches the local feature points of the reference image for the two candidate matching feature points most similar to each local feature point of the query image, and then screens these candidates in successive stages to obtain the optimal feature matching result. The method can rapidly compute the feature matching points between two images and thus supports a series of high-level computer vision tasks built on image matching: image-based three-dimensional reconstruction, simultaneous localization and mapping, image retrieval, map navigation, digital twins, image stitching, mixed reality, virtual reality, augmented reality, and the like.

Description

Hierarchical image matching method
Technical Field
The invention belongs to the field of computer image processing, and particularly relates to a hierarchical image matching method.
Background
Image matching is a hot research problem in the field of computer vision and has attracted wide attention from researchers worldwide. A complete image matching process includes the following steps: detecting feature points in the images, calculating the corresponding feature descriptors, matching the features, and eliminating false feature matching points. In practice, image matching precision in natural scenes is low due to factors such as the ambiguity of feature descriptors and changes in image texture, illumination, and scale, which seriously hinders the research progress and application of high-level computer vision techniques based on image matching.
Taking image-based three-dimensional reconstruction as an example, if wrong feature matching points are used as input data of the three-dimensional reconstruction system, the completeness and geometric consistency of the three-dimensional model are damaged, wrong camera pose information and a low-precision three-dimensional model are obtained, and the reconstruction process may even fail outright. Therefore, a high-precision image matching method that can generate a large number of feature matching points and eliminate the wrong ones among them is highly desirable, so as to improve the performance of high-level computer vision application systems based on image matching. Current related research has the following problems:
(1) Feature matching between two images takes several seconds, which makes it difficult to meet the requirements of real-time applications;
(2) Either the result of feature matching depends on the chosen spatial clustering method, or the method is only suitable for feature matching of wide-baseline images; that is, the methods lack universality;
(3) The false feature matching points caused by factors such as descriptor ambiguity and variations in image scale, texture, and illumination cannot all be eliminated at the same time.
Disclosure of Invention
The invention aims to: overcome the defects in the prior art by providing a hierarchical image matching method that searches the local feature point set of a reference image for an accurate matching point for each local feature point in a query image; the accurately computed feature matching points then make a series of high-level computer vision applications based on image matching possible.
The technical scheme is as follows: the hierarchical image matching method of the invention involves feature point detection, feature matching, and false-match elimination, and specifically includes the following steps:
Step S1: for the two input images, namely the query image I_l and the reference image I_r, use a local feature point detection method and a local feature descriptor calculation method to compute the local feature points and corresponding feature descriptors contained in I_l and I_r respectively;
wherein the local feature points detected in the query image I_l are K_l = {k(x_i, y_i)}, with corresponding feature descriptors D_l = {d(k(x_i, y_i))}, i ∈ [1, M], where M is the number of local feature points detected in I_l;
the local feature points detected in the reference image I_r are K_r = {k(x'_j, y'_j)}, with corresponding feature descriptors D_r = {d(k(x'_j, y'_j))}, j ∈ [1, N], where N is the number of local feature points detected in I_r;
Step S2: from the local feature points contained in the reference image, search for the two candidate matching feature points most similar to each local feature point k(x_i, y_i) in the query image, namely k(x'_j, y'_j) and k(x'_{j+1}, y'_{j+1});
Step S3: compute the distance-difference ratio ρ between the two candidate matching feature points k(x'_j, y'_j) and k(x'_{j+1}, y'_{j+1}) with respect to the feature point k(x_i, y_i), and determine the initial feature matching result Matches_1 according to ρ;
Step S4: for the initial feature matching result Matches_1, use a cross-validation method to eliminate false feature matches caused by the ambiguity of feature descriptors, thereby obtaining the feature matching result Matches_2;
Step S5: for the feature matching result Matches_2, eliminate all false feature matching points that violate geometric consistency, obtaining the accurate feature matching result Matches_3;
Step S6: for the feature matching result Matches_3, use a clustering-based method to eliminate the false feature matching points caused by noise data, thereby obtaining the final feature matching result Matches_Final.
Further, the specific method for calculating the local feature points and corresponding feature descriptors in step S1 is as follows:
1) Pre-training of feature points: two-dimensional images are obtained by constructing a number of three-dimensional objects and capturing views of them from set viewpoints; since the true positions of all feature points in the resulting two-dimensional images are known, these images can be used for network training;
2) Self-labeling of feature points: ImageNet is adopted as the training and test data set for this part; a basic feature point detection network model is trained on the synthetic scenes, and this model is then used to extract feature points on the ImageNet data set, i.e., the feature points are self-labeled;
3) Joint training: the images used in the previous step are geometrically transformed to obtain image pairs; the two images of each pair are fed into the network to extract feature points and descriptors, and joint training yields deep-learning-based local features, enabling local feature point detection and feature descriptor calculation.
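The learned detector and descriptor of step S1 are not given at code level in the patent. The toy Python sketch below only illustrates the detect-and-describe interface that the later steps assume, substituting a crude gradient-product corner score and L2-normalized intensity patches for the trained network; the function name, threshold, and patch size are illustrative assumptions, not the patent's.

```python
import math

def detect_and_describe(img, patch=3, thresh=1.0):
    """Toy stand-in for the learned detector/descriptor of step S1.
    Keypoints = local maxima of a simple gradient-product corner score;
    descriptor = L2-normalised flattened intensity patch around the point."""
    H, W = len(img), len(img[0])

    def score(y, x):
        dy = img[y + 1][x] - img[y - 1][x]
        dx = img[y][x + 1] - img[y][x - 1]
        return (dx * dx) * (dy * dy)  # crude corner response

    kps, descs = [], []
    r = patch // 2
    for y in range(r + 1, H - r - 1):
        for x in range(r + 1, W - r - 1):
            s = score(y, x)
            if s < thresh:
                continue
            # 8-neighbour non-maximum suppression
            if all(s >= score(y + dy, x + dx)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0)):
                vec = [float(img[y + dy][x + dx])
                       for dy in range(-r, r + 1)
                       for dx in range(-r, r + 1)]
                n = math.sqrt(sum(v * v for v in vec)) or 1.0
                kps.append((x, y))              # k(x_i, y_i)
                descs.append([v / n for v in vec])  # d(k(x_i, y_i))
    return kps, descs
```

Running this on both images would yield the sets K_l, D_l and K_r, D_r of step S1; in the patent itself these come from the jointly trained network, not from a hand-crafted score.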
Further, in step S2, a hierarchical locality-sensitive hashing method is used to find k(x'_j, y'_j) and k(x'_{j+1}, y'_{j+1}); the specific process is as follows:
1) Constructing the hash index
Let G = {g : S → U} denote a family of functions, where S is the input domain and U is the output domain, and g(v) = (h_1(v), ..., h_k(v)) with h_i ∈ H; select L hash functions g_1, ..., g_L randomly and independently from the family G. For any point v in the data set, store it in the bucket g_i(v), where i = 1, ..., L.
2) Hash search
For a query point q = k(x_i, y_i) and a given distance threshold r, take all points v_1, ..., v_n out of the buckets g_1(q), ..., g_L(q) as candidate approximate nearest neighbors; for any v_j, if D(q, v_j) ≤ r, return v_j, where D(·) is a similarity metric function. Through this process, the two best candidate matching points are found.
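The hash family H is not specified in the patent; the Python sketch below uses random-hyperplane (sign) hashes purely as an illustrative assumption, but follows the bucket structure described above: L tables, each keyed by g(v) = (h_1(v), ..., h_k(v)), queried with a distance threshold r. All class and parameter names are our own.

```python
import random

class LSHIndex:
    """Sketch of the locality-sensitive hash search of step S2."""

    def __init__(self, dim, k=4, L=4, seed=0):
        rng = random.Random(seed)
        # L hash functions g_1..g_L, each concatenating k hyperplane tests h_i
        self.tables = []
        for _ in range(L):
            planes = [[rng.gauss(0.0, 1.0) for _ in range(dim)]
                      for _ in range(k)]
            self.tables.append((planes, {}))

    @staticmethod
    def _g(planes, v):
        # g(v) = (h_1(v), ..., h_k(v)) as a tuple of sign bits
        return tuple(int(sum(p_i * v_i for p_i, v_i in zip(p, v)) >= 0.0)
                     for p in planes)

    def add(self, idx, v):
        # store point v in bucket g_i(v) of every table i = 1..L
        for planes, buckets in self.tables:
            buckets.setdefault(self._g(planes, v), []).append((idx, v))

    def query(self, q, r):
        # candidates from buckets g_1(q)..g_L(q); keep those with D(q, v) <= r
        out = {}
        for planes, buckets in self.tables:
            for idx, v in buckets.get(self._g(planes, q), []):
                d = sum((a - b) ** 2 for a, b in zip(q, v)) ** 0.5
                if d <= r:
                    out[idx] = d
        return sorted(out.items(), key=lambda t: t[1])  # nearest first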
Further, the calculation method of the difference ratio ρ between the feature point k(x_i, y_i) and its two candidate matching local feature points k(x'_j, y'_j) and k(x'_{j+1}, y'_{j+1}) is as follows:
ρ = ||d(k(x_i, y_i)) − d(k(x'_j, y'_j))|| / ||d(k(x_i, y_i)) − d(k(x'_{j+1}, y'_{j+1}))||
If ρ is smaller than a preset threshold, then k(x'_j, y'_j) is taken as the correct matching point of the local feature point k(x_i, y_i), giving the initial feature matching result Matches_1:
Matches_1 = {<k(x_i, y_i), k(x'_j, y'_j)> | i ∈ [1, M], j ∈ [1, N]};
where d(k(x_i, y_i)) is the feature descriptor of the local feature point k(x_i, y_i); d(k(x'_j, y'_j)) is the feature descriptor of the candidate matching local feature point k(x'_j, y'_j); and d(k(x'_{j+1}, y'_{j+1})) is the feature descriptor of the candidate matching local feature point k(x'_{j+1}, y'_{j+1}).
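A minimal Python sketch of this distance-ratio test follows. The patent does not state the threshold value; the conventional 0.8 is assumed here, and the brute-force nearest-neighbour search stands in for the hash search of step S2.

```python
def ratio_test(query_descs, ref_descs, ratio=0.8):
    """Step S3 distance-ratio test: keep (i, j1) only when the nearest
    reference descriptor is clearly closer than the second nearest.
    The 0.8 threshold is an assumed conventional default, not the patent's.
    Requires at least two reference descriptors."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    matches = []
    for i, q in enumerate(query_descs):
        ranked = sorted(range(len(ref_descs)),
                        key=lambda j: dist(q, ref_descs[j]))
        j1, j2 = ranked[0], ranked[1]
        if dist(q, ref_descs[j1]) < ratio * dist(q, ref_descs[j2]):
            matches.append((i, j1))   # <k(x_i, y_i), k(x'_j, y'_j)>
    return matches
```

The returned index pairs correspond to the initial result Matches_1.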
Further, in step S4, a cross-validation method is adopted to eliminate false feature matching points; the specific method is as follows:
First, the query feature point k(x_i, y_i) is matched against the reference feature points to obtain k(x'_j, y'_j); then, in the reverse direction, the reference feature point k(x'_j, y'_j) is matched against the query feature points. If the two matching results are inconsistent, <k(x_i, y_i), k(x'_j, y'_j)> is deleted from Matches_1, thereby obtaining the more accurate feature matching result Matches_2:
Matches_2 = {<k(x'_jj, y'_jj), k(x_ii, y_ii)> | ii ∈ [1, M], jj ∈ [1, N]}.
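The cross-validation of step S4 reduces to a mutual-consistency check over the forward (query to reference) and reverse (reference to query) match lists. A minimal Python sketch, with the index-pair representation being our own assumption:

```python
def cross_check(matches_qr, matches_rq):
    """Step S4 cross-validation: keep a forward match (i, j) only if the
    reverse matching also pairs reference point j with query point i."""
    reverse = {j: i for j, i in matches_rq}
    return [(i, j) for i, j in matches_qr if reverse.get(j) == i]
```

Feeding in Matches_1 and its reverse-direction counterpart yields the consistent subset Matches_2.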
Further, in step S5, the geometric constraint method eliminates from Matches_2 the false feature matching points that do not satisfy geometric consistency; the specific method is as follows:
First, compute the homography matrix H between the query feature points k(x_i, y_i) and the reference feature points k(x'_j, y'_j); then compute the distance τ between the query feature point k(x_i, y_i) and H·k(x'_j, y'_j). If τ < 0.5, then <k(x_i, y_i), k(x'_j, y'_j)> is considered a pair of correct feature matching points; otherwise the matching point is wrong and should be removed from Matches_2, thereby obtaining the accurate feature matching result Matches_3.
Further, the detailed method of step S6 is as follows:
S6.1: cluster the reference feature points in Matches_3 using a hierarchical clustering method and find the center point C_1;
S6.2: compute the circumscribed rectangle Rect_1 of the reference feature points, centered on C_1;
S6.3: according to the homography transformation, determine the region Rect'_1 in the query image I_l corresponding to Rect_1;
S6.4: delete from Matches_3 the matching points lying outside the region Rect'_1, obtaining the final feature matching result Matches_Final.
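A rough Python sketch of steps S6.1 to S6.4 follows, with one simplifying assumption: all matched reference points are treated as a single cluster, whereas the patent first applies hierarchical clustering to find the centre C_1. H again maps reference coordinates to query coordinates, and all names are illustrative.

```python
def cluster_region_filter(matches, q_pts, r_pts, H, margin=0.0):
    """Steps S6.2-S6.4 sketch: bounding rectangle Rect_1 of the matched
    reference points is mapped into the query image with homography H
    (reference -> query) to give Rect'_1; matches whose query point falls
    outside the axis-aligned bounds of Rect'_1 are deleted."""
    def project(p):
        x, y = p
        u = H[0][0] * x + H[0][1] * y + H[0][2]
        v = H[1][0] * x + H[1][1] * y + H[1][2]
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        return (u / w, v / w)

    ref = [r_pts[j] for _, j in matches]
    xs = [p[0] for p in ref]
    ys = [p[1] for p in ref]
    # corners of Rect_1 mapped into the query image -> Rect'_1
    corners = [project(c) for c in [(min(xs), min(ys)), (min(xs), max(ys)),
                                    (max(xs), min(ys)), (max(xs), max(ys))]]
    cx = [c[0] for c in corners]
    cy = [c[1] for c in corners]
    lo_x, hi_x = min(cx) - margin, max(cx) + margin
    lo_y, hi_y = min(cy) - margin, max(cy) + margin
    return [(i, j) for i, j in matches
            if lo_x <= q_pts[i][0] <= hi_x and lo_y <= q_pts[i][1] <= hi_y]
```

Applied to Matches_3 this yields Matches_Final; in the patent the hierarchical clustering step would restrict the rectangle to the dominant cluster of reference points first.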
Beneficial effects: the hierarchical image matching method provided by the invention simultaneously considers the various factors that affect local feature matching precision (such as the ambiguity of feature descriptors and changes in image illumination, texture, and scale), and adopts a hierarchical error-elimination strategy to remove wrong feature matching points stage by stage. The method is fast and accurate and can be applied in high-level computer vision application systems such as three-dimensional reconstruction, image retrieval, simultaneous localization and mapping, and digital twins.
Drawings
FIG. 1 is a schematic diagram of the overall process flow of the present invention;
FIG. 2 is a schematic diagram of an image processing flow according to an embodiment of the invention;
FIG. 3 is the input image pair in an embodiment of the present invention, where fig. 3(a) is the query image and fig. 3(b) is the reference image;
FIG. 4 shows the final output result of an embodiment of the present invention.
Detailed Description
The technical scheme of the present invention is described in detail below, but the scope of the present invention is not limited to the embodiments.
As shown in fig. 1, the hierarchical image matching method of the present invention includes the following steps:
Step S1: for the given input images, namely the query image I_l and the reference image I_r, use a local feature detection method to compute the local feature points and corresponding feature descriptors contained in each;
wherein the local feature points in the query image I_l are K_l = {k(x_i, y_i)}, with corresponding feature descriptors D_l = {d(k(x_i, y_i))}, i ∈ [1, M];
the local feature points in the reference image I_r are K_r = {k(x'_j, y'_j)}, with corresponding feature descriptors D_r = {d(k(x'_j, y'_j))}, j ∈ [1, N];
M and N denote the numbers of local feature points in the query image and the reference image respectively;
Step S2: from the local feature points contained in the reference image, search for the two candidate matching feature points k(x'_j, y'_j) and k(x'_{j+1}, y'_{j+1}) most similar to each local feature point k(x_i, y_i) in the query image;
Step S3: compute the distance-difference ratio ρ between the two most similar candidate matching feature points, and determine the initial feature matching result Matches_1 according to this ratio;
Step S4: for the initial feature matching result Matches_1, eliminate all false feature matches caused by the ambiguity of feature descriptors, obtaining the accurate feature matching result Matches_2;
Step S5: for the feature matching result Matches_2, eliminate all false feature matching points that violate geometric consistency, obtaining the accurate feature matching result Matches_3;
Step S6: for the feature matching result Matches_3, eliminate all false feature matching points caused by the influence of noise data, thereby obtaining the final feature matching result Matches_Final.
Example 1:
As shown in fig. 2, step (a) uses a ratio (scale) test to eliminate mismatching points caused by descriptor ambiguity; step (b) eliminates asymmetric feature matching points using a cross-validation method; step (c) eliminates feature matching points that do not satisfy a specific geometric model using geometric constraint methods (such as homography constraints, fundamental matrix constraints, and epipolar constraints); step (d) eliminates mismatching points caused by noise data using statistical optimization and feature clustering methods.
The specific steps of the embodiment are as follows:
step1: the user inputs two images, namely query image I l And reference image I r . The input image may contain certain occlusion, blurring, specific shapes and objects, and color differences; and does not require any preprocessing operation of the input image by the user.
Step2: respectively detecting inquiry images I l And reference image I r And calculating corresponding feature descriptors.
Query image I l The query feature point in the method is K l ={k(x i ,y i )|i∈[1,M]Corresponding feature descriptor D l ={d(k(x i ,y i ))|i∈[1,M]}。
Reference image I r The reference feature point in (a) is K r ={k(x′ j ,y′ j )|j∈[1,N]Corresponding feature descriptor D r ={d(k(x′ j ,y′ j ))|j∈[1,N]}。
Wherein M and N respectively represent the number of feature points in the query image and the reference image; d (k (x) i ,y i ) Represents feature point k (x) i ,y i ) Is a feature descriptor of (1); d (k (x)' j ,y′ j ) Represents feature point k (x' j ,y′ j ) Is a feature descriptor of (1).
Step 3: for the query feature point k(x_i, y_i), find the two candidate matching points k(x'_j, y'_j) and k(x'_{j+1}, y'_{j+1}), and compute the difference ratio between the two candidate matching points:
ρ = ||d(k(x_i, y_i)) − d(k(x'_j, y'_j))|| / ||d(k(x_i, y_i)) − d(k(x'_{j+1}, y'_{j+1}))|| (1)
where d(k(x_i, y_i)) is the feature descriptor of the local feature point k(x_i, y_i); d(k(x'_j, y'_j)) and d(k(x'_{j+1}, y'_{j+1})) are the feature descriptors of the candidate matching points k(x'_j, y'_j) and k(x'_{j+1}, y'_{j+1}) respectively. If ρ is smaller than a preset threshold, the feature point k(x'_j, y'_j) is considered the correct matching point of the feature point k(x_i, y_i); the initial feature matching result Matches_1 is thus obtained:
Matches_1 = {<k(x_i, y_i), k(x'_j, y'_j)> | i ∈ [1, M], j ∈ [1, N]} (2)
Step 4: take each feature point k(x'_jj, y'_jj) of the reference image I_r as a query feature point and find, in the query image I_l, its two candidate matching points k(x_ii, y_ii) and k(x_{ii+1}, y_{ii+1}); according to formula (1), the reverse feature matching result can be computed:
Matches_2 = {<k(x'_jj, y'_jj), k(x_ii, y_ii)> | ii ∈ [1, M], jj ∈ [1, N]} (3)
Merging Matches_1 and Matches_2: when i = ii and j = jj, <k(x_i, y_i), k(x'_j, y'_j)> is considered a pair of correct matching points; otherwise the wrong matching points are deleted, yielding the feature matching result:
Matches_3 = {<k(x_p, y_p), k(x'_q, y'_q)> | p ∈ [1, M], q ∈ [1, N]}; (4)
Through this cross-validation, the retained results satisfy consistency.
Step 5: according to Matches_3, compute the homography matrix H_lr between the query image I_l and the reference image I_r from the matched point pairs.
Let k(x'_q, y'_q, 1) denote the homogeneous coordinates of k(x'_q, y'_q); the corresponding point of the feature point k(x'_q, y'_q) in the query image I_l can then be computed as
k(x'_p, y'_p, 1) = H_lr · k(x'_q, y'_q, 1) (6)
If the distance between the projected point k(x'_p, y'_p) and the feature point k(x_p, y_p) is less than 0.5, then <k(x_p, y_p), k(x'_q, y'_q)> is a pair of correct feature matching points; otherwise the pair is deleted from Matches_3. The feature matching result below is thus obtained:
Matches_4 = {<k(x_g, y_g), k(x'_s, y'_s)> | g ∈ [1, M], s ∈ [1, N]} (7)
where <k(x_g, y_g), k(x'_s, y'_s)> is a pair of feature matching results, k(x_g, y_g) is the query feature point, and k(x'_s, y'_s) is the reference feature point.
Step 6: in practice, the solution of the homography matrix H_lr is affected by noise data, so the feature matching result Matches_4 may still contain wrong matching points. For these, either a statistics-based or a clustering-based error elimination method can be used to remove them from Matches_4, thereby obtaining a fully correct feature matching result:
Matches_Final = {<k(x_v, y_v), k(x'_c, y'_c)> | v ∈ [1, M], c ∈ [1, N]} (8)
where M and N are the numbers of feature points in the query image I_l and the reference image I_r respectively.
In fig. 2, the following operations are performed in sequence: first, initial matching of the feature points is carried out; then the ratio test, cross-validation, geometric constraint, and statistics-and-clustering methods are applied in turn to eliminate wrong feature matches layer by layer, thereby obtaining the correct feature matching result.
In the above embodiment, the input images are shown in fig. 3: fig. 3(a) is the query image and fig. 3(b) is the reference image; no preprocessing of the input images is required when computing the image matching.
The final matching result output by this embodiment is shown in fig. 4, in which each green line represents a correct feature match; the two ends of a line mark the positions of the matched feature points in the two images.
This embodiment analyzes the various factors that cause feature matching errors, provides a hierarchical feature matching method, and eliminates wrong feature matching points layer by layer, laying a foundation for improving the performance of high-level computer vision application systems based on image matching. The corresponding feature matching points can be computed quickly and efficiently, supporting a series of high-level computer vision application systems based on image matching.

Claims (6)

1. A hierarchical image matching method, characterized in that the method comprises the following steps:
Step S1: for the two input images, namely the query image I_l and the reference image I_r, use a local feature point detection method and a local feature descriptor calculation method to compute the local feature points and corresponding feature descriptors contained in I_l and I_r respectively;
wherein the local feature points detected in the query image I_l are K_l = {k(x_i, y_i)}, with corresponding feature descriptors D_l = {d(k(x_i, y_i))}, i ∈ [1, M], M being the number of local feature points detected in I_l;
the local feature points detected in the reference image I_r are K_r = {k(x'_j, y'_j)}, with corresponding feature descriptors D_r = {d(k(x'_j, y'_j))}, j ∈ [1, N], N being the number of local feature points detected in I_r;
Step S2: from the local feature points contained in the reference image, search for the two candidate matching feature points most similar to each local feature point k(x_i, y_i) in the query image, namely k(x'_j, y'_j) and k(x'_{j+1}, y'_{j+1});
Step S3: compute the distance-difference ratio ρ between the two candidate matching feature points k(x'_j, y'_j) and k(x'_{j+1}, y'_{j+1}) with respect to the feature point k(x_i, y_i), and determine the initial feature matching result Matches_1 according to ρ; the difference ratio ρ between the feature point k(x_i, y_i) and its two candidate matching local feature points is calculated as
ρ = ||d(k(x_i, y_i)) − d(k(x'_j, y'_j))|| / ||d(k(x_i, y_i)) − d(k(x'_{j+1}, y'_{j+1}))||;
if ρ is smaller than a preset threshold, then k(x'_j, y'_j) is the correct matching point of the local feature point k(x_i, y_i), giving the initial feature matching result Matches_1;
wherein d(k(x_i, y_i)), d(k(x'_j, y'_j)), and d(k(x'_{j+1}, y'_{j+1})) are the feature descriptors of the local feature points k(x_i, y_i), k(x'_j, y'_j), and k(x'_{j+1}, y'_{j+1}) respectively;
Step S4: for the initial feature matching result Matches_1, use a cross-validation method to eliminate false feature matches caused by the ambiguity of feature descriptors, thereby obtaining the feature matching result Matches_2;
Step S5: for the feature matching result Matches_2, eliminate all false feature matching points that violate geometric consistency, obtaining the feature matching result Matches_3;
Step S6: for the feature matching result Matches_3, use a clustering-based method to eliminate the false feature matching points caused by the influence of noise data, thereby obtaining the final feature matching result Matches_Final.
2. The hierarchical image matching method according to claim 1, characterized in that the specific method of step S1, which calculates the local feature points and corresponding feature descriptors with a deep-learning-based local feature detection method, is as follows:
Step S1.1, pre-training of local feature points: two-dimensional images are obtained by constructing corresponding three-dimensional objects and capturing views of them from set viewpoints, and the known positions of all local feature points in these two-dimensional images are used for network training;
Step S1.2, self-labeling of feature points: ImageNet is adopted as the training and test data set; a basic feature point detection network model is trained on the synthetic scenes, and this model is used to extract feature points on the ImageNet data set, i.e., the feature points are self-labeled;
Step S1.3, joint training: the images used in the previous step are geometrically transformed to obtain image pairs; the corresponding image pairs are fed into the basic feature point detection network to extract feature points and descriptors, and joint training yields deep-learning-based local features for detecting local feature points and calculating feature descriptors.
3. The hierarchical image matching method according to claim 1, characterized in that step S2 uses a hierarchical locality-sensitive hashing method to find k(x'_j, y'_j) and k(x'_{j+1}, y'_{j+1}); the specific process is as follows:
S2.1, constructing the hash index:
let G = {g : S → U} denote a family of functions, where S is the input domain and U is the output domain; g(v) = (h_1(v), ..., h_k(v)), h_i ∈ H, where g_1, ..., g_L are L hash functions selected randomly and independently from the family G; for any point v in the data set, store it in the bucket g_i(v), where i = 1, ..., L;
S2.2, the hash search procedure:
for a query point q = k(x_i, y_i) and a given distance threshold r, take all points v_1, ..., v_n out of the buckets g_1(q), ..., g_L(q) as candidate approximate nearest neighbors; for any v_j, if D(q, v_j) ≤ r, return v_j, where D(·) is the similarity metric function and q is the query point;
the two best candidate matching points are found through the above process.
4. The hierarchical image matching method according to claim 1, characterized in that in step S4 a cross-validation method is adopted to eliminate false feature matching points, as follows:
first, the feature point k(x_i, y_i) of the query image is matched against the feature points of the reference image to obtain k(x'_j, y'_j); then, in the reverse direction, the reference feature point k(x'_j, y'_j) is matched against the query feature points; if the two matching results are inconsistent, <k(x_i, y_i), k(x'_j, y'_j)> is deleted from Matches_1, thereby obtaining the more accurate feature matching result Matches_2.
5. The hierarchical image matching method according to claim 1, characterized in that in step S5 a geometric constraint method is adopted to eliminate from Matches_2 the false feature matching points that do not satisfy geometric consistency, as follows:
first, compute the homography matrix H between the query feature points k(x_i, y_i) and the reference feature points k(x'_j, y'_j); then compute the distance τ between the query feature point k(x_i, y_i) and H·k(x'_j, y'_j); if τ < 0.5, then <k(x_i, y_i), k(x'_j, y'_j)> is considered a pair of correct feature matching points; otherwise the matching point is wrong and is removed from Matches_2, thereby obtaining the accurate feature matching result Matches_3;
wherein H·k(x'_j, y'_j) denotes the position of the feature point k(x'_j, y'_j) projected into the query image.
6. The hierarchical image matching method according to claim 1, characterized in that step S6 adopts a clustering-based feature matching error elimination method to eliminate false feature matching points layer by layer, as follows:
S6.1, cluster the reference feature points in Matches_3 using a hierarchical clustering method and find the center point C_1;
S6.2, compute the circumscribed rectangle Rect_1 of the reference feature points, centered on C_1;
S6.3, according to the homography transformation, determine the region Rect'_1 in the query image I_l corresponding to Rect_1;
S6.4, delete from Matches_3 the matching points lying outside the region Rect'_1 to obtain the final feature matching result Matches_Final.
CN202210001464.9A 2022-01-04 2022-01-04 Hierarchical image matching method Active CN114332510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210001464.9A CN114332510B (en) 2022-01-04 2022-01-04 Hierarchical image matching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210001464.9A CN114332510B (en) 2022-01-04 2022-01-04 Hierarchical image matching method

Publications (2)

Publication Number Publication Date
CN114332510A CN114332510A (en) 2022-04-12
CN114332510B true CN114332510B (en) 2024-03-22

Family

ID=81022371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210001464.9A Active CN114332510B (en) 2022-01-04 2022-01-04 Hierarchical image matching method

Country Status (1)

Country Link
CN (1) CN114332510B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109852B (en) * 2023-04-13 2023-06-20 安徽大学 Quick and high-precision image feature matching error elimination method
CN116342826B (en) * 2023-05-25 2023-10-10 上海维智卓新信息科技有限公司 AR map construction method and device
CN116385665A (en) * 2023-06-02 2023-07-04 合肥吉麦智能装备有限公司 Multi-view X-ray image three-dimensional reconstruction method for dual-mode G-arm X-ray machine
CN116543187A (en) * 2023-07-04 2023-08-04 合肥吉麦智能装备有限公司 Image matching method for dual-mode G-type arm X-ray machine
CN117150698B (en) * 2023-11-01 2024-02-23 广东新禾道信息科技有限公司 Digital twinning-based smart city grid object construction method and system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104615642A (en) * 2014-12-17 2015-05-13 吉林大学 Space verification wrong matching detection method based on local neighborhood constrains

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4115145A4 (en) * 2020-03-05 2023-08-02 Magic Leap, Inc. Systems and methods for depth estimation by learning triangulation and densification of sparse points for multi-view stereo

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104615642A (en) * 2014-12-17 2015-05-13 吉林大学 Space verification wrong matching detection method based on local neighborhood constrains

Also Published As

Publication number Publication date
CN114332510A (en) 2022-04-12

Similar Documents

Publication Publication Date Title
CN114332510B (en) Hierarchical image matching method
Zhang et al. Reference pose generation for long-term visual localization via learned features and view synthesis
Feng et al. 2d3d-matchnet: Learning to match keypoints across 2d image and 3d point cloud
Pham et al. Lcd: Learned cross-domain descriptors for 2d-3d matching
CN109147038A (en) Pipeline three-dimensional modeling method based on three-dimensional point cloud processing
CN108921895B (en) Sensor relative pose estimation method
Torii et al. Are large-scale 3d models really necessary for accurate visual localization?
Cui et al. Efficient large-scale structure from motion by fusing auxiliary imaging information
CN109523582B (en) Point cloud coarse registration method considering normal vector and multi-scale sparse features
CN108830888B (en) Coarse matching method based on improved multi-scale covariance matrix characteristic descriptor
CN110111375B (en) Image matching gross error elimination method and device under Delaunay triangulation network constraint
CN111739079B (en) Multisource low-altitude stereopair fast matching method based on semantic features
Song et al. Fast estimation of relative poses for 6-dof image localization
CN113838005B (en) Intelligent identification and three-dimensional reconstruction method and system for rock mass fracture based on dimension conversion
Hu et al. Efficient and automatic plane detection approach for 3-D rock mass point clouds
CN103927785A (en) Feature point matching method for close-range shot stereoscopic image
CN112184783A (en) Three-dimensional point cloud registration method combined with image information
CN113393524A (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN115147599A (en) Object six-degree-of-freedom pose estimation method for multi-geometric feature learning of occlusion and truncation scenes
CN112001954B (en) Underwater PCA-SIFT image matching method based on polar curve constraint
Ye et al. Ec-sfm: Efficient covisibility-based structure-from-motion for both sequential and unordered images
Zheng et al. The augmented homogeneous coordinates matrix-based projective mismatch removal for partial-duplicate image search
Chater et al. Robust Harris detector corresponding and calculates the projection error using the modification of the weighting function
Abdel-Wahab et al. Efficient reconstruction of large unordered image datasets for high accuracy photogrammetric applications
Budianti et al. Background blurring and removal for 3d modelling of cultural heritage objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant