CN114332510A - Hierarchical image matching method - Google Patents


Info

Publication number
CN114332510A
CN114332510A
Authority
CN
China
Prior art keywords
feature
matching
points
point
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210001464.9A
Other languages
Chinese (zh)
Other versions
CN114332510B (en)
Inventor
曹明伟
赵海峰
付燕平
曹瑞芬
孙登第
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University
Priority to CN202210001464.9A
Publication of CN114332510A
Application granted
Publication of CN114332510B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a hierarchical image matching method, which comprises: obtaining the feature descriptors of a query image and of a reference image, searching the local feature points of the reference image for the two most similar candidate matching feature points of each local feature point in the query image, and then screening out the best feature matching results in sequence. The invention can quickly compute the feature matching points between two images, which can then be applied to a series of high-level computer vision tasks based on image matching: image-based three-dimensional reconstruction, simultaneous localization and mapping, image retrieval, map navigation, digital twins, image stitching, mixed reality, virtual reality, augmented reality, and the like.

Description

Hierarchical image matching method
Technical Field
The invention belongs to the field of computer image processing, and particularly relates to a hierarchical image matching method.
Background
Image matching is a hot research problem in the field of computer vision and has received wide attention from researchers. A complete image matching procedure comprises the following steps: detecting the feature points in the images, computing the corresponding feature descriptors, matching the features, and eliminating erroneous feature matching points. In practice, owing to factors such as the ambiguity of feature descriptors and the texture, illumination, and scale changes of images, image matching accuracy in natural scenes is low, which seriously hinders the research and application of high-level computer vision techniques based on image matching.
Taking image-based three-dimensional reconstruction as an example: if incorrect feature matching points are used as input data for a three-dimensional reconstruction system, the completeness and geometric consistency of the three-dimensional model may be damaged, yielding incorrect camera pose information and a low-precision three-dimensional model, and the reconstruction may even fail. It is therefore desirable to provide a high-precision image matching method that can generate a large number of feature matching points while eliminating erroneous ones, so as to improve the performance of high-level computer vision application systems based on image matching. Existing research has the following problems:
(1) several seconds are needed to compute the feature matches between two images, which makes it difficult to meet the demands of real-time applications;
(2) either the feature matching result depends on the chosen spatial clustering method, or the method is only suitable for wide-baseline images, i.e. it lacks universality;
(3) the erroneous feature matching points caused by many factors, such as descriptor ambiguity and variations in image scale, texture, and illumination, cannot all be eliminated simultaneously.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to overcome the defects of the prior art and provides a hierarchical image matching method that finds an accurate matching point for each local feature point of the query image from the local feature point set of the reference image; the accurate feature matching points thus computed make a series of high-level computer vision applications based on image matching possible.
The technical scheme is as follows: the hierarchical image matching method of the invention involves detecting the feature points of images, matching the features, and eliminating erroneous matches, and specifically comprises the following steps:
step S1: for the two input images, namely the query image I_l and the reference image I_r, use a local feature point detection method and a local feature descriptor calculation method to compute the local feature points and corresponding feature descriptors contained in I_l and I_r respectively;
wherein the local feature points detected in the query image I_l are K_l = {k(x_i, y_i)}, with corresponding feature descriptors D_l = {d(k(x_i, y_i))}; i ∈ [1, M], where M denotes the number of local feature points detected in the query image I_l;
the local feature points detected in the reference image I_r are K_r = {k(x'_j, y'_j)}, with corresponding feature descriptors D_r = {d(k(x'_j, y'_j))}; j ∈ [1, N], where N denotes the number of local feature points detected in the reference image I_r;
step S2: from the local feature points contained in the reference image, find for each local feature point k(x_i, y_i) in the query image the two most similar candidate matching feature points, namely k(x'_j, y'_j) and k(x'_(j+1), y'_(j+1));
step S3: compute the distance difference ratio ρ between the two candidate matching feature points k(x'_j, y'_j), k(x'_(j+1), y'_(j+1)) and the feature point k(x_i, y_i), and then determine the initial feature matching result Matches_1 according to ρ;
step S4: for the initial feature matching result Matches_1, use cross-validation to eliminate the erroneous feature matches caused by the ambiguity of the feature descriptors, obtaining the feature matching result Matches_2;
step S5: for the feature matching result Matches_2, eliminate all erroneous feature matching points that do not satisfy geometric consistency, obtaining the accurate feature matching result Matches_3;
step S6: for the feature matching result Matches_3, use a clustering-based method to eliminate the erroneous feature matching points caused by noisy data, thereby obtaining the final feature matching result Matches_Final.
Further, the specific method for computing the local feature points and corresponding feature descriptors in step S1 is as follows:
1) pre-train on feature points: construct three-dimensional objects and capture views of them to obtain two-dimensional images; since the true positions of all feature points in these images are known, they can be used to train the network;
2) self-label feature points: adopt ImageNet as the training and test data set for this part; train on the synthetic scenes to obtain a basic feature point detection network model, then use this model to extract feature points on the ImageNet data set, obtaining feature point self-labels;
3) joint training: apply geometric transformations to the pictures used in the previous step to obtain a number of picture pairs; input the two pictures of each pair into the network, extract feature points and descriptors, and perform joint training to obtain deep-learning-based local features, which are then used to detect local feature points and compute feature descriptors.
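For illustration, the idea of stage 1 above (synthetic training data whose feature-point ground truth is known exactly) can be sketched as follows. This is a toy Python/NumPy example; the rectangle scene and the function name are illustrative assumptions, not the patent's actual rendering pipeline:

```python
import numpy as np

def synthetic_corner_sample(size=64, seed=0):
    """Render one axis-aligned bright rectangle on a dark background.

    Because the rectangle is drawn analytically, the true feature-point
    (corner) locations are known exactly and can serve as training labels.
    """
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size), dtype=np.float32)
    x0, y0 = rng.integers(5, size // 2, size=2)
    w, h = rng.integers(10, 20, size=2)
    x1, y1 = x0 + w, y0 + h
    img[y0:y1, x0:x1] = 1.0
    # Ground-truth corner coordinates (x, y) of the rectangle.
    corners = [(x0, y0), (x1 - 1, y0), (x0, y1 - 1), (x1 - 1, y1 - 1)]
    return img, corners
```

A real detector would be trained on many such samples before the self-labelling and joint-training stages.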
Further, in step S2 the two candidates k(x'_j, y'_j) and k(x'_(j+1), y'_(j+1)) are computed using a hierarchical locality-sensitive hashing method; the specific process is as follows:
1) build the hash index
Let G = {g : S → U} denote a family of functions, where S is the input domain and U is the output domain, and g(v) = (h_1(v), …, h_k(v)), h_i ∈ H; randomly and independently select L hash functions g_1, …, g_L from the family G; for any point v in the data set, store it in bucket g_i(v), where i = 1, …, L;
2) hash search
For a query point q = k(x_i, y_i) and a given distance threshold r, take all points v_1, …, v_n out of the buckets g_1(q), …, g_L(q) as candidate approximate nearest neighbours; for any v_j, if D(q, v_j) ≤ r, then v_j is returned, where D(·) is a similarity measure function. Through this process the two best candidate matching points are found.
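The index-and-search procedure above can be sketched as follows, using random-hyperplane sign hashes as the family H. The concrete hash family, the values of L and k, and all function names are illustrative assumptions; the text only specifies the generic scheme g(v) = (h_1(v), …, h_k(v)):

```python
import numpy as np

def build_lsh_index(descriptors, L=8, k=12, seed=0):
    """Build L hash tables; each g_i = (h_1, ..., h_k) concatenates k
    random-hyperplane sign hashes of the descriptor vector."""
    rng = np.random.default_rng(seed)
    planes = [rng.standard_normal((k, descriptors.shape[1])) for _ in range(L)]
    tables = []
    for P in planes:
        table = {}
        bits = descriptors @ P.T > 0          # (n, k) sign bits per point
        for idx, key in enumerate(bits):
            table.setdefault(key.tobytes(), []).append(idx)  # bucket g_i(v)
        tables.append(table)
    return planes, tables

def lsh_two_nearest(q, descriptors, planes, tables, r=np.inf):
    """Collect candidates from buckets g_1(q)..g_L(q), keep those within
    distance r of q, and return the two closest as (distance, index) pairs."""
    candidates = set()
    for P, table in zip(planes, tables):
        key = (P @ q > 0).tobytes()
        candidates.update(table.get(key, []))
    scored = sorted((float(np.linalg.norm(descriptors[j] - q)), j)
                    for j in candidates)
    return [(d, j) for d, j in scored if d <= r][:2]
```

Only points colliding with q in at least one of the L buckets are scored, which is what makes the lookup sub-linear in practice.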
Further, the difference ratio ρ in step S3 between the feature point k(x_i, y_i) and the two candidate matching points k(x'_j, y'_j) and k(x'_(j+1), y'_(j+1)) is computed as:
ρ = dist(d(k(x_i, y_i)), d(k(x'_j, y'_j))) / dist(d(k(x_i, y_i)), d(k(x'_(j+1), y'_(j+1))))
where dist(·,·) is the descriptor distance. If ρ is smaller than a preset threshold τ, then k(x'_j, y'_j) is the correct matching point of the local feature point k(x_i, y_i), and the initial feature matching result Matches_1 is obtained:
Matches_1 = {<k(x_i, y_i), k(x'_j, y'_j)> | i ∈ [1, M], j ∈ [1, N]};
where d(k(x_i, y_i)) denotes the feature descriptor of the local feature point k(x_i, y_i); d(k(x'_j, y'_j)) denotes the feature descriptor of the candidate matching feature point k(x'_j, y'_j); and d(k(x'_(j+1), y'_(j+1))) denotes the feature descriptor of k(x'_(j+1), y'_(j+1)).
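A minimal sketch of this ratio test follows. Brute-force nearest neighbours stand in here for the hash lookup, and the threshold value tau=0.8 is an assumption (a common choice for nearest/second-nearest ratio tests), not a value stated in the text:

```python
import numpy as np

def ratio_test_matches(D_l, D_r, tau=0.8):
    """For each query descriptor take its two nearest reference descriptors;
    keep the pair only if rho = d1 / d2 < tau (distance difference ratio)."""
    matches = []
    for i, d in enumerate(D_l):
        dists = np.linalg.norm(D_r - d, axis=1)   # distances to all references
        j, j2 = np.argsort(dists)[:2]             # nearest and second nearest
        rho = dists[j] / max(dists[j2], 1e-12)    # guard against division by 0
        if rho < tau:
            matches.append((i, int(j)))           # <k(x_i,y_i), k(x'_j,y'_j)>
    return matches
```

A low ρ means the best candidate is clearly better than the runner-up, so the match is unambiguous; ρ near 1 indicates descriptor ambiguity and the pair is discarded.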
Further, in step S4 a cross-validation method is used to eliminate erroneous feature matching points; the specific method is as follows:
first match the feature point k(x_i, y_i) of the query image against the feature points of the reference image, and match the feature point k(x'_j, y'_j) of the reference image against the query feature point k(x_i, y_i) in the reverse direction; if the two matching results are inconsistent, delete <k(x_i, y_i), k(x'_j, y'_j)> from Matches_1, obtaining the more accurate feature matching result Matches_2:
Matches_2 = {<k(x'_jj, y'_jj), k(x_ii, y_ii)> | ii ∈ [1, M], jj ∈ [1, N]}.
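This forward-backward check can be sketched in a few lines (function name and list-of-pairs representation are illustrative assumptions):

```python
def cross_validate(matches_lr, matches_rl):
    """Keep <i, j> from the query->reference direction only if the
    reference->query direction also maps j back to i (mutual consistency)."""
    back = dict(matches_rl)                 # reference index j -> query index i
    return [(i, j) for i, j in matches_lr if back.get(j) == i]
```

Any match that is not mutually nearest in both directions is treated as a product of descriptor ambiguity and dropped.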
Further, in step S5 a geometric-constraint method is used to eliminate the erroneous feature matches in Matches_2 that do not satisfy geometric consistency; the specific method is as follows:
first compute the homography matrix H between the query feature points k(x_i, y_i) and the reference feature points k(x'_j, y'_j), then compute the distance τ between the query feature point k(x_i, y_i) and H·k(x'_j, y'_j); if τ < 0.5, then <k(x_i, y_i), k(x'_j, y'_j)> is considered a correct pair of feature matching points; otherwise it is an erroneous pair and should be removed from Matches_2, obtaining the accurate feature matching result Matches_3.
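The geometric check can be sketched as follows. The homography H is assumed to be already estimated (e.g. robustly from Matches_2; the estimation itself is outside this sketch), and the keypoint containers and function name are illustrative:

```python
import numpy as np

def geometric_filter(matches, K_l, K_r, H, tau=0.5):
    """Keep <k_i, k'_j> only if the reference point, warped into the query
    image by the homography H, lands within tau pixels of the query point."""
    kept = []
    for i, j in matches:
        p = H @ np.array([K_r[j][0], K_r[j][1], 1.0])  # homogeneous warp
        p = p[:2] / p[2]                               # back to pixel coords
        if np.linalg.norm(p - np.asarray(K_l[i], dtype=float)) < tau:
            kept.append((i, j))
    return kept
```

The 0.5-pixel threshold is the value stated in the text; it assumes the two views are related by a single plane-induced homography.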
Further, the detailed method of step S6 is:
S6.1, cluster the reference feature points in Matches_3 using a hierarchical clustering method and find the centre point C_1;
S6.2, compute the circumscribed rectangle Rect_1 of the reference feature points centred at C_1;
S6.3, determine the region Rect'_1 in the query image I_l corresponding to Rect_1 according to the homography transformation;
S6.4, delete from Matches_3 the matching points lying outside the region Rect'_1, obtaining the final feature matching result Matches_Final.
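A simplified sketch of steps S6.1 to S6.4 follows. The centroid stands in for the hierarchical-clustering centre C_1, and the whole function is an illustrative assumption of one way to realise the region test, not the patent's exact procedure:

```python
import numpy as np

def cluster_region_filter(matches, K_l, K_r, H):
    """Take the matched reference points' centroid as the cluster centre C1,
    build their bounding rectangle Rect1, warp its corners into the query
    image via H to get Rect'1, and keep only matches whose query-side point
    lies inside the warped region."""
    ref = np.array([K_r[j] for _, j in matches], dtype=float)
    c1 = ref.mean(axis=0)                      # cluster centre C1 (centroid)
    lo, hi = ref.min(axis=0), ref.max(axis=0)  # Rect1 enclosing the cluster
    corners = np.array([[lo[0], lo[1], 1.0], [hi[0], lo[1], 1.0],
                        [hi[0], hi[1], 1.0], [lo[0], hi[1], 1.0]])
    warped = (H @ corners.T).T
    warped = warped[:, :2] / warped[:, 2:3]    # Rect'1 corners in query image
    wlo, whi = warped.min(axis=0), warped.max(axis=0)
    return [(i, j) for i, j in matches
            if np.all(np.asarray(K_l[i]) >= wlo - 1e-9)
            and np.all(np.asarray(K_l[i]) <= whi + 1e-9)]
```

Matches whose query point falls far outside the warped cluster region are the noise-induced outliers this last layer is meant to remove.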
Has the advantages that: the hierarchical image matching method of the invention simultaneously considers multiple factors that affect local feature matching accuracy (such as the ambiguity of feature descriptors and changes in image illumination, texture, and scale), and adopts a hierarchical error elimination scheme to remove erroneous feature matching points step by step; it is fast and accurate, and can serve high-level computer vision application systems such as three-dimensional reconstruction, image retrieval, simultaneous localization and mapping, and digital twins.
Drawings
FIG. 1 is a schematic overall process flow diagram of the present invention;
FIG. 2 is a schematic diagram of an image processing flow according to an embodiment of the present invention;
FIG. 3 is an input primitive diagram in accordance with an embodiment of the present invention;
FIG. 4 is a diagram illustrating a final output result according to an embodiment of the present invention;
fig. 3(a) is a query image, and fig. 3(b) is a reference image.
Detailed Description
The technical solution of the present invention is described in detail below, but the scope of the present invention is not limited to the embodiments.
As shown in fig. 1, a hierarchical image matching method of the present invention includes the following steps:
step S1, for the given input images, namely the query image I_l and the reference image I_r, use a local feature detection method to compute the local feature points and corresponding feature descriptors contained in the query image and the reference image respectively;
wherein the local feature points of the query image I_l are K_l = {k(x_i, y_i)}, with corresponding feature descriptors D_l = {d(k(x_i, y_i))}; i ∈ [1, M];
the local feature points of the reference image I_r are K_r = {k(x'_j, y'_j)}, with corresponding feature descriptors D_r = {d(k(x'_j, y'_j))}; j ∈ [1, N];
M and N respectively denote the numbers of local feature points in the query image and the reference image;
step S2: from the local feature points contained in the reference image, find for each local feature point k(x_i, y_i) in the query image the two most similar candidate matching feature points k(x'_j, y'_j) and k(x'_(j+1), y'_(j+1));
step S3: compute the distance difference ratio ρ between the two most similar candidate matching feature points and the query feature point, and determine the initial feature matching result Matches_1 according to this ratio;
step S4: for the initial feature matching result Matches_1, eliminate all erroneous feature matches caused by the ambiguity of the feature descriptors, obtaining the accurate feature matching result Matches_2;
step S5: for the feature matching result Matches_2, eliminate all erroneous feature matching points that do not satisfy geometric consistency, obtaining the accurate feature matching result Matches_3;
step S6: for the feature matching result Matches_3, eliminate all erroneous feature matching points caused by noisy data, thereby obtaining the final feature matching result Matches_Final.
Example 1:
as shown in fig. 2, step (a) eliminates the erroneous matching points caused by descriptor ambiguity using a ratio test method; step (b) eliminates asymmetric feature matching points using a cross-validation method; step (c) eliminates feature matching points that do not satisfy a specific geometric model using a geometric-constraint method (such as a homography constraint, fundamental matrix constraint, or epipolar constraint); and step (d) eliminates the erroneous matching points caused by noisy data using statistical optimization and feature clustering.
The specific steps of this embodiment are:
step 1: the user inputs two images, namely the query image I_l and the reference image I_r. The input images may contain a certain amount of occlusion, blur, particular shapes and objects, and colour differences; the user is not required to perform any pre-processing on the input images.
Step 2: detect the local feature points of the query image I_l and the reference image I_r separately, and compute the corresponding feature descriptors.
Denote the query feature points of the query image I_l by K_l = {k(x_i, y_i) | i ∈ [1, M]}, with corresponding feature descriptors D_l = {d(k(x_i, y_i)) | i ∈ [1, M]}.
Denote the reference feature points of the reference image I_r by K_r = {k(x'_j, y'_j) | j ∈ [1, N]}, with corresponding feature descriptors D_r = {d(k(x'_j, y'_j)) | j ∈ [1, N]}.
Here M and N respectively denote the numbers of feature points in the query image and the reference image; d(k(x_i, y_i)) denotes the feature descriptor of feature point k(x_i, y_i); d(k(x'_j, y'_j)) denotes the feature descriptor of feature point k(x'_j, y'_j).
Step 3: using the hierarchical locality-sensitive hashing method, find for each query feature point k(x_i, y_i) two candidate matching points k(x'_j, y'_j) and k(x'_(j+1), y'_(j+1)); the difference ratio between the two candidate matching points can then be computed:
ρ = dist(d(k(x_i, y_i)), d(k(x'_j, y'_j))) / dist(d(k(x_i, y_i)), d(k(x'_(j+1), y'_(j+1))))    (1)
where d(k(x_i, y_i)) denotes the feature descriptor of local feature point k(x_i, y_i); d(k(x'_j, y'_j)) denotes the feature descriptor of k(x'_j, y'_j); d(k(x'_(j+1), y'_(j+1))) denotes the feature descriptor of k(x'_(j+1), y'_(j+1)); and dist(·,·) is the descriptor distance. If ρ is smaller than the threshold τ, then the feature point k(x'_j, y'_j) is the correct matching point of the feature point k(x_i, y_i); thus the initial feature matching result Matches_1 is obtained:
Matches_1 = {<k(x_i, y_i), k(x'_j, y'_j)> | i ∈ [1, M], j ∈ [1, N]}    (2)
Step 4: taking the feature points k(x'_jj, y'_jj) of the reference image I_r as query feature points, find from the query image I_l two candidate matching points k(x_ii, y_ii) and k(x_(ii+1), y_(ii+1)) for each k(x'_jj, y'_jj); from equation (1), the feature matching result shown below can be computed:
Matches_2 = {<k(x'_jj, y'_jj), k(x_ii, y_ii)> | ii ∈ [1, M], jj ∈ [1, N]}    (3)
Merge the matching results in Matches_1 and Matches_2: when i = ii and j = jj, <k(x_i, y_i), k(x'_j, y'_j)> is considered a pair of correct matching points; otherwise the erroneous matching points are deleted, so the feature matching result shown below is obtained:
Matches_3 = {<k(x_p, y_p), k(x'_q, y'_q)> | p ∈ [1, M], q ∈ [1, N]};    (4)
Through the cross-validation of this process, the retained results satisfy consistency.
Step 5: from Matches_3, compute the homography matrix H_lr between the query image I_l and the reference image I_r:
H_lr = [h_11 h_12 h_13; h_21 h_22 h_23; h_31 h_32 h_33]    (5)
Denote by k(x'_q, y'_q, 1) the homogeneous coordinates of k(x'_q, y'_q); the corresponding point of the feature point k(x'_q, y'_q) in the query image I_l can then be computed as:
k(x'_p, y'_p) = H_lr · k(x'_q, y'_q, 1)    (6)
If the distance between the feature point k(x'_p, y'_p) and the feature point k(x_p, y_p) is less than 0.5, then <k(x_p, y_p), k(x'_q, y'_q)> is considered a correct pair of feature matching points; otherwise it should be removed from Matches_3, and the feature matching result shown below is then obtained:
Matches_4 = {<k(x_g, y_g), k(x'_s, y'_s)> | g ∈ [1, M], s ∈ [1, N]}    (7)
where <k(x_g, y_g), k(x'_s, y'_s)> is a pair of feature matching results, k(x_g, y_g) denotes a query feature point, and k(x'_s, y'_s) denotes a reference feature point.
Step 6: in practice, the solution of the homography matrix H_lr is affected by noisy data, so erroneous feature matching points still exist in the feature matching result Matches_4; these erroneous matching points can be removed from Matches_4 with both statistics-based and clustering-based error elimination methods, so as to obtain a fully correct feature matching result:
Matches_Final = {<k(x_v, y_v), k(x'_c, y'_c)> | v ∈ [1, M], c ∈ [1, N]}    (8)
where M and N are respectively the numbers of feature points in the query image I_l and the reference image I_r.
In fig. 2, the following operations are performed in sequence from step b to step c:
first the initial matching of the feature points is performed; then the ratio test, cross-validation, geometric constraints, and statistics and clustering are applied in turn, eliminating erroneous feature matches layer by layer and thus obtaining a correct feature matching result.
In the above embodiment, the input image is shown in fig. 3, where fig. 3(a) is a query image and fig. 3(b) is a reference image; in the calculation process of image matching, no preprocessing operation is needed to be carried out on the input image.
As shown in fig. 4, the final output matching result of this embodiment is given, where the green lines represent correct feature matching points; the two ends of each line indicate the positions of the feature points in the two images.
Through the above embodiment, the invention provides a hierarchical feature matching method by analysing the various factors that cause feature matching errors; it eliminates erroneous feature matching points layer by layer and lays a foundation for improving the performance of high-level computer vision application systems based on image matching. The corresponding feature matching points can be computed quickly and efficiently, and the method can then be applied to a series of high-level computer vision application systems based on image matching.

Claims (7)

1. A hierarchical image matching method, characterized in that it comprises the following steps:
step S1: for the two input images, namely the query image I_l and the reference image I_r, use a local feature point detection method and a local feature descriptor calculation method to compute the local feature points and corresponding feature descriptors contained in I_l and I_r respectively;
wherein the local feature points detected in the query image I_l are K_l = {k(x_i, y_i)}, with corresponding feature descriptors D_l = {d(k(x_i, y_i))}; i ∈ [1, M], where M denotes the number of local feature points detected in the query image I_l;
the local feature points detected in the reference image I_r are K_r = {k(x'_j, y'_j)}, with corresponding feature descriptors D_r = {d(k(x'_j, y'_j))}; j ∈ [1, N], where N denotes the number of local feature points detected in the reference image I_r;
step S2: from the local feature points contained in the reference image, find for each local feature point k(x_i, y_i) in the query image the two most similar candidate matching feature points, namely k(x'_j, y'_j) and k(x'_(j+1), y'_(j+1));
step S3: compute the distance difference ratio ρ between the two candidate matching feature points k(x'_j, y'_j), k(x'_(j+1), y'_(j+1)) and the feature point k(x_i, y_i), and then determine the initial feature matching result Matches_1 according to ρ;
step S4: for the initial feature matching result Matches_1, eliminate the erroneous feature matches caused by the ambiguity of the feature descriptors using a cross-validation method, obtaining the feature matching result Matches_2;
step S5: for the feature matching result Matches_2, eliminate all erroneous feature matching points that do not satisfy geometric consistency, obtaining the feature matching result Matches_3;
step S6: for the feature matching result Matches_3, use a clustering-based method to eliminate the erroneous feature matching points caused by noisy data, thereby obtaining the final feature matching result Matches_Final.
2. The hierarchical image matching method according to claim 1, wherein in step S1 the specific method for computing the local feature points and corresponding feature descriptors using the deep-learning-based local feature detection method is as follows:
step S1.1, pre-train on local feature points: obtain two-dimensional images by constructing corresponding three-dimensional objects and capturing views of them, and use the known local feature points in the two-dimensional images for network training;
step S1.2, self-label feature points: adopt ImageNet as the training and test data set; train on the synthetic scenes to obtain a basic feature point detection network model, then use this model to extract feature points on the ImageNet data set, obtaining feature point self-labels;
step S1.3, joint training: apply geometric transformations to the pictures used in the previous step, input the corresponding picture pairs into the basic feature point detection network, extract feature points and descriptors, and perform joint training to obtain deep-learning-based local features, which are then used to detect the local feature points and compute the feature descriptors.
3. The hierarchical image matching method according to claim 1, wherein in step S2 the candidates k(x'_j, y'_j) and k(x'_(j+1), y'_(j+1)) are computed using a hierarchical locality-sensitive hashing method; the specific process is as follows:
step S2.1, build the hash index:
let G = {g : S → U} denote a family of functions, where S denotes the input domain and U the output domain; g(v) = (h_1(v), …, h_k(v)), h_i ∈ H; randomly and independently select L hash functions g_1, …, g_L from the family G; for any point v in the data set, store it in bucket g_i(v), where i = 1, …, L;
step S2.2, hash search:
for a query point q = k(x_i, y_i) and a given distance threshold r, take all points v_1, …, v_n out of the buckets g_1(q), …, g_L(q) as candidate approximate nearest neighbours; for any v_j, if D(q, v_j) ≤ r, then v_j is returned, where D(·) is a similarity measure function and q denotes the query point;
the two best candidate matching points are found through this process.
4. The hierarchical image matching method according to claim 1, wherein the difference ratio ρ in step S3 between the feature point k(x_i, y_i) and the two candidate matching points k(x'_j, y'_j) and k(x'_(j+1), y'_(j+1)) is computed as:
ρ = dist(d(k(x_i, y_i)), d(k(x'_j, y'_j))) / dist(d(k(x_i, y_i)), d(k(x'_(j+1), y'_(j+1))))
where dist(·,·) is the descriptor distance; if ρ is smaller than the threshold τ, then k(x'_j, y'_j) is the correct matching point of the local feature point k(x_i, y_i), and the initial feature matching result Matches_1 is obtained:
Matches_1 = {<k(x_i, y_i), k(x'_j, y'_j)> | i ∈ [1, M], j ∈ [1, N]};
where d(k(x_i, y_i)) denotes the feature descriptor of local feature point k(x_i, y_i); d(k(x'_j, y'_j)) denotes the feature descriptor of k(x'_j, y'_j); and d(k(x'_(j+1), y'_(j+1))) denotes the feature descriptor of k(x'_(j+1), y'_(j+1)).
5. The hierarchical image matching method according to claim 1, wherein in step S4 a cross-validation method is used to eliminate erroneous feature matching points; the specific method is as follows:
first match the feature point k(x_i, y_i) of the query image against the feature point k(x'_j, y'_j) of the reference image, and match the feature point k(x'_j, y'_j) of the reference image against the query feature point k(x_i, y_i) in the reverse direction; if the two matching results are inconsistent, delete <k(x_i, y_i), k(x'_j, y'_j)> from Matches_1, obtaining the more accurate feature matching result Matches_2.
6. The hierarchical image matching method according to claim 1, wherein in step S5 a geometric-constraint method is used to eliminate the erroneous feature matches in Matches_2 that do not satisfy geometric consistency; the specific method is as follows:
first compute the homography matrix H between the query feature points k(x_i, y_i) and the reference feature points k(x'_j, y'_j), then compute the distance τ between the query feature point k(x_i, y_i) and H·k(x'_j, y'_j); if τ < 0.5, then <k(x_i, y_i), k(x'_j, y'_j)> is considered a correct pair of feature matching points; otherwise it is an erroneous pair and should be removed from Matches_2, obtaining the accurate feature matching result Matches_3;
where H·k(x'_j, y'_j) is the position of the feature point k(x'_j, y'_j) in the query image.
7. The hierarchical image matching method according to claim 1, wherein in step S6 a clustering-based feature matching error elimination method is used to eliminate the erroneous feature matching points layer by layer; the specific content is as follows:
S6.1, cluster the reference feature points in Matches_3 using a hierarchical clustering method and find the centre point C_1;
S6.2, compute the circumscribed rectangle Rect_1 of the reference feature points centred at C_1;
S6.3, determine the region Rect'_1 in the query image I_l corresponding to Rect_1 according to the homography transformation;
S6.4, delete from Matches_3 the matching points lying outside the region Rect'_1, obtaining the final feature matching result Matches_Final.
CN202210001464.9A 2022-01-04 2022-01-04 Hierarchical image matching method Active CN114332510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210001464.9A CN114332510B (en) 2022-01-04 2022-01-04 Hierarchical image matching method


Publications (2)

Publication Number Publication Date
CN114332510A (en) 2022-04-12
CN114332510B CN114332510B (en) 2024-03-22

Family

ID=81022371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210001464.9A Active CN114332510B (en) 2022-01-04 2022-01-04 Hierarchical image matching method

Country Status (1)

Country Link
CN (1) CN114332510B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104615642A (en) * 2014-12-17 2015-05-13 吉林大学 Space verification wrong matching detection method based on local neighborhood constrains
US20210279904A1 (en) * 2020-03-05 2021-09-09 Magic Leap, Inc. Systems and methods for depth estimation by learning triangulation and densification of sparse points for multi-view stereo

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109852A (en) * 2023-04-13 2023-05-12 安徽大学 Quick and high-precision feature matching error elimination method
CN116342826A (en) * 2023-05-25 2023-06-27 上海维智卓新信息科技有限公司 AR map construction method and device
CN116342826B (en) * 2023-05-25 2023-10-10 上海维智卓新信息科技有限公司 AR map construction method and device
CN116385665A (en) * 2023-06-02 2023-07-04 合肥吉麦智能装备有限公司 Multi-view X-ray image three-dimensional reconstruction method for dual-mode G-arm X-ray machine
CN116543187A (en) * 2023-07-04 2023-08-04 合肥吉麦智能装备有限公司 Image matching method for dual-mode G-type arm X-ray machine
CN117150698A (en) * 2023-11-01 2023-12-01 广东新禾道信息科技有限公司 Digital twinning-based smart city grid object construction method and system
CN117150698B (en) * 2023-11-01 2024-02-23 广东新禾道信息科技有限公司 Digital twinning-based smart city grid object construction method and system

Also Published As

Publication number Publication date
CN114332510B (en) 2024-03-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant