CN117253063A - Two-stage multimodal image matching method based on point-line feature description - Google Patents

Two-stage multimodal image matching method based on point-line feature description

Info

Publication number
CN117253063A
CN117253063A
Authority
CN
China
Prior art keywords
feature
image
feature point
calculating
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311388180.0A
Other languages
Chinese (zh)
Inventor
王正兵
张科琪
冯旭刚
张哲贤
赵远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University of Technology AHUT
Original Assignee
Anhui University of Technology AHUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University of Technology AHUT filed Critical Anhui University of Technology AHUT
Priority to CN202311388180.0A priority Critical patent/CN117253063A/en
Publication of CN117253063A publication Critical patent/CN117253063A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a two-stage multimodal image matching method based on point-line feature description, which comprises the following steps: calculating the minimum and maximum moments of phase congruency at each pixel in the reference image and the image to be matched; extracting feature points and straight line segments of the original input image from the minimum moment map and the maximum moment map, respectively; constructing a line-segment context descriptor for each feature point in the image and clustering the feature points; matching the feature point classes by comparing the similarity of the class center description vectors in the two images; constructing a histogram descriptor of the phase congruency maximum response direction for each feature point in the two matched groups of feature point classes; and calculating the similarity of the feature descriptors and performing bidirectional feature matching. The invention effectively addresses the large differences in imaging gray scale and resolution between multimodal images, the difficulty of describing feature points, and the low matching efficiency; the two-stage matching strategy improves the efficiency and reliability of multimodal image feature point matching.

Description

Two-stage multimodal image matching method based on point-line feature description
Technical Field
The invention belongs to the technical field of image feature extraction and matching, and particularly relates to a two-stage multimodal image matching method based on point-line feature description.
Background
Multimodal image matching is an important precondition for multimodal image fusion, and aims to bring into correspondence two or more images of the same scene acquired by different imaging devices under different imaging conditions. Because multimodal images differ in imaging mechanism and imaging conditions, large differences in imaging gray scale and resolution exist between the images, and even the background content may differ, which poses serious challenges to multimodal image matching.
Feature-based matching methods determine the correspondence between the reference image and the image to be matched by extracting image features from the two images as matching primitives. They are computationally efficient and highly robust to rotation and scale changes between images, and have therefore received wide attention and study.
The most representative feature matching algorithm is the SIFT algorithm proposed by Lowe (D.G. Lowe, Distinctive Image Features from Scale-Invariant Keypoints, International Journal of Computer Vision 60(2) (2004) 91-110), which is widely used in natural image matching and provides the basic idea for subsequent feature-based image matching algorithms, but performs poorly in multimodal image matching applications. Building on this algorithm, Chen et al. proposed a partial intensity invariant feature descriptor (J. Chen, J. Tian, N. Lee, J. Zheng, R.T. Smith, A.F. Laine, A partial intensity invariant feature descriptor for multimodal retinal image registration, IEEE Transactions on Biomedical Engineering 57(7) (2010) 1707-1718) to address gray-scale differences between multimodal images; it has been widely used and improved in multimodal retinal image matching. Considering the difficulty of feature point extraction and description caused by imaging gray-scale differences between multimodal images, Sui et al. adopted more stable straight-line features and proposed an iterative feature extraction strategy to optimize the extracted image features (H. Sui, C. Xu, J. Liu, F. Hua, Automatic Optical-to-SAR Image Registration by Iterative Line Extraction and Voronoi Integrated Spectral Point Matching, IEEE Transactions on Geoscience and Remote Sensing 53(11) (2015) 6058-6072). Wang et al. proposed a line-segment context descriptor to describe the extracted line features globally and constructed an iterative optimization strategy to overcome incomplete line extraction (Z. Wang, X. Feng, Y. Wu, G. Xu, M. Qian, An automatic method for matching salient structures in optical remote sensing images, International Journal of Remote Sensing 42(21) (2021) 8298-8317).
The above feature-based matching methods obtain good results in multimodal image matching tasks under specific scenes, but still have the following drawbacks. First, most existing methods extract only feature points of the multimodal images as matching primitives and construct feature descriptors from the local information around those points, so the matching result is strongly affected by imaging gray-scale differences between the multimodal images. Second, matching methods proposed in recent years have considered adopting the more stable straight line segments extracted from the images as matching primitives, but it remains difficult to accurately compute the transformation matrix between images from matched line segments. Third, many interference factors affect matching between multimodal images, and image features and descriptors extracted in a single pass can hardly achieve a good matching result.
Disclosure of Invention
1. Technical problem to be solved by the invention
The invention aims to overcome the defects of the prior art by providing a two-stage multimodal image matching method based on point-line feature description, which addresses the difficulty of describing image feature points and the low matching efficiency in multimodal image matching tasks, and improves the efficiency and reliability of multimodal image feature point matching.
2. Technical solution
To achieve the above purpose, the technical solution provided by the invention is as follows:
The invention discloses a two-stage multimodal image matching method based on point-line feature description, which comprises the following steps:
step 1, calculating the minimum and maximum moments of phase congruency at each pixel in a reference image and an image to be matched, respectively;
step 2, extracting feature points and straight line segments of the original input image from the minimum moment map and the maximum moment map, respectively;
step 3, constructing a line-segment context descriptor for each feature point in the image, and clustering the feature points to form feature point classes;
step 4, matching the feature point classes by comparing the similarity of the class center description vectors of the feature point classes in the two images;
step 5, constructing a histogram descriptor of the phase congruency maximum response direction for each feature point in the two matched groups of feature point classes;
step 6, for each pair of feature points in the two matched groups of feature point classes, calculating the similarity of their feature descriptors and performing bidirectional feature matching to obtain the final image matching result.
3. Advantageous effects
Compared with the prior art, the technical scheme provided by the invention has the following remarkable effects:
(1) Most existing feature-based image matching methods extract only feature points of the multimodal images as matching primitives and construct feature descriptors from the local information around those points; such descriptors are easily affected by imaging gray-scale differences between multimodal images, leading to feature mismatches. The invention extracts feature points and straight line segments of the images simultaneously on the basis of the phase congruency model, effectively overcoming the influence of imaging gray-scale differences on feature extraction.
(2) The two-stage multimodal image matching method based on point-line feature description divides feature matching into two stages. First, the line-segment context descriptor of each feature point is constructed from the positions of the extracted line segments relative to that point for a global description, feature point classes are formed by a clustering algorithm, and coarse matching of the classes is achieved. Second, within corresponding feature point classes, a histogram descriptor of the phase congruency maximum response direction is constructed for accurate matching of the feature points. On the one hand, the constructed descriptors are highly robust to imaging gray-scale differences; on the other hand, the two-stage matching strategy helps improve feature point matching efficiency.
Drawings
FIG. 1 is a schematic flow chart of the two-stage multimodal image matching of the present invention;
FIG. 2 is a schematic diagram of the line-segment context descriptor computation of the present invention; specifically, FIG. 2(a) shows the positional relationship of feature point $c_i$ relative to straight line segment $l_j$, and FIG. 2(b) shows the weight computation for straight line segment $l_j$ when constructing the line-segment context descriptor.
Detailed Description
Most existing feature-based image matching methods extract only feature points of the multimodal images as matching primitives and construct feature descriptors from the local information around those points, so they are prone to feature mismatches caused by imaging gray-scale differences between multimodal images; in practical applications, geometric changes such as translation, rotation and scale may also exist between the multimodal images, further affecting the efficiency and reliability of feature point matching. The invention extracts feature points and straight line segments of the images simultaneously on the basis of the phase congruency model, effectively overcoming the influence of imaging gray-scale differences on feature extraction. The constructed line-segment context descriptor and phase congruency maximum-response-direction histogram descriptor are highly robust to imaging gray-scale differences between multimodal images, and the two-stage image matching strategy helps improve feature point matching efficiency.
For a further understanding of the invention, it is described in detail below with reference to the drawings and examples.
Example 1
As shown in FIGS. 1-2, the two-stage multimodal image matching method based on point-line feature description of this embodiment comprises the following steps.
Step 1, compute the minimum moment and the maximum moment of phase congruency at each pixel in the reference image and the image to be matched, respectively.
The specific process of computing the minimum and maximum moments of phase congruency is as follows:
Step 1-1, convolve the input image with a bank of Log-Gabor filters and compute the phase congruency of each pixel in each orientation based on the Kovesi algorithm:
$$PC(x,y,\theta_o)=\frac{\sum_n W_o(x,y)\,\lfloor A_{no}(x,y)\,\Delta\Phi_{no}(x,y)-T\rfloor}{\sum_n A_{no}(x,y)+\varepsilon}$$
where $o$ and $n$ are the orientation and scale indices of the Log-Gabor filter, respectively; $PC(x,y,\theta_o)$ is the phase congruency at image pixel coordinates $(x,y)$ in orientation $o$; $\theta_o$ is the angle corresponding to orientation $o$; $W_o$ is a weighting coefficient representing frequency spread; $A_{no}$ is the amplitude of the convolution of the image with the Log-Gabor filter of orientation $o$ at scale $n$; $\Delta\Phi_{no}$ is the corresponding phase deviation measure; $T$ is the noise threshold; $\varepsilon$ is a very small constant that avoids division by zero; and $\lfloor\cdot\rfloor$ truncates negative values to zero;
Step 1-2, compute the following quantities at each pixel of the image:
$$a=\sum_o\bigl(PC(\theta_o)\cos\theta_o\bigr)^2,\qquad b=2\sum_o\bigl(PC(\theta_o)\cos\theta_o\bigr)\bigl(PC(\theta_o)\sin\theta_o\bigr),\qquad c=\sum_o\bigl(PC(\theta_o)\sin\theta_o\bigr)^2$$
and from them the minimum moment and the maximum moment of phase congruency at each pixel:
$$M=\tfrac{1}{2}\Bigl(c+a+\sqrt{b^2+(a-c)^2}\Bigr),\qquad m=\tfrac{1}{2}\Bigl(c+a-\sqrt{b^2+(a-c)^2}\Bigr)$$
where $m$ and $M$ denote the minimum and maximum moments of phase congruency, respectively.
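For illustration only (not part of the claimed method), a minimal Python sketch of step 1-2 follows. It assumes the per-orientation phase congruency maps from step 1-1 are already available, for example from an existing implementation of the Kovesi algorithm; the function name pc_moments and the input layout are our own assumptions.

```python
import numpy as np

def pc_moments(pc_maps, thetas):
    """Minimum/maximum phase-congruency moments per pixel (Kovesi's
    moment analysis). pc_maps: list of 2-D arrays PC(x, y, theta_o),
    one per orientation; thetas: the orientation angles theta_o."""
    a = sum((pc * np.cos(th)) ** 2 for pc, th in zip(pc_maps, thetas))
    b = sum(2.0 * (pc * np.cos(th)) * (pc * np.sin(th))
            for pc, th in zip(pc_maps, thetas))
    c = sum((pc * np.sin(th)) ** 2 for pc, th in zip(pc_maps, thetas))
    root = np.sqrt(b ** 2 + (a - c) ** 2)
    M = 0.5 * (c + a + root)  # maximum moment: responds to edges
    m = 0.5 * (c + a - root)  # minimum moment: responds to corners
    return m, M
```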
Step 2, extract feature points from the minimum moment map using the Harris-Affine algorithm, and extract straight line segments from the maximum moment map using the LSD (Line Segment Detector) algorithm.
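As a rough sketch of step 2, the snippet below runs off-the-shelf OpenCV detectors on the two moment maps. OpenCV has no Harris-Affine implementation, so plain Harris corners (cv2.goodFeaturesToTrack with useHarrisDetector=True) stand in for it here; cv2.createLineSegmentDetector provides LSD, though its availability varies across OpenCV versions. All parameter values are illustrative.

```python
import cv2
import numpy as np

def extract_point_line_features(m_map, M_map):
    """Feature points from the minimum-moment map, line segments from
    the maximum-moment map (assumes both detectors find something)."""
    m8 = cv2.normalize(m_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    M8 = cv2.normalize(M_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Harris corners: a simplified stand-in for Harris-Affine.
    pts = cv2.goodFeaturesToTrack(m8, maxCorners=500, qualityLevel=0.01,
                                  minDistance=5, useHarrisDetector=True)
    lsd = cv2.createLineSegmentDetector()
    lines = lsd.detect(M8)[0]                 # each row: x1, y1, x2, y2
    return pts.reshape(-1, 2), lines.reshape(-1, 4)
```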
Step 3, construct a line-segment context descriptor for each feature point in the image, and cluster the feature points according to these descriptors to form feature point classes. The specific process of computing the line-segment context descriptors and clustering the feature points is as follows:
Step 3-1, denote the feature point set and straight line segment set obtained from the original input image in step 2 as $C=\{c_i\}_{i=1}^{n_c}$ and $L=\{l_j\}_{j=1}^{n_l}$, respectively, where $n_c$ and $n_l$ are the numbers of extracted feature points and straight line segments. As shown in FIG. 2(a), for any feature point $c_i$ in the feature point set, the position of a straight line segment $l_j$ relative to the feature point can be expressed as $w_{ij}=(\alpha_{ij},\beta_{ij})$, where $\alpha_{ij}$ is the angle between the perpendicular from feature point $c_i$ to line segment $l_j$ and the main direction of $c_i$, and $\beta_{ij}$ is the angle between line segment $l_j$ and the main direction of $c_i$;
Step 3-2, compute the description vector $w_{ij}$ for all straight line segments $l_j$ ($j=1,2,\ldots,n_l$); the positional relationship between feature point $c_i$ and all line segments can then be described as $W_i=\{w_{ij}\mid j=1,2,\ldots,n_l\}$.
Step 3-3, uniformly divide the value range $[0,2\pi)$ of $\alpha$ into 8 angle bins and the value range $[0,\pi)$ of $\beta$ into 4 angle bins, and compute the histogram descriptor of $W_i$ as follows:
$$h_i(k)=\sum_{w_{ij}\in bin(k)}\bigl(\Gamma(a_{ij})-\Gamma(b_{ij})\bigr)$$
where $h_i(k)$ is the $k$-th element of the histogram, $bin(k)$ is the $k$-th angle bin, and $\Gamma(a_{ij})-\Gamma(b_{ij})$ is the weight of line segment $l_j$, as shown in FIG. 2(b); $\Gamma(\cdot)$ is the distribution function of the standard normal distribution; a coordinate system is established with the foot of the perpendicular from feature point $c_i$ to line segment $l_j$ as the origin and the direction of $l_j$ as its axis, and $a_{ij}$ and $b_{ij}$ are the coordinates of the two endpoints of $l_j$ in this coordinate system. From this, the line-segment context descriptor of feature point $c_i$, i.e. the histogram $h_i$, is obtained.
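The following sketch illustrates steps 3-1 to 3-3 for a single feature point. The 8 x 4 binning and the $\Gamma(a_{ij})-\Gamma(b_{ij})$ weighting follow the text; the scale sigma used to normalise the endpoint coordinates before applying the standard normal CDF is our assumption, since the text does not specify the normalisation.

```python
import numpy as np
from scipy.stats import norm

def line_context_descriptor(c, phi, lines, sigma=50.0):
    """32-bin line-segment context histogram for one feature point:
    8 bins over alpha in [0, 2*pi) x 4 bins over beta in [0, pi).
    c: (x, y) of the feature point; phi: its main direction;
    lines: (N, 4) array of segment endpoints (x1, y1, x2, y2)."""
    c = np.asarray(c, dtype=float)
    hist = np.zeros(32)
    for x1, y1, x2, y2 in lines:
        p1, p2 = np.array([x1, y1], float), np.array([x2, y2], float)
        d = p2 - p1
        d /= np.linalg.norm(d)                    # unit direction of l_j
        foot = p1 + np.dot(c - p1, d) * d         # foot of the perpendicular
        perp = foot - c                           # c_i -> l_j perpendicular
        alpha = (np.arctan2(perp[1], perp[0]) - phi) % (2 * np.pi)
        beta = (np.arctan2(d[1], d[0]) - phi) % np.pi
        # endpoint coordinates along l_j, origin at the perpendicular foot
        a, b = np.dot(p1 - foot, d), np.dot(p2 - foot, d)
        a, b = max(a, b), min(a, b)
        w = norm.cdf(a / sigma) - norm.cdf(b / sigma)  # Gamma(a) - Gamma(b)
        k = min(int(alpha / (2 * np.pi) * 8), 7) * 4 \
            + min(int(beta / np.pi * 4), 3)
        hist[k] += w
    return hist
```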
Step 3-4, compute the line-segment context descriptors of all feature points in the reference image and the image to be matched as in step 3-3, then apply the k-means algorithm in each of the two images to cluster the feature points according to their descriptors. The initial number of cluster centers K of the k-means algorithm can be set between 10 and 20; actual experiments show that the best matching results are obtained with K = 15.
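Clustering itself is standard; a minimal sketch with scikit-learn's KMeans (K = 15 as recommended above):

```python
from sklearn.cluster import KMeans

def cluster_feature_points(descriptors, k=15):
    """Cluster feature points by their line-segment context descriptors.
    descriptors: (n_points, 32) array; returns a class label per point
    and the class centre vectors."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)
    return km.labels_, km.cluster_centers_
```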
Step 4, match the feature point classes by comparing the similarity of the class center description vectors of the feature point classes in the two images. The specific process of feature point class matching is as follows:
Step 4-1, for the reference image and the image to be matched, compute the mean of the line-segment context descriptors of all feature points in each class, i.e. the class center description vector;
Step 4-2, for each pair of feature point classes in the two images, compute the similarity of their class center description vectors, and determine the correspondence between feature point classes by bidirectional matching as $\{(P_f,Q_f)\}$, where $P_f$ and $Q_f$ are the $f$-th pair of matched feature point classes in the two images.
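A sketch of the bidirectional class matching in step 4-2 follows. The text does not specify the similarity measure for class center vectors, so cosine similarity is assumed here for illustration.

```python
import numpy as np

def match_classes(centers_p, centers_q):
    """Bidirectional matching of class centre vectors: a pair (f, g)
    is kept only when each centre is the other's nearest neighbour
    under (assumed) cosine similarity."""
    p = centers_p / np.linalg.norm(centers_p, axis=1, keepdims=True)
    q = centers_q / np.linalg.norm(centers_q, axis=1, keepdims=True)
    sim = p @ q.T
    fwd = sim.argmax(axis=1)   # best Q class for each P class
    bwd = sim.argmax(axis=0)   # best P class for each Q class
    return [(f, int(g)) for f, g in enumerate(fwd) if bwd[g] == f]
```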
Step 5, construct a histogram descriptor of the phase congruency maximum response direction for each feature point in the two matched groups of feature point classes.
The specific process of computing the phase congruency maximum-response-direction histogram descriptor is as follows:
Step 5-1, for a feature point $c_i$ in the image, its main direction $\phi_i$ and scale $s_i$ are computed together with the feature point extraction in step 2. In the corresponding maximum moment map, select a $6As_i \times 6As_i$ square region around $c_i$ along its main direction, and uniformly divide the region into $6 \times 6$ cells with a grid. The parameter $A$ can be set between 5 and 10; actual tests show that the best matching results are obtained with $A = 8$;
Step 5-2, in each cell, accumulate the phase congruency amplitudes in the different orientations and record the maximum accumulated value and its corresponding direction $(E_t,\theta_t)$, where $t=1,2,\ldots,36$; the phase congruency maximum-response-direction histogram descriptor of feature point $c_i$ is then formed from these 36 maximum accumulated values and their directions.
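An illustrative sketch of steps 5-1 and 5-2 for one feature point is given below. For simplicity it uses an axis-aligned window rather than one rotated to the main direction $\phi_i$, and it assumes the window lies fully inside the image; pc_maps are the per-orientation phase congruency maps from step 1.

```python
import numpy as np

def max_response_descriptor(pc_maps, thetas, c, s, A=8):
    """36-cell phase-congruency maximum-response-direction descriptor
    for one feature point (axis-aligned simplification)."""
    x, y = int(round(c[0])), int(round(c[1]))
    half = int(round(3 * A * s))       # half-width of the 6*A*s window
    step = max(1, (2 * half) // 6)     # cell size of the 6 x 6 grid
    desc = []
    for gy in range(6):
        for gx in range(6):
            y0 = y - half + gy * step
            x0 = x - half + gx * step
            # accumulated PC amplitude per orientation inside the cell
            sums = [float(pc[y0:y0 + step, x0:x0 + step].sum())
                    for pc in pc_maps]
            t = int(np.argmax(sums))
            desc.append((sums[t], thetas[t]))  # (max value, direction)
    return desc
```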
and 6, calculating the similarity of the feature descriptors for each pair of feature points in the two matched groups of feature point classes, and performing bidirectional feature matching.
The specific process of feature point matching is as follows:
step 6-1, calculating phase consistency maximum response direction histogram descriptors of all feature points in the reference image and the image to be matched by the step 5;
step 6-2, for each pair of the matched pair of corresponding feature point classes in step 4, calculating the similarity as follows:
wherein c i And c g Characteristic points in the reference image and the image to be matched respectively,and->A straight line segment context descriptor for two feature points. The value of the parameter lambda can be set between 0.8 and 1, the value of the parameter rho can be set between 3 and 5, and the best matching effect can be obtained when the actual test determines that the parameters lambda and rho are respectively 0.9 and 4;
and 6-3, calculating the similarity of all the characteristic point pairs in the corresponding characteristic point classes of the two images according to the step 6-2, screening out the characteristic point pairs corresponding to the matching in the characteristic point pairs by adopting a two-way matching method, calculating a conversion matrix between the reference image and the image to be matched according to the similarity, and carrying out consistency test to remove the point pair with larger matching error, wherein after removing the point pair with larger matching error, the rest point pairs are the obtained image characteristic point matching result.
In this embodiment, preprocessing the multimodal images with the phase congruency model before feature extraction effectively overcomes the influence of imaging gray-scale differences on feature extraction. The two-stage image matching strategy helps improve feature point matching efficiency, and the line-segment context descriptor and the phase congruency maximum-response-direction histogram descriptor constructed within it are highly robust to imaging gray-scale differences between multimodal images, ensuring the reliability of multimodal image matching.
The invention and its embodiments are described above by way of illustration rather than limitation, and the accompanying drawings show only one embodiment; the invention is not limited to the embodiment shown. Accordingly, structural modes and embodiments similar to this technical solution that are devised by a person of ordinary skill in the art in light of this disclosure, without creative effort and without departing from the gist of the invention, fall within the scope of protection of the invention.

Claims (9)

1. A two-stage multimodal image matching method based on point-line feature description, characterized by comprising the following steps:
step 1, calculating the minimum and maximum moments of phase congruency at each pixel in a reference image and an image to be matched, respectively;
step 2, extracting feature points and straight line segments of the original input image from the minimum moment map and the maximum moment map, respectively;
step 3, constructing a line-segment context descriptor for each feature point in the image, and clustering the feature points to form feature point classes;
step 4, matching the feature point classes by comparing the similarity of the class center description vectors of the feature point classes in the two images;
step 5, constructing a histogram descriptor of the phase congruency maximum response direction for each feature point in the two matched groups of feature point classes;
step 6, for each pair of feature points in the two matched groups of feature point classes, calculating the similarity of their feature descriptors and performing bidirectional feature matching to obtain the final image matching result.
2. The two-stage multimodal image matching method based on point-line feature description according to claim 1, characterized in that in said step 1, the specific process of calculating the minimum and maximum moments of phase congruency is as follows:
step 1-1, convolving the input image with Log-Gabor filters and calculating the phase congruency of each pixel of the input image in each orientation based on the Kovesi algorithm;
step 1-2, calculating the variables a, b and c at each pixel of the image, and from them the minimum and maximum moments of phase congruency at each pixel.
3. The two-stage multimodal image matching method based on point-line feature description according to claim 2, characterized in that in said step 2, the Harris-Affine algorithm is adopted to extract feature points from the minimum moment map, and the LSD algorithm is adopted to extract straight line segments from the maximum moment map.
4. The two-stage multimodal image matching method based on point-line feature description according to claim 3, characterized in that in said step 3, the specific process of calculating the line-segment context descriptors and clustering the feature points is as follows:
step 3-1, denoting the feature point set and straight line segment set obtained from the original input image in step 2 as $C=\{c_i\}_{i=1}^{n_c}$ and $L=\{l_j\}_{j=1}^{n_l}$, respectively, where $n_c$ and $n_l$ are the numbers of extracted feature points and straight line segments; for any feature point $c_i$ in the feature point set, the position of a straight line segment $l_j$ relative to the feature point is denoted $w_{ij}=(\alpha_{ij},\beta_{ij})$, where $\alpha_{ij}$ is the angle between the perpendicular from feature point $c_i$ to line segment $l_j$ and the main direction of $c_i$, and $\beta_{ij}$ is the angle between line segment $l_j$ and the main direction of $c_i$;
step 3-2, calculating the description vector $w_{ij}$ for all straight line segments $l_j$ ($j=1,2,\ldots,n_l$), so that the positional relationship between feature point $c_i$ and all line segments is $W_i=\{w_{ij}\mid j=1,2,\ldots,n_l\}$;
step 3-3, uniformly dividing the value range $[0,2\pi)$ of $\alpha$ into 8 angle bins and the value range $[0,\pi)$ of $\beta$ into 4 angle bins, and calculating the histogram of $W_i$, from which the line-segment context descriptor of feature point $c_i$ is obtained;
step 3-4, calculating the line-segment context descriptors of all feature points in the reference image and the image to be matched as in step 3-3, and clustering the feature points in each of the two images with the k-means algorithm.
5. The two-stage multimodal image matching method based on point-line feature description according to claim 4, characterized in that the initial number of cluster centers K set for the k-means algorithm ranges from 10 to 20.
6. The two-stage multimodal image matching method based on point-line feature description according to claim 4, characterized in that in said step 4, the specific process of feature point class matching is as follows:
step 4-1, calculating the class center description vector of each feature point class in the reference image and the image to be matched;
step 4-2, for each pair of feature point classes in the two images, calculating the similarity of the class center description vectors, and determining the correspondence between feature point classes by bidirectional matching as $\{(P_f,Q_f)\}$, where $P_f$ and $Q_f$ are the $f$-th pair of matched feature point classes in the two images.
7. The two-stage multimodal image matching method based on point-line feature description according to claim 6, characterized in that in said step 5, the specific process of calculating the phase congruency maximum-response-direction histogram descriptor is as follows:
step 5-1, for a feature point $c_i$ in the image, obtaining its main direction $\phi_i$ and scale $s_i$ from step 2; in the corresponding maximum moment map, selecting a $6As_i \times 6As_i$ square region around $c_i$ along its main direction and uniformly dividing the region into $6 \times 6$ cells with a grid;
step 5-2, in each cell, accumulating the phase congruency amplitudes in the different orientations and recording the maximum accumulated value and its corresponding direction $(E_t,\theta_t)$, $t=1,2,\ldots,36$, from which the phase congruency maximum-response-direction histogram descriptor of feature point $c_i$ is obtained.
8. The two-stage multimodal image matching method based on point-line feature description according to claim 7, characterized in that the value of the parameter A ranges from 5 to 10.
9. The two-stage multimodal image matching method based on point-line feature description according to claim 7, characterized in that in said step 6, the specific process of feature point matching is as follows:
step 6-1, calculating the phase congruency maximum-response-direction histogram descriptors of all feature points in the reference image and the image to be matched as in step 5;
step 6-2, calculating the similarity of a pair of feature points in each pair of corresponding feature point classes matched in step 4;
step 6-3, calculating the similarity of all feature point pairs within the corresponding feature point classes of the two images as in step 6-2, screening out the mutually matched feature point pairs with a bidirectional matching method, then calculating the transformation matrix between the reference image and the image to be matched from these pairs and performing a consistency check to remove point pairs with large matching errors.
CN202311388180.0A 2023-10-24 2023-10-24 Two-stage multimodal image matching method based on point-line feature description Pending CN117253063A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311388180.0A CN117253063A (en) 2023-10-24 2023-10-24 Two-stage multimode image matching method based on dotted line feature description

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311388180.0A CN117253063A (en) 2023-10-24 2023-10-24 Two-stage multimode image matching method based on dotted line feature description

Publications (1)

Publication Number Publication Date
CN117253063A true CN117253063A (en) 2023-12-19

Family

ID=89135101

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311388180.0A Pending CN117253063A (en) 2023-10-24 2023-10-24 Two-stage multimode image matching method based on dotted line feature description

Country Status (1)

Country Link
CN (1) CN117253063A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792788A (en) * 2021-09-14 2021-12-14 安徽工业大学 Infrared and visible light image matching method based on multi-feature similarity fusion
CN113792788B (en) * 2021-09-14 2024-04-16 安徽工业大学 Infrared and visible light image matching method based on multi-feature similarity fusion

Similar Documents

Publication Publication Date Title
CN110097093B (en) Method for accurately matching heterogeneous images
KR100986809B1 (en) The Method of Automatic Geometric Correction for Multi-resolution Satellite Images using Scale Invariant Feature Transform
CN104200461B (en) The remote sensing image registration method of block and sift features is selected based on mutual information image
CN106981077B (en) Infrared image and visible light image registration method based on DCE and LSS
Bouchiha et al. Automatic remote-sensing image registration using SURF
CN109919960B (en) Image continuous edge detection method based on multi-scale Gabor filter
CN110569861B (en) Image matching positioning method based on point feature and contour feature fusion
CN103400388A (en) Method for eliminating Brisk (binary robust invariant scale keypoint) error matching point pair by utilizing RANSAC (random sampling consensus)
CN103400384A (en) Large viewing angle image matching method capable of combining region matching and point matching
Urban et al. Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds
CN102915540A (en) Image matching method based on improved Harris-Laplace and scale invariant feature transform (SIFT) descriptor
CN117253063A (en) Two-stage multimodal image matching method based on point-line feature description
CN109523585A (en) A kind of multi-source Remote Sensing Images feature matching method based on direction phase equalization
CN107895375A (en) The complicated Road extracting method of view-based access control model multiple features
CN102938147A (en) Low-altitude unmanned aerial vehicle vision positioning method based on rapid robust feature
CN110222661B (en) Feature extraction method for moving target identification and tracking
CN107862319B (en) Heterogeneous high-light optical image matching error eliminating method based on neighborhood voting
CN103136525A (en) Hetero-type expanded goal high-accuracy positioning method with generalized Hough transposition
CN108229500A (en) A kind of SIFT Mismatching point scalping methods based on Function Fitting
CN103077528A (en) Rapid image matching method based on DCCD (Digital Current Coupling)-Laplace and SIFT (Scale Invariant Feature Transform) descriptors
CN110246165B (en) Method and system for improving registration speed of visible light image and SAR image
CN105654479A (en) Multispectral image registering method and multispectral image registering device
CN113095385B (en) Multimode image matching method based on global and local feature description
CN109766850B (en) Fingerprint image matching method based on feature fusion
CN103186899B (en) A kind of Feature Points Extraction of affine Scale invariant

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination