CN113792788B - Infrared and visible light image matching method based on multi-feature similarity fusion


Info

Publication number
CN113792788B
CN113792788B
Authority
CN
China
Prior art keywords
feature
points
point
matching
feature point
Prior art date
Legal status
Active
Application number
CN202111074441.2A
Other languages
Chinese (zh)
Other versions
CN113792788A (en)
Inventor
王正兵 (Wang Zhengbing)
聂建华 (Nie Jianhua)
冯旭刚 (Feng Xugang)
吴玉秀 (Wu Yuxiu)
Current Assignee
Anhui University of Technology AHUT
Original Assignee
Anhui University of Technology AHUT
Priority date
Filing date
Publication date
Application filed by Anhui University of Technology AHUT
Priority to CN202111074441.2A
Publication of CN113792788A
Application granted
Publication of CN113792788B
Active legal status (current)
Anticipated expiration legal status


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an infrared and visible light image matching method based on multi-feature similarity fusion, which comprises the following steps: carrying out contour extraction on given infrared and visible light images, and detecting salient corner points on the contours as feature points; calculating the main direction of each feature point by using the contour information to its left and right; for each feature point, determining the feature description parameters of the point and constructing its PIIFD feature descriptor; constructing a global context feature descriptor according to the positional relation between the point and the other feature points; and, for each pair of feature points in the two images, calculating the similarity of the two feature descriptors, carrying out weighted fusion according to the position distribution of the feature points, realizing feature matching by comparing the similarities of the feature point pairs, and eliminating abnormal matching point pairs. The invention effectively addresses the large differences in shooting viewing angle and imaging resolution between infrared and visible light images in practical applications, which make feature points difficult to describe and match, and improves the accuracy of image feature point matching.

Description

Infrared and visible light image matching method based on multi-feature similarity fusion
Technical Field
The invention belongs to the technical field of image feature extraction and matching, and particularly relates to an infrared and visible light image matching method based on multi-feature similarity fusion.
Background
Because infrared and visible light images have different imaging mechanisms, the gray-scale difference between corresponding regions of the two images is large, and stable feature descriptors are difficult to extract for image matching. In practical applications, the shooting viewing angle, imaging resolution and the like also differ considerably between infrared and visible light images. These characteristics pose serious challenges for infrared and visible light image matching.
Infrared and visible light image matching algorithms are mainly classified into region-based matching methods and feature-based matching methods. Compared with region-based methods, feature-based methods are more computationally efficient and more robust to rotation and scale change between images, so they have been widely studied and applied in recent years.
Feature-based image matching algorithms have been developed for decades. The most representative is the SIFT algorithm proposed by Lowe (D. G. Lowe, Distinctive Image Features from Scale-Invariant Keypoints, International Journal of Computer Vision 60(2) (2004) 91-110); although it performs poorly in infrared and visible light image matching applications, it provides the basic research framework for subsequent feature-based matching algorithms. Building on it, and considering the gray-scale differences between infrared and visible light images, Chen et al. proposed the partial intensity invariant feature descriptor (J. Chen, J. Tian, N. Lee, J. Zheng, R. T. Smith, A. F. Laine, A partial intensity invariant feature descriptor for multimodal retinal image registration, IEEE Transactions on Biomedical Engineering 57(7) (2010) 1707-1718) to overcome the influence of gray-scale differences between heterologous images on feature description; the method is widely applied to multimodal retinal image matching. Aguilera et al. proposed the edge oriented histogram descriptor (C. Aguilera, F. Barrera, F. Lumbreras, A. D. Sappa, R. Toledo, Multispectral image feature points, Sensors 12(9) (2012) 12661-12672), which describes a feature point using the edge pixel information in its neighborhood. Li et al. proposed the radiation-invariant feature transform (J. Li, Q. Hu, M. Ai, RIFT: Multi-modal image matching based on radiation-invariant feature transform, arXiv preprint arXiv:1804.09493 (2018)), which extracts key points in the phase congruency map and constructs a maximum index map for feature description.
Most of the above existing feature-based matching methods construct feature descriptors from the local information of feature points. Because of the different imaging mechanisms, the local information of infrared and visible light images may differ greatly, so the descriptors constructed by existing methods are strongly affected by these local differences. Moreover, in practical applications infrared and visible light images often have different imaging resolutions, while existing feature matching methods lack sufficient robustness to the scale change between them, so the accuracy of image matching is not high.
A search revealed Chinese patent application No. ZL202110344953.X, filed 2021.03.31 and entitled "A multimode image matching method based on global and local feature descriptions". That application comprises the following steps: for a reference image and an image to be matched, detecting the feature points in each image and determining their main directions; for each feature point, constructing a PIIFD descriptor and a global context feature descriptor; for each pair of feature points, calculating the similarity of the two feature descriptors, carrying out weighted fusion, and carrying out preliminary matching by comparing the similarities of the feature point pairs; and, for the preliminary matching result, extracting the local context feature vectors of the feature points and comparing them to eliminate abnormal matching point pairs, obtaining the final matching result. That application can effectively address the large local gray-scale differences of multimode images, which make feature points difficult to describe and match, and improves the accuracy of multimode image feature point matching. However, it adopts the Harris algorithm to detect the feature points in the infrared and visible light images, and the repeatability of the points detected in the two images is not high; it does not consider the scale change between infrared and visible light images in practical applications when constructing the feature descriptors; and it uses a fixed weighting parameter when weighting the feature similarities. These shortcomings prevent it from matching infrared and visible light images accurately.
Disclosure of Invention
1. Technical problem to be solved by the invention
The invention aims to overcome the defects in the prior art, and provides an infrared and visible light image matching method based on multi-feature similarity fusion, so as to solve the problem that image feature points are difficult to describe and match in an infrared and visible light image matching task.
2. Technical solution
In order to achieve the above purpose, the technical scheme provided by the invention is as follows:
the invention provides an infrared and visible light image matching method based on multi-feature similarity fusion, which comprises the following steps:
Step 1, respectively carrying out contour detection on a reference image and an image to be matched, and extracting salient corner points on the contours as feature points;
Step 2, for each feature point in the two images, calculating the main direction of the feature point by using the coordinates of the point and of its left and right contour points;
Step 3, for each feature point in the two images, determining the feature description parameters of the point and constructing its PIIFD feature descriptor;
Step 4, for each feature point in the two images, constructing a global context feature descriptor according to the positional relation between the feature point and the other feature points;
Step 5, for each pair of feature points in the two images, calculating the similarity of the two feature descriptors, carrying out weighted fusion according to the position distribution of the feature points, realizing feature matching by comparing the similarities of the feature point pairs, and eliminating abnormal matching point pairs.
3. Advantageous effects
Compared with the prior art, the technical scheme provided by the invention has the following remarkable effects:
(1) Most existing feature-based image matching methods construct feature descriptors from the local information of feature points, and descriptors constructed this way can cause feature mismatching when the local information of the two images differs. To address this, the invention provides an infrared and visible light image matching method based on multi-feature similarity fusion, which extracts both the PIIFD feature descriptor and the global context feature descriptor of each feature point and realizes feature point matching by weighted fusion of their similarities, effectively overcoming the influence of local gray-scale differences on feature description. In addition, the similarity weighted fusion method is designed according to the position distribution of the feature points, which greatly improves the robustness of the matching algorithm to scale change between infrared and visible light images.
(2) According to the infrared and visible light image matching method based on multi-feature similarity fusion, the PIIFD feature descriptors and the global context feature descriptors are corrected by using the calculated feature description parameters, so that the accuracy of feature descriptor calculation is improved, and the accuracy of image matching is further improved.
(3) In the infrared and visible light image matching method based on multi-feature similarity fusion of the invention, the designed similarity weighted fusion method calculates the similarity weighting coefficient of every feature point pair in the same way, so the computation can be carried out repeatedly by a computer without manual adjustment; the calculation method is simple, the time required for tuning is saved, and operation efficiency is greatly improved.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention for matching infrared and visible images;
FIG. 2 is a schematic diagram of a synthetic vector calculation method of the present invention;
FIG. 3 is a schematic diagram of the construction of a global context feature descriptor of the present invention.
Detailed Description
Most existing feature-based image matching methods construct feature descriptors from the local information of feature points. Because of the different imaging mechanisms, the local information of infrared and visible light images can differ greatly, so the descriptors constructed by existing methods are strongly affected by these differences; moreover, infrared and visible light images often have different imaging resolutions in practice, and existing feature matching methods lack sufficient robustness to the scale change between them, so the accuracy of image matching is not high. In view of these problems, the invention extracts both the PIIFD feature descriptors and the global context feature descriptors of the feature points and realizes feature point matching by weighted fusion of their similarities, effectively overcoming the influence of local gray-scale differences on feature description. In addition, the similarity weighted fusion method is designed according to the position distribution of the feature points, which greatly improves the robustness of the matching algorithm to scale change between infrared and visible light images.
For a further understanding of the present invention, the present invention will be described in detail with reference to the drawings and examples.
Examples
Referring to fig. 1-3, the method for matching infrared and visible light images based on multi-feature similarity fusion in this embodiment includes the following steps:
Step 1, respectively carrying out contour detection on the reference image and the image to be matched, and extracting salient corner points on the contours as feature points:
Step 1-1, carrying out edge detection on the image by adopting the Canny algorithm, and tracking the detected edge pixels to realize contour detection; the detected contour set is recorded as:
Γ = {Γ_s | s = 1, 2, …, N_s} (1)
wherein Γ_s is the s-th contour in the set, N_s is the number of contours in the set, and p_i^s is the i-th contour point on the contour Γ_s.
Step 1-2, extracting salient corner points on the contours as feature points by using the curvature scale space algorithm.
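As an illustration of steps 1-1 and 1-2, the following is a minimal Python sketch assuming OpenCV and NumPy. It stands in for the full curvature scale space detector with a simple discrete-curvature threshold; the function name and the parameters k and curv_thresh are illustrative choices, not values from the patent.

```python
import cv2
import numpy as np

def detect_contour_corners(img, k=5, curv_thresh=0.08):
    # Step 1-1: Canny edges, then contour tracking.
    edges = cv2.Canny(img, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    feature_pts = []
    for cont in contours:
        pts = cont[:, 0, :].astype(np.float64)      # (n, 2) contour points
        if len(pts) < 2 * k + 1:
            continue
        # Step 1-2 (simplified): discrete curvature from k-step neighbours;
        # np.roll treats the contour as closed, an approximation for open ones.
        prev, nxt = np.roll(pts, k, axis=0), np.roll(pts, -k, axis=0)
        d1, d2 = nxt - pts, pts - prev
        cross = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]
        norm = np.linalg.norm(d1, axis=1) * np.linalg.norm(d2, axis=1) + 1e-9
        curvature = np.abs(cross) / norm
        for i in np.where(curvature > curv_thresh)[0]:
            feature_pts.append((pts, int(i)))        # contour and corner index
    return feature_pts
```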
Step 2, for each feature point in the two images, calculating the main direction of the feature point by using the coordinates of the point and of its left and right contour points.
The specific process of calculating the main direction of a feature point is as follows:
Step 2-1, let p_{i-1}^s, p_i^s and p_{i+1}^s be three consecutive feature points on the contour Γ_s. To calculate the main direction of the feature point p_i^s, first calculate the composite vectors formed by the feature point and its left and right contour points as follows:
v_t = (p_{i-t}^s - p_i^s) + (p_{i+t}^s - p_i^s), t = 1, 2, …, N_t (2)
where N_t = min(d_l, d_r), d_l is the number of contour points passed from the feature point p_i^s to its left feature point p_{i-1}^s, d_r is the number of contour points passed from the feature point p_i^s to its right feature point p_{i+1}^s, v_t is the composite vector formed by the t-th pair of left and right contour points, and (x_{i-t}^s, y_{i-t}^s) and (x_{i+t}^s, y_{i+t}^s) are the coordinates of the contour points p_{i-t}^s and p_{i+t}^s, respectively.
Step 2-2, accumulating the composite vectors obtained in step 2-1 to obtain:
v_sk = Σ_{t=1}^{N_t} v_t = (x_sk, y_sk) (3)
where x_sk and y_sk are the element values in the vector v_sk.
Step 2-3, setting the direction of the vector v_sk as the main direction of the feature point p_i^s, with value:
θ_i^s = arctan(y_sk / x_sk) (4)
wherein θ_i^s is the main direction of the feature point p_i^s.
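A sketch of steps 2-1 to 2-3 under the same assumptions (NumPy available). The composite-vector form used here, the sum of the two vectors from the feature point to the t-th left and right contour points, is our reading of the description, and arctan2 replaces arctan so that the quadrant of the accumulated vector is resolved.

```python
import numpy as np

def main_direction(contour_pts, i, d_l, d_r):
    # contour_pts: (n, 2) array of contour point coordinates on Γ_s
    # d_l, d_r: contour points passed to the left/right neighbouring feature points
    n_t = min(d_l, d_r)                  # N_t = min(d_l, d_r)
    p_i = contour_pts[i]
    v_sk = np.zeros(2)
    for t in range(1, n_t + 1):
        # assumed composite vector of the t-th pair of contour points, formula (2)
        v_t = (contour_pts[i - t] - p_i) + (contour_pts[i + t] - p_i)
        v_sk += v_t                      # accumulation, formula (3)
    x_sk, y_sk = v_sk
    return np.arctan2(y_sk, x_sk)        # main direction θ_i^s, formula (4)
```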
Step 3, for each feature point in the two images, determining the feature description parameters of the point, and constructing PIIFD feature descriptors thereof:
Step 3-1, marking the feature point set extracted from the image in step 1 as {c_1, c_2, …, c_N}, where N is the number of all the feature points detected in the image. For any feature point c_i, selecting the N_n feature points closest to it as its nearest-neighbour set; the value of the parameter N_n is set between 5 and 20, and actual tests determined that N_n = 10 gives the best matching effect. Noting the Euclidean distance from each point of the nearest-neighbour set to c_i as d_ij, the average distance can be calculated as:
d̄_i = (1/N_n) Σ_{j=1}^{N_n} d_ij (5)
Step 3-2, for the feature point c_i, selecting a rectangular region around the feature point in the image, and constructing the PIIFD feature descriptor of c_i with the main direction of the feature point as the direction of the rectangular region.
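A sketch of step 3-1, again assuming NumPy. It returns the N_n nearest neighbours of c_i and their mean distance d̄_i; using d̄_i to scale the rectangular PIIFD region of step 3-2 is our assumption about how the feature description parameter corrects the descriptor.

```python
import numpy as np

def description_params(points, i, n_n=10):
    # points: (N, 2) array of feature point coordinates {c_1, ..., c_N}
    d = np.linalg.norm(points - points[i], axis=1)  # Euclidean distances to c_i
    d[i] = np.inf                                    # exclude c_i itself
    nn = np.argsort(d)[:n_n]                         # N_n nearest neighbours
    return nn, float(d[nn].mean())                   # neighbour indices, mean distance d̄_i
```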
Step 4, for each feature point in the two images, constructing a global context feature descriptor according to the positional relation between the feature point and the other feature points:
Step 4-1, for any feature point c_j in the feature point set {c_1, c_2, …, c_N}, the relative position of c_j with respect to the feature point c_i may be represented as w_ij = (α_ij, β_ij), where α_ij is the angle between the vector from c_i to c_j and the main direction of c_i, and β_ij is the angle between the main direction of c_j and the main direction of c_i. The description vector w_ij is calculated for each feature point c_j in the feature point set (j = 1, 2, …, N; j ≠ i).
Step 4-2, uniformly dividing the value range [0, 2π] of α and β into 8 angle intervals, thereby calculating the global context feature description histogram of the feature point c_i as follows:
h_i(k) = Σ_{j≠i} w_j · 1[w_ij ∈ bin(k)], k = 1, 2, …, K (6)
wherein h_i(k) is the k-th element value in the histogram, bin(k) is the k-th angle interval, and K is the histogram dimension; w_j is the weight of the feature point c_j in the feature description, and its value decreases with the Euclidean distance d_ij between the feature point c_j and the feature point c_i.
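A sketch of steps 4-1 and 4-2 assuming NumPy. The exponential distance weight w_j = exp(-d_ij / d_mean) is an assumption; the patent states only that w_j is computed from the Euclidean distance d_ij.

```python
import numpy as np

def global_context_descriptor(points, dirs, i, d_mean, n_bins=8):
    # points: (N, 2) feature point coordinates; dirs: (N,) main directions in radians
    hist = np.zeros((n_bins, n_bins))
    width = 2 * np.pi / n_bins                                     # 8 uniform angle intervals
    for j in range(len(points)):
        if j == i:
            continue
        v = points[j] - points[i]
        alpha = (np.arctan2(v[1], v[0]) - dirs[i]) % (2 * np.pi)   # α_ij
        beta = (dirs[j] - dirs[i]) % (2 * np.pi)                   # β_ij
        a_bin = min(int(alpha / width), n_bins - 1)
        b_bin = min(int(beta / width), n_bins - 1)
        d_ij = np.linalg.norm(v)
        hist[a_bin, b_bin] += np.exp(-d_ij / d_mean)               # assumed weight w_j
    return hist.ravel()                                            # K = 64-dimensional histogram
```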
In this embodiment, the PIIFD feature descriptors and the global context feature descriptors are corrected by using the calculated feature description parameters, so that the accuracy of feature descriptor calculation is improved, and the accuracy of image matching is further improved.
Step 5, for each pair of feature points in the two images, calculating the similarity of the two feature descriptors, carrying out weighted fusion according to the position distribution of the feature points, realizing feature matching by comparing the similarities of the feature point pairs, and eliminating abnormal matching point pairs.
The specific process of feature point matching is as follows. First, preliminary feature point matching is carried out on the two images:
Step 5-1, for a pair of feature points in the two images, calculating the similarity of their feature descriptors by formula (8), wherein d_1 and d_2 are the two feature descriptors being compared; substituting the PIIFD feature descriptors of the pair of feature points into formula (8) gives the local similarity, denoted sim_l, and substituting the global context feature descriptors gives the global similarity, denoted sim_g.
Step 5-2, fusing the local similarity and the global similarity of the feature point pair as follows:
S = (1 - γ)·sim_l + γ·sim_g (9)
where S is the fused similarity of the feature point pair and γ is the balance parameter between the local similarity and the global similarity, whose value is calculated by formula (10), in which (x_1, y_1) and (x_2, y_2) are the coordinates of the pair of feature points in the reference image and the image to be matched, respectively, and (w_1, h_1) and (w_2, h_2) represent the sizes of the two images.
Step 5-3, calculating the similarities of all the feature point pairs between the reference image and the image to be matched according to steps 5-1 and 5-2, and screening out the corresponding matched feature point pairs in the two images by a bidirectional matching method, realizing preliminary matching of the feature points.
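A sketch of steps 5-1 to 5-3 assuming NumPy. Cosine similarity stands in for the unspecified formula (8), and gamma is treated as a precomputed per-pair balance parameter, since formula (10) is not reproduced above.

```python
import numpy as np

def fused_similarity(piifd1, piifd2, gc1, gc2, gamma):
    # Step 5-1: descriptor similarity (cosine similarity assumed for formula (8)).
    cos = lambda a, b: float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    sim_l = cos(piifd1, piifd2)      # local similarity from PIIFD descriptors
    sim_g = cos(gc1, gc2)            # global similarity from context descriptors
    # Step 5-2: weighted fusion, formula (9).
    return (1 - gamma) * sim_l + gamma * sim_g

def bidirectional_match(S):
    # Step 5-3: keep pair (i, j) only if each point is the other's best match
    # in the fused-similarity matrix S of shape (N_ref, N_mov).
    best_j = S.argmax(axis=1)
    best_i = S.argmax(axis=0)
    return [(i, j) for i, j in enumerate(best_j) if best_i[j] == i]
```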
Abnormal matching point pairs are then eliminated to obtain the final matching result:
Step 5-4, the transformation between the reference image and the image to be matched satisfies an affine transformation model; according to the preliminary matching result of step 5-3, a transformation matrix between the images is calculated with the random sample consensus (RANSAC) algorithm, and the feature point pairs in the preliminary matching result that are inconsistent with the transformation described by the matrix are removed, yielding the final matching result.
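Step 5-4 maps directly onto OpenCV's RANSAC-based affine estimator; a minimal sketch follows, with the 3.0-pixel reprojection threshold as an illustrative choice.

```python
import cv2
import numpy as np

def remove_outliers(pts_mov, pts_ref):
    # Fit an affine model with RANSAC and keep only consistent pairs (step 5-4).
    M, inliers = cv2.estimateAffine2D(
        np.float32(pts_mov), np.float32(pts_ref),
        method=cv2.RANSAC, ransacReprojThreshold=3.0)
    mask = inliers.ravel().astype(bool)
    return M, pts_mov[mask], pts_ref[mask]
```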
In the similarity weighted fusion method designed in this embodiment, the similarity weighting coefficient of every feature point pair is calculated in the same way, so the computation can be carried out repeatedly by a computer without manual adjustment; the calculation method is simple, the time required for tuning is saved, and operation efficiency is greatly improved.
According to the infrared and visible light image matching method based on multi-feature similarity fusion, the PIIFD feature descriptors and the global context feature descriptors of the feature points are extracted at the same time, and feature point matching is achieved by carrying out weighted fusion on the similarity, so that the influence of local gray level differences of images on feature description is effectively overcome. In addition, a similarity weighted fusion method is designed according to the position distribution characteristics of the feature points, so that the robustness of a matching algorithm to the scale change between the infrared and visible light images is greatly improved.
The invention and its embodiments have been described above by way of illustration rather than limitation, and the actual structure is not limited to what is shown in the accompanying drawings. Therefore, structural modes and embodiments similar to the technical scheme, designed by one of ordinary skill in the art informed by this disclosure without creative effort and without departing from the gist of the invention, shall fall within the protection scope of the invention.

Claims (7)

1. The infrared and visible light image matching method based on multi-feature similarity fusion is characterized by comprising the following steps:
step 1, respectively carrying out contour detection on a reference image and an image to be matched, and extracting salient corner points on the contour as characteristic points;
Step 2, for each feature point in the two images, calculating the main direction of the feature point by using the coordinates of the point and the left and right contour points of the point;
Step 3, for each feature point in the two images, determining feature description parameters of the point, and constructing PIIFD feature descriptors of the feature points;
Step 4, for each feature point in the two images, constructing a global context feature descriptor according to the positional relation between the feature point and the other feature points;
Step 5, calculating the similarity of two feature descriptors for each pair of feature points in the two images, carrying out weighted fusion according to the position distribution characteristics of the feature points, realizing feature matching by comparing the similarity of each pair of feature points, and eliminating abnormal matching point pairs;
In step 3, the steps of calculating the PIIFD feature descriptors of the feature points are:
Step 3-1, marking the feature point set extracted in step 1 as {c_1, c_2, …, c_N}, wherein N is the number of all the feature points detected in the image; for any feature point c_i, selecting the N_n feature points closest to it as its nearest-neighbour set, noting the Euclidean distance from each point of the set to c_i as d_ij, and calculating the average distance as:
d̄_i = (1/N_n) Σ_{j=1}^{N_n} d_ij (5)
Step 3-2, selecting a rectangular region around the feature point c_i in the image, and constructing the PIIFD feature descriptor of the feature point with the main direction of c_i as the direction of the rectangular region;
in step 4, the steps of calculating the global context feature descriptor of a feature point are:
Step 4-1, for any feature point c_j in the feature point set {c_1, c_2, …, c_N}, the relative position of c_j with respect to the feature point c_i is w_ij = (α_ij, β_ij), wherein α_ij is the angle between the vector from c_i to c_j and the main direction of c_i, and β_ij is the angle between the main direction of c_j and the main direction of c_i; the description vector w_ij is calculated for each feature point c_j in the feature point set, j = 1, 2, …, N; j ≠ i;
Step 4-2, uniformly dividing the value range [0, 2π] of α and β into 8 angle intervals, thereby calculating the global context feature description histogram of the feature point c_i as follows:
h_i(k) = Σ_{j≠i} w_j · 1[w_ij ∈ bin(k)], k = 1, 2, …, K (6)
wherein h_i(k) is the k-th element value in the histogram, bin(k) is the k-th angle interval, and K is the dimension of the histogram; w_j is the weight of the feature point c_j in the feature description, and its value decreases with the Euclidean distance d_ij between the feature point c_j and the feature point c_i.
2. The method for matching infrared and visible light images based on multi-feature similarity fusion according to claim 1, wherein in the step 1, the specific steps of extracting feature points are as follows:
Step 1-1, carrying out edge detection on the image by adopting the Canny algorithm, and tracking the detected edge pixels to realize contour detection, wherein the detected contour set is:
Γ = {Γ_s | s = 1, 2, …, N_s} (1)
wherein Γ_s is the s-th contour in the set, N_s is the number of contours in the set, and p_i^s is the i-th contour point on the contour Γ_s;
Step 1-2, extracting salient corner points on the contours as feature points by using the curvature scale space algorithm.
3. The method for matching infrared and visible light images based on multi-feature similarity fusion according to claim 2, wherein in step 2, the steps of calculating the main direction of a feature point are:
Step 2-1, setting p_{i-1}^s, p_i^s and p_{i+1}^s as three consecutive feature points on the contour Γ_s, and calculating the composite vectors formed by the feature point p_i^s and its left and right contour points by using formula (2):
v_t = (p_{i-t}^s - p_i^s) + (p_{i+t}^s - p_i^s), t = 1, 2, …, N_t (2)
wherein N_t = min(d_l, d_r), d_l is the number of contour points passed from the feature point p_i^s to its left feature point p_{i-1}^s, d_r is the number of contour points passed from the feature point p_i^s to its right feature point p_{i+1}^s, v_t is the composite vector formed by the t-th pair of left and right contour points, and (x_{i-t}^s, y_{i-t}^s) and (x_{i+t}^s, y_{i+t}^s) are the coordinates of the contour points p_{i-t}^s and p_{i+t}^s, respectively;
Step 2-2, accumulating the composite vectors obtained in step 2-1 to obtain:
v_sk = Σ_{t=1}^{N_t} v_t = (x_sk, y_sk) (3)
wherein x_sk and y_sk are the element values in the vector v_sk;
Step 2-3, setting the direction of the vector v_sk as the main direction of the feature point p_i^s, with value:
θ_i^s = arctan(y_sk / x_sk) (4)
wherein θ_i^s is the main direction of the feature point p_i^s.
4. The method for matching infrared and visible light images based on multi-feature similarity fusion according to claim 3, wherein in step 5, preliminary feature point matching is carried out on the two images by the following steps:
Step 5-1, for a pair of feature points in the two images, calculating the similarity of their feature descriptors by using formula (8), wherein d_1 and d_2 are the two feature descriptors being compared; substituting the PIIFD feature descriptors of the pair of feature points into formula (8) gives the local similarity sim_l, and substituting the global context feature descriptors gives the global similarity sim_g;
Step 5-2, fusing the local similarity and the global similarity of the feature point pair:
S = (1 - γ)·sim_l + γ·sim_g (9)
wherein S is the fused similarity of the feature point pair, and γ is the balance parameter between the local similarity and the global similarity, whose value is calculated by formula (10), in which (x_1, y_1) and (x_2, y_2) are the coordinates of the pair of feature points in the reference image and the image to be matched, respectively, and (w_1, h_1) and (w_2, h_2) represent the sizes of the two images;
Step 5-3, calculating the similarities of all the feature point pairs between the reference image and the image to be matched according to steps 5-1 and 5-2, and screening out the corresponding matched feature point pairs in the two images by a bidirectional matching method, realizing preliminary matching of the feature points.
5. The method for matching infrared and visible light images based on multi-feature similarity fusion according to claim 4, wherein in step 5, after the preliminary matching, abnormal matching point pairs are removed to obtain the final matching result:
Step 5-4, the transformation between the reference image and the image to be matched satisfies an affine transformation model; according to the preliminary matching result of step 5-3, a transformation matrix between the images is calculated with the random sample consensus algorithm, and the feature point pairs in the preliminary matching result that are inconsistent with the transformation described by the matrix are removed, obtaining the final matching result.
6. The method for matching infrared and visible light images based on multi-feature similarity fusion according to claim 4 or 5, wherein in step 3-1, the value range of the parameter N_n is 5 to 20.
7. The method for matching infrared and visible light images based on multi-feature similarity fusion according to claim 6, wherein in step 3-1, the value of the parameter N_n is 10.
CN202111074441.2A 2021-09-14 2021-09-14 Infrared and visible light image matching method based on multi-feature similarity fusion Active CN113792788B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111074441.2A CN113792788B (en) 2021-09-14 2021-09-14 Infrared and visible light image matching method based on multi-feature similarity fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111074441.2A CN113792788B (en) 2021-09-14 2021-09-14 Infrared and visible light image matching method based on multi-feature similarity fusion

Publications (2)

Publication Number Publication Date
CN113792788A (en) 2021-12-14
CN113792788B (en) 2024-04-16

Family

ID=78880138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111074441.2A Active CN113792788B (en) 2021-09-14 2021-09-14 Infrared and visible light image matching method based on multi-feature similarity fusion

Country Status (1)

Country Link
CN (1) CN113792788B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678733A (en) * 2014-11-21 2016-06-15 中国科学院沈阳自动化研究所 Infrared and visible-light different-source image matching method based on context of line segments
CN109285110A (en) * 2018-09-13 2019-01-29 武汉大学 The infrared visible light image registration method and system with transformation are matched based on robust
CN110009670A (en) * 2019-03-28 2019-07-12 上海交通大学 The heterologous method for registering images described based on FAST feature extraction and PIIFD feature
CN110097093A (en) * 2019-04-15 2019-08-06 河海大学 A kind of heterologous accurate matching of image method
CN113095385A (en) * 2021-03-31 2021-07-09 安徽工业大学 Multimode image matching method based on global and local feature description
CN113095384A (en) * 2021-03-31 2021-07-09 安徽工业大学 Remote sensing image matching method based on context characteristics of straight line segments
CN117253063A (en) * 2023-10-24 2023-12-19 安徽工业大学 Two-stage multimode image matching method based on dotted line feature description

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Edge description and matching algorithm for feature points in infrared and visible light images; Zhu Yinghong; Li Junshan; Yang Wei; Zhang Tao; Zhu Yijuan; Journal of Computer-Aided Design & Computer Graphics; 2013-06-15 (No. 06); full text *

Also Published As

Publication number Publication date
CN113792788A (en) 2021-12-14

Similar Documents

Publication Publication Date Title
CN110097093B (en) Method for accurately matching heterogeneous images
CN112819094B (en) Target detection and identification method based on structural similarity measurement
CN112085772B (en) Remote sensing image registration method and device
CN103729654A (en) Image matching retrieval system on account of improving Scale Invariant Feature Transform (SIFT) algorithm
CN107423737A (en) The video quality diagnosing method that foreign matter blocks
CN110569861B (en) Image matching positioning method based on point feature and contour feature fusion
CN103606170B (en) Streetscape image feature based on colored Scale invariant detects and matching process
CN105718882A (en) Resolution adaptive feature extracting and fusing for pedestrian re-identification method
CN105160686B (en) A kind of low latitude various visual angles Remote Sensing Images Matching Method based on improvement SIFT operators
CN107066969A (en) A kind of face identification method
CN103065135A (en) License number matching algorithm based on digital image processing
Urban et al. Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds
CN103727930A (en) Edge-matching-based relative pose calibration method of laser range finder and camera
CN104200461A (en) Mutual information image selected block and sift (scale-invariant feature transform) characteristic based remote sensing image registration method
CN109376641B (en) Moving vehicle detection method based on unmanned aerial vehicle aerial video
CN107862319B (en) Heterogeneous high-light optical image matching error eliminating method based on neighborhood voting
CN107240130B (en) Remote sensing image registration method, device and system
CN109523585A (en) A kind of multi-source Remote Sensing Images feature matching method based on direction phase equalization
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images
CN113095385B (en) Multimode image matching method based on global and local feature description
CN105654479A (en) Multispectral image registering method and multispectral image registering device
CN117253063A (en) Two-stage multimode image matching method based on dotted line feature description
CN104700359A (en) Super-resolution reconstruction method of image sequence in different polar axis directions of image plane
CN105631860A (en) Local sorted orientation histogram descriptor-based image correspondence point extraction method
CN113792788B (en) Infrared and visible light image matching method based on multi-feature similarity fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant