CN113792788A - Infrared and visible light image matching method based on multi-feature similarity fusion - Google Patents

Infrared and visible light image matching method based on multi-feature similarity fusion

Info

Publication number
CN113792788A
CN113792788A (application CN202111074441.2A; granted as CN113792788B)
Authority
CN
China
Prior art keywords: feature, points, point, similarity, matching
Prior art date
Legal status
Granted
Application number
CN202111074441.2A
Other languages
Chinese (zh)
Other versions
CN113792788B (en)
Inventor
王正兵
聂建华
冯旭刚
吴玉秀
Current Assignee
Anhui University of Technology AHUT
Original Assignee
Anhui University of Technology AHUT
Application filed by Anhui University of Technology AHUT filed Critical Anhui University of Technology AHUT
Priority to CN202111074441.2A
Publication of CN113792788A
Application granted
Publication of CN113792788B
Current status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features


Abstract

The invention discloses an infrared and visible light image matching method based on multi-feature similarity fusion, which comprises the following steps: extracting the contours of given infrared and visible light images and detecting salient corner points on the contours as feature points; calculating the main direction of each feature point from the contour information on its left and right; for each feature point, determining its feature description parameters and constructing its PIIFD feature descriptor; constructing a global context feature descriptor from the positional relationship between the point and the other feature points; and, for each pair of feature points in the two images, calculating the similarities of the two feature descriptors, performing weighted fusion according to the position distribution characteristics of the feature points, realizing feature matching by comparing the similarity of each pair of feature points, and removing abnormal matching point pairs. The invention effectively overcomes the large differences in shooting viewing angle and imaging resolution between infrared and visible light images in practical applications, as well as the resulting difficulty of describing and matching feature points, and improves the accuracy of image feature point matching.

Description

Infrared and visible light image matching method based on multi-feature similarity fusion
Technical Field
The invention belongs to the technical field of image feature extraction and matching, and particularly relates to an infrared and visible light image matching method based on multi-feature similarity fusion.
Background
Because the imaging mechanisms of infrared and visible light images differ, the gray-level difference between corresponding regions of the two images is large, and stable feature descriptors are difficult to extract for image matching. In practical applications, the shooting viewing angle, imaging resolution and other conditions also differ considerably between the infrared and visible light images. These characteristics pose a serious challenge to infrared and visible image matching.
Infrared and visible image matching algorithms are mainly divided into region-based and feature-based methods. Compared with region-based methods, feature-based methods have higher computational efficiency and better robustness to rotation and scale change between images, and have therefore been widely studied and applied in recent years.
Feature-based image matching algorithms have been developed for decades; the most representative is the SIFT algorithm proposed by Lowe (D.G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision 60(2) (2004) 91-110), which, although it does not perform well in infrared and visible image matching applications, provides the basic research idea for subsequent feature-based image matching algorithms. On this basis, and in view of the gray-level differences between infrared and visible light images, Chen et al. proposed a partial intensity invariant feature descriptor (J. Chen, J. Tian, N. Lee, J. Zheng, R.T. Smith, A.F. Laine, A partial intensity invariant feature descriptor for multimodal retinal image registration, IEEE Transactions on Biomedical Engineering 57(7) (2010) 1707-1718) to overcome the influence of gray-level differences between heterogeneous images on feature description; it is widely applied in multimodal retinal image matching. Aguilera et al. proposed an edge-oriented histogram descriptor (C. Aguilera, F. Barrera, F. Lumbreras, A.D. Sappa, R. Toledo, Multispectral image feature points, Sensors 12(9) (2012) 12661-12672). Li et al. proposed the radiation-invariant feature transform algorithm (J. Li, Q. Hu, M. Ai, RIFT: Multi-modal image matching based on radiation-invariant feature transform, arXiv preprint arXiv:1804.09493 (2018)), which extracts key points in the phase congruency map and constructs a maximum index map for feature description.
Most existing feature-based matching methods construct feature descriptors from the local information of feature points. Because the imaging mechanisms differ, the local information of infrared and visible light images may differ greatly, so descriptors constructed by existing methods are strongly affected by these local differences. Moreover, in practical applications infrared and visible light images often have different imaging resolutions, while existing feature matching methods lack sufficient robustness to the scale change between them; as a result, image matching accuracy is not high.
A search found Chinese patent application No. ZL202110344953.X, filed 2021.03.31, titled "Multimode image matching method based on global and local feature description". That application comprises the following steps: detecting the feature points in the reference image and the image to be matched respectively and determining their main directions; for each feature point, constructing a PIIFD descriptor and a global context feature descriptor respectively; for each pair of feature points, calculating the similarities of the two feature descriptors, performing weighted fusion, and carrying out preliminary matching by comparing the feature point similarities; and, for the preliminary matching result, extracting local context feature vectors of the feature points and comparing them to eliminate abnormal matching point pairs, obtaining the final matching result. That application can effectively overcome the problems of large local gray-level differences and difficult description and matching of multimode image feature points, and improves the accuracy of multimode image feature point matching. However, it adopts the Harris algorithm to detect feature points in the infrared and visible light images, and the repeatability of the feature points detected in the two images is not high; it does not take into account the scale change between infrared and visible light images in practical applications when constructing the feature descriptors; and it adopts fixed weighting parameters when weighting the feature similarities. Consequently, it cannot match infrared and visible light images accurately.
Disclosure of Invention
1. Technical problem to be solved by the invention
The invention aims to overcome the defects in the prior art and provide an infrared and visible light image matching method based on multi-feature similarity fusion so as to solve the problem that image feature points are difficult to describe and match in an infrared and visible light image matching task.
2. Technical scheme
In order to achieve the purpose, the technical scheme provided by the invention is as follows:
the invention provides an infrared and visible light image matching method based on multi-feature similarity fusion, which comprises the following steps:
step 1, respectively carrying out contour detection on a reference image and an image to be matched, and extracting the salient corner points on the contours as feature points;
step 2, for each feature point in the two images, calculating the main direction of the feature point by using the coordinates of the feature point and the left and right contour points of the feature point;
step 3, determining the feature description parameters of each feature point in the two images, and constructing the PIIFD feature descriptor of each feature point;
step 4, for each feature point in the two images, constructing a global context feature descriptor according to the positional relationship between the point and the other feature points;
step 5, for each pair of feature points in the two images, calculating the similarities of the two feature descriptors, performing weighted fusion according to the position distribution characteristics of the feature points, realizing feature matching by comparing the similarity of each pair of feature points, and removing abnormal matching point pairs.
3. Advantageous effects
Compared with the prior art, the technical scheme provided by the invention has the following remarkable effects:
(1) Most existing feature-based image matching methods construct feature descriptors from the local information of feature points, and descriptors constructed this way cause feature mismatching when the local information of the images differs. In view of this problem, the infrared and visible light image matching method based on multi-feature similarity fusion provided by the invention extracts both the PIIFD feature descriptor and the global context feature descriptor of each feature point and realizes feature point matching by weighted fusion of their similarities, effectively overcoming the influence of local gray-level differences on feature description. In addition, the similarity weighting fusion method is designed according to the position distribution characteristics of the feature points, which greatly improves the robustness of the matching algorithm to the scale change between infrared and visible light images.
(2) In the infrared and visible light image matching method based on multi-feature similarity fusion, the PIIFD feature descriptor and the global context feature descriptor are corrected using the calculated feature description parameters, which improves the accuracy of feature descriptor calculation and thus the accuracy of image matching.
(3) In the infrared and visible light image matching method based on multi-feature similarity fusion, the designed similarity weighting fusion method computes the similarity weighting coefficient of every feature point pair in the same way, so the computation can be repeated by a computer without manual adjustment. The calculation method is simple, saves the time otherwise required for tuning, and greatly improves computational efficiency.
Drawings
FIG. 1 is a block diagram of the flow of the infrared and visible image matching method of the present invention;
FIG. 2 is a schematic diagram of a synthetic vector calculation method according to the present invention;
FIG. 3 is a diagram illustrating the structure of global context feature descriptors according to the present invention.
Detailed Description
Most existing feature-based image matching methods construct feature descriptors from the local information of feature points. Because the imaging mechanisms differ, the local information of an infrared image and a visible light image differs considerably, so descriptors constructed by prior methods are strongly affected by these local differences. In practical applications, infrared and visible light images also often have different imaging resolutions, while existing feature matching methods are insufficiently robust to the scale change between them, which leads to low image matching accuracy. The invention therefore provides an infrared and visible light image matching method based on multi-feature similarity fusion, which extracts both the PIIFD feature descriptor and the global context feature descriptor of each feature point and realizes feature point matching by weighted fusion of their similarities, effectively overcoming the influence of local gray-level differences on feature description. In addition, a similarity weighting fusion method is designed according to the position distribution characteristics of the feature points, which greatly improves the robustness of the matching algorithm to the scale change between infrared and visible light images.
For a further understanding of the invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and examples.
Examples
With reference to fig. 1 to fig. 3, an infrared and visible light image matching method based on multi-feature similarity fusion in this embodiment includes the following steps:
Step 1, respectively carrying out contour detection on the reference image and the image to be matched, and extracting the salient corner points on the contours as feature points:
Step 1-1, performing edge detection on the image by adopting the Canny algorithm and tracking the detected edge pixels to realize contour detection, the detected contour set being recorded as:

Γ = {Γ_s | s = 1, 2, …, N_s}, Γ_s = {p_i^s} (1)

wherein Γ_s is the s-th contour in the set, N_s is the number of contours in the set, and p_i^s is the i-th contour point of contour Γ_s.
Step 1-2, extracting the salient corner points on the contours as feature points by adopting the curvature scale space algorithm.
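As an illustrative sketch (not part of the patent), steps 1-1 and 1-2 can be implemented along the following lines in Python with OpenCV, SciPy and NumPy. A single-scale curvature test stands in here for the full curvature scale space detector, and all function and parameter names are assumptions:

```python
import cv2
import numpy as np
from scipy.ndimage import gaussian_filter1d

def detect_contour_corners(gray, canny_lo=50, canny_hi=150,
                           sigma=3.0, k_thresh=0.08, min_len=20):
    """Step 1-1/1-2 sketch: Canny edges -> traced contours -> salient
    high-curvature points, returned with their contour and index so that
    step 2 can use the left/right contour information."""
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    corners = []
    for c in contours:
        pts = c.reshape(-1, 2).astype(float)
        if len(pts) < min_len:
            continue
        # smooth the contour before estimating curvature (single scale here;
        # the patent uses the full curvature scale space algorithm)
        x = gaussian_filter1d(pts[:, 0], sigma)
        y = gaussian_filter1d(pts[:, 1], sigma)
        dx, dy = np.gradient(x), np.gradient(y)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        k = np.abs(dx * ddy - dy * ddx) / (dx * dx + dy * dy + 1e-12) ** 1.5
        # local curvature maxima above the threshold are the salient corners
        for i in range(1, len(k) - 1):
            if k[i] > k_thresh and k[i] >= k[i - 1] and k[i] >= k[i + 1]:
                corners.append((pts, i))   # (contour points, corner index)
    return corners
```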
Step 2, calculating the main direction of each feature point in the two images by using the coordinates of the feature point and the coordinates of its left and right contour points.
The specific process of calculating the main direction of a feature point is as follows:

Step 2-1, let p_{i-1}^s, p_i^s and p_{i+1}^s be three successive feature points on contour Γ_s. To calculate the main direction of feature point p_i^s, first calculate the composite vectors formed by the feature point and its left and right contour points:

v_t = (x_{i-t}^s + x_{i+t}^s - 2x_i^s, y_{i-t}^s + y_{i+t}^s - 2y_i^s), t = 1, 2, …, N_t (2)

wherein N_t = min(d_l, d_r), d_l is the number of contour points traversed from feature point p_i^s to its left feature point p_{i-1}^s, d_r is the number of contour points traversed from feature point p_i^s to its right feature point p_{i+1}^s, v_t is the resultant vector formed by the t-th pair of left and right contour points, and (x_{i-t}^s, y_{i-t}^s) and (x_{i+t}^s, y_{i+t}^s) are respectively the coordinates of contour points p_{i-t}^s and p_{i+t}^s.

Step 2-2, the resultant vectors obtained in step 2-1 are accumulated to obtain:

v_sk = Σ_{t=1}^{N_t} v_t = (x_sk, y_sk) (3)

wherein x_sk and y_sk are the element values of vector v_sk.

Step 2-3, the direction of vector v_sk is set as the main direction of feature point p_i^s, whose value is:

θ_i^s = arctan(y_sk / x_sk) (4)

wherein θ_i^s is the main direction of feature point p_i^s.
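A minimal sketch of the step 2 computation, assuming the reconstruction of equations (2)-(4) above and ignoring index wrap-around at the contour ends (names are illustrative):

```python
import numpy as np

def principal_direction(contour, i, i_left, i_right):
    """Main direction of the feature point at index i of `contour` (an
    (N, 2) array), where i_left and i_right are the indices of the
    neighbouring feature points on the same contour."""
    n_t = min(i - i_left, i_right - i)       # N_t = min(d_l, d_r)
    v = np.zeros(2)
    for t in range(1, n_t + 1):
        # resultant of the vectors from p_i to its t-th left/right neighbours
        v += (contour[i - t] - contour[i]) + (contour[i + t] - contour[i])
    # arctan2 resolves the full [0, 2*pi) range of equation (4)
    return np.arctan2(v[1], v[0])
```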
Step 3, determining the feature description parameters of each feature point in the two images and constructing the PIIFD feature descriptor of each feature point:
Step 3-1, record the feature point set extracted from the image in step 1 as {c_1, c_2, …, c_N}, where N is the number of feature points detected in the image. For any feature point c_i, select its N_n nearest feature points, recorded as the nearest-neighbour set {c_i1, c_i2, …, c_iN_n}. The parameter N_n can be set between 5 and 20; practical tests show that N_n = 10 gives the best matching effect. Denote the Euclidean distance between each point of the nearest-neighbour set and c_i as d_ik (k = 1, 2, …, N_n). The average distance can then be calculated as:

d̄_i = (1/N_n) Σ_{k=1}^{N_n} d_ik (5)

Step 3-2, for feature point c_i, select around it in the image a rectangular region whose side length is proportional to the average distance d̄_i, take the main direction of feature point c_i as the direction of the rectangular region, and construct the PIIFD feature descriptor of the feature point.
Step 4, for each feature point in the two images, constructing a global context feature descriptor according to the positional relation between the point and the other feature points:
Step 4-1, for any feature point c_j in the feature point set {c_1, c_2, …, c_N}, its description vector relative to feature point c_i can be represented as w_ij = (α_ij, β_ij), wherein α_ij is the angle between the vector from c_i to c_j and the main direction of c_i, and β_ij is the angle between the main direction of c_j and the main direction of c_i. For each feature point c_j in the set {c_1, c_2, …, c_N}, compute the description vector w_ij (j = 1, 2, …, N; j ≠ i).

Step 4-2, uniformly divide the value range [0, 2π] of α and β into 8 angle intervals, and calculate the global context feature description histogram of feature point c_i as:

H_i(k) = Σ_{j≠i} w_j · δ(w_ij ∈ bin(k)), k = 1, 2, …, K (6)

wherein H_i(k) is the k-th element value in the histogram, bin(k) is the k-th angle interval, and K is the histogram dimension. w_j is the weight of feature point c_j for the feature description; its value is given by equation (7) as a function of d_ij, the Euclidean distance between feature point c_j and feature point c_i (the formula is rendered as an image in the source).
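A minimal sketch of the step 4 descriptor follows. Since equation (7) is rendered as an image in the source, the distance weight w_j is assumed here to decay exponentially with d_ij; this is one plausible reading, not the patent's actual formula:

```python
import numpy as np

def global_context_descriptor(points, dirs, i, n_bins=8):
    """8x8 angular histogram over (alpha, beta) for feature point i;
    `points` is (N, 2) and `dirs` holds the main directions in radians."""
    two_pi = 2.0 * np.pi
    hist = np.zeros((n_bins, n_bins))
    d = np.linalg.norm(points - points[i], axis=1)
    d_mean = d[d > 0].mean()
    for j in range(len(points)):
        if j == i:
            continue
        # alpha_ij: angle of the vector c_i -> c_j relative to c_i's direction
        alpha = (np.arctan2(points[j, 1] - points[i, 1],
                            points[j, 0] - points[i, 0]) - dirs[i]) % two_pi
        # beta_ij: main direction of c_j relative to c_i's direction
        beta = (dirs[j] - dirs[i]) % two_pi
        a = min(int(alpha / (two_pi / n_bins)), n_bins - 1)
        b = min(int(beta / (two_pi / n_bins)), n_bins - 1)
        hist[a, b] += np.exp(-d[j] / d_mean)   # assumed form of weight w_j
    return (hist / (hist.sum() + 1e-12)).ravel()
```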
In the embodiment, the PIIFD feature descriptor and the global context feature descriptor are corrected by using the calculated feature description parameters, so that the accuracy of feature descriptor calculation is improved, and the accuracy of image matching is further improved.
Step 5, for each pair of feature points in the two images, calculating the similarities of the two feature descriptors, performing weighted fusion according to the position distribution characteristics of the feature points, realizing feature matching by comparing the similarity of each pair of feature points, and removing abnormal matching point pairs.
The specific process of feature point matching is as follows. First, the feature points of the two images are preliminarily matched:
Step 5-1, calculating the similarity of the feature descriptors of a pair of feature points in the two images with equation (8), where d_1 and d_2 denote the two descriptors (the formula is rendered as an image in the source). Substituting the PIIFD feature descriptors of the pair of feature points into the formula gives the local similarity, denoted sim_l; substituting the global context feature descriptors gives the global similarity, denoted sim_g.
Step 5-2, fusing the local similarity and the global similarity of the feature point pair as follows:

S = (1-γ)sim_l + γsim_g (9)

wherein S is the fused similarity of the feature point pair and γ is a balance parameter between the local similarity and the global similarity. γ is computed by equation (10) (rendered as an image in the source) from (x_1, y_1) and (x_2, y_2), the coordinates of the pair of feature points in the reference image and the image to be matched respectively, and from (w_1, h_1) and (w_2, h_2), the sizes of the two images.
Step 5-3, calculating the similarity of all feature point pairs in the reference image and the image to be matched according to steps 5-1 and 5-2, and screening out the corresponding matched feature point pairs in the two images by a bidirectional matching method to realize the preliminary matching of the feature point pairs.
Then the abnormal matching point pairs are eliminated to obtain the final matching result:
and 5-4, enabling the transformation relation between the reference image and the image to be matched to meet an affine transformation model, calculating a transformation matrix between the images by adopting a random sampling consistency algorithm according to the preliminary matching result in the step 5-3, and eliminating feature point pairs which do not accord with the transformation relation described by the transformation matrix in the preliminary matching result to obtain a final matching result.
The similarity weighting fusion method designed in this embodiment computes the similarity weighting coefficient of every feature point pair in the same way, so the computation can be repeated by a computer without manual adjustment; the calculation method is simple, saves the time required for tuning, and greatly improves computational efficiency.
In the infrared and visible light image matching method based on multi-feature similarity fusion, the PIIFD feature descriptors and the global context feature descriptors of the feature points are extracted simultaneously, and feature point matching is realized by weighted fusion of their similarities, effectively overcoming the influence of local gray-level differences on feature description. In addition, a similarity weighting fusion method is designed according to the position distribution characteristics of the feature points, greatly improving the robustness of the matching algorithm to the scale change between infrared and visible light images.
The present invention and its embodiments have been described above schematically, and the description is not limiting; what is shown in the drawings is only one embodiment of the invention, and the actual structure is not limited thereto. Therefore, if a person skilled in the art, taught by the invention and without departing from its spirit, devises structural modes and embodiments similar to this technical solution without inventive effort, they shall fall within the protection scope of the invention.

Claims (9)

1. An infrared and visible light image matching method based on multi-feature similarity fusion is characterized by comprising the following steps:
step 1, respectively carrying out contour detection on a reference image and an image to be matched, and extracting the salient corner points on the contours as feature points;
step 2, calculating the main direction of each characteristic point in the two images by using the coordinates of the characteristic point and the left and right contour points of the characteristic point;
step 3, determining the feature description parameters of each feature point in the two images, and constructing the PIIFD feature descriptor of each feature point;
step 4, for each feature point in the two images, constructing a global context feature descriptor according to the positional relationship between the point and the other feature points;
step 5, for each pair of feature points in the two images, calculating the similarities of the two feature descriptors, performing weighted fusion according to the position distribution characteristics of the feature points, realizing feature matching by comparing the similarity of each pair of feature points, and removing abnormal matching point pairs.
2. The infrared and visible light image matching method based on multi-feature similarity fusion as claimed in claim 1, wherein in the step 1, the specific steps of extracting the feature points are as follows:
step 1-1, edge detection is carried out on the image by adopting the Canny algorithm, and the detected edge pixels are tracked to realize contour detection, the detected contour set being:

Γ = {Γ_s | s = 1, 2, …, N_s}, Γ_s = {p_i^s} (1)

wherein Γ_s is the s-th contour in the set, N_s is the number of contours in the set, and p_i^s is the i-th contour point of contour Γ_s;
and 1-2, extracting the salient corner points on the contour by adopting a curvature scale space algorithm to serve as feature points.
3. The infrared and visible light image matching method based on multi-feature similarity fusion as claimed in claim 2, wherein in the step 2, the step of calculating the main direction of the feature point comprises:
step 2-1, let p_{i-1}^s, p_i^s and p_{i+1}^s be three successive feature points on contour Γ_s, and calculate with formula (2) the composite vectors formed by feature point p_i^s and its left and right contour points:

v_t = (x_{i-t}^s + x_{i+t}^s - 2x_i^s, y_{i-t}^s + y_{i+t}^s - 2y_i^s), t = 1, 2, …, N_t (2)

in the formula, N_t = min(d_l, d_r), d_l is the number of contour points traversed from feature point p_i^s to its left feature point p_{i-1}^s, d_r is the number of contour points traversed from feature point p_i^s to its right feature point p_{i+1}^s, v_t is the resultant vector formed by the t-th pair of left and right contour points, and (x_{i-t}^s, y_{i-t}^s) and (x_{i+t}^s, y_{i+t}^s) are respectively the coordinates of contour points p_{i-t}^s and p_{i+t}^s;

step 2-2, the resultant vectors obtained in step 2-1 are accumulated to obtain:

v_sk = Σ_{t=1}^{N_t} v_t = (x_sk, y_sk) (3)

wherein x_sk and y_sk are the element values of vector v_sk;

step 2-3, the direction of vector v_sk is set as the main direction of feature point p_i^s, whose value is:

θ_i^s = arctan(y_sk / x_sk) (4)

wherein θ_i^s is the main direction of feature point p_i^s.
4. The infrared and visible light image matching method based on multi-feature similarity fusion as claimed in claim 3, wherein in the step 3, the step of calculating the feature point PIIFD feature descriptor is as follows:
step 3-1, record the feature point set extracted from the image in step 1 as {c_1, c_2, …, c_N}, N being the number of feature points detected in the image; for any feature point c_i, select its N_n nearest feature points, recorded as the nearest-neighbour set {c_i1, c_i2, …, c_iN_n}, wherein the Euclidean distance between each point of the set and c_i is recorded as d_ik (k = 1, 2, …, N_n), and the average distance d̄_i is:

d̄_i = (1/N_n) Σ_{k=1}^{N_n} d_ik (5)

step 3-2, for feature point c_i, select around it in the image a rectangular region whose side length is proportional to the average distance d̄_i, take the main direction of feature point c_i as the direction of the rectangular region, and construct the PIIFD feature descriptor of the feature point.
5. The infrared and visible light image matching method based on multi-feature similarity fusion according to claim 4, characterized in that: in step 4, the step of calculating the feature point global context feature descriptor includes:
step 4-1, for any feature point c_j in the feature point set {c_1, c_2, …, c_N}, its description vector relative to feature point c_i is w_ij = (α_ij, β_ij), wherein α_ij is the angle between the vector from c_i to c_j and the main direction of c_i, and β_ij is the angle between the main direction of c_j and the main direction of c_i; for each feature point c_j in the set {c_1, c_2, …, c_N}, compute the description vector w_ij (j = 1, 2, …, N; j ≠ i);

step 4-2, uniformly divide the value range [0, 2π] of α and β into 8 angle intervals, and calculate the global context feature description histogram of feature point c_i as:

H_i(k) = Σ_{j≠i} w_j · δ(w_ij ∈ bin(k)), k = 1, 2, …, K (6)

wherein H_i(k) is the k-th element value in the histogram, bin(k) is the k-th angle interval, and K is the dimension of the histogram; w_j is the weight of feature point c_j for the feature description, given by equation (7) as a function of d_ij, the Euclidean distance between feature point c_j and feature point c_i (the formula is rendered as an image in the source).
6. The infrared and visible light image matching method based on multi-feature similarity fusion as claimed in claim 5, wherein in step 5, the two images are initially subjected to feature point matching, and the steps are as follows:
and 5-1, calculating the similarity of the feature descriptors by using an equation (8) for a pair of feature points in the two images:
Figure FDA0003261597580000034
in the formula (d)1And d2Substituting the PIIFD feature descriptors of the pair of feature points into the above formula to obtain the local similarity simlSubstituting the global context feature descriptor into the above formula can obtain the global similarity simg
And 5-2, fusing the local similarity and the global similarity of the feature point pairs:
S=(1-γ)siml+γsimg (9)
in the formula, S is the fusion similarity of the feature point pair, γ is a balance parameter between the local similarity and the global similarity, and the numerical value is calculated as follows:
Figure FDA0003261597580000035
wherein (x)1,y1) And (x)2,y2) The coordinates of the pair of feature points in the reference image and the image to be matched respectively, (w)1,h1) And (w)2,h2) Respectively representing the sizes of the two images;
and 5-3, calculating the similarity of all the characteristic point pairs in the reference image and the image to be matched according to the step 5-1 and the step 5-2, and screening out the corresponding matched characteristic point pairs in the two images by adopting a bidirectional matching method to realize the primary matching of the characteristic point pairs.
7. The infrared and visible light image matching method based on multi-feature similarity fusion as claimed in claim 6, wherein in step 5, after the preliminary matching, the abnormal matching point pairs in the preliminary matching are removed to obtain a final matching result:
and 5-4, enabling the transformation relation between the reference image and the image to be matched to meet an affine transformation model, calculating a transformation matrix between the images by adopting a random sampling consistency algorithm according to the preliminary matching result in the step 5-3, and eliminating feature point pairs which do not accord with the transformation relation described by the transformation matrix in the preliminary matching result to obtain a final matching result.
8. The infrared and visible light image matching method based on multi-feature similarity fusion according to any one of claims 4-7, characterized in that: in step 3-1, the parameter N_n takes a value in the range of 5 to 20.
9. The infrared and visible light image matching method based on multi-feature similarity fusion according to claim 8, characterized in that: in step 3-1, the parameter N_n takes the value 10.
Application CN202111074441.2A, filed 2021-09-14 (priority date 2021-09-14): Infrared and visible light image matching method based on multi-feature similarity fusion. Status: Active. Granted as CN113792788B.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111074441.2A | 2021-09-14 | 2021-09-14 | Infrared and visible light image matching method based on multi-feature similarity fusion

Publications (2)

Publication Number | Publication Date
CN113792788A | 2021-12-14
CN113792788B | 2024-04-16

Family

ID=78880138

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111074441.2A | Infrared and visible light image matching method based on multi-feature similarity fusion | 2021-09-14 | 2021-09-14

Country Status (1)

Country | Link
CN | CN113792788B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117934571A (en) * 2024-03-21 2024-04-26 广州市艾索技术有限公司 4K high-definition KVM seat management system
CN117934571B (en) * 2024-03-21 2024-06-07 广州市艾索技术有限公司 4K high-definition KVM seat management system


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678733A (en) * 2014-11-21 2016-06-15 中国科学院沈阳自动化研究所 Infrared and visible-light different-source image matching method based on context of line segments
CN109285110A (en) * 2018-09-13 2019-01-29 武汉大学 The infrared visible light image registration method and system with transformation are matched based on robust
CN110009670A (en) * 2019-03-28 2019-07-12 上海交通大学 The heterologous method for registering images described based on FAST feature extraction and PIIFD feature
CN110097093A (en) * 2019-04-15 2019-08-06 河海大学 A kind of heterologous accurate matching of image method
CN113095385A (en) * 2021-03-31 2021-07-09 安徽工业大学 Multimode image matching method based on global and local feature description
CN113095384A (en) * 2021-03-31 2021-07-09 安徽工业大学 Remote sensing image matching method based on context characteristics of straight line segments
CN117253063A (en) * 2023-10-24 2023-12-19 安徽工业大学 Two-stage multimode image matching method based on dotted line feature description

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHU Yinghong; LI Junshan; YANG Wei; ZHANG Tao; ZHU Yijuan: "Edge description and matching algorithm for feature points of infrared and visible light images", Journal of Computer-Aided Design & Computer Graphics, No. 06, 15 June 2013 (2013-06-15)


Also Published As

Publication number Publication date
CN113792788B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN110097093B (en) Method for accurately matching heterogeneous images
CN107316031B (en) Image feature extraction method for pedestrian re-identification
Li et al. LNIFT: Locally normalized image for rotation invariant multimodal feature matching
CN108960211B (en) Multi-target human body posture detection method and system
CN104200495B (en) A kind of multi-object tracking method in video monitoring
WO2017049994A1 (en) Hyperspectral image corner detection method and system
CN104200461B (en) The remote sensing image registration method of block and sift features is selected based on mutual information image
CN112085772B (en) Remote sensing image registration method and device
CN107066969A (en) A kind of face identification method
CN105718882A (en) Resolution adaptive feature extracting and fusing for pedestrian re-identification method
CN112396643A (en) Multi-mode high-resolution image registration method with scale-invariant features and geometric features fused
CN107862319B (en) Heterogeneous high-light optical image matching error eliminating method based on neighborhood voting
Fan et al. Registration of multiresolution remote sensing images based on L2-siamese model
CN106407978B (en) Method for detecting salient object in unconstrained video by combining similarity degree
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images
Chen et al. Multiple object tracking using edge multi-channel gradient model with ORB feature
CN113095385B (en) Multimode image matching method based on global and local feature description
CN110246165B (en) Method and system for improving registration speed of visible light image and SAR image
CN105654479A (en) Multispectral image registering method and multispectral image registering device
CN110309729A (en) Tracking and re-detection method based on anomaly peak detection and twin network
CN117253063A (en) Two-stage multimode image matching method based on dotted line feature description
CN117078726A (en) Different spectrum image registration method based on edge extraction
Han et al. Accurate and robust vanishing point detection method in unstructured road scenes
CN106558065A (en) The real-time vision tracking to target is realized based on color of image and texture analysiss
CN113792788A (en) Infrared and visible light image matching method based on multi-feature similarity fusion

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant