CN113128573A - Infrared-visible light heterogeneous image matching method

Infrared-visible light heterogeneous image matching method

Info

Publication number
CN113128573A
CN113128573A
Authority
CN
China
Prior art keywords
gradient
infrared
visible light
image
orientation map
Prior art date
Legal status
Pending
Application number
CN202110348571.4A
Other languages
Chinese (zh)
Inventor
王怀野
郭慧敏
王冬
赵晓霞
叶林
宋敏
龚美玲
彭明松
Current Assignee
CHINA AEROSPACE TIMES ELECTRONICS CO LTD
Beijing Aerospace Feiteng Equipment Technology Co ltd
Original Assignee
CHINA AEROSPACE TIMES ELECTRONICS CO LTD
Beijing Aerospace Feiteng Equipment Technology Co ltd
Priority date
Filing date
Publication date
Application filed by CHINA AEROSPACE TIMES ELECTRONICS CO LTD, Beijing Aerospace Feiteng Equipment Technology Co ltd
Priority to CN202110348571.4A
Publication of CN113128573A
Legal status: Pending (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows

Abstract

An original target template image is selected from a visible light reference image by a mission planning system, and feature detection and feature description are performed on the template image to form a reference image feature file. The template image is coordinate-transformed according to real-time attitude information, so that the target template image at each trajectory position has the same scale and viewing angle as the infrared real-time image acquired in actual flight, and the infrared real-time image feature information is matched against the template image feature information to obtain the target position in the infrared real-time image. Through the integrated design of navigation, trajectory and image processing, the method solves the problem that common template-matching target recognition is not robust to scale and viewing-angle changes, simplifies the seeker's image processing, and improves the matching probability of template matching. It can be used for automatic terminal target recognition by infrared imaging air-to-ground guided weapons and has good application prospects.

Description

Infrared-visible light heterogeneous image matching method
Technical Field
The invention relates to an uncooled infrared-visible light heterogeneous image matching method, in particular to a method for matching uncooled infrared and visible light images in air-to-ground guided weapons, and belongs to the field of precision guidance.
Background
Visible light cameras, infrared cameras, synthetic aperture radars (SAR) and satellite remote sensing images are widely applied in information reconnaissance and precision guidance, where matching images from different sources (heterogeneous images) is an important problem. Owing to differences in imaging mechanisms, images from different image sensors are prone to nonlinear, non-monotonic, non-functional gray-scale distortion, so images of the same object obtained under different imaging modes differ greatly and the gray-scale correlation between them is severely reduced. Under some special conditions, nonlinear gray-scale distortion can even invert the gray levels and structural directions of the images, which brings great difficulty to image matching.
Uncooled infrared imaging guidance has the advantages of small size, light weight, low power consumption and low cost; it has become an important direction in the development of infrared technology, has huge growth potential in the military field, and is a focus of recent tactical guided weapon research. In particular, domestic uncooled infrared imaging devices have in recent years broken the foreign monopoly, and higher-performance uncooled infrared focal planes can now be developed independently, accelerating the application of uncooled infrared in the military field. However, uncooled infrared imaging quality has inherent defects that greatly complicate subsequent image processing. Sensitivity is low: the NETD is generally above 50 mK, far higher than the roughly 10 mK of cooled detectors, so uncooled image quality differs greatly from that of cooled detectors. In addition, the thermal time constant of the detector's sensitive material is about 5-10 ms, so environmental vibration causes obvious imaging smear that further degrades image quality, producing blurred edges and weak corner and contour features. As a result, the details of the real-time image and the reference image differ greatly, and traditional matching methods struggle to achieve satisfactory matching accuracy and reliability.
Commonly used image matching methods can be divided into transform-domain-based and feature-based methods. Transform-domain methods mainly include the Fourier transform, wavelet transform, Walsh transform and the like, and are mainly suitable for cases where a rigid transformation exists between the images. Feature-based matching methods have a wider range of application, and the detection algorithms mainly comprise corner feature detection, edge feature detection and region feature detection. Corner-based feature detection algorithms include the Moravec detector, the SUSAN detector, the Maximally Stable Extremal Region (MSER) detector, SIFT, PCA-SIFT, GLOH, SURF and the like. Edge-based methods mainly detect the edge, contour or line features of an image; common methods include the Sobel, Roberts, Canny and LOG operators, the watershed algorithm, the level set method, Mean Shift clustering and the like, but these operators have poor noise robustness and struggle to meet practical requirements in terms of visual perception and edge determination. A common region feature method is template matching; it is simple, independent of image content, of good universality, and somewhat robust to noise and occlusion, but its defects are a large amount of computation and sensitivity to image distortion. Transform-domain and feature-based methods have been applied successfully to homologous images, but extensive literature research has found that these matching algorithms perform poorly in heterogeneous image matching and cannot cope with the gray-scale nonlinear distortion, strong noise and loss of local feature information of heterogeneous images.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: the defects of the prior art are overcome, and an infrared-visible light heterogeneous image matching method is provided.
The technical solution of the invention is as follows:
an infrared-visible light heterogeneous image matching method comprises the following steps:
(1) converting the visible light reference image to the same viewing angle as the infrared real-time image through perspective transformation according to the ballistic attitude information at each moment, selecting a plurality of reference object templates from the visible light reference image, and calculating the positional relation between the center of each template and the target point;
(2) processing the infrared real-time map and the visible light reference map respectively to form an infrared gradient orientation map and a visible light gradient orientation map;
(3) calculating the gradient magnitudes of all pixels of the infrared gradient orientation map and the visible light gradient orientation map respectively;
(4) dividing each reference template gradient orientation map into M × N sub-blocks, calculating the gradient sum of each sub-block, and extracting the interest blocks of the reference template according to these sums;
(5) for each reference template, traversing the infrared gradient orientation map with a certain step length according to the size of the reference template to obtain a plurality of regions to be matched, and performing the following operations on each region to be matched:
dividing the region to be matched of the infrared gradient orientation map into M × N sub-blocks and matching each sub-block with the interest blocks of the reference template to obtain the best matching result, i.e. the position in the infrared gradient orientation map of the homologous point corresponding to the center of the reference template;
(6) obtaining the position of the target point in the infrared real-time image from the positional relation between each reference template and the target point and from the positions of the homologous points in the infrared gradient orientation map, completing the infrared-visible light heterogeneous image matching.
In step (2), in the infrared gradient orientation map, the gradient G1(x, y) of pixel (x, y) is defined as a vector:
G1(x, y) = [G1x, G1y]^T = [∂I1/∂x, ∂I1/∂y]^T
where G1x is the gradient of pixel (x, y) in the x direction, G1y is the gradient of pixel (x, y) in the y direction, and I1 is the pixel value of pixel (x, y);
the direction of the gradient is defined as
β1(x, y) = arctan(G1y/G1x)
where β1(x, y) is the angle of the gradient G1(x, y) of pixel (x, y) relative to the x axis.
The gradient magnitude |G1(x, y)| of pixel (x, y) in the infrared gradient orientation map satisfies
|G1(x, y)| = √(G1x² + G1y²)
In step (2), in the visible light gradient orientation map, the gradient G2(x, y) of each pixel (x, y) is defined as a vector:
G2(x, y) = [G2x, G2y]^T = [∂I2/∂x, ∂I2/∂y]^T
where G2x is the gradient of pixel (x, y) in the x direction, G2y is the gradient of pixel (x, y) in the y direction, and I2 is the pixel value of pixel (x, y);
the direction of the gradient is defined as
β2(x, y) = arctan(G2y/G2x)
where β2(x, y) is the angle of the gradient G2(x, y) of pixel (x, y) relative to the x axis.
In step (3), in the visible light gradient orientation map, the gradient magnitude |G2(x, y)| of pixel (x, y) satisfies
|G2(x, y)| = √(G2x² + G2y²)
In step (4), the gradient sum of each sub-block is equal to the sum of the gradient magnitudes of all pixels within the sub-block.
In step (4), the method for extracting the interest blocks of the reference template is as follows:
the reference template is divided into M × N sub-blocks, the gradient sum of each sub-block is calculated, the sums are sorted in descending order, and the top k sub-blocks are selected as interest blocks, where k ≥ 3.
The position of the target point in the infrared gradient orientation map is transmitted to a servo system or guidance system for subsequent use.
Compared with the prior art, the invention has the advantages that:
(1) the infrared real-time image and the visible light reference image are converted into gradient orientation maps, preserving the features common to the heterogeneous images;
(2) interest blocks are selected automatically by the block-gradient method, avoiding the subjectivity of manual selection and making the algorithm more objective;
(3) the invention solves the problem that template matching is not robust to scale and viewing-angle changes;
(4) the invention uses a combined template matching method to achieve target localization, solving the robustness problem of heterogeneous image matching.
Drawings
FIG. 1 is a flow chart of a method implementation of the present invention;
FIG. 2 is a diagram of the coordinate transformation relationship in the present invention;
FIG. 3 is a schematic diagram of an image of the present invention, wherein (a) is an infrared real-time image and (b) is a visible reference image;
FIG. 4 is a schematic diagram of images of the present invention, wherein (a) is the infrared real-time gradient map and (b) is the visible light gradient map;
FIG. 5 is a schematic diagram of images of the present invention, wherein (a) is the infrared real-time gradient orientation map and (b) is the visible light gradient orientation map;
FIG. 6 is a reference template selected from the visible light reference map of the present invention;
FIG. 7 and FIG. 8 are schematic diagrams of the QATM algorithm of the present invention;
FIG. 9 shows 5 matching cases of the QATM algorithm of the present invention;
FIGS. 10-13 illustrate the infrared corresponding interest block locations matched by the QATM algorithm of the present invention;
FIG. 14 is a target location result obtained by the heterogeneous image matching algorithm of the present invention.
Detailed Description
The invention provides a region matching method based on gradient direction, designed for the characteristics of uncooled infrared and visible light images. By constructing gradient orientation maps, the gradient direction information of the infrared and visible light images is extracted, so that the same features can be obtained even when the gray-scale difference between the infrared and visible light images is large; a matching algorithm then finds the most similar positions in the infrared and visible light images to obtain the precise position of the target.
The method is suitable for recognizing fixed ground targets with infrared imaging air-to-ground guided weapons. The visible light reference image used by the method is obtained from reconnaissance means such as unmanned aerial vehicles or satellite imagery. An original target template image is selected from the reference image by a mission planning system, and feature detection and feature description are performed on the template image to form a reference image feature file, which is bound to the infrared seeker. During the flight of the projectile, the template images are coordinate-transformed according to the real-time attitude information of the projectile, so that the target template image at each trajectory position has the same scale and viewing angle as the infrared real-time image acquired in actual flight. The seeker processes the real-time infrared image with the same feature detection and feature description algorithm as used for the template images to obtain the feature information of the infrared real-time image, and finally the infrared real-time image features are matched against the template image features to obtain the target position in the infrared real-time image. Through the integrated design of navigation, trajectory and image processing, the method solves the problem that common template-matching target recognition is not robust to scale and viewing-angle changes, simplifies the seeker's image processing, improves the matching probability of template matching, can be used for automatic terminal target recognition by infrared imaging air-to-ground guided weapons, and has good application prospects.
As shown in fig. 1, the method of the invention comprises the following steps:
the method comprises the following steps: and converting the visible light reference image into the same visual angle with the infrared real-time image through perspective transformation according to the ballistic attitude information at each moment. And selecting a plurality of reference object templates from the visible light reference map, and calculating the position relation between the center of each template and the target point.
As shown in fig. 2, seeker image acquisition is in effect a chain of conversions: from the world coordinate system to the camera coordinate system, then to the image coordinate system, and finally to the computer image coordinate system.
The camera coordinate system takes the lens center as its origin; the line-of-sight direction is the negative zc axis, the downward direction of the camera, perpendicular to the zc axis, is the positive yc axis, and the xc axis is orthogonal to the yc and zc axes, forming a right-handed coordinate system. The conversion from the world coordinate system (xw, yw, zw) to the camera coordinate system (xc, yc, zc) can be expressed as:
[xc, yc, zc]^T = R·[xw, yw, zw]^T + T1
where T1 is the translation vector (Tx, Ty, Tz), determined by the spatial position of the seeker, and R is the coordinate rotation matrix. R is determined by the attitude of the seeker and can be obtained from the Euler angles: as shown in fig. 2, denoting the yaw angle θ, the pitch angle ψ and the roll angle φ of the line-of-sight direction, R is the product of the elementary rotations about the corresponding axes through these three angles.
the image coordinate system is a two-dimensional coordinate system, and the origin is selected as the focal point and z of the lenscIntersection of axes, xuAxis, yuAxes are respectively parallel to xcAxis, ycA shaft. From the camera coordinate system (x)c,yc,zc) To the imageCoordinate system (x)u,yu) The conversion relation of (1) is:
Figure BDA0003001607890000064
where f is the focal length of the camera.
Computer image coordinate systems generally use integer coordinates, and the coordinate range is determined by the resolution of a specific image device. From the image coordinate system (x)u,yu) The conversion relation to the computer image coordinate system (u, v) is:
Figure BDA0003001607890000065
wherein (c)x,cy) Is the image coordinate of the origin point on the image plane, and dx and dy are x on the image plane respectivelyu、yuThe size of the unit pixel in the direction.
Through this simulation of the camera imaging process, the visible light template image can be rotated and transformed, in the specified order, to the shooting viewing angle of the infrared real-time image.
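As an illustration of this coordinate chain, a minimal NumPy sketch is given below. The Z-Y-X rotation order, the sign conventions and the function names are assumptions for illustration, not taken from the patent; the patent's own convention (line of sight along the negative zc axis) may flip signs.

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Compose R from yaw, pitch, roll (radians).
    The Z-Y-X rotation order here is an assumed convention."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def world_to_pixel(p_w, R, T1, f, dx, dy, cx, cy):
    """Project a world point p_w to computer image coordinates (u, v)."""
    p_c = R @ p_w + T1          # world -> camera: R*[xw, yw, zw]^T + T1
    xu = f * p_c[0] / p_c[2]    # camera -> image plane (pinhole projection)
    yu = f * p_c[1] / p_c[2]
    u = xu / dx + cx            # image plane -> pixel coordinates
    v = yu / dy + cy
    return u, v
```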
A plurality of reference objects are selected from the converted visible light image as templates (as shown in fig. 6), and the positional relationship between the center of each template and the target point is calculated.
FIG. 3 is a schematic diagram of images of the present invention, wherein (a) is an infrared real-time image and (b) is a visible light reference image.
Step two: the infrared real-time image and the visible light reference image are processed by the Canny algorithm to form an infrared gradient orientation map and a visible light gradient orientation map.
The Canny edge detector is based on the first derivative of a Gaussian and is an optimal approximation with respect to the product of signal-to-noise ratio and localization [Canny 1986]. The algorithm is outlined with the following notation. Let I[x, y] denote the image; convolving the image with a Gaussian smoothing filter (using separable filtering) gives the smoothed data array
F[x,y]=g[x,y;σ]*I[x,y] (5)
where F[x, y] is the smoothed image and g[x, y; σ] is a Gaussian filter.
Edge detection is the most basic operation for detecting locally significant changes in an image. In the one-dimensional case, a step edge corresponds to a local peak of the first derivative of the image. The gradient measures the variation of a function, and an image can be viewed as an array of samples of a continuous image-intensity function. Accordingly, significant changes in image gray value can be detected using a discrete approximation to the gradient, the two-dimensional equivalent of the first derivative, defined as a vector.
The Canny gradient maps are shown in fig. 4, wherein (a) is the infrared real-time map and (b) is the visible light reference map.
The gradient orientation maps are shown in fig. 5, wherein (a) is the infrared real-time map and (b) is the visible light reference map.
Specifically, in the infrared gradient orientation map, the gradient G1(x, y) of pixel (x, y) is defined as a vector:
G1(x, y) = [G1x, G1y]^T = [∂I1/∂x, ∂I1/∂y]^T
where G1x is the gradient in the x direction of pixel (x, y) in the infrared gradient orientation map, G1y is the gradient in the y direction, and I1 is the pixel value of pixel (x, y) in the infrared gradient orientation map;
the direction of the gradient is defined as
β1(x, y) = arctan(G1y/G1x)
where β1(x, y) is the angle of the gradient G1(x, y) of pixel (x, y) in the infrared gradient orientation map relative to the x axis.
The gradient G2(x, y) of each pixel (x, y) in the visible light gradient orientation map is defined as a vector:
G2(x, y) = [G2x, G2y]^T = [∂I2/∂x, ∂I2/∂y]^T
where G2x is the gradient in the x direction of pixel (x, y) in the visible light gradient orientation map, G2y is the gradient in the y direction, and I2 is the pixel value of pixel (x, y) in the visible light gradient orientation map;
the direction of the gradient is defined as
β2(x, y) = arctan(G2y/G2x)
where β2(x, y) is the angle of the gradient G2(x, y) of pixel (x, y) in the visible light gradient orientation map relative to the x axis.
Step three: the gradient magnitudes of all pixels of the infrared gradient orientation map and the visible light gradient orientation map are calculated respectively.
The gradient magnitude |G1(x, y)| of pixel (x, y) in the infrared gradient orientation map satisfies
|G1(x, y)| = √(G1x² + G1y²)
and the gradient magnitude |G2(x, y)| of pixel (x, y) in the visible light gradient orientation map satisfies
|G2(x, y)| = √(G2x² + G2y²)
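As a sketch of steps two and three, the following Python/OpenCV snippet computes per-pixel gradient magnitude and direction maps. The Gaussian scale and Sobel kernel size are assumed values, not specified in the patent.

```python
import cv2
import numpy as np

def gradient_orientation_map(gray, sigma=1.0):
    """Per-pixel gradient magnitude and direction of a grayscale image.
    sigma and the kernel sizes are assumed parameters."""
    smoothed = cv2.GaussianBlur(gray, (5, 5), sigma)     # F = g * I  (eq. 5)
    gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)  # Gx
    gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)  # Gy
    magnitude = np.sqrt(gx ** 2 + gy ** 2)               # |G| = sqrt(Gx^2 + Gy^2)
    direction = np.arctan2(gy, gx)                       # beta = arctan(Gy/Gx)
    return magnitude, direction

# The same operator is applied to both modalities, e.g.:
# ir_mag, ir_dir = gradient_orientation_map(ir_image)
# vis_mag, vis_dir = gradient_orientation_map(vis_image)
```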
Step four: extracting the interest blocks of the visible light templates.
Each reference template gradient orientation map is divided into M × N sub-blocks, the gradient sum of each sub-block is calculated, and the interest blocks of the reference template are extracted according to these sums.
The method for extracting the interest blocks of a reference template is as follows:
the reference template is divided into M × N sub-blocks, the gradient sum of each sub-block is calculated, the sums are sorted in descending order, and the top k sub-blocks are selected as interest blocks, where k ≥ 3.
The resulting interest blocks are shown in fig. 7.
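A minimal sketch of this interest-block selection under the stated scheme (M, N and k are free parameters; the function name and the use of the gradient-magnitude map as input are illustrative assumptions):

```python
import numpy as np

def extract_interest_blocks(template_mag, M, N, k=3):
    """Divide a template gradient-magnitude map into M x N sub-blocks,
    rank the sub-blocks by their gradient sum, and return the top-k indices."""
    H, W = template_mag.shape
    bh, bw = H // M, W // N
    sums = []
    for i in range(M):
        for j in range(N):
            block = template_mag[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            sums.append(((i, j), block.sum()))         # gradient sum of block (i, j)
    sums.sort(key=lambda item: item[1], reverse=True)  # descending by gradient sum
    return [idx for idx, _ in sums[:k]]                # top-k interest blocks, k >= 3
```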
Step five: obtaining the position of each template center in the infrared real-time image through the matching algorithm.
For each reference template, the infrared gradient orientation map is traversed with a certain step length according to the size of the reference template to obtain several regions to be matched, and the following operations are performed on each region to be matched (see the sliding-window sketch below):
the region to be matched of the infrared gradient orientation map is divided into M × N sub-blocks, and each sub-block is matched with the interest blocks of the reference template to obtain the best matching result, i.e. the position in the infrared gradient orientation map of the homologous point corresponding to the center of the reference template.
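The traversal itself can be sketched as a sliding window; the step length and the scoring callback are placeholders, with match_score standing for the patch matching described below:

```python
def traverse_regions(ir_mag, tmpl_shape, stride=4):
    """Yield candidate regions of template size from the infrared
    gradient orientation map, traversed with a fixed step length."""
    th, tw = tmpl_shape
    H, W = ir_mag.shape
    for y in range(0, H - th + 1, stride):
        for x in range(0, W - tw + 1, stride):
            yield y, x, ir_mag[y:y + th, x:x + tw]

def best_match(ir_mag, tmpl_mag, match_score, stride=4):
    """Return the homologous point of the template center: the center of
    the region scoring highest under match_score (a hypothetical scorer)."""
    y, x, _ = max(traverse_regions(ir_mag, tmpl_mag.shape, stride),
                  key=lambda region: match_score(region[2], tmpl_mag))
    th, tw = tmpl_mag.shape
    return y + th // 2, x + tw // 2
```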
In traditional template matching algorithms such as NCC (normalized cross-correlation), all pixels of the template and of the region to be matched in the search image enter the computation, so when the region to be matched is slightly deformed or partially occluded, the matching accuracy drops sharply.
The matching algorithm here further divides the template and the region to be matched into patches. The aim is to focus the computation on the matching region, reduce the influence of background changes in the template or the region to be matched, and highlight the target subject; the regions with the best matching quality are then combined into the final matching position. The detailed idea is as follows:
as shown in figs. 7 and 8, the reference template in the visible light image is defined as the template T and the search region to be matched in the infrared image is defined as S. For the region to be matched, the patch si with the highest matching degree with the template T is found in the search image, and the patch ti with the highest matching degree with the region S is found in the template, giving the best-matching pair (si, ti) between the template and the region to be matched, where si is a sub-block of the search region in the infrared real-time map and ti is a sub-block of the visible light reference map.
The region S to be matched in the search image, the template T and the patches si, ti are expanded as vectors. The cosine similarities ρ(s, t1), ρ(s, t2), …, ρ(s, ti) and ρ(t, s1), ρ(t, s2), …, ρ(t, si) are computed and substituted into the softmax function, giving the likelihood function values L(t|s) and L(s|t) respectively:
L(t|s) = exp(α·ρ(t, s)) / Σ_{t'} exp(α·ρ(t', s))
L(s|t) = exp(α·ρ(s, t)) / Σ_{s'} exp(α·ρ(s', t))
The parameter α increases the sensitivity of the likelihood function, so that pairs of t and s with high similarity give likelihood values approaching 1, while pairs with low similarity give values approaching 0. Multiplying the two gives the matching quality coefficient QATM:
QATM(sq, tp) = L(tp|sq)·L(sq|tp) (11)
where ρ(tp, sq) denotes the cosine similarity between patches tp and sq, and L(sq|tp) denotes the likelihood function value of patch sq given tp.
As shown in fig. 9, the algorithm distinguishes 5 matching cases: if only 1 patch si in S matches T well and only one patch ti in T matches S well, the match is considered highly reliable and the matching quality coefficient QATM is 1; if only 1 patch si in S matches T well but N patches ti in T match S well, the match is considered less reliable and QATM is 1/N; if M patches si in S match T well but only 1 patch ti in T matches S well, the match is considered less reliable and QATM is 1/M; if M patches si in S match T well and N patches ti in T match S well, the match is considered less reliable, the matching position can be further optimized, and QATM is 1/MN; and if the patches of S and T match each other poorly, S does not match T and QATM is approximately 0.
Finally, the s and t regions with the highest matching quality coefficients are selected and combined so that the overall matching quality is maximized, giving the final matching position. The matching computation is performed separately for each visible light reference template, giving the position of the homologous point of each reference template in the infrared real-time image, as shown in figs. 10-13.
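A compact NumPy sketch of the QATM score between the patch sets of a region S and a template T, following the likelihood formulas above (the patch vectorization and the value of α are assumptions):

```python
import numpy as np

def qatm_scores(S, T, alpha=25.0):
    """QATM matching quality coefficients between patch sets.
    S: (m, d) array, m region patches flattened to d-vectors.
    T: (n, d) array, n template patches flattened to d-vectors.
    Returns an (m, n) array of QATM(s_q, t_p) values."""
    Sn = S / np.linalg.norm(S, axis=1, keepdims=True)
    Tn = T / np.linalg.norm(T, axis=1, keepdims=True)
    rho = Sn @ Tn.T                                  # cosine similarities rho(s, t)
    e = np.exp(alpha * rho)
    L_t_given_s = e / e.sum(axis=1, keepdims=True)   # softmax over template patches
    L_s_given_t = e / e.sum(axis=0, keepdims=True)   # softmax over region patches
    return L_t_given_s * L_s_given_t                 # QATM = L(t|s) * L(s|t)

# A confident one-to-one pair scores near 1; one-to-N, M-to-one and M-to-N
# ambiguity drive the score toward 1/N, 1/M and 1/MN respectively.
```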
Step six: the position of the target point in the infrared real-time image is obtained from the positional relation between each reference template and the target point and from the positions of the homologous points in the infrared gradient orientation map, completing the infrared-visible light heterogeneous image matching, as shown in fig. 14.
Finally, the position information of the target point in the infrared real-time image is transmitted to the servo system or guidance system.
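A final sketch of this localization step, fusing the per-template estimates by simple averaging (the averaging rule is an assumption; the patent only states that the target position is derived from the template-to-target offsets):

```python
import numpy as np

def locate_target(matched_centers, offsets):
    """Estimate the target point in the infrared real-time image.
    matched_centers: (y, x) homologous points found for each template.
    offsets: (dy, dx) center-to-target offsets measured in the visible
             light reference image for the same templates."""
    estimates = [(cy + dy, cx + dx)
                 for (cy, cx), (dy, dx) in zip(matched_centers, offsets)]
    return tuple(np.mean(estimates, axis=0))  # fused target position (y, x)
```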
For the nonlinear, non-monotonic, non-functional gray-scale relationship characteristic of heterogeneous images, the gray-scale images are converted into gradient orientation maps, which preserves the features common to the heterogeneous images, and target localization is achieved by the combined template matching method.
Relevant experiments show that the method solves the robustness problem of heterogeneous image matching well, making the heterogeneous image matching technique applicable to air-to-ground guided weapons.
Those skilled in the art will appreciate that the invention may be practiced without these specific details.

Claims (8)

1. An infrared-visible light heterogeneous image matching method is characterized by comprising the following steps:
(1) converting the visible light reference image to the same viewing angle as the infrared real-time image through perspective transformation according to the ballistic attitude information at each moment, selecting a plurality of reference object templates from the visible light reference image, and calculating the positional relation between the center of each template and the target point;
(2) processing the infrared real-time map and the visible light reference map respectively to form an infrared gradient orientation map and a visible light gradient orientation map;
(3) calculating the gradient magnitudes of all pixels of the infrared gradient orientation map and the visible light gradient orientation map respectively;
(4) dividing each reference template gradient orientation map into M × N sub-blocks, calculating the gradient sum of each sub-block, and extracting the interest blocks of the reference template according to these sums;
(5) for each reference template, traversing the infrared gradient orientation map with a certain step length according to the size of the reference template to obtain a plurality of regions to be matched, and performing the following operations on each region to be matched:
dividing the region to be matched of the infrared gradient orientation map into M × N sub-blocks and matching each sub-block with the interest blocks of the reference template to obtain the best matching result, i.e. the position in the infrared gradient orientation map of the homologous point corresponding to the center of the reference template;
(6) obtaining the position of the target point in the infrared real-time image from the positional relation between each reference template and the target point and from the positions of the homologous points in the infrared gradient orientation map, completing the infrared-visible light heterogeneous image matching.
2. The infrared-visible light heterogeneous image matching method according to claim 1, wherein: in step (2), in the infrared gradient orientation map, the gradient G1(x, y) of pixel (x, y) is defined as a vector:
G1(x, y) = [G1x, G1y]^T = [∂I1/∂x, ∂I1/∂y]^T
where G1x is the gradient of pixel (x, y) in the x direction, G1y is the gradient of pixel (x, y) in the y direction, and I1 is the pixel value of pixel (x, y);
the direction of the gradient is defined as
β1(x, y) = arctan(G1y/G1x)
where β1(x, y) is the angle of the gradient G1(x, y) of pixel (x, y) relative to the x axis.
3. The infrared-visible light heterogeneous image matching method according to claim 2, wherein: the gradient magnitude |G1(x, y)| of pixel (x, y) in the infrared gradient orientation map satisfies
|G1(x, y)| = √(G1x² + G1y²)
4. The infrared-visible light heterogeneous image matching method according to claim 1, wherein: in step (2), in the visible light gradient orientation map, the gradient G2(x, y) of each pixel (x, y) is defined as a vector:
G2(x, y) = [G2x, G2y]^T = [∂I2/∂x, ∂I2/∂y]^T
where G2x is the gradient of pixel (x, y) in the x direction, G2y is the gradient of pixel (x, y) in the y direction, and I2 is the pixel value of pixel (x, y);
the direction of the gradient is defined as
β2(x, y) = arctan(G2y/G2x)
where β2(x, y) is the angle of the gradient G2(x, y) of pixel (x, y) relative to the x axis.
5. The infrared-visible light heterogeneous image matching method according to claim 4, wherein: in step (3), in the visible light gradient orientation map, the gradient magnitude |G2(x, y)| of pixel (x, y) satisfies
|G2(x, y)| = √(G2x² + G2y²)
6. The infrared-visible light heterogeneous image matching method according to claim 1, wherein: in step (4), the gradient sum of each sub-block is equal to the sum of the gradient magnitudes of all pixels within the sub-block.
7. The infrared-visible light heterogeneous image matching method according to claim 1, wherein: in step (4), the method for extracting the interest blocks of the reference template is as follows:
the reference template is divided into M × N sub-blocks, the gradient sum of each sub-block is calculated, the sums are sorted in descending order, and the top k sub-blocks are selected as interest blocks, where k ≥ 3.
8. The infrared-visible light heterogeneous image matching method according to claim 1, wherein: the position of the target point in the infrared gradient orientation map is transmitted to a servo system or guidance system for subsequent use.
CN202110348571.4A 2021-03-31 2021-03-31 Infrared-visible light heterogeneous image matching method Pending CN113128573A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110348571.4A CN113128573A (en) 2021-03-31 2021-03-31 Infrared-visible light heterogeneous image matching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110348571.4A CN113128573A (en) 2021-03-31 2021-03-31 Infrared-visible light heterogeneous image matching method

Publications (1)

Publication Number Publication Date
CN113128573A 2021-07-16

Family

ID=76775432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110348571.4A Pending CN113128573A (en) 2021-03-31 2021-03-31 Infrared-visible light heterogeneous image matching method

Country Status (1)

Country Link
CN (1) CN113128573A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5715325A (en) * 1995-08-30 1998-02-03 Siemens Corporate Research, Inc. Apparatus and method for detecting a face in a video image
CN103093193A (en) * 2012-12-28 2013-05-08 中国航天时代电子公司 Space image guided weapon object identification method
CN105631860A (en) * 2015-12-21 2016-06-01 中国资源卫星应用中心 Local sorted orientation histogram descriptor-based image correspondence point extraction method
CN105657432A (en) * 2016-01-12 2016-06-08 湖南优象科技有限公司 Video image stabilizing method for micro unmanned aerial vehicle
CN107464252A (en) * 2017-06-30 2017-12-12 南京航空航天大学 A kind of visible ray based on composite character and infrared heterologous image-recognizing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈青; 王飞: "SIFT-based NSCT-domain watermarking algorithm resistant to geometric attacks", Packaging Engineering (包装工程), no. 05 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152102A (en) * 2023-09-07 2023-12-01 南京天创电子技术有限公司 Method and system for detecting working state of coke oven waste gas mound rod
CN117152102B (en) * 2023-09-07 2024-04-05 南京天创电子技术有限公司 Method and system for detecting working state of coke oven waste gas mound rod


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination