CN112489091B - Full strapdown image seeker target tracking method based on direct-aiming template - Google Patents


Info

Publication number
CN112489091B
CN112489091B (application CN202011508984.6A)
Authority
CN
China
Prior art keywords
target
template
image
matching
seeker
Prior art date
Legal status: Active
Application number
CN202011508984.6A
Other languages
Chinese (zh)
Other versions
CN112489091A (en)
Inventor
周伟
卢鑫
陆叶
颜有翔
周波
李路
李显彦
胡军
朱磊
马帅宝
Current Assignee
Hunan Huanan Optoelectronic Group Co ltd
Original Assignee
Hunan Huanan Optoelectronic Group Co ltd
Priority date
Filing date
Publication date
Application filed by Hunan Huanan Optoelectronic Group Co., Ltd.
Priority to CN202011508984.6A
Publication of CN112489091A
Application granted
Publication of CN112489091B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/223 Analysis of motion using block-matching
    • G06T 7/238 Analysis of motion using block-matching using non-full search, e.g. three-step search
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a full strapdown image seeker target tracking method based on a direct-aiming template. The method corrects the target image with the direct-aiming template and inertial navigation information so that the template is consistent in viewing angle and scale with the target image acquired by the full strapdown image seeker, giving high tracking precision. By fusing inertial navigation information with image information, the target memory tracking method narrows the target search range, provides translation, scale and rotation invariance, filters out interference from similar targets, and handles target tracking when a nonlinearly maneuvering target briefly leaves the field of view or is occluded. The method adopts a hill-climbing search strategy and uses an improved mutual information entropy as the similarity measure, which requires little computation, resists interference, and improves target tracking robustness.

Description

Full strapdown image seeker target tracking method based on direct-aiming template
Technical Field
The invention belongs to the field of image processing and machine vision and relates to a target tracking method for an air-to-ground full strapdown image seeker, in particular a real-time method for attacking a stationary ground target using a direct-aiming template. It is suitable for air-to-ground guided weapons equipped with a full strapdown image seeker, an inertial measurement device and a pod, and in principle for all imaging-guided weapons.
Background
The instantaneous optical field of view of a full strapdown image seeker is large, so image noise and the image-processing workload are large. Because the photoelectric detector is rigidly fixed to the airframe, carrier disturbance is severe, and a nonlinearly maneuvering target easily leaves the field of view, deforms geometrically, or becomes occluded during tracking. The direct-aiming template is generated by reconnaissance equipment, such as a pod distributed-aperture photoelectric system, pointing directly at the target, and it differs considerably in viewing angle, scale, waveband and background from the real-time image acquired by the full strapdown image seeker. These factors make traditional direct-aiming-template tracking methods hard to adapt to the full strapdown image seeker. Traditional template-matching tracking has only translation invariance, lacking rotation and scale invariance; moreover, common similarity measures (absolute difference, product correlation, etc.) resist geometric distortion poorly and their search strategies are computationally expensive, which limits their applicability.
Disclosure of Invention
In view of these problems, the invention aims to provide a full strapdown image seeker target tracking method based on a direct-aiming template. It tracks a stationary ground target in real time despite differences in viewing angle and scale between the direct-aiming template and the target image acquired by the full strapdown image seeker, large maneuvers, loss of the field of view, occlusion and similar conditions; it enables the full strapdown image seeker to report target position information quickly and accurately, supporting precise target striking.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
A full strapdown image seeker target tracking method based on a direct-aiming template is realized by the following steps:
(1) loading direct-aiming template 1 and the corresponding longitude, latitude, height and attitude data into the full strapdown image seeker, and performing image correction using them to obtain target template 2 for tracking;
(2) matching target template 2 against the target real-time image of the target and its neighbourhood acquired by the full strapdown image seeker, using a hill-climbing search strategy and the normalized grayscale mutual information entropy similarity measure; if matching succeeds, the initial position of the target in the seeker image is obtained, otherwise the next real-time image is taken and matching continues;
centered on that initial position, selecting a region of the target real-time image as target template 2 for tracking, and matching it against the real-time image in combination with inertial navigation information, again using the hill-climbing search strategy and the normalized grayscale mutual information entropy similarity measure; if the matching correlation value exceeds threshold 1, matching is considered successful, otherwise matching continues;
(3) if the correlation value exceeds threshold 2, the template-update condition is met and target template 3 is obtained.
Further, target template 2 for tracking is obtained in step (1) by image correction using direct-aiming template 1. Template 1 and the position and inertial navigation data of the corresponding moment are obtained from the pod equipment, and the template is transformed into target template 2, consistent in viewing angle and scale with the target real-time image seen by the full strapdown image seeker. The transformation requires the position (λ_0, L_0, h_0) and attitude of the pod at the moment template 1 was taken (the attitude-angle expression is rendered as an image in the original), the pod focal length f_0, the full strapdown image seeker focal length f_1, and the target position data (λ_t, L_t, h_t); it corrects for the seeker position (λ_1, L_1, h_1) and attitude at the correction moment, from elevation angle θ′ to elevation angle θ″, as follows:
[Equations (1) to (5): geometric transformation relations between the direct-aiming template and the corrected template; rendered as images in the original and not reproduced here.]
Similarly:
[Equation (6): rendered as an image in the original, not reproduced here.]
In these formulas, θ′ and R are the elevation angle and the distance to the target at the moment direct-aiming template 1 was acquired; θ″ and R′ are the elevation angle and distance to the target at the moment target template 2 is corrected; u, v and u′, v′ are the target image coordinates at the template-1 acquisition moment and the template-2 correction moment, respectively; Y is the length of the target PP′; Δθ′ is the imaging opening angle of target PP′ in the seeker image coordinate system at the template-1 moment, and Δθ″ is the imaging opening angle of target PP′ at the correction moment.
Through formulas (5) and (6), direct-aiming template 1 is corrected into target template 2, consistent in viewing angle and scale with the target real-time image.
Further, in step (2) target template 2 is matched against the target real-time image of the target and its neighbourhood acquired by the full strapdown image seeker. The real-time image region is obtained by predicting the target with inertial navigation information: given the position (λ, L, h) of the full strapdown image seeker at the current moment, first compute the coordinates of the current position and of the target position in the Earth-centered (ECEF) frame:
[Equations (7) to (12): geodetic-to-ECEF conversion of the current seeker position and the target position; rendered as images in the original and not reproduced here.]
where a_e and b_e are the semi-axes of the Earth ellipsoid; R_wt is the radius of curvature of the prime vertical at the current position and R_N the radius of curvature of the meridian at the current position; R_wt1 is the radius of curvature of the prime vertical at the target position and R_N1 the radius of curvature of the meridian at the target position; (λ_t, L_t, h_t) are the longitude, latitude and height of the target position.
Subtracting the two gives the relative position vector ΔE:
ΔE = E_t - E   (13)
The position vector is then transformed through the sequence ECEF frame → geographic frame → missile-body frame → camera frame → image frame; from the camera pinhole imaging model and the coordinate transformation between the ECEF frame and the image coordinate system, the imaging coordinates (uu, vv) of the target in the current full strapdown image seeker state are obtained:
[Equation (14): pinhole projection of ΔE into image coordinates; rendered as an image in the original, not reproduced here.]
In the formula, the direction cosine matrix from the ECEF frame to the geographic frame and the direction cosine matrix from the geographic frame to the missile-body frame appear as images in the original; (u_0, v_0) is the image center of the full strapdown image seeker; f_1 is the seeker focal length; ΔE(1), ΔE(2) and ΔE(3) are the projections of formula (13) on the three coordinate axes of the ECEF frame.
Selecting the target real-time image region in this way allows the target to be matched quickly, effectively avoids false targets, and handles the brief loss of the field of view caused by nonlinear target maneuvering.
Further, matching target template 2 for tracking against the target real-time image in combination with inertial navigation information in step (2) amounts to determining the scale and rotation parameters during tracking. Centered on the initial position in the seeker image, a region of the target real-time image is selected as target template 2 for tracking and corrected with inertial navigation information. Because the distance from the full strapdown image seeker to the target is far larger than the target's depth, the template scale correction factor k is:
[Equation (15): template scale correction factor k; rendered as an image in the original, not reproduced here.]
where dis_1 is the distance between the seeker and the target in the current full strapdown image frame at matching, and dis_0 the distance in the previous frame; θ_1 and θ_2 are the opening angles of the target in the full strapdown image seeker in the previous and current frames, which for adjacent frames can be regarded as equal;
During tracking, the template rotation correction angle Δγ is:
Δγ = γ_1 - γ_0   (16)
where γ_1 is the seeker roll angle of the current full strapdown image frame at matching and γ_0 the seeker roll angle of the previous frame.
Further, the normalized grayscale mutual information entropy matching method with the hill-climbing search strategy in step (2) adjusts the hill-climbing order by priority and uses sampling to reduce the correlation computations; by placing the initial hill-climbing points at equal intervals, a coarse-to-fine search is realized while accuracy is preserved. The specific process is as follows:
Assume the real-time image S is N × N pixels and the template image T is M × M pixels. The metric function of the improved normalized grayscale mutual information entropy matching method is:
[Equation (17): normalized grayscale mutual information entropy NMI(u, v); rendered as an image in the original, not reproduced here.]
In the formula, the edge entropy of the real-time subgraph S and the edge entropy of the template graph appear as images in the original; ps(i, j) is the probability of the real-time subgraph gray level and pt(i, j) the probability of the template-graph gray level; the joint entropy of the real-time subgraph and the template graph also appears as an image, with pst(i, j) the joint gray-level probability of the two and PM the weighted value of the gradient magnitude and principal direction of the real-time graph and the template graph. NMI(u, v) is the matching metric of real-time subgraph S_{u,v}, with 1 ≤ NMI(u, v) ≤ 2 and 0 ≤ u, v ≤ N - M; it reflects the similarity between the template graph and the real-time subgraph, larger values meaning higher similarity;
if the correlation value of the real-time graph and the template graph is larger than the threshold value 1, the matching is considered to be successful, otherwise, the matching is failed.
Further, in step (3), when the correlation value exceeds threshold 2 the template is updated for matching the next frame. A complete update is used: the image of a certain region centered on the best matching position of the current frame becomes template 3 and is matched against the next frame;
T_new = T(NMI_max)   (18)
where T(NMI_max) is the template picture of a certain region centered on the position of maximum mutual information entropy in the current frame. The cycle of template matching, computing the grayscale mutual information entropy, obtaining the current best matching position, updating the target template and matching again is then repeated to track the target stably.
Compared with the prior art, the invention has the following advantages:
1. The method corrects the direct-aiming template with inertial navigation information so that it matches the real-time image acquired by the full strapdown image seeker in viewing angle and scale, giving high tracking precision.
2. By fusing inertial navigation and image information, the target memory tracking method narrows the target search range, offers translation, scale and rotation invariance, filters out interference from similar targets, and handles tracking when a nonlinearly maneuvering target briefly leaves the field of view or is occluded.
3. The method adopts a hill-climbing search strategy and the improved mutual information entropy as the similarity measure, requiring little computation and resisting interference, which improves target tracking robustness.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the template calibration principle of the present invention;
FIG. 3 is a schematic perspective projection diagram of the present invention;
FIG. 4 is a schematic diagram of an improved hill-climbing search strategy according to the present invention;
FIG. 5 is a schematic diagram of the improved normalized gray scale mutual information entropy matching principle of the present invention;
FIG. 6 is a schematic diagram of the principle of the scale correction factor of the present invention.
Detailed Description
The following describes in detail a specific embodiment of the present invention with reference to the drawings.
As shown in fig. 1, a full strapdown image seeker target tracking method based on a direct-aiming template comprises the following specific steps:
(1) The full strapdown image seeker performs image correction using the loaded direct-aiming template 1 and the corresponding longitude, latitude, height and attitude data to obtain target template 2 for tracking.
(2) Target template 2 is matched against the target real-time image of the target and its neighbourhood acquired by the full strapdown image seeker, using a hill-climbing search strategy and the normalized grayscale mutual information entropy similarity measure; if matching succeeds, the initial position of the target in the seeker image is obtained, otherwise the next real-time image is taken and matching continues;
centered on that initial position, a region of the target real-time image is selected as target template 2 for tracking and matched against the real-time image in combination with inertial navigation information, again using the hill-climbing search strategy and the normalized grayscale mutual information entropy similarity measure; if the matching correlation value exceeds threshold 1, matching is considered successful, otherwise matching continues.
(3) If the correlation value exceeds threshold 2, the template-update condition is met and target template 3 is obtained.
In step (1): direct-aiming template 1 is combined with inertial navigation information and corrected into target template 2. Template 1 and the position and inertial navigation data of the corresponding moment are obtained from the pod equipment, and the template is transformed to be consistent in viewing angle and scale with the real-time image seen by the full strapdown image seeker.
The transformation requires the position (λ_0, L_0, h_0) and attitude of the pod at the moment template 1 was taken (the attitude-angle expression is rendered as an image in the original), the pod focal length f_0, the full strapdown image seeker focal length f_1, and the target position data (λ_t, L_t, h_t); it corrects for the seeker position (λ_1, L_1, h_1) and attitude at the correction moment, from elevation angle θ′ to elevation angle θ″. The transformation process is sketched in fig. 2;
From fig. 2:
[Equations (1) to (5): geometric transformation relations between the direct-aiming template and the corrected template; rendered as images in the original and not reproduced here.]
Similarly:
[Equation (6): rendered as an image in the original, not reproduced here.]
In these formulas, θ′ and R are the elevation angle and the distance to the target at the moment direct-aiming template 1 was acquired; θ″ and R′ are the elevation angle and distance to the target at the moment target template 2 is corrected; u, v and u′, v′ are the target image coordinates at the template-1 acquisition moment and the template-2 correction moment, respectively; Y is the length of the target PP′; Δθ′ is the imaging opening angle of target PP′ in the seeker image coordinate system at the template-1 moment, and Δθ″ is the imaging opening angle of target PP′ at the correction moment.
Through formulas (5) and (6), direct-aiming template 1 is corrected into target template 2, consistent in viewing angle and scale with the real-time image.
In step (2), target template 2 is matched against the real-time image (hereafter the real-time image) of the target and its neighbourhood acquired by the full strapdown image seeker; the real-time image region is obtained by predicting the target with inertial navigation information.
Given the position (λ, L, h) of the full strapdown image seeker at the current moment, first compute the coordinates of the current position and of the target position in the ECEF frame:
[Equations (7) to (12): geodetic-to-ECEF conversion of the current seeker position and the target position; rendered as images in the original and not reproduced here.]
where a_e and b_e are the semi-axes of the Earth ellipsoid; R_wt is the radius of curvature of the prime vertical at the current position and R_N the radius of curvature of the meridian at the current position; R_wt1 is the radius of curvature of the prime vertical at the target position and R_N1 the radius of curvature of the meridian at the target position; (λ_t, L_t, h_t) are the longitude, latitude and height of the target position.
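The geodetic-to-ECEF conversion performed by equations (7) to (12) (whose images are not reproduced above) can be sketched with the standard WGS-84 formulas. This is a minimal illustration under that assumption: the function names are invented here, and the patent's exact equations, being images, may differ in detail.

```python
import math

# WGS-84 ellipsoid semi-axes (a_e and b_e in the patent's notation)
A_E = 6378137.0
B_E = 6356752.3142
E2 = 1.0 - (B_E / A_E) ** 2  # first eccentricity squared

def geodetic_to_ecef(lam_deg, L_deg, h):
    """Convert longitude λ, latitude L, height h to ECEF coordinates.

    R_wt is the radius of curvature of the prime vertical (the
    'mortise-unitary ring' in the machine translation).
    """
    lam = math.radians(lam_deg)
    L = math.radians(L_deg)
    R_wt = A_E / math.sqrt(1.0 - E2 * math.sin(L) ** 2)
    x = (R_wt + h) * math.cos(L) * math.cos(lam)
    y = (R_wt + h) * math.cos(L) * math.sin(lam)
    z = (R_wt * (1.0 - E2) + h) * math.sin(L)
    return (x, y, z)

def relative_position(seeker_llh, target_llh):
    """Relative position vector ΔE = E_t - E (formula (13))."""
    E = geodetic_to_ecef(*seeker_llh)
    E_t = geodetic_to_ecef(*target_llh)
    return tuple(t - s for s, t in zip(E, E_t))
```

At (0°, 0°, 0) this returns the semi-major axis on the x axis, and at 90° latitude the z component approaches the semi-minor axis, as expected of the standard form.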
Subtracting the two gives the relative position vector ΔE:
ΔE = E_t - E   (13)
The position vector is then transformed through the sequence ECEF frame → geographic frame → missile-body frame → camera frame → image frame; as shown in fig. 3, from the camera pinhole imaging model and the coordinate transformation between the ECEF frame and the image coordinate system, the imaging coordinates (uu, vv) of the target in the current missile state are obtained:
[Equation (14): pinhole projection of ΔE into image coordinates; rendered as an image in the original, not reproduced here.]
In the formula, the direction cosine matrix from the ECEF frame to the geographic frame and the direction cosine matrix from the geographic frame to the missile-body frame appear as images in the original; (u_0, v_0) is the image center of the full strapdown image seeker; f_1 is the seeker focal length; ΔE(1), ΔE(2) and ΔE(3) are the projections of formula (13) on the three coordinate axes of the ECEF frame.
The target position is predicted through inertial navigation information and the real-time image region is selected accordingly, so the target can be matched quickly, false targets are effectively avoided, and the brief loss of the field of view caused by nonlinear target maneuvering is handled.
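The projection described by formula (14) (an image in the original) can be sketched with a standard pinhole model. The composed direction cosine matrix and the camera-axis convention (z forward, x right, y down) are assumptions of this sketch, not the patent's own definitions.

```python
import numpy as np

def project_to_image(delta_E, C_cam_ecef, f1, u0, v0):
    """Project the relative position vector ΔE (formula (13)) into
    image coordinates (uu, vv) with a pinhole model.

    C_cam_ecef is the composed direction cosine matrix
    ECEF -> geographic -> missile-body -> camera; the individual
    matrices are rendered as images in the patent and are assumed
    given here. (u0, v0) is the image center, f1 the seeker focal
    length in pixels.
    """
    p = C_cam_ecef @ np.asarray(delta_E, dtype=float)
    uu = u0 + f1 * p[0] / p[2]   # assumed camera frame: z forward
    vv = v0 + f1 * p[1] / p[2]
    return uu, vv
```

With an identity attitude, a target 10 m right and 5 m down at 1000 m range maps 10 and 5 pixels off-center for a 1000 px focal length.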
In step (2), target template 2, made consistent in viewing angle and scale with the full strapdown image seeker in step (1), and the real-time image are obtained; the position of the target is then found by applying the normalized grayscale mutual information entropy matching method based on the hill-climbing search strategy to template 2 and the real-time image.
In the invention, determining the scale and rotation parameters during tracking means matching the real-time image obtained in step (2) against target template 2, correcting both with inertial navigation information, and obtaining the target position with the hill-climbing search strategy and normalized mutual information entropy matching.
Target template 2 is corrected through inertial navigation information, so the normalized mutual information entropy matching method gains scale and rotation invariance. During tracking, because the distance between the full strapdown image seeker and the target is far larger than the target's depth, the template scale correction factor k is:
[Equation (15): template scale correction factor k; rendered as an image in the original, not reproduced here.]
where dis_1 is the distance between the seeker and the target in the current full strapdown image frame at matching, and dis_0 the distance in the previous frame; θ_1 and θ_2 are the opening angles of the target in the previous and current frames, which for adjacent frames can be regarded as equal. The scale-invariant correction principle is sketched in fig. 6.
During tracking, the template rotation correction angle Δγ is:
Δγ = γ_1 - γ_0   (16)
where γ_1 is the seeker roll angle of the current full strapdown image frame at matching and γ_0 the seeker roll angle of the previous frame.
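The scale and rotation corrections can be sketched together as a template warp. The exact form of formula (15) is an image in the original, so the ratio k = dis_0 / dis_1 is an assumed form (the target's image grows as the seeker closes in); `correct_template` and its nearest-neighbour warp are illustrative only.

```python
import numpy as np

def correct_template(template, dis0, dis1, gamma0, gamma1):
    """Scale-and-rotation correction of the tracking template.

    Assumed scale factor k = dis0 / dis1 (formula (15) is rendered as
    an image in the patent); rotation correction Δγ = γ1 - γ0 per
    formula (16), with γ the seeker roll angle in degrees.
    Nearest-neighbour inverse mapping; out-of-range pixels stay zero.
    """
    k = dis0 / dis1                      # assumed scale factor
    dg = np.deg2rad(gamma1 - gamma0)     # rotation correction Δγ
    h, w = template.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros_like(template)
    cos_g, sin_g = np.cos(dg), np.sin(dg)
    for y in range(h):
        for x in range(w):
            # inverse map: undo scale, then rotation about the centre
            xr = (x - cx) / k
            yr = (y - cy) / k
            xs = cos_g * xr + sin_g * yr + cx
            ys = -sin_g * xr + cos_g * yr + cy
            xi, yi = int(round(xs)), int(round(ys))
            if 0 <= xi < w and 0 <= yi < h:
                out[y, x] = template[yi, xi]
    return out
```

When the distance and roll angle are unchanged the warp is the identity, which is a quick sanity check on the inverse mapping.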
To meet real-time and accuracy requirements, an improved hill-climbing search strategy and a simplified correlation measure are adopted. The improvement, shown in fig. 4, mainly adjusts the hill-climbing order by priority and reduces correlation computations by sampling. Ordering the climbs by priority removes many unnecessary search computations while preserving matching accuracy: several climbers search for the point with the maximum correlation value (grayscale mutual information entropy) while others find local maxima, and maintaining a correlation matrix table and a searched-position matrix table further improves efficiency by avoiding redundant computation. The search matching, shown in fig. 5, realizes a coarse-to-fine search.
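The multi-climber search with a cached correlation table can be sketched as follows. `hill_climb_search`, the equally spaced starting grid, and the generic `score` callback (standing in for the NMI metric) are all illustrative choices, not the patent's code.

```python
def hill_climb_search(score, shape, step=8):
    """Coarse-to-fine hill climbing over a match-score surface.

    score(u, v) returns the match metric (e.g. NMI) at offset (u, v)
    on an H x W grid of candidate positions. Climbers start on an
    equally spaced grid (the initial hill-climbing points); each
    climbs greedily over its 8-neighbourhood to a local maximum.
    The cache plays the role of the correlation matrix table,
    avoiding redundant metric evaluations.
    """
    H, W = shape
    cache = {}

    def s(u, v):
        if (u, v) not in cache:
            cache[(u, v)] = score(u, v)
        return cache[(u, v)]

    best_val, best_pos = float("-inf"), (0, 0)
    for u0 in range(0, H, step):
        for v0 in range(0, W, step):
            u, v = u0, v0
            while True:
                nbrs = [(u + du, v + dv)
                        for du in (-1, 0, 1) for dv in (-1, 0, 1)
                        if 0 <= u + du < H and 0 <= v + dv < W]
                u2, v2 = max(nbrs, key=lambda p: s(*p))
                if (u2, v2) == (u, v):
                    break
                u, v = u2, v2
            if s(u, v) > best_val:
                best_val, best_pos = s(u, v), (u, v)
    return best_val, best_pos
```

On a unimodal surface every climber reaches the global peak, so only a fraction of the N x N candidate positions are ever evaluated.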
Assume the real-time image S is N × N pixels and the template image T is M × M pixels. The metric function of the improved normalized grayscale mutual information entropy matching method is:
[Equation (17): normalized grayscale mutual information entropy NMI(u, v); rendered as an image in the original, not reproduced here.]
In the formula, the edge entropy of the real-time subgraph S and the edge entropy of the template graph appear as images in the original; ps(i, j) is the probability of the real-time subgraph gray level and pt(i, j) the probability of the template-graph gray level; the joint entropy of the real-time subgraph and the template graph also appears as an image, with pst(i, j) the joint gray-level probability of the two and PM the weighted value of the gradient magnitude and principal direction of the real-time graph and the template graph. NMI(u, v) is the matching metric of real-time subgraph S_{u,v}, with 1 ≤ NMI(u, v) ≤ 2 and 0 ≤ u, v ≤ N - M; it reflects the similarity between the template graph and the real-time subgraph, larger values meaning higher similarity.
If the correlation value of the real-time graph and the template graph is larger than the threshold value 1, the matching is considered to be successful, otherwise, the matching is failed.
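Since formula (17) is only an image above, here is a sketch of the classical core of the measure, NMI = (H(S) + H(T)) / H(S, T), which indeed lies in [1, 2]; the PM gradient-magnitude/principal-direction weighting of the patent's improved version is omitted, so this is the unweighted measure only.

```python
import numpy as np

def normalized_mutual_information(sub, tmpl, bins=32):
    """Classical normalized grayscale mutual information entropy:
    NMI = (H(S) + H(T)) / H(S, T), with 1 <= NMI <= 2.
    The PM weighting of the patent's improved formula (17) is omitted.
    """
    s = np.asarray(sub, dtype=float).ravel()
    t = np.asarray(tmpl, dtype=float).ravel()
    joint, _, _ = np.histogram2d(s, t, bins=bins)
    pst = joint / joint.sum()        # joint gray-level probability
    ps = pst.sum(axis=1)             # marginal: real-time subgraph
    pt = pst.sum(axis=0)             # marginal: template graph

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    h_s, h_t, h_st = entropy(ps), entropy(pt), entropy(pst.ravel())
    return (h_s + h_t) / h_st if h_st > 0 else 2.0
```

Identical images give NMI = 2, and unrelated images drift toward 1, matching the stated range of the metric.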
In step (3), if the correlation value exceeds threshold 2, the template is updated for matching the next frame. A complete update is generally used: the image of a certain region centered on the best matching position of the current frame becomes template 3 and is matched against the next frame.
T new =T(NMI max ) (18)
In the formula, T (NMI) max ) Matching mutual information entropy for current framesThe maximum position is the template picture of a certain area at the center. And then repeatedly entering a cycle of 'template matching, calculating gray scale mutual information entropy, obtaining the current best matching position, target template updating and template matching' to realize stable target tracking.
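The match/update cycle described above, including the full template replacement of formula (18), can be sketched as follows. For brevity `best_match` is an exhaustive scan rather than the patent's hill-climbing search, and the `score` callback, the `thr2` value (threshold 2), and the generator structure are illustrative assumptions.

```python
import numpy as np

def best_match(frame, tmpl, score):
    """Exhaustive scan (the patent uses a hill-climbing search instead):
    return the top-left corner (u, v) maximizing the similarity score."""
    n, m = frame.shape[0], tmpl.shape[0]
    pos, best = (0, 0), -np.inf
    for u in range(n - m + 1):
        for v in range(n - m + 1):
            s = score(frame[u:u + m, v:v + m], tmpl)
            if s > best:
                best, pos = s, (u, v)
    return pos, best

def track(frames, tmpl, score, thr2=1.2):
    """Per-frame cycle: match -> best position -> (conditionally) replace
    the template with the patch at the best match (complete update)."""
    m = tmpl.shape[0]
    for frame in frames:
        (u, v), s = best_match(frame, tmpl, score)
        if s > thr2:                        # threshold 2: template update
            tmpl = frame[u:u + m, v:v + m].copy()
        yield (u, v), s
```

With the NMI measure plugged in as `score`, this reproduces the stated loop of matching, locating the best position, and refreshing the target template.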

Claims (5)

1. A full strapdown image seeker target tracking method based on a direct-view template, characterized by comprising the following specific implementation steps:
(1) loading a direct-view template 1 and the corresponding longitude, latitude, height and attitude data into the full strapdown image seeker, and performing image correction using the loaded direct-view template 1 and the corresponding longitude, latitude, height and attitude data to obtain a target template 2 for tracking;
(2) matching the target template 2 with the real-time image of the target and its neighboring area acquired by the full strapdown image seeker, using a hill-climbing search strategy and the normalized gray-scale mutual information entropy similarity measure; if the matching succeeds, the initial position of the target in the image acquired by the full strapdown image seeker is obtained; otherwise, the next target real-time image is issued and matching continues;
with the initial position obtained in the image of the full strapdown image seeker as the center, selecting a certain area of the target real-time image as the target template 2 for tracking, and matching the target template 2 for tracking with the target real-time image in combination with inertial navigation information, using a hill-climbing search strategy and the normalized gray-scale mutual information entropy similarity measure; if the matching correlation value is greater than threshold 1, the matching is considered successful; otherwise, matching continues;
the normalized gray-scale mutual information entropy matching method with the hill-climbing search strategy adjusts the hill-climbing order according to priority and reduces correlation computation by sampling; a coarse-to-fine search process is realized by setting equally spaced initial hill-climbing points while ensuring accuracy, and the specific process is as follows:
assuming that the size of the real-time image S is N × N pixels and the size of the template image T is M × M pixels, the metric function of the improved normalized gray-scale mutual information entropy matching method is expressed as:

NMI(u, v) = (H(S) + H(T)) / H(S, T)

in the formula,

H(S) = -Σ_i Σ_j ps(i, j)·log ps(i, j)

is the edge entropy of the real-time sub-image S, and ps(i, j) is the probability of occurrence of the real-time sub-image gray levels;

H(T) = -Σ_i Σ_j pt(i, j)·log pt(i, j)

is the edge entropy of the template image, and pt(i, j) is the probability of occurrence of the template image gray levels;

H(S, T) = -Σ_i Σ_j pst(i, j)·log pst(i, j)

is the joint entropy of the real-time sub-image and the template image, and pst(i, j) is the joint gray-level probability of the real-time sub-image and the template image; PM is the weighted value of the gradient amplitude and principal direction of the real-time image and the template image; NMI(u, v) is the matching metric value of the real-time sub-image S_{u,v}, with 1 ≤ NMI(u, v) ≤ 2 and 0 ≤ u, v ≤ N − M; its magnitude reflects the degree of similarity between the template image and the real-time sub-image, and the larger the value, the higher the similarity;
if the correlation value between the real-time image and the template image is greater than threshold 1, the matching is considered successful; otherwise, the matching fails;
(3) if the correlation value is greater than threshold 2, the template update condition is met, and a target template 3 is obtained.
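The coarse-to-fine hill-climbing search with equally spaced initial points recited in claim 1 can be sketched as below. The seed spacing `step`, the 4-neighborhood climbing rule, and the `score` callback are illustrative assumptions; the patent's priority ordering and sampling of correlation computations are not reproduced.

```python
import numpy as np

def hill_climb_search(frame, tmpl, score, step=4):
    """Coarse-to-fine search: start from equally spaced seed points,
    climb each seed to a local maximum of the similarity score over
    template positions, and keep the best position found."""
    n, m = frame.shape[0], tmpl.shape[0]
    limit = n - m                              # last valid top-left corner

    def s(u, v):
        return score(frame[u:u + m, v:v + m], tmpl)

    best_pos, best_val = (0, 0), -np.inf
    seeds = [(u, v) for u in range(0, limit + 1, step)
                    for v in range(0, limit + 1, step)]
    for u, v in seeds:
        val = s(u, v)
        improved = True
        while improved:                        # climb to a local maximum
            improved = False
            for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nu, nv = u + du, v + dv
                if 0 <= nu <= limit and 0 <= nv <= limit and s(nu, nv) > val:
                    u, v, val = nu, nv, s(nu, nv)
                    improved = True
        if val > best_val:
            best_val, best_pos = val, (u, v)
    return best_pos, best_val
```

Only the seed positions and the climbed paths are evaluated, so far fewer correlation values are computed than in an exhaustive scan.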
2. The method for tracking the target of the full strapdown image seeker based on the direct-view template as claimed in claim 1, wherein the step (1) of obtaining the target template 2 for tracking after image correction using the direct-view template 1 is: the direct-view template 1 and the position and inertial navigation data of the corresponding time are obtained through pod equipment, and the direct-view template 1 is converted into a target template 2 consistent with the viewing angle and scale of the target real-time image when the full strapdown image seeker is in use; the conversion requires knowing the position (λ_0, L_0, h_0) and attitude of the direct-view template 1 at that moment, the pod focal length f_0, the focal length f_1 of the full strapdown image seeker, the target position data (λ_t, L_t, h_t), and the position (λ_1, L_1, h_1) and attitude of the full strapdown seeker at the correction moment; the elevation angle is corrected from θ′ to θ″ by the following conversions:
[formulas (1)-(5)]

Similarly:

[formula (6)]
in the formulas, θ′ and R are the elevation angle and the distance to the target at the moment the direct-view template 1 is acquired; θ″ and R′ are the elevation angle and the distance to the target at the correction moment of the target template 2; u, v and u′, v′ are the target image coordinates at the acquisition moment of the direct-view template 1 and the image coordinates of the corrected target template 2; y is the length of the target PP″; Δθ′ is the imaging opening angle of the target PP″ in the seeker image coordinate system at the moment of the direct-view template 1, and Δθ″ is the imaging opening angle of the target PP″ in the seeker image coordinate system at the correction moment;
by the formulas (5) and (6), the target template 2 consistent with the visual angle and the scale of the target real-time image can be obtained by correcting by using the direct-view template 1.
3. The method for tracking the target of the full strapdown image seeker based on the direct-view template as claimed in claim 1, wherein the step (2) of matching the target template 2 with the real-time image of the target and its neighboring area acquired by the full strapdown image seeker is: the target is predicted using inertial navigation information to obtain the target real-time image; given the position information (λ, L, h) of the full strapdown image seeker at the current time, first calculate the coordinates of the current position and the target position in the geocentric frame:
R_wt = a_e / √(1 − e²·sin²L) (7)

R_N = R_wt·(1 − e²) / (1 − e²·sin²L) (8)

E = [(R_wt + h)·cosL·cosλ, (R_wt + h)·cosL·sinλ, (R_wt·(1 − e²) + h)·sinL]^T (9)

R_wt1 = a_e / √(1 − e²·sin²L_t) (10)

R_N1 = R_wt1·(1 − e²) / (1 − e²·sin²L_t) (11)

E_t = [(R_wt1 + h_t)·cosL_t·cosλ_t, (R_wt1 + h_t)·cosL_t·sinλ_t, (R_wt1·(1 − e²) + h_t)·sinL_t]^T (12)

where e² = (a_e² − b_e²)/a_e²; in the formulas, a_e and b_e are the semi-major and semi-minor axes of the earth ellipsoid; R_wt is the prime-vertical radius of curvature at the current position, and R_N is the meridian radius of curvature at the current position; R_wt1 is the prime-vertical radius of curvature at the target position; R_N1 is the meridian radius of curvature at the target position; (λ_t, L_t, h_t) is the longitude, latitude and height data of the target position;
subtracting the two gives the relative position vector ΔE:

ΔE = E_t − E (13)
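The longitude/latitude/height to geocentric conversion and the relative vector of formula (13) can be sketched as follows. The WGS-84 semi-axes stand in for a_e and b_e (the claim only names "the earth ellipsoid radius"); only the prime-vertical radius R_wt is needed for this conversion.

```python
import numpy as np

# WGS-84 semi-axes, used here as illustrative values for a_e and b_e.
A_E, B_E = 6378137.0, 6356752.3142
E2 = (A_E**2 - B_E**2) / A_E**2            # first eccentricity squared

def lla_to_ecef(lam, L, h):
    """Geocentric (ECEF) coordinates from longitude lam and latitude L
    in radians and height h in meters, via the prime-vertical radius."""
    r_wt = A_E / np.sqrt(1.0 - E2 * np.sin(L)**2)   # prime-vertical radius
    return np.array([(r_wt + h) * np.cos(L) * np.cos(lam),
                     (r_wt + h) * np.cos(L) * np.sin(lam),
                     (r_wt * (1.0 - E2) + h) * np.sin(L)])

def relative_vector(seeker_lla, target_lla):
    """Relative position vector dE = E_t - E of formula (13)."""
    return lla_to_ecef(*target_lla) - lla_to_ecef(*seeker_lla)
```

At the equator the x-coordinate equals the semi-major axis, and at the pole the z-coordinate equals the semi-minor axis, which is a quick sanity check on the conversion.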
the position vector is then transformed through the coordinate chain geocentric frame → geographic frame → missile-body frame → camera frame → image frame, and the imaging coordinates (uu, vv) of the target in the current full strapdown image seeker state are obtained from the camera pinhole imaging model and the coordinate transformation relation between the geocentric frame and the image coordinate system:
[formula (14)]

in the formula, C_e^n is the direction cosine matrix from the geocentric frame to the geographic frame, C_n^b is the direction cosine matrix from the geographic frame to the missile-body frame, (u_0, v_0) is the image center of the full strapdown image seeker, f_1 is the seeker focal length, and ΔE(1), ΔE(2) and ΔE(3) are the projection components of formula (13) along the three coordinate axes of the geocentric frame.
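A hedged sketch of this projection step: rotate the relative vector through the geocentric → geographic → missile-body frames and apply the pinhole model. The NED convention for the geographic frame, the identity camera-to-body alignment, and the body x-axis as the optical axis are assumptions not fixed by the claim.

```python
import numpy as np

def c_ecef_to_ned(lam, L):
    """Direction cosine matrix from the geocentric (ECEF) frame to the
    geographic (NED) frame at longitude lam, latitude L (radians)."""
    sl, cl = np.sin(lam), np.cos(lam)
    sL, cL = np.sin(L), np.cos(L)
    return np.array([[-sL * cl, -sL * sl,  cL],
                     [-sl,       cl,       0.0],
                     [-cL * cl, -cL * sl, -sL]])

def project_target(dE, lam, L, c_n_b, f1, u0, v0):
    """Predicted image coordinates (uu, vv) of the target: rotate dE into
    the body/camera frame, then apply the pinhole model with focal length
    f1 and image center (u0, v0). Assumes the strapdown camera axes
    coincide with the body axes, with the body x-axis as optical axis."""
    xb, yb, zb = c_n_b @ c_ecef_to_ned(lam, L) @ dE
    uu = u0 + f1 * yb / xb        # horizontal pixel coordinate
    vv = v0 + f1 * zb / xb        # vertical pixel coordinate
    return uu, vv
```

With identity body attitude at (0, 0), a target due north lands on the image center, and an eastward offset shifts the horizontal coordinate, which matches the intended geometry of the prediction.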
4. The method for tracking the target of the full strapdown image seeker based on the direct-view template as claimed in claim 1, wherein the step (2) of matching the target template 2 for tracking with the target real-time image in combination with the inertial navigation information is to determine the scale and rotation parameters during tracking; with the initial position in the image acquired by the full strapdown image seeker as the center, a certain area of the target real-time image is selected as the target template 2 for tracking, and the target template 2 for tracking is corrected using the inertial navigation information; during tracking, because the distance between the full strapdown image seeker and the target is far larger than the depth of field of the target, the template correction scale factor k is:

k = dis_0 / dis_1 (15)

in the formula, dis_1 is the distance between the seeker and the target at the current frame of the full strapdown image during matching, dis_0 is the distance between the seeker and the target at the previous frame of the full strapdown image during matching, and θ_1 and θ_2, the opening angles of the target in the previous and current frames in the full strapdown image seeker, are regarded as equal;
during tracking, the template correction rotation angle garmar is:

garmar = garmar1 − garmar0 (16)

in the formula, garmar1 is the seeker roll angle at the current frame of the full strapdown image during matching, and garmar0 is the seeker roll angle at the previous frame of the full strapdown image during matching.
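The two correction parameters of claim 4 reduce to simple arithmetic: a range-ratio scale factor (which follows from treating the target's opening angles in consecutive frames as equal) and the roll-angle difference of formula (16). The function name and argument order are illustrative.

```python
def template_correction(dis0, dis1, garmar0, garmar1):
    """Scale and rotation corrections for the tracking template.

    dis0/dis1: seeker-to-target distances at the previous/current frame;
    garmar0/garmar1: seeker roll angles at the previous/current frame.
    Returns (k, garmar): the scale factor and rotation correction.
    """
    k = dis0 / dis1               # range ratio between consecutive frames
    garmar = garmar1 - garmar0    # roll-angle difference, formula (16)
    return k, garmar
```

For an approaching seeker dis1 < dis0, so k > 1 and the template is enlarged before the next match, consistent with the target growing in the image.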
5. The method for tracking the target of the full strapdown image seeker based on the direct-view template as claimed in claim 1, wherein in the step (3), when the correlation value is greater than threshold 2, the template needs to be updated for the next-frame matching; a complete update is adopted, namely an image of a certain area centered on the best matching position of the current frame is used as template 3 and matched against the next frame image:

T_new = T(NMI_max) (18)

in the formula, T(NMI_max) is the template image of a certain area centered on the position where the mutual information entropy of the current frame match is maximal; the cycle of "template matching, gray-scale mutual information entropy calculation, obtaining the current best matching position, target template update, template matching" is then entered repeatedly to realize stable target tracking.
CN202011508984.6A 2020-12-18 2020-12-18 Full strapdown image seeker target tracking method based on direct-aiming template Active CN112489091B (en)

Publications (2)

Publication Number Publication Date
CN112489091A CN112489091A (en) 2021-03-12
CN112489091B true CN112489091B (en) 2022-08-12

Family

ID=74914838


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113395448B (en) * 2021-06-15 2023-02-21 西安视成航空科技有限公司 Airborne pod image searching, tracking and processing system
CN113554131B (en) * 2021-09-22 2021-12-03 四川大学华西医院 Medical image processing and analyzing method, computer device, system and storage medium
CN114280978B (en) * 2021-11-29 2024-03-15 中国航空工业集团公司洛阳电光设备研究所 Tracking decoupling control method for photoelectric pod

Citations (1)

Publication number Priority date Publication date Assignee Title
CN108717712A (en) * 2018-05-29 2018-10-30 东北大学 A kind of vision inertial navigation SLAM methods assumed based on ground level

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US5146228A (en) * 1990-01-24 1992-09-08 The Johns Hopkins University Coherent correlation addition for increasing match information in scene matching navigation systems
EP3248029A4 (en) * 2015-01-19 2018-10-03 The Regents of the University of Michigan Visual localization within lidar maps
CN107945215B (en) * 2017-12-14 2021-07-23 湖南华南光电(集团)有限责任公司 High-precision infrared image tracker and target rapid tracking method




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant