CN115393281A - Infrared weak and small target detection tracking method based on mask and adaptive filtering - Google Patents


Info

Publication number
CN115393281A
CN115393281A (application CN202210901099.7A)
Authority
CN
China
Prior art keywords
target
image
suspected
mask
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210901099.7A
Other languages
Chinese (zh)
Inventor
刘洋
李晓博
邵应昭
徐常志
郑小松
张茗茗
丁跃利
文伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Institute of Space Radio Technology
Original Assignee
Xian Institute of Space Radio Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Institute of Space Radio Technology
Priority to CN202210901099.7A
Publication of CN115393281A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G06T 7/187: Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10048: Infrared image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A method for detecting and tracking infrared weak and small targets based on masks and adaptive filtering. First, exploiting the fixed field of view of a remote sensing reconnaissance satellite, a target foreground mask is generated from an initial multi-frame (5 to 10 frame) infrared image sequence, and the feature information of interference targets and real targets is recorded. Newly input infrared images are then detected on this basis: an adaptive filter is constructed from the target feature information contained in the mask, and the features of different targets at different motion moments are extracted dynamically, improving target matching and tracking precision. By means of the target mask and the adaptive filter, the method overcomes the noise interference and multi-target interference affecting infrared weak and small targets moving at high speed, offers strong robustness, achieves highly reliable matching and tracking of multiple infrared weak and small targets against a variety of complex backgrounds, and meets the requirements of real-time, high-reliability reconnaissance of sensitive targets.

Description

Infrared weak and small target detection tracking method based on mask and adaptive filtering
Technical Field
The invention provides an infrared weak and small target detection and tracking method, and in particular relates to an on-orbit, fast, highly reliable detection and tracking method for high-speed infrared weak and small targets; it belongs to the field of space remote sensing.
Background
An infrared camera observes mainly by receiving the infrared radiation of a target and is especially sensitive to high-speed, high-heat-radiation targets such as missiles and aircraft, which gives infrared target detection and tracking technology an important role in military reconnaissance and early warning. In wide-area satellite infrared remote sensing images, sensitive flying targets are small and often appear as spots or dots; their signal-to-noise ratio is low, they are easily disturbed by noise, clutter or cloud layers, and they are often submerged in the background. Therefore, infrared weak and small targets are generally detected from their motion or change characteristics, for example with the inter-frame difference method and the background difference method; these methods are simple, direct and fast to compute, and are thus well suited to the on-orbit real-time reconnaissance and early warning tasks of a satellite with limited resources.
However, the inter-frame difference method and the background difference method are sensitive to noise and to interference factors such as background changes caused by target motion. Moreover, when the target is occluded, the algorithm easily misidentifies the reappearing target as another moving target and cannot cope with occlusion changes. In addition, because the inter-frame difference method detects the moving target mainly from differences in gray values, when most pixels inside the target share the same gray value the difference image contains only the two sides of the target object, producing a 'void' phenomenon inside the target and making the complete contour of the target difficult to obtain. And because the motion state of the target is changeable, the background constructed by the background difference method can hardly eliminate all real targets completely.
Disclosure of Invention
The technical problem solved by the invention is as follows: overcoming the defects of the prior art, it addresses the high false alarm rate and poor track association of satellite on-orbit infrared weak and small target detection and tracking (tracking is lost when target track points are incomplete or occluded, and the two tracks cannot be associated when the target reappears), and provides a real-time, high-precision infrared weak and small target detection and tracking method.
The technical solution of the invention is as follows:
a method for detecting and tracking infrared weak and small targets based on masks and adaptive filtering comprises the following steps:
1) Using the background image B_{t-1} corresponding to the previous frame image I_{t-1}, extract suspected targets from the current frame image I_t, obtaining a suspected-target coordinate set P_t composed of the coordinates of the suspected targets in the current frame, together with an image slice S_t^k of each suspected target. If the suspected-target set of the current frame is empty, go to step 6); otherwise, go to step 2);
2) For the coordinates (x_t^k, y_t^k) of the k-th suspected target in the coordinate set P_t obtained in step 1), extract an image block from the current frame image I_t centered on the pixel at (x_t^k, y_t^k), and perform edge detection on the image block; remove from P_t any suspected target that does not conform to the size range of the target; after traversing the coordinate set P_t, go to step 3);
3) According to the coordinates (x_t^k, y_t^k) of the k-th suspected target in the coordinate set P_t, extract from the previous frame image I_{t-1} the candidate matching targets corresponding to suspected target k, forming a candidate matching target set C^k = {c^{k,j}}, and obtain an image slice S_{t-1}^{k,j} of each candidate matching target;
4) If the candidate matching target set C^k corresponding to suspected target k is empty, judge suspected target k to be a newly detected suspected target, assign its information to the mask set M{m×n}, return to step 3) to process the next suspected target until all suspected targets have been traversed, and then go to step 6); otherwise, go to step 5). The mask set M{m×n} consists of m rows and n columns of elements, m being equal to the number of pixels of image I_t in the length direction and n being equal to the number of pixels of image I_t in the width direction; each element contains information characterizing the corresponding pixel of the image;
5) Using the suspected-target image slice S_t^k obtained in step 1) and the candidate matching target slices S_{t-1}^{k,j} obtained in step 3), determine the matching coefficient between suspected target k and each of its J_k candidate matching targets, and take the candidate matching target that satisfies the threshold requirement as the matched candidate target c*^k of suspected target k; update the information of the element of the mask set M{m×n} at the position corresponding to c*^k. If a suspected target k has no corresponding candidate target c*^k, judge suspected target k to be a newly detected suspected target, assign its information to the mask set M{m×n}, then return to step 3) to process the next suspected target until all suspected targets have been traversed, and then go to step 6);
6) Using the mask set M{m×n}, update the background image B_{t-1} corresponding to the previous frame image I_{t-1} to obtain the background image B_t corresponding to the current frame image I_t. When the suspected-target set of the current frame is empty, the background image B_t corresponding to the current frame image I_t is equal to the background image B_{t-1} corresponding to the previous frame image I_{t-1};
7) Using the mask set M{m×n}, judge whether each suspected target is a real target or an interference target, and update the information of the elements of M{m×n} corresponding to the suspected target;
8) Perform track association using the information of the elements of the mask set M{m×n} corresponding to real targets, to complete full tracking of the same target.
Preferably, the information in each element of the mask set M{m×n} characterizing the corresponding pixel of the image is represented by 6 feature vectors, respectively:
M{i}{1} is the target type of the position corresponding to element M{i}; the target types include: background point, for which M{i}{1} is 0; interference target, for which M{i}{1} is 1; suspected target, for which M{i}{1} is 2; real target, for which M{i}{1} is 3; 1 ≤ i ≤ m×n;
M{i}{2} is the number of the target to which the position corresponding to element M{i} belongs;
M{i}{3} is the number of pixels, in the length and width directions of the image, of that target at the latest moment;
M{i}{4} is the complete track set of that target; the complete track set consists of the position information of the track's center element in each frame of the image;
M{i}{5} is the speed of that target at the latest moment;
M{i}{6} is the motion direction of that target at the latest moment.
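The six feature vectors can be pictured as a small per-pixel record. The Python sketch below is illustrative only; the field names, defaults and `make_mask` helper are ours, not the patent's:

```python
from dataclasses import dataclass, field

# Target-type codes for the feature vector M{i}{1}, as enumerated above.
BACKGROUND, INTERFERENCE, SUSPECTED, REAL = 0, 1, 2, 3

@dataclass
class MaskElement:
    """One element M{i} of the mask set M{m x n} (illustrative layout)."""
    target_type: int = BACKGROUND   # M{i}{1}: type of the corresponding position
    target_id: int = 0              # M{i}{2}: number of the target it belongs to
    size_lw: tuple = None           # M{i}{3}: (length, width) pixels at the latest moment
    track: list = field(default_factory=list)  # M{i}{4}: track centers, one per frame
    speed: float = None             # M{i}{5}: speed at the latest moment
    direction: float = None         # M{i}{6}: motion direction at the latest moment

def make_mask(m, n):
    """Pre-generate the initial mask set: every pixel starts as a background point."""
    return [[MaskElement() for _ in range(n)] for _ in range(m)]

mask = make_mask(4, 5)
# Writing a newly detected suspected target into its center element:
mask[2][3] = MaskElement(target_type=SUSPECTED, target_id=1,
                         size_lw=(3, 3), track=[(2, 3)])
```

Only the element at a target's center point is populated; all other elements keep their background defaults, mirroring the convention stated in the detailed description.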
Preferably, in step 1) the suspected-target coordinate set P_t composed of the suspected-target coordinates in the current frame and the image slice S_t^k of each suspected target are obtained as follows:
11) Compute the pixel difference between the current frame image and the previous frame image, obtaining their difference image D_f = I_t - I_{t-1}; at the same time compute the pixel difference between the current frame image and the background image, obtaining their difference image D_b = I_t - B_{t-1};
12) Retain as candidate points the difference points of D_f whose absolute value exceeds Thr_f and the difference points of D_b whose absolute value exceeds Thr_b; find the pixels corresponding to the candidate points in the current frame image and generate connected regions; record the center coordinate point of each connected region; according to the target type of the element corresponding to each center coordinate point in the mask set, remove the center coordinate points whose target type is interference target, take the remaining center coordinate points as suspected-target center coordinates, and add them to the suspected-target coordinate set, obtaining P_t = {(x_t^k, y_t^k)}, k = 1, 2, 3, …, K; Thr_f and Thr_b range from 8 to 12;
13) For the coordinates (x_t^k, y_t^k) of the k-th suspected target in the obtained coordinate set P_t, extract an image block from the current frame image I_t centered on the pixel at (x_t^k, y_t^k), perform edge detection on the image block, extract the edge shape of suspected target k, and compute the image slice formed by the pixels enclosed by the edge shape of suspected target k;
14) If the size of the image slice formed by the pixels enclosed by the edge shape of suspected target k obtained in step 13) is not within the prior size range of the target, remove suspected target k from the coordinate set P_t; otherwise, take the image slice obtained in step 13) as the image slice S_t^k of suspected target k.
Preferably, the size of the image block in step 13) is 1.25 to 1.7 times the maximum target size, and less than 2 times the maximum target size.
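Steps 11) and 12) amount to thresholded frame and background differencing followed by connected-component labelling of the retained points. A minimal NumPy sketch (the function name is ours; the interference-target filtering against the mask set is omitted for brevity, and the thresholds default to the middle of the stated 8 to 12 range):

```python
import numpy as np
from collections import deque

def candidate_centers(I_t, I_prev, B_prev, thr_f=10, thr_b=10):
    """Frame difference D_f and background difference D_b, thresholded and
    combined; returns the center (row, col) of each connected region of
    candidate points, as in steps 11)-12)."""
    D_f = I_t.astype(int) - I_prev.astype(int)
    D_b = I_t.astype(int) - B_prev.astype(int)
    cand = (np.abs(D_f) > thr_f) | (np.abs(D_b) > thr_b)
    seen = np.zeros_like(cand, dtype=bool)
    H, W = cand.shape
    centers = []
    for sy, sx in zip(*np.nonzero(cand)):
        if seen[sy, sx]:
            continue
        # BFS over the 8-connected region of candidate points
        queue, pts = deque([(sy, sx)]), []
        seen[sy, sx] = True
        while queue:
            y, x = queue.popleft()
            pts.append((y, x))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and cand[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
        ys, xs = zip(*pts)
        centers.append((int(round(sum(ys) / len(pts))),
                        int(round(sum(xs) / len(pts)))))
    return centers

# A single bright 3x3 spot yields one connected region centered on it:
I_prev = np.zeros((8, 8)); B_prev = np.zeros((8, 8))
I_t = np.zeros((8, 8)); I_t[2:5, 2:5] = 50
found = candidate_centers(I_t, I_prev, B_prev)
```

In practice a library routine such as OpenCV's connected-components labelling would replace the BFS; the hand-rolled version is shown only to keep the sketch self-contained.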
Preferably, in step 3) the candidate matching targets corresponding to suspected target k are extracted from the previous frame image I_{t-1} to form the candidate matching target set, and the image slice S_{t-1}^{k,j} of each candidate matching target is obtained, as follows:
31) In the previous frame image I_{t-1}, select an image block of size q×q, centered on the pixel at the same position (x_t^k, y_t^k), as the target candidate region;
32) Combining the mask set M{m×n}, select from the target candidate region those elements whose target type in the corresponding mask set is not background as candidate matching targets, forming the candidate matching target set C^k = {c^{k,j}}, j = 1, 2, 3, …, J_k, where J_k is the number of candidate matching targets in the target candidate region corresponding to suspected target k;
33) Obtain the image slice S_{t-1}^{k,j} of each candidate matching target from the information of the feature vector M{i}{3} in the mask set M{m×n}.
Preferably, the value of q in step 31) satisfies:
3v·Δt/r≤q≤5v·Δt/r
wherein r is the image resolution, v is the maximum motion speed of the target, and Δ t is the time difference between adjacent frame images.
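The bound ties the search window to the farthest the target can travel between frames, in pixels. A worked example (the numeric values for v, Δt and r are ours, chosen only for illustration):

```python
def candidate_region_size(v, dt, r, factor=4):
    """Side length q of the q x q target candidate region of step 31).
    Must satisfy 3*v*dt/r <= q <= 5*v*dt/r; factor picks a point in
    that range (4 is the midpoint)."""
    assert 3 <= factor <= 5, "q must lie between 3 and 5 times v*dt/r"
    return factor * v * dt / r

# e.g. a target moving at up to 300 m/s, 0.5 s between frames,
# 30 m/pixel ground resolution: v*dt/r = 5 pixels of motion per frame.
q = candidate_region_size(v=300, dt=0.5, r=30)
```

With these numbers q = 20 pixels, i.e. the candidate region comfortably covers 3 to 5 frames' worth of maximum target motion around the suspected target's position.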
Preferably, the matching coefficient in step 5) is determined by a formula (rendered only as an image in the source) in which: α is the filter difference coefficient, with a value range of 0.01 to 0.2; DF_k(S_t^k) is the result of applying the adaptive filter DF_k to the suspected-target image slice S_t^k, and DF_k(S_{t-1}^{k,j}) is the result of applying the adaptive filter DF_k to the candidate matching target image slice S_{t-1}^{k,j}; Δx^{k,j} and Δy^{k,j} are the offsets of candidate matching target c^{k,j} in the row and column directions of the current frame image I_t; and (x_t^k, y_t^k) are the coordinates of the k-th suspected target.
Preferably, Δx^{k,j} and Δy^{k,j} are determined by a formula (rendered only as an image in the source) in which: (x_{t-1}^{k,j}, y_{t-1}^{k,j}) are the coordinates of candidate matching target c^{k,j} in the previous frame image I_{t-1}; Δt is the time interval between two adjacent frames; v^{k,j} is the motion speed of candidate matching target c^{k,j} obtained from the mask set M{m×n}; and θ^{k,j} is the angle between the motion direction of c^{k,j}, obtained from the mask set M{m×n}, and the image row direction.
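The offset formula is likewise lost to an image, but the quantities it names (previous position, speed v, direction θ, frame interval Δt, and the image resolution r from the q bound above) suggest a motion-predicted displacement. The reconstruction below is an assumption, not the patent's equation:

```python
import math

def predicted_offset(x_prev, y_prev, x_cur, y_cur, v, theta, dt, r):
    """Hypothetical reconstruction of (dx, dy): advance the candidate's
    previous position by v*dt along its stored direction theta (converted
    to pixels via resolution r), then compare against the suspected
    target's current coordinates."""
    step = v * dt / r                          # displacement in pixels
    dx = x_prev + step * math.cos(theta) - x_cur
    dy = y_prev + step * math.sin(theta) - y_cur
    return dx, dy

# A candidate at (10, 10) moving along the row direction at 60 m/s for
# 0.5 s at 3 m/pixel is predicted 10 pixels to the right, i.e. exactly
# at the suspected target's position (20, 10):
dx, dy = predicted_offset(10, 10, 20, 10, v=60, theta=0.0, dt=0.5, r=3)
```

A small residual (dx, dy) then indicates that the candidate's stored motion is consistent with the suspected target's observed position.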
Preferably, in step 6) the background image B_t corresponding to the current frame image I_t is obtained by a formula (rendered only as an image in the source) that combines, with the background update coefficient λ (value range 0.7 to 0.9), the previous background image B_{t-1} and the following two pixel matrices:
I_t^b, the pixel matrix obtained from the current frame image I_t according to the mask set by keeping unchanged the pixel values of the pixels whose corresponding target type is background or interference target and setting the remaining pixels to zero;
I_t^r, the pixel matrix obtained by taking the pixel blocks of the current frame image I_t corresponding to suspected targets and real targets as the replaced regions, replacing each replaced region as a whole with a pixel block of the neighboring background, and then, according to the mask set, setting to zero the pixels whose corresponding target type is background or interference target. The value of each pixel in the neighborhood-background pixel block is equal to the average of the corresponding pixels in equal-sized pixel blocks above, below, to the left of, and to the right of the replaced region.
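The update equation is lost to an image; a plausible running-average form, assuming B_t = λ·B_{t-1} + (1-λ)·(I_t^b + I_t^r), is sketched below. The combination, the function name, and the simplified neighborhood fill (a single mean of the background pixels instead of four directional blocks) are all our assumptions:

```python
import numpy as np

def update_background(B_prev, I_t, is_bg_or_interf, lam=0.8):
    """Hypothetical step-6 update: blend the old background with an
    'inpainted' current frame.  is_bg_or_interf marks pixels whose mask
    target type is background or interference; target pixels are filled
    from the background (here: its mean) before blending.  lam is the
    stated background update coefficient (0.7-0.9)."""
    I_bg = np.where(is_bg_or_interf, I_t, 0)            # first pixel matrix
    fill_value = I_t[is_bg_or_interf].mean()            # neighborhood stand-in
    I_fill = np.where(is_bg_or_interf, 0, fill_value)   # second pixel matrix
    return lam * B_prev + (1 - lam) * (I_bg + I_fill)

# A target pixel (value 50) is excluded from the update, so a flat
# background of 10 stays flat:
B_prev = np.full((2, 2), 10.0)
I_t = np.array([[10.0, 10.0], [10.0, 50.0]])
bg_mask = np.array([[True, True], [True, False]])
B_t = update_background(B_prev, I_t, bg_mask)
```

The two matrices are complementary (one is nonzero only on background/interference pixels, the other only on target pixels), so their sum is a current frame with targets painted over by local background, which is the natural input to an exponential background update.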
Preferably, in step 7) whether a suspected target is a real target or an interference target is judged as follows: for an element of the mask set whose feature-vector target type is suspected target, when the number of times the target appears in the whole image sequence exceeds Num_d, or when the target track has stopped updating for more than Num_o frames, compute the length of the complete track set of the target to which the element's position belongs; if the track length exceeds Thr_track, judge the element's target type to be real target; otherwise, judge it to be an interference target. Num_d is equal to the number of image frames corresponding to 5 to 10 s; Num_o is equal to the number of image frames corresponding to 10 to 15 s; Thr_track ranges from 3 to 5.
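The step-7 decision reduces to a small rule. In the sketch below the defaults for Num_d, Num_o and Thr_track are illustrative values picked from the stated ranges:

```python
def judge_target(appear_count, stalled_frames, track_len,
                 num_d=8, num_o=12, thr_track=4):
    """Step 7) decision for a mask element whose type is 'suspected':
    once the target has appeared more than num_d times, or its track has
    stopped updating for more than num_o frames, promote it to 'real' if
    its complete track is longer than thr_track, else demote it to
    'interference'.  Until either trigger fires it stays 'suspected'."""
    if appear_count > num_d or stalled_frames > num_o:
        return "real" if track_len > thr_track else "interference"
    return "suspected"
```

Noise and dead/bright pixels tend to produce short, erratic tracks, so the track-length test separates them from genuinely moving targets once enough frames have accumulated.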
Preferably, the track association in step 8) is performed as follows:
When the track corresponding to an element of the mask set whose target type is real target has stopped updating for more than Num_o frames, judge that the real target has finished moving, and add its target number and track to the track set Track;
When a new track is added to the track set, predict, from the motion speed and direction of the end point of each known track in the track set, the position of that known track corresponding to the new track; compare this predicted position with the initial position of the new track, together with the motion speed and direction of the targets of the new and old tracks, to decide the association and complete full tracking of the target. The value of Num_o is equal to the number of image frames corresponding to 10 to 15 s.
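The association test can be sketched as a gated prediction: advance the ended track by its stored speed and direction, and accept the new track if it begins close to the predicted point. The gate threshold below is illustrative; the patent additionally compares the speed and direction of the two tracks:

```python
import math

def associate_tracks(old_track, new_track, v, theta, dt, r, gate=3.0):
    """Step 8) sketch: predict where the ended old track reappears by
    advancing its last point (col, row) along its stored speed v and
    direction theta over one frame interval dt (resolution r m/pixel),
    and associate if the prediction lies within 'gate' pixels of the
    new track's first point."""
    x, y = old_track[-1]
    step = v * dt / r
    pred = (x + step * math.cos(theta), y + step * math.sin(theta))
    x0, y0 = new_track[0]
    return math.hypot(pred[0] - x0, pred[1] - y0) <= gate

# An occluded target reappearing roughly where its motion predicts
# (step = 60 * 0.3 / 3 = 6 pixels along the row direction):
same = associate_tracks([(0, 0), (5, 0)], [(11, 0), (14, 0)],
                        v=60, theta=0.0, dt=0.3, r=3)
```

Linking the two tracks under one target number is what lets the method survive occlusion instead of spawning a new target on reappearance.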
Preferably: the pixel value of each pixel in the initial background image B_0 is equal to the average value of the pixels at the corresponding position in the initially obtained N frames of images, with N ranging from 5 to 10.
Compared with the prior art, the invention has the advantages that:
(1) The invention records information of each type of target by generating a target mask, which greatly reduces the influence of interference on the target during detection and improves detection precision.
(2) The invention constructs an adaptive filter for the characteristic that a sensitive target changes dynamically during flight; it can extract the target's feature information at all times and increases matching accuracy.
(3) The invention associates the tracks of all real targets, which can solve the problem of target occlusion to a certain extent, and, combined with the target features contained in the mask, can generate complete situation information of the targets.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a diagram of a method for generating an adaptive filter according to the present invention.
Detailed Description
For a satellite on-orbit infrared reconnaissance and early warning system, the reliability and timeliness of information are the key to battlefield support. In particular, for a high-speed flying target, the space-based reconnaissance and early warning system can only fulfill its real function with real-time or quasi-real-time detection and tracking; therefore the frame difference method and the background difference method are combined for detection, achieving fast processing of massive infrared image sequences. However, this approach is susceptible to interference from noise, dead imaging pixels, bright imaging spots and moving clouds. Such interference differs obviously from the surrounding environment, resembles a real infrared weak and small target, and often appears in the field of view as spots or dots, causing a high false alarm rate when detecting real targets; tracking loss also occurs after a target is occluded. Unlike real targets, however, the shape and motion characteristics of these interference targets differ from those of real targets to a certain extent. Based on this prior knowledge, the invention uses the image sequence to generate mask information, extracts the complete contour features of each type of target with an edge detection operator to support detection, stores and analyzes in real time the important feature information of targets during detection and tracking, and uses this information to construct an adaptive filter to further confirm real targets, greatly reducing false alarms and improving the reliability of the information while preserving processing speed.
As shown in fig. 1, the method of the present invention comprises the following steps:
(1) According to the size m×n of the acquired remote sensing infrared image, pre-generate an initial mask set M{m×n}, where m is the number of pixels in each row of the remote sensing infrared image and n is the number of pixels in each column; that is, M{m×n} contains the information of all observation points in the camera's observation field of view, and for any element M{i}, 1 ≤ i ≤ m×n, the position corresponding to the element in the observation field of view of the remote sensing image can be found from the index i. Each element M{i} of the mask set M{m×n} corresponds to 6 feature vectors M{i}{j}, 1 ≤ i ≤ m×n, 1 ≤ j ≤ 6.
Wherein:
M{i}{1} is the target type of the position corresponding to the element; the target types include: background point, for which M{i}{1} is 0; interference target, for which M{i}{1} is 1; suspected target, for which M{i}{1} is 2; real target, for which M{i}{1} is 3. Only the element at the center point of a target slice carries a value; the rest are empty. Suspected targets comprise interference targets and real targets. For background points, M{i} contains 1 feature vector and the remaining feature vectors are empty; for suspected, interference and real targets, M{i} contains 6 feature vectors.
M{i}{2} is the number of the target to which the position corresponding to the element belongs;
M{i}{3} is the size feature of that target at the latest moment (i.e., the number of pixels in the length and width directions of the remote sensing infrared image);
M{i}{4} is the complete track set of that target; the complete track set consists of the position information of the track's center element in each frame of the image;
M{i}{5} is the speed of that target at the latest moment;
M{i}{6} is the motion direction of that target at the latest moment.
The initial value of the feature vector of each element in the initial mask set is set to 0, that is, it is assumed that the initial target types of all the pixel points are background points.
(2) From the initially obtained N frames of remote sensing infrared images (N generally being 5 to 10) {I_1, I_2, …, I_N}, sum and average the pixels at each corresponding position to obtain the initial background image B_0; the pixel value of each pixel in the initial background image is equal to the average value of the pixels at the corresponding position in the N remote sensing infrared images.
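Step (2) is a plain pixel-wise average over the first N frames; sketched below (the function name is ours):

```python
import numpy as np

def initial_background(frames):
    """Step (2): the initial background image B_0 is the pixel-wise
    average of the first N frames (N = 5-10 per the text)."""
    assert 5 <= len(frames) <= 10, "the text prescribes N between 5 and 10"
    return np.mean(np.stack(frames), axis=0)

# Five flat frames around a mean level of 10:
frames = [np.full((4, 4), v, dtype=float) for v in (8, 10, 12, 10, 10)]
B0 = initial_background(frames)
```

Averaging several frames suppresses per-frame sensor noise, which is why the later difference thresholds Thr_f and Thr_b can be kept small.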
(3) Extract suspected targets from the current frame of the remote sensing infrared image using the initial background image, obtaining a suspected-target coordinate set composed of the suspected-target coordinates in the current frame and an image slice S_t^k of each suspected target. If the suspected-target set of the current frame is empty, repeat step (3) on the next frame of the remote sensing infrared image until the suspected-target coordinate set is not empty, then go to step (4); when the suspected-target set of the current frame is empty, the background image B_t of the current frame is the same as the background image of the previous frame;
(31) Detect the target using both the difference between the current frame image and the previous frame image and the difference between the current frame image and the background image B_{t-1}: compute the pixel difference between the current frame image and the previous frame image, obtaining their difference image D_f = I_t - I_{t-1}; at the same time compute the pixel difference between the current frame image and the background image, obtaining their difference image D_b = I_t - B_{t-1}. Each point of a difference image is a difference point: the pixel value of each difference point of D_f is equal to the difference between the value of the corresponding pixel in the current frame image and that in the previous frame image, and the pixel value of each difference point of D_b is equal to the difference between the value of the corresponding pixel in the current frame image and that in the background image.
(32) Retain as candidate points the difference points whose absolute value in the difference image D_f exceeds Thr_f or whose absolute value in the difference image D_b exceeds Thr_b (by statistics, Thr_f and Thr_b generally take values in the range 8-12). Find the pixels corresponding to the candidate-point positions in the current frame image and connect them with a connected-component labeling method (an existing, mature method) to generate K connected regions. Record the center coordinate point of each connected region; according to the target type of the mask-set element corresponding to each center coordinate point's position, remove from the center coordinate points those whose target type is interference target, take the remaining center coordinate points as the centers of suspected targets, and add them to the suspected target coordinate set, obtaining the suspected target coordinate set {P_k^t}, k = 1, 2, 3, …, K, where P_k^t = (x_k^t, y_k^t) indicates that the center of the k-th suspected target is the pixel at row x_k^t, column y_k^t of the observation field. (When the suspected target coordinate set is first obtained, the target type of each of its elements is suspected target, that is, M{i}{1} = 2.)
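Steps (31)-(32) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, array names, and default thresholds are assumptions, and `scipy.ndimage.label` stands in for the unspecified connected-component labeling method.

```python
# Hedged sketch of steps (31)-(32): frame/background differencing, thresholding,
# connected-component labeling, and mask-based rejection of interference targets.
import numpy as np
from scipy import ndimage

def detect_suspected_centers(frame, prev, background, mask_type,
                             thr_f=10, thr_b=10):
    """Return center coordinates of connected candidate regions.

    mask_type holds the per-pixel target type as defined for M{i}{1}:
    0 background, 1 interference, 2 suspected, 3 real.
    """
    d_f = frame.astype(np.int32) - prev.astype(np.int32)        # D_f = I_t - I_{t-1}
    d_b = frame.astype(np.int32) - background.astype(np.int32)  # D_b = I_t - B_{t-1}
    candidates = (np.abs(d_f) > thr_f) | (np.abs(d_b) > thr_b)

    labels, k = ndimage.label(candidates)                       # K connected regions
    centers = ndimage.center_of_mass(candidates.astype(float),
                                     labels, range(1, k + 1))
    # Drop centers whose mask element marks an interference target (type 1).
    return [(int(round(r)), int(round(c))) for r, c in centers
            if mask_type[int(round(r)), int(round(c))] != 1]
```

A center whose mask element is already labeled as interference is discarded before it enters the suspected target coordinate set.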
(33) For the coordinates P_k^t of the k-th suspected target in the suspected target coordinate set obtained in step (32), extract an image block centered on the pixel at position P_k^t in the current frame image I_t; the size of the image block is greater than 1.25-1.7 times the maximum size of the target and less than 2 times the maximum size of the target. (This range can be chosen because the orbit of the infrared remote-sensing observation satellite is fixed and its resolution is also determinable, so the size range of targets in the observation image can be obtained statistically from prior knowledge.) Perform edge detection on the image block with the Sobel operator and extract the edge shape of suspected target k; the size of the suspected target and the image slice T_k^t formed by the pixels enclosed by the edge shape can then be computed. If the size of the image slice of suspected target k is not within the statistically obtained size range of targets in the observation image, remove P_k^t from the suspected target coordinate set. After traversing all elements of the suspected target coordinate set, go to step (4);
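The Sobel edge detection and size gate of step (33) can be sketched as below. The block half-size, the mean-based edge threshold, and the size bounds are illustrative assumptions; the patent only specifies the Sobel operator and a prior size range.

```python
# Hedged sketch of step (33): Sobel edges on a block around a suspected-target
# center, fill the enclosed region, then gate it against a prior size range.
import numpy as np
from scipy import ndimage

def size_gate(frame, center, half=4, size_range=(1, 10)):
    r, c = center
    block = frame[max(r - half, 0):r + half + 1,
                  max(c - half, 0):c + half + 1].astype(float)
    gx = ndimage.sobel(block, axis=0)
    gy = ndimage.sobel(block, axis=1)
    mag = np.hypot(gx, gy)
    edges = mag > mag.mean()                      # crude edge map (assumption)
    filled = ndimage.binary_fill_holes(edges)     # pixels enclosed by the edge
    rows, cols = np.nonzero(filled)
    if rows.size == 0:
        return None                               # no enclosed region found
    h = rows.max() - rows.min() + 1
    w = cols.max() - cols.min() + 1
    lo, hi = size_range
    return (h, w) if (lo <= h <= hi and lo <= w <= hi) else None
```

Returning `None` corresponds to removing P_k^t from the suspected target coordinate set.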
(4) According to the suspected target coordinate set obtained in step (3), extract from the previous frame image a candidate matching target set and a candidate matching target image slice T_j^{t-1} corresponding to each suspected target:

(41) For suspected target k, select in the previous frame image I_{t-1} an image block of size q×q centered on the pixel at the same position P_k^t as the target candidate region (the image block size q is determined mainly by combining the image resolution r and the maximum target motion speed v: 3v·Δt/r ≤ q ≤ 5v·Δt/r, where Δt is the time difference between adjacent frame images);

(42) Combining the mask set M{m×n}, select from the target candidate region the points whose corresponding mask-set elements are not background targets, use them as candidate matching targets, and form the candidate matching target set {T_P^k(j)}, j = 1, 2, 3, …, J_k, where J_k is the number of candidate matching targets in the target candidate region corresponding to suspected target k. According to the information of the feature vector M{i}{3} in the mask set M{m×n}, obtain the image slice T_j^{t-1} of each candidate matching target;

(43) Repeat steps (41) to (42) K times to obtain the candidate matching target set of each suspected target.
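Steps (41)-(42) amount to a speed-bounded search window plus a mask lookup, sketched below under stated assumptions: the function and parameter names are illustrative, and `factor=4` picks a q in the middle of the 3vΔt/r to 5vΔt/r range.

```python
# Hedged sketch of steps (41)-(42): choose a q x q search window in the previous
# frame from 3vΔt/r <= q <= 5vΔt/r, then keep only window points whose mask
# element is not background (type 0).
import numpy as np

def candidate_window(prev, mask_type, center, v, dt, r_res, factor=4):
    q = max(int(round(factor * v * dt / r_res)), 1)  # window side length
    row, col = center
    half = q // 2
    r0, r1 = max(row - half, 0), row + half + 1
    c0, c1 = max(col - half, 0), col + half + 1
    types = mask_type[r0:r1, c0:c1]
    cand_rows, cand_cols = np.nonzero(types != 0)    # non-background points
    return [(r0 + i, c0 + j) for i, j in zip(cand_rows, cand_cols)]
```

Each returned coordinate is a candidate matching target; its slice T_j^{t-1} would then be read off via the size information in M{i}{3}.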
(5) If the candidate matching target set {T_P^k(j)} corresponding to a suspected target k is empty, the suspected target k in the current frame is a newly detected suspected target (the corresponding M{i}{1} value is 2); assign a new number to the mask-set element M{i}{2}, assign the size feature and the target position to M{i}{3} and M{i}{4}, complete the target detection for this frame, and go to step (7);

If the candidate matching target set {T_P^k(j)} is not empty, go to step (6);
(6) Using the suspected target image slices T_k^t obtained in step (3) and the candidate matching target image slices T_j^{t-1} obtained in step (4), respectively determine the matching coefficient between each suspected target and its corresponding J_k candidate matching targets, and take the candidate matching target that meets the threshold requirement as the candidate target T_P^k of suspected target k. From the candidate target T_P^k, obtain the motion direction and speed of suspected target k, and update the feature vector of the element in the mask corresponding to the position of the candidate target T_P^k. If a suspected target k has no corresponding candidate target T_P^k, the suspected target k is defined as a newly detected suspected target (that is, M{i}{1} = 2) and the feature vector of the element at the corresponding position in the mask is updated;
(61) Using the suspected target image slice T_k^t obtained in step (3) and the candidate matching target image slices T_j^{t-1} obtained in step (4), an adaptive filter DF_k of the same size as the target is used to compute the matching coefficient between suspected target k and each element of its candidate matching target set {T_P^k(j)}, as follows:

R_k(j) = α·|DF_k(T_k^t) - DF_k(T_j^{t-1})| + |x_k^t - x̂_j^t| + |y_k^t - ŷ_j^t|

In the above formula, α is the filter difference coefficient (α usually takes values in 0.01-0.2 and is negatively correlated with the dynamic range of the image pixel values); DF_k(T_k^t) is the result of applying the adaptive filter DF_k to the suspected target image slice T_k^t; DF_k(T_j^{t-1}) is the result of applying the adaptive filter DF_k to the candidate matching target image slice T_j^{t-1}; T_k^t is the matrix of all pixels occupied by suspected target k in image I_t, i.e. the suspected target image slice (the slice is extracted mainly using the edge-detection result of step (33)); T_j^{t-1} is the image slice of the j-th candidate matching target T_P^k(j) in image I_{t-1} (the matrix of all pixels occupied by the target, scaled to the same size as the slice from image I_t); x̂_j^t and ŷ_j^t are the predicted positions of candidate matching target T_P^k(j) in the current frame image I_t in the row and column directions, computed as follows:

x̂_j^t = x_j^{t-1} + v_j·Δt·cos θ_j
ŷ_j^t = y_j^{t-1} + v_j·Δt·sin θ_j

In the above formulas, (x_j^{t-1}, y_j^{t-1}) is the position of candidate matching target T_P^k(j) in image I_{t-1}, Δt is the time interval between two adjacent frames, and v_j and θ_j are, respectively, the motion speed and motion direction of candidate matching target T_P^k(j) (the direction being the angle between the motion direction and the row direction x).
When constructing the dynamic filter, note that the imaging size of a target in an infrared remote-sensing image generally varies continuously within the range 1×1 to 10×10, and that the tail-flame temperature of a high-speed moving target is high, so the infrared camera records part of the tail flame during imaging; a high-speed target therefore forms a directional elliptical bright spot in the image, with the axis of the spot close to the target's motion direction. For dynamically changing infrared weak and small targets, feature extraction with a fixed filter makes it difficult to capture the target stably over time. Therefore, the present invention designs a direction-adaptive rectangular filter according to the target characteristics: based on a 5×5 base filter DF_base and the length l_k and width w_k of target k, an adaptive filter DF_k is generated. DF_k has length l_k + 2 and width w_k + 2; its filter coefficients are obtained from the DF_base coefficients by interpolation or sampling, and its direction is the same as the axis direction of target k, as shown in fig. 2.
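The filter construction above can be sketched as follows. The Gaussian-like DF_base kernel is an assumption (the patent does not give DF_base's coefficients); only the resizing to (l_k + 2)×(w_k + 2) and the rotation toward the target axis follow the text.

```python
# Hedged sketch of the adaptive filter: a 5x5 base filter DF_base is resampled
# to (l_k + 2) x (w_k + 2) and rotated so its axis follows the target direction.
import numpy as np
from scipy import ndimage

def make_adaptive_filter(length_k, width_k, angle_deg):
    x = np.arange(5) - 2.0
    base = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 2.0)  # assumed DF_base
    zoom = ((length_k + 2) / 5.0, (width_k + 2) / 5.0)
    df_k = ndimage.zoom(base, zoom, order=1)                   # interpolate/sample
    df_k = ndimage.rotate(df_k, angle_deg, reshape=False, order=1)
    return df_k / df_k.sum()                                   # normalize coefficients
```

For a target of length 4 and width 2 this yields a 6×4 kernel aligned with the given axis angle.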
(62) It can be seen that the larger the difference between suspected target k and candidate matching target T_P^k(j), the larger the matching coefficient R_k(j); the smaller R_k(j), the more similar suspected target k and candidate matching target T_P^k(j). Among all candidate matching targets in the set satisfying R_k(j) < Thr_R (Thr_R is a statistic designed differently for different tasks; it generally takes values from 2 to 5), select the one with the minimum matching coefficient as the candidate target T_P^k matched to k; that is, the candidate target T_P^k is considered to be the same target as suspected target k. The motion direction and speed of target k can then be computed from the change of position between the current frame and the previous frame; on this basis, the type, number, size, track set, speed and motion direction of the target characterized by the mask element corresponding to T_P^k are updated, and the outdated historical feature information of that mask element is eliminated. If no matching coefficient is less than Thr_R, the target is likewise considered a newly appearing target: a new number is established for it, and the number, size and track of the target are updated in the mask element at P_k^t.
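Steps (61)-(62) can be sketched together as below. The additive form of R_k(j) follows the reconstruction given above and is an assumption (the patent's original formula is an image); the filter response is implemented as a plain correlation with DF_k, and all names are illustrative.

```python
# Hedged sketch of steps (61)-(62):
# R_k(j) = alpha*|DF_k(T_k^t) - DF_k(T_j^{t-1})| + L1 position-prediction error,
# then pick the minimum-R candidate below Thr_R (or declare a new target).
import numpy as np
from scipy import ndimage

def match_coefficient(df_k, slice_t, slice_prev, pos_k, pos_pred, alpha=0.1):
    resp_t = ndimage.correlate(slice_t.astype(float), df_k).sum()
    resp_p = ndimage.correlate(slice_prev.astype(float), df_k).sum()
    pix_term = alpha * abs(resp_t - resp_p)
    pos_term = abs(pos_k[0] - pos_pred[0]) + abs(pos_k[1] - pos_pred[1])
    return pix_term + pos_term

def best_match(df_k, slice_t, pos_k, candidates, alpha=0.1, thr_r=3.0):
    """candidates: list of (slice_prev, pos_pred). Returns index or None (new target)."""
    scores = [match_coefficient(df_k, slice_t, s, pos_k, p, alpha)
              for s, p in candidates]
    j = int(np.argmin(scores))
    return j if scores[j] < thr_r else None
```

A `None` return corresponds to the "no matching coefficient below Thr_R" branch, where a new target number is established.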
(7) After completing detection on the current frame image I_t, use the mask M to update the pixel values corresponding to the suspected targets and real targets in the background image B_t corresponding to the current frame. The update rule is as follows:

B_t = λ·B_{t-1} + (1 - λ)·(I_t^bg + Ĩ_t)

In the above formula, λ is the background update coefficient, generally in the range 0.7-0.9; I_t^bg denotes the pixel matrix obtained from the current frame image I_t according to the mask set by keeping unchanged the pixel values of the points whose target type is background or interference target and zeroing the pixel values of the remaining points (suspected target and real target points; the points corresponding to background, interference, suspected and real targets can be obtained from the values of M{i}{1} and M{i}{3}); Ĩ_t denotes the matrix obtained on the basis of the current frame image I_t by replacing the whole pixel block corresponding to each suspected target and real target with the target's neighborhood background (the neighborhood background denotes the background pixels in the four directions, up, down, left and right, of the target's position in image I_t) and setting the pixels corresponding to background and interference targets to zero. The size of a neighborhood background pixel block depends on the size of the corresponding target, and the block is obtained by averaging the four equally sized background pixel blocks above, below, left and right of the target slice.
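The background update can be sketched as follows. The formula's additive form follows the reconstruction above; for brevity, the four-block neighborhood average is simplified here to a per-pixel fill with the mean over background points, which is an assumption, not the patent's exact replacement rule.

```python
# Hedged sketch of step (7): B_t = lambda*B_{t-1} + (1-lambda)*(I_bg + I_rep).
# Target pixels are filled from the background statistics (simplified neighborhood).
import numpy as np

def update_background(frame, background, mask_type, lam=0.8):
    frame = frame.astype(float)
    is_bg = (mask_type == 0) | (mask_type == 1)   # background or interference
    i_bg = np.where(is_bg, frame, 0.0)            # target pixels zeroed
    fill = frame[is_bg].mean() if is_bg.any() else 0.0
    i_rep = np.where(is_bg, 0.0, fill)            # target blocks replaced, rest zeroed
    return lam * background + (1.0 - lam) * (i_bg + i_rep)
```

On a static scene with correct masks, the update leaves the background unchanged, which is the expected fixed point of the rule.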
(8) For the elements whose feature-vector target type in the mask set is suspected target: when the number of times the target has appeared in the whole image sequence exceeds Num_d (typically the number of image frames corresponding to 5-10 s of imaging), or the target track has stopped updating for more than Num_o frames (typically the number of image frames corresponding to 10-15 s of imaging; i.e. if the target track gains no new point over 10-15 s of the image sequence, the target is judged to have disappeared), compute the track length. If the track length exceeds Thr_track (Thr_track takes values in the range 3-5, i.e. in the embodiment of the present invention the target has moved more than 3 pixels), determine the element's target type to be real target; otherwise determine it to be interference target, and update the mask M. The update rule is as follows: if it is a real target, update the target type, number, appearance feature, target speed and motion direction in M{i}; if it is an interference target, keep only the type information and set the values of the feature vectors M{i}{2} to M{i}{6} to null, namely:

M{i}{1} = 1, M{i}{2} = M{i}{3} = M{i}{4} = M{i}{5} = M{i}{6} = ∅
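The classification rule of step (8) can be sketched as below. The frame-count thresholds are expressed directly in frames with illustrative default values, and track length is taken as net displacement in pixels, a reading consistent with "the target moves more than 3 pixels" but still an assumption.

```python
# Hedged sketch of step (8): promote a suspected target to real target when its
# track spans more than thr_track pixels, otherwise mark it as interference.
import math

def classify(track, appearances, frames_since_update,
             num_d=150, num_o=300, thr_track=3):
    """track: list of (row, col) centers, one per frame the target appeared."""
    if appearances <= num_d and frames_since_update <= num_o:
        return "suspected"                      # not enough evidence yet
    (r0, c0), (r1, c1) = track[0], track[-1]
    length = math.hypot(r1 - r0, c1 - c0)       # net displacement in pixels
    return "real" if length > thr_track else "interference"
```

A target that barely moves over its whole observation window is thus absorbed as interference, which is what suppresses stationary clutter.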
(9) For an element O_T whose feature-vector target type in the mask set is real target: when the target track stops updating for more than Num_o frames, judge that the target's motion is finished, and add the target number and its Track to the track set Track. For a new track Track(l), P_l^t is the coordinate position of the l-th track in the t-th frame image, t_b(l) is the frame in which the l-th track first appears, and t_e(l) is the frame in which the l-th track ends. When a new track Track(l + 1) is added to the track set, predict, according to the motion speed and direction at the end point of each track in the set, the predicted position of that track at the corresponding new track, and compare the predicted position with the initial position of the new track, together with the motion speeds and directions of the targets corresponding to the new and old tracks, thereby completing the full tracking of the target.
Although the present invention has been described with reference to the preferred embodiments, they are not intended to limit it; those skilled in the art can make variations and modifications to the present invention using the methods and technical contents disclosed above without departing from its spirit and scope. The technical features of the embodiments may be combined with one another provided there is no conflict.
Those skilled in the art will appreciate that those matters not described in detail in the present specification are well known in the art.

Claims (12)

1. A method for detecting and tracking infrared dim targets based on masks and adaptive filtering, characterized by comprising the following steps:
1) Using the background image B_{t-1} corresponding to the previous frame image I_{t-1}, extract a plurality of suspected targets from the current frame image I_t, and obtain a suspected target coordinate set {P_k^t} composed of the coordinates of the plurality of suspected targets in the current frame, together with an image slice T_k^t of each suspected target; if the suspected target set of the current frame is empty, go to step 6); otherwise, go to step 2);
2) For the coordinates P_k^t of the k-th suspected target in the suspected target coordinate set {P_k^t} obtained in step 1), extract an image block centered on the pixel at position P_k^t in the current frame image I_t, perform edge detection on the image block, remove from the suspected target coordinate set {P_k^t} the suspected targets that do not conform to the target size range, and after traversing the suspected target coordinate set {P_k^t}, go to step 3);
3) According to the coordinates P_k^t of the k-th suspected target in the suspected target coordinate set {P_k^t}, extract from the previous frame image I_{t-1} a plurality of candidate matching targets corresponding to suspected target k to form a candidate matching target set, and obtain an image slice T_j^{t-1} of each candidate matching target;
4) If the candidate matching target set {T_P^k(j)} corresponding to a suspected target k is empty, judge suspected target k to be a newly detected suspected target, assign the information of suspected target k to the mask set M{m×n}, return to step 3) and process the next suspected target until all suspected targets have been traversed, then go to step 6); otherwise, go to step 5); the mask set M{m×n} is composed of m rows and n columns of elements, m being equal to the number of pixels of image I_t in the length direction and n being equal to the number of pixels of image I_t in the width direction; each element comprises information used for characterizing the corresponding pixel point on the image;
5) Using the suspected target image slice T_k^t obtained in step 1) and the candidate matching target image slices T_j^{t-1} obtained in step 3), determine the matching coefficient between suspected target k and each of its corresponding J_k candidate matching targets, and take the candidate matching target that meets the threshold requirement as the candidate target T_P^k of suspected target k; update the information of the element in the mask set M{m×n} corresponding to the position of the candidate target T_P^k; if a suspected target k has no corresponding candidate target T_P^k, judge suspected target k to be a newly detected suspected target, assign its information to the mask set M{m×n}, then return to step 3) and process the next suspected target until all suspected targets have been traversed, then go to step 6);
6) Using the mask set M{m×n}, update the background image B_{t-1} corresponding to the previous frame image I_{t-1} to obtain the background image B_t corresponding to the current frame image I_t; when the suspected target set of the current frame is empty, the background image B_t corresponding to the current frame image I_t is equal to the background image B_{t-1} corresponding to the previous frame image I_{t-1};
7) Judging whether the suspected target is a real target or an interference target by using the mask set M { M multiplied by n }, and updating information of elements corresponding to the suspected target in the mask set M { M multiplied by n };
8) And (4) carrying out track association by using the information of the corresponding elements of the real target in the mask set M { M multiplied by n } to complete the complete tracking of the same target.
2. The method for detecting and tracking the infrared weak and small target based on the mask and the adaptive filtering as claimed in claim 1, wherein the information used for characterizing the corresponding pixel points on the image in each element of the mask set M { m × n } is represented by 6 feature vectors, which are respectively:
m { i } {1} is the target type of the corresponding position of the element M { i }; the object types include: a background point corresponding to a value of M { i } {1} of 0; an interference target, corresponding to a value of 1 for M { i } {1 }; a suspected target, corresponding to a value of M { i } {1} of 2; the real target, corresponding to M { i } {1} value is 3; i is more than or equal to 1 and less than or equal to mxn;
m { i } {2} is the number of the target to which the corresponding position of the element M { i } belongs;
m { i } {3} is the number of pixels in the length and width directions of the image at the latest moment of the target to which the corresponding position of the element M { i } belongs;
m { i } {4} is a complete track set of the target to which the corresponding position of the element M { i } belongs; the complete track set consists of position information of central elements of the track corresponding to the target in each frame of image;
m { i } {5} is the speed of the target latest moment to which the corresponding position of the element M { i } belongs;
m { i } {6} is the motion direction of the object at the latest moment to which the corresponding position of the element M { i } belongs.
3. The method for detecting and tracking the infrared dim target based on the mask and the adaptive filtering as claimed in claim 2, wherein in step 1), obtaining from the current frame the suspected target coordinate set {P_k^t} composed of a plurality of suspected target coordinates and the image slice T_k^t of each suspected target specifically comprises:
11) Solving the pixel difference between the current frame image and the previous frame image to obtain the difference image D_f = I_t - I_{t-1} between the current frame image and the previous frame image; simultaneously solving the pixel difference between the current frame image and the background image to obtain the difference image D_b = I_t - B_{t-1} between the current frame image and the background image;
12) Retain as candidate points the difference points whose absolute value in the difference image D_f exceeds Thr_f or whose absolute value in the difference image D_b exceeds Thr_b; find the pixel points corresponding to the candidate points in the current frame image and generate connected regions; then record the center coordinate point of each connected region; according to the target type of the mask-set element corresponding to each center coordinate point, remove the center coordinate points whose target type is interference target, take the remaining center coordinate points as the centers of suspected targets, and add them to the suspected target coordinate set, obtaining the suspected target coordinate set {P_k^t}; Thr_f and Thr_b take values in the range 8-12;
13) For the coordinates P_k^t of the k-th suspected target in the obtained suspected target coordinate set {P_k^t}, extract an image block centered on the pixel at position P_k^t in the current frame image I_t, perform edge detection on the image block, extract the edge shape of suspected target k, and compute the image slice formed by the pixels enclosed by the edge shape of suspected target k;
14) If the size of the image slice formed by the pixels enclosed by the edge shape of suspected target k obtained in step 13) is not within the prior size range of the target, remove suspected target k from the suspected target coordinate set {P_k^t}; otherwise, take the image slice obtained in step 13) as the image slice T_k^t of suspected target k.
4. The method for detecting and tracking infrared weak and small targets based on the mask and adaptive filtering as claimed in claim 3, wherein the size of the image block in step 13) is greater than 1.25-1.7 times the maximum size of the target and less than 2 times the maximum size of the target.
5. The method for detecting and tracking the infrared weak and small target based on the mask and the adaptive filtering as claimed in claim 2, wherein in step 3), extracting from the previous frame image I_{t-1} a plurality of candidate matching targets corresponding to suspected target k to form a candidate matching target set and obtaining an image slice T_j^{t-1} of each candidate matching target specifically comprises:
31) In the previous frame image I_{t-1}, select an image block of size q×q centered on the pixel at the same position P_k^t as the target candidate region;
32) Combining the mask set M{m×n}, select from the target candidate region, as candidate matching targets, the points whose corresponding mask-set elements have a target type that is not background, forming the candidate matching target set {T_P^k(j)}, j = 1, 2, …, J_k, where J_k is the number of candidate matching targets in the target candidate region corresponding to suspected target k;
33) Obtain the image slice T_j^{t-1} of each candidate matching target based on the information of the feature vector M{i}{3} in the mask set M{m×n}.
6. The method for detecting and tracking the infrared dim target based on the mask and the adaptive filtering as claimed in claim 5, wherein the value of q in step 31) satisfies:
3v·Δt/r≤q≤5v·Δt/r
wherein r is the image resolution, v is the maximum motion speed of the target, and Δ t is the time difference between adjacent frame images.
7. The method for detecting and tracking the infrared weak and small target based on the mask and the adaptive filtering as claimed in claim 2, wherein the method for determining the matching coefficient in step 5) is specifically:

R_k(j) = α·|DF_k(T_k^t) - DF_k(T_j^{t-1})| + |x_k^t - x̂_j^t| + |y_k^t - ŷ_j^t|

wherein α is the filter difference coefficient, with a value range of 0.01-0.2; DF_k(T_k^t) is the result of applying the adaptive filter DF_k to the suspected target image slice T_k^t; DF_k(T_j^{t-1}) is the result of applying the adaptive filter DF_k to the candidate matching target image slice T_j^{t-1}; x̂_j^t and ŷ_j^t are the predicted positions of candidate matching target T_P^k(j) in the row and column directions of the current frame image I_t; P_k^t = (x_k^t, y_k^t) are the coordinates of the k-th suspected target.
8. The method for detecting and tracking the infrared dim target based on the mask and the adaptive filtering as claimed in claim 7, wherein x̂_j^t and ŷ_j^t are determined specifically as:

x̂_j^t = x_j^{t-1} + v_j·Δt·cos θ_j
ŷ_j^t = y_j^{t-1} + v_j·Δt·sin θ_j

wherein (x_j^{t-1}, y_j^{t-1}) is the position of candidate matching target T_P^k(j) in the previous frame image I_{t-1}; Δt is the time interval between two adjacent frames; v_j is the motion speed of candidate matching target T_P^k(j) obtained from the mask set M{m×n}; θ_j is the angle between the motion direction of candidate matching target T_P^k(j), obtained from the mask set M{m×n}, and the image row direction.
9. The method for detecting and tracking the infrared dim target based on the mask and the adaptive filtering as claimed in claim 2, wherein step 6) obtains the background image B_t corresponding to the current frame image I_t specifically as:

B_t = λ·B_{t-1} + (1 - λ)·(I_t^bg + Ĩ_t)

wherein λ is the background update coefficient, with a value range of 0.7-0.9; I_t^bg denotes the pixel matrix obtained from the current frame image I_t according to the mask set by keeping unchanged the pixel values of the pixel points whose corresponding target type is background or interference target and zeroing the pixel values of the remaining pixel points; Ĩ_t denotes the pixel matrix obtained on the basis of the current frame image I_t by taking the pixel blocks corresponding to the suspected targets and real targets as replaced regions, replacing each whole replaced region with the pixel block of the neighborhood background, and, according to the mask set, zeroing the pixels of the pixel points in the current frame image I_t whose corresponding target type is background or interference target; the value of each pixel in a pixel block of the neighborhood background is equal to the average value of the corresponding pixels in the four equally sized pixel blocks above, below, left and right outside the replaced region.
10. The method for detecting and tracking the infrared weak and small target based on the mask and the adaptive filtering according to any one of claims 1 to 9, wherein the method in step 7) for judging whether the suspected target is a real target or an interference target is specifically: for the elements whose feature-vector target type in the mask set is suspected target, when the number of times the target appears in the whole image sequence exceeds Num_d, or the target track stops updating for more than Num_o frames, calculate the length of the complete track set of the target to which the element's position belongs; if the track length exceeds Thr_track, judge the target type of the element to be real target, otherwise judge it to be interference target; the value of Num_d is equal to the number of image frames corresponding to 5-10 s; the value of Num_o is equal to the number of image frames corresponding to 10-15 s; Thr_track takes values in the range 3-5.
11. The method for detecting and tracking the infrared dim target based on the mask and the adaptive filtering according to any one of claims 1 to 9, wherein the method for performing track association in step 8) specifically comprises:
when the track corresponding to an element whose target type in the mask set is real target stops updating for more than Num_o frames, judge that the motion of the real target is finished, and add the target number and its Track to the track set Track;
when a new track is added to the track set, predict, according to the motion speed and direction at the end point of each known track in the track set, the predicted position of the known track at the corresponding new track, and compare the predicted position with the initial position of the new track, together with the motion speeds and directions of the targets corresponding to the new and old tracks, to complete the full tracking of the target; the value of Num_o is equal to the number of image frames corresponding to 10-15 s.
12. The method for detecting and tracking the infrared dim target based on the mask and the adaptive filtering according to any one of claims 1 to 9, characterized in that: the pixel value of each pixel in the initial background image B_0 is equal to the average value of the pixel points at the corresponding positions in the initially obtained N frames of images, N taking values in the range 5-10.
CN202210901099.7A 2022-07-28 2022-07-28 Infrared weak and small target detection tracking method based on mask and adaptive filtering Pending CN115393281A (en)

Publications (1)

Publication Number Publication Date
CN115393281A true CN115393281A (en) 2022-11-25

Family

ID=84116400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210901099.7A Pending CN115393281A (en) 2022-07-28 2022-07-28 Infrared weak and small target detection tracking method based on mask and adaptive filtering

Country Status (1)

Country Link
CN (1) CN115393281A (en)

Similar Documents

Publication Publication Date Title
CN107993245B (en) Aerospace background multi-target detection and tracking method
CN109978851B (en) Method for detecting and tracking small and medium moving target in air by using infrared video
CN109102522B (en) Target tracking method and device
EP1505543A2 (en) Video object tracking
CN110390292B (en) Remote sensing video vehicle target detection and tracking method based on dynamic correlation model
WO2006135419A2 (en) A method and system for improved unresolved target detection using multiple frame association
CN109448023B (en) Satellite video small target real-time tracking method
CN104834915B (en) A kind of small infrared target detection method under complicated skies background
CA2628611A1 (en) Tracking using an elastic cluster of trackers
CN111027496A (en) Infrared dim target detection method based on space-time joint local contrast
CN110400294B (en) Infrared target detection system and detection method
CN111709968A (en) Low-altitude target detection tracking method based on image processing
Liu et al. Space target extraction and detection for wide-field surveillance
CN109446978A (en) Based on the winged maneuvering target tracking method for staring satellite complex scene
CN115439777A (en) Video satellite target tracking method based on multi-feature fusion and motion estimation
US6496592B1 (en) Method for tracking moving object by means of specific characteristics
CN111145198A (en) Non-cooperative target motion estimation method based on rapid corner detection
CN111161308A (en) Dual-band fusion target extraction method based on key point matching
CN116091804B (en) Star suppression method based on adjacent frame configuration matching
CN115393281A (en) Infrared weak and small target detection tracking method based on mask and adaptive filtering
CN115984751A (en) Twin network remote sensing target tracking method based on multi-channel multi-scale fusion
CN111553876B (en) Pneumatic optical sight error image processing method and system
CN111161304B (en) Remote sensing video target track tracking method for rapid background estimation
US12108022B2 (en) Method for infrared small target detection based on depth map in complex scene
CN114820801A (en) Space target detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination