CN111739055B - Infrared point-like target tracking method

Info

Publication number: CN111739055B (application CN202010522691.7A)
Authority: CN (China)
Prior art keywords: target, sample, correlation degree value, detection
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN111739055A
Inventors: 艾斯卡尔·艾木都拉, 武文成, 张国峰, 杜鹏
Current Assignee: Xinjiang University
Original Assignee: Xinjiang University
Application filed by Xinjiang University; priority to CN202010522691.7A

Classifications

    • G06T7/20: Physics; Computing; Image data processing or generation, in general; Image analysis; Analysis of motion
    • G06F18/214: Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Design or setup of recognition systems or techniques; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06T2207/10048: Physics; Computing; Image data processing or generation, in general; Indexing scheme for image analysis or image enhancement; Image acquisition modality; Infrared image
    • G06T2207/20081: Physics; Computing; Image data processing or generation, in general; Indexing scheme for image analysis or image enhancement; Special algorithmic details; Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an infrared point-like target tracking method comprising the following steps: a target model is determined in the current frame image in which the target is captured; a detection sample set is acquired in the next frame image; a correlation degree value is calculated for each detection sample, and the maximum correlation degree value is selected as the optimal detection correlation degree value; the optimal detection correlation degree value is then judged to decide whether re-detection is required. If not, the model information is updated to realize tracking; if so, a target sample set is determined in that frame image, a correlation degree value is calculated for each target sample, the maximum correlation degree value is selected as the optimal target correlation degree value, and the model information is updated to realize target tracking. The invention improves the real-time performance and accuracy of tracking.

Description

Infrared point-like target tracking method
Technical Field
The invention relates to the technical field of target detection and tracking, in particular to an infrared point-like target tracking method.
Background
In recent years, mature "stealth" techniques have been developed to defeat electromagnetic detection techniques such as radar, and are increasingly deployed on a variety of equipment, so that conventional detection techniques can no longer meet the requirements of current and future military development. Meanwhile, researchers have found that existing weaponry emits strong infrared radiation, so targets can be detected quickly and inexpensively with an infrared camera. However, because such a system needs early-warning capability, it must detect and track targets at long range, which greatly reduces the infrared radiation energy that the target projects onto the camera. In infrared video, a point target projected from far away typically occupies few pixels (fewer than 49), appears as a Gaussian-shaped light blob, has no texture, and is dim. Furthermore, owing to the variability of heat-source radiation in the field of view, the infrared image is easily contaminated by infrared sensor noise. Conventional tracking algorithms therefore struggle to track point-like targets continuously, accurately, and effectively, especially when the field of view is filled with many point-like noise spots or the target is submerged in clouds or sea clutter.
Conventional algorithms include template matching, optical flow, mean shift, pipeline filtering, and particle filtering. However, complex backgrounds of various kinds degrade the discriminability of the target features, leading to inaccurate parameter estimation in the tracking model. Moreover, these methods generally require substantial computing resources, so their real-time performance during tracking is poor.
Disclosure of Invention
The invention aims to provide an infrared point target tracking method to improve real-time performance and accuracy.
In order to achieve the purpose, the invention provides the following scheme:
an infrared point-like target tracking method, the method comprising:
step S1, obtaining a target model through training in the current frame image of the captured target;
step S2, determining a detection sample set in the next frame image of the current frame; performing feature extraction on each detection sample in the detection sample set to obtain a feature matrix corresponding to each detection sample;
step S3, calculating the correlation degree of the feature matrix corresponding to each detection sample and the target model to obtain the correlation degree value corresponding to each detection sample, and selecting the maximum correlation degree value as the optimal detection correlation degree value;
step S4, judging whether the jitter of the optimal detection correlation degree value is larger than or equal to a set threshold value; if the value is greater than or equal to the set threshold value, executing step S6; if the value is less than the set threshold value, executing step S5;
step S5, determining the target position in the detection sample corresponding to the optimal detection correlation degree value, updating the target model in step S3 through training, and repeating the steps S2-S4 to realize target tracking;
step S6, determining a target sample set in the next frame image of the current frame; performing feature extraction on each target sample in the target sample set to obtain a feature matrix corresponding to each target sample, performing correlation calculation on the feature matrix corresponding to each target sample and the target model to obtain a correlation degree value corresponding to each target sample, and selecting the maximum correlation degree value as an optimal target correlation degree value; and determining the target position in the target sample corresponding to the optimal target correlation degree value, and repeating the steps S2-S4 by training and updating the target model in the step S3 to realize target tracking.
Preferably, step S1 includes:
step S11, determining a target area in the current frame image where the target is captured;
step S12, based on the target area, taking an area 2 times the size of the target area as a training sample acquisition area;
step S13, obtaining a training sample set in the training sample acquisition area by adopting a cyclic sampling method;
step S14, extracting the features of each training sample in the training sample set to obtain a feature matrix corresponding to each training sample;
step S15, training the feature matrix corresponding to each training sample to obtain the target model.
Preferably, step S2 includes:
step S21, taking an area 2 times the size of the target area in the next frame image of the current frame as a detection sample acquisition area;
step S22, acquiring the detection sample set in the sample acquisition area by adopting a cyclic sampling method;
step S23, performing feature extraction on each detection sample in the detection sample set to obtain a feature matrix corresponding to each detection sample.
Preferably, step S4 is specifically:
judging whether the jitter between the optimal detection correlation degree value of the next frame of the current frame and the optimal correlation degree values of the previous b frames is greater than or equal to a set threshold value, where the optimal correlation degree value of a frame is either its optimal detection correlation degree value or its optimal target correlation degree value; if less than the set threshold value, executing step S5; if greater than or equal to the set threshold value, executing step S6; b is a positive integer greater than or equal to 1; if fewer than b frames precede the current frame, b is taken as the number of preceding frames; otherwise b is a set value.
Preferably, step S6 includes:
step S61, determining a target area according to a target in a next frame image of a current frame, further determining a prediction sample acquisition area, and searching along the predicted trajectory in the prediction sample acquisition area with a window the size of the target area to obtain a prediction sample set;
step S62, calculating to obtain the significance value of each prediction sample in the prediction sample set, and selecting the prediction sample corresponding to the top n maximum significance values as a target sample set;
step S63, extracting the features of each target sample in the target sample set to obtain a feature matrix corresponding to each target sample;
step S64, calculating the correlation degree of the feature matrix corresponding to the target sample and the target model to obtain the corresponding correlation degree value of each target sample; selecting the maximum correlation degree value as an optimal target correlation degree value;
step S65, determining the target position and the target area in the target sample corresponding to the optimal target correlation degree value, updating the target model in step S3, and repeating steps S2-S4 to realize target tracking.
Preferably, the correlation degree value corresponding to each of the detection samples is obtained by the following formula:

$$\hat{F}(Z) = \hat{k}^{XZ} \odot \hat{\alpha}$$

in the formula: $\hat{\alpha}$ is the target model; $\hat{k}^{XZ}$ is the Fourier transform of the first row of the kernel correlation matrix generated from the training sample set and the detection sample set; Z is the detection sample set; X is the training sample set; each element $f(z)$ of $F(Z)$ is a correlation degree value; and $F(Z)$ is the set of correlation degree values of the detection sample set.
Preferably, the formula for updating the target model in step S3 through training is:

$$\hat{\alpha} = \frac{\hat{Y}}{\hat{k}^{ZZ} + \lambda}$$

in the formula: $\hat{\alpha}$ is the target model; λ is a regularization parameter; Y is the label set corresponding to the detection sample set; $\hat{k}^{ZZ}$ is the Fourier transform of the first row of the kernel correlation matrix generated from the detection sample set; and Z is the detection sample set.
Preferably, the jitter between the optimal detection correlation degree value of the next frame of the current frame and the optimal correlation degree values of the previous b frames is calculated according to the following formula:

$$P = \frac{\left|R_{\max}^{t} - \operatorname{mean}(R_{\max})\right|}{\sigma(R_{\max})}$$

in the formula: P is the jitter; t is the frame number of the next frame of the current frame; $R_{\max}^{t}$ is the optimal detection correlation degree value of the t-th frame; and $\operatorname{mean}(R_{\max})$ and $\sigma(R_{\max})$ are the mean and the standard deviation of the optimal correlation degree values of the previous b frames.
Preferably, b is 4.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention relates to an infrared point target tracking method, which comprises the following steps: determining a target model in a current frame image of a captured target, acquiring a detection sample set in a next frame image of the current frame, calculating to obtain a correlation degree value of each detection sample, and selecting the maximum correlation degree value as an optimal detection correlation degree value; judging the optimal detection correlation degree value to determine whether to enter re-detection; if not, updating information to realize tracking; if yes, determining a target sample set in the current frame image; obtaining a correlation degree value corresponding to each target sample through calculation, and selecting the maximum correlation degree value as an optimal target correlation degree value; and updating information to realize target tracking. The invention improves the real-time performance and accuracy of tracking.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a flowchart of an infrared point-like target tracking method according to the present invention;
FIG. 2 is a schematic view of an infrared point target in view according to the present invention;
FIG. 3 is a schematic diagram of feature matrix extraction according to the present invention;
FIG. 4 is a diagram illustrating the ellipse fitting result of gradient distribution of different regions according to the present invention.
FIG. 5 is a schematic diagram of a sample set obtained by a cyclic sampling method according to the present invention;
FIG. 6 is a graph showing the comparison between the present invention and the prior art in sequence 1;
FIG. 7 is a graph showing the comparison between the present invention and the prior art in sequence 2;
FIG. 8 is a graph showing the comparison between the present invention and the prior art in sequence 3;
FIG. 9 is a diagram showing the comparison effect of the present invention and the prior art in sequence 4.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide an infrared point target tracking method to improve real-time performance and accuracy.
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, the present invention is described in detail with reference to the accompanying drawings and the detailed description thereof.
As shown in fig. 1, the present invention provides an infrared point-like target tracking method, comprising:
step S1, obtaining a target model through training in the current frame image of the captured target;
specifically, step S1 of the present invention includes:
in step S11, a target area is determined in the current frame image where the target is captured.
Step S12, based on the target area, taking an area 2 times its size as the training sample acquisition area.
And step S13, obtaining a training sample set in the training sample acquisition area by adopting a cyclic sampling method.
Step S14, performing feature extraction on each training sample in the training sample set to obtain a feature matrix corresponding to each training sample.
And step S15, training the feature matrix corresponding to each training sample to obtain the target model.
In the process of tracking an infrared point-like target, features must be extracted that effectively describe the difference between the target and background clutter, so as to have good discriminability. In the infrared image the target has only gray-level information and no obvious texture features. Meanwhile, because the target's infrared radiation energy is influenced by the natural environment, it appears in the field of view as a Gaussian-shaped light blob, as shown in fig. 2; it can be seen that the target and the background have a large gradient difference over the full 360-degree range. The invention therefore performs feature extraction from the gradient changes inside the window shown in fig. 3: the window is divided into 16 gradient regions of equal size, comprising 4 target gradient regions and 12 background gradient regions, and the four quantities $\sum dx(x,y)$, $\sum |dx(x,y)|$, $\sum dy(x,y)$ and $\sum |dy(x,y)|$ are used to describe the gradient change of each of the 16 gradient regions, yielding the feature matrix; here dx(x, y) is the gradient in the x-direction and dy(x, y) is the gradient in the y-direction.
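The following minimal numpy sketch illustrates this feature extraction; the 4x4 cell layout (4 inner target cells, 12 border background cells) and the use of np.gradient are our reading of fig. 3, not fixed by the patent:

```python
import numpy as np

def feature_matrix(window):
    """Sketch of the 16-region gradient feature described above.

    `window` is a 2-D grayscale patch centered on a candidate target.
    The 4x4 cell layout (4 inner target cells, 12 border background
    cells) is an assumption based on Fig. 3.
    """
    dx = np.gradient(window, axis=1)   # x-direction gradient dx(x, y)
    dy = np.gradient(window, axis=0)   # y-direction gradient dy(x, y)

    h, w = window.shape
    rh, rw = h // 4, w // 4            # 16 equal-size gradient regions
    feats = []
    for i in range(4):
        for j in range(4):
            sl = np.s_[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            # the four quantities describing each region's gradient change
            feats.append([dx[sl].sum(), np.abs(dx[sl]).sum(),
                          dy[sl].sum(), np.abs(dy[sl]).sum()])
    return np.asarray(feats)           # 16 x 4 feature matrix
```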
Step S2, determining a detection sample set in the next frame image of the current frame; performing feature extraction on each detection sample in the detection sample set to obtain a feature matrix corresponding to each detection sample;
as an alternative embodiment, step S2 includes:
Step S21, taking an area 2 times the size of the target area in the next frame image of the current frame as the detection sample acquisition area.
And step S22, acquiring the detection sample set in the detection sample acquisition area by adopting a cyclic sampling method.
Step S23, performing feature extraction on each detection sample in the detection sample set to obtain a feature matrix corresponding to each detection sample.
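Steps S13 and S22 both build their sample sets by cyclic sampling (fig. 5). A minimal sketch of this construction, assuming one-pixel shift granularity, is:

```python
import numpy as np

def cyclic_samples(base_patch):
    """All 2-D circular shifts of `base_patch`, i.e. the circulant
    sample set of Fig. 5 (one-pixel shift granularity assumed)."""
    h, w = base_patch.shape
    return [np.roll(np.roll(base_patch, dy, axis=0), dx, axis=1)
            for dy in range(h) for dx in range(w)]
```

In practice the shifted samples are never materialized: the circulant structure is exactly what lets the later formulas be evaluated in the Fourier domain directly on the base patch.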
Step S3, calculating the correlation degree between the feature matrix corresponding to each detection sample and the target model to obtain the correlation degree value corresponding to each detection sample, and selecting the maximum correlation degree value as the optimal detection correlation degree value. The specific calculation formula is as follows:
$$\hat{F}(Z) = \hat{k}^{XZ} \odot \hat{\alpha}$$

in the formula: $\hat{\alpha}$ is the target model; $\hat{k}^{XZ}$ is the Fourier transform of the first row of the kernel correlation matrix generated from the training sample set and the detection sample set; Z is the detection sample set; X is the training sample set; each element $f(z)$ of $F(Z)$ is a correlation degree value; and $F(Z)$ is the set of correlation degree values of the detection sample set.
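A minimal numpy sketch of this Fourier-domain evaluation follows; the Gaussian kernel and its normalization are assumptions, since the patent does not fix the kernel function:

```python
import numpy as np

def gaussian_kernel_correlation(x, z, sigma=0.5):
    """First row k^{XZ} of the kernel correlation matrix for 2-D
    patches x and z, computed via FFT (Gaussian kernel assumed)."""
    c = np.fft.ifft2(np.conj(np.fft.fft2(x)) * np.fft.fft2(z)).real
    d2 = (x ** 2).sum() + (z ** 2).sum() - 2.0 * c
    return np.exp(-np.maximum(d2, 0.0) / (sigma ** 2 * x.size))

def detect(alpha_hat, x, z, sigma=0.5):
    """Correlation degree values F(Z) = IFFT(k_hat^{XZ} * alpha_hat);
    the maximum response is the optimal detection correlation degree
    value R_max, and its location gives the estimated target position."""
    k_xz = gaussian_kernel_correlation(x, z, sigma)
    return np.fft.ifft2(np.fft.fft2(k_xz) * alpha_hat).real
```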
Step S4, judging whether the jitter of the optimal detection correlation degree value is larger than or equal to a set threshold value; if the value is greater than or equal to the set threshold value, executing step S6; if the value is smaller than the set threshold value, step S5 is executed.
Specifically, it is judged whether the jitter between the optimal detection correlation degree value of the next frame of the current frame and the optimal correlation degree values of the previous b frames is greater than or equal to a set threshold value, where the optimal correlation degree value of a frame is either its optimal detection correlation degree value or its optimal target correlation degree value. If less than the threshold, step S5 is executed; if greater than or equal to the threshold, step S6 is executed. b is a positive integer greater than or equal to 1; if fewer than b frames precede the current frame, b is taken as the number of preceding frames, otherwise b is a set value. In this embodiment, b is 4. The jitter is calculated according to the following formula:

$$P = \frac{\left|R_{\max}^{t} - \operatorname{mean}(R_{\max})\right|}{\sigma(R_{\max})}$$

in the formula: P is the jitter; t is the frame number of the next frame of the current frame; $R_{\max}^{t}$ is the optimal detection correlation degree value of the t-th frame; and $\operatorname{mean}(R_{\max})$ and $\sigma(R_{\max})$ are the mean and the standard deviation of the optimal correlation degree values of the previous b frames.
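A sketch of this test, under our mean-offset reading of the jitter formula:

```python
import numpy as np

def jitter(r_max_t, r_max_history, b=4):
    """Jitter P of the current optimal correlation degree value against
    the previous b frames (fewer if the sequence is shorter)."""
    hist = np.asarray(r_max_history[-b:], dtype=float)
    return abs(r_max_t - hist.mean()) / (hist.std() + 1e-12)
```

`jitter(...) >= threshold` then routes the frame to the re-detection branch of step S6, otherwise to the normal update of step S5.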
Step S5, determining a target position and a target area in the detection sample corresponding to the optimal detection correlation degree value, updating the target model in step S3, and repeating the steps S2-S4 to realize target tracking;
step S6, determining a target sample set in the next frame image of the current frame; performing feature extraction on each target sample in the target sample set to obtain a feature matrix corresponding to each target sample, performing correlation calculation on the feature matrix corresponding to each target sample and the target model to obtain a correlation degree value corresponding to each target sample, and selecting the maximum correlation degree value as an optimal target correlation degree value; and determining the target position in the target sample corresponding to the optimal target correlation degree value, and repeating the steps S2-S4 by training and updating the target model in the step S3 to realize target tracking.
For each frame, after the optimal detection correlation degree value is obtained, this judgment and the subsequent processing determine whether the model information is updated according to the optimal detection correlation degree value or according to the optimal target correlation degree value; the optimal correlation degree value of each frame is therefore one of the two.
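The per-frame control flow of steps S2-S6 can be summarized by the following skeleton; `train_initial`, `detect_best`, `redetect_best` and `update_model` are hypothetical placeholders for the operations described above, not names from the patent:

```python
def track(frames, init_box, threshold, b=4):
    """Per-frame control flow of steps S2-S6 (sketch)."""
    model = train_initial(frames[0], init_box)            # step S1
    r_history = []
    for frame in frames[1:]:
        r_best, pos = detect_best(frame, model)           # steps S2-S3
        if r_history and jitter(r_best, r_history, b) >= threshold:
            r_best, pos = redetect_best(frame, model)     # step S6
        model = update_model(frame, pos)                  # steps S5 / S65
        r_history.append(r_best)
        yield pos
```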
As an alternative embodiment, step S6 of the present invention comprises:
Step S61, determining a target area according to the target in the next frame image of the current frame, further determining a prediction sample acquisition area, and searching along the predicted trajectory in the prediction sample acquisition area with a window the size of the target area to obtain a prediction sample set.
Step S62, a saliency value is calculated for each prediction sample in the prediction sample set, and the prediction samples corresponding to the top n largest saliency values are selected as the target sample set. In this embodiment, n is 3. The saliency value M is calculated according to the following formula:

$$M = \mu\,\mathrm{tr}(C) - \det(C) = \mu(\lambda_1 + \lambda_2) - \lambda_1\lambda_2$$

in the formula: $\lambda_1$ and $\lambda_2$ are the two eigenvalues of the covariance matrix C corresponding to the prediction sample; μ is a coefficient; tr(C) is the trace of the covariance matrix; and det(C) is its determinant. In this embodiment, μ is 0.06.
The formula $M = \mu\,\mathrm{tr}(C) - \det(C) = \mu(\lambda_1 + \lambda_2) - \lambda_1\lambda_2$ is derived as follows.

The gradient change degree S(x, y) of a prediction sample is:

$$S(x,y) = \sum_{u,v} w(u,v)\,\bigl[p(u+x,\,v+y) - p(u,v)\bigr]^{2}$$

in the formula: x and y are the x and y coordinates of the prediction sample; p(x, y) denotes the pixel of the prediction sample at (x, y); w(u, v) is the Gaussian weighting kernel; and u and v are the coordinates within the Gaussian weighting kernel.

Using a first-order Taylor expansion, S(x, y) becomes:

$$S(x,y) \approx \sum_{u,v} w(u,v)\,\bigl[x\,dx(u,v) + y\,dy(u,v)\bigr]^{2}$$

which can further be expressed in matrix form as:

$$S(x,y) \approx \begin{pmatrix} x & y \end{pmatrix} C \begin{pmatrix} x \\ y \end{pmatrix}$$

in the formula: C is the covariance matrix of the two quantities dx(x, y) and dy(x, y), the gradients of the prediction sample in the x-direction and the y-direction.
As shown in fig. 4, by the nature of covariance, the determinant of the covariance matrix reflects the correlation between the two variables: the smaller the correlation, the closer the two quantities are, i.e. the closer the major and minor axes of the fitted ellipse, and the closer the ellipse is to a circle. The corresponding eigenvalues $(\lambda_1, \lambda_2)$ reflect the degree of gradient change in the x-direction and the y-direction within the window. From fig. 4 it can be seen that when the gradient changes greatly in both directions, i.e. when both $\lambda_1$ and $\lambda_2$ are relatively large, the probability of the sample being a target is higher.
The covariance matrix C can be obtained by counting the gradient values in the x-direction and the y-direction within the window. Let the set G of gradients in the two directions within the window be expressed as:

$$G = \begin{pmatrix} \sqrt{w(i,j)}\,dx(x_1,y_1) & \sqrt{w(i,j)}\,dy(x_1,y_1) \\ \vdots & \vdots \\ \sqrt{w(i,j)}\,dx(x_N,y_N) & \sqrt{w(i,j)}\,dy(x_N,y_N) \end{pmatrix}$$

where w(i, j) is the weight at position (i, j) of the Gaussian weighting kernel, $dx(x_i, y_i)$ is the x-direction gradient of the pixel at $(x_i, y_i)$, and $dy(x_i, y_i)$ is the y-direction gradient of the pixel at $(x_i, y_i)$.

The covariance matrix C can then be expressed as:

$$C = G^{T}G$$

in the formula: T denotes transposition.
Thus, the saliency value may be defined as:

$$M = \mu\,\mathrm{tr}(C) - \det(C) = \mu(\lambda_1 + \lambda_2) - \lambda_1\lambda_2$$
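A compact numpy sketch of this saliency computation; the Gaussian kernel width is an assumed parameter:

```python
import numpy as np

def saliency(patch, mu=0.06):
    """Saliency M = mu*tr(C) - det(C) of a prediction sample, where
    C = G^T G is the Gaussian-weighted covariance of the x/y gradients."""
    dx = np.gradient(patch, axis=1)
    dy = np.gradient(patch, axis=0)
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    s = min(h, w) / 3.0                  # Gaussian kernel width: assumed
    wgt = np.exp(-((xx - w / 2.0) ** 2 + (yy - h / 2.0) ** 2) / (2 * s * s))
    g = np.sqrt(wgt).ravel()
    G = np.column_stack([g * dx.ravel(), g * dy.ravel()])   # gradient set G
    C = G.T @ G                                             # covariance matrix
    return mu * np.trace(C) - np.linalg.det(C)
```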
step S63, extracting the features of each target sample in the target sample set to obtain a feature matrix corresponding to each target sample;
step S64, calculating the correlation degree between the feature matrix corresponding to the target sample and the target model to obtain the corresponding correlation degree value of each target sample; and selecting the maximum correlation degree value as an optimal target correlation degree value. The specific calculation method is the same as the calculation of the correlation degree value corresponding to each detection sample.
Step S65, determining the target position and the target area in the target sample corresponding to the optimal target correlation degree value, updating the target model in step S3, and repeating steps S2-S4 to realize target tracking.
In this embodiment, the target model of step S3 is updated in step S5 according to the following formula:

$$\hat{\alpha} = \frac{\hat{Y}}{\hat{k}^{ZZ} + \lambda}$$

in the formula: $\hat{\alpha}$ is the target model; λ is a regularization parameter; Y is the label set corresponding to the detection sample set; $\hat{k}^{ZZ}$ is the Fourier transform of the first row of the kernel correlation matrix generated from the detection sample set; and Z is the detection sample set.
This update formula is derived as follows.

The essence of tracking is to find a target model w that satisfies:

$$\min_{w}\;\sum_{e}\bigl(w^{T}z_{e} - q_{e}\bigr)^{2} + \lambda\lVert w\rVert^{2}$$

in the formula: w is the target model before updating, $z_e$ is the e-th sample in the detection sample set, $q_e$ is the label corresponding to that sample, and λ is the regularization term, which prevents overfitting.

Setting the partial derivative with respect to w to zero gives:

$$w = (X^{T}X + \lambda I)^{-1}X^{T}Y$$

in the formula: X is the detection sample set, Y is the label set corresponding to the detection sample set, T denotes transposition, and I is the identity matrix.

The nonlinear problem is handled by mapping the features nonlinearly so that the mapped feature space satisfies a linear relation. Let the mapping function be φ(·) and express w as:

$$w = \sum_{e}\alpha_{e}\,\varphi(z_{e})$$

in the formula: the $\alpha_e$ are the linear combination coefficients of the target-model parameter matrix. Using the kernel function $\kappa(z, z') = \varphi(z)^{T}\varphi(z')$ and setting the partial derivative with respect to the vector α composed of the $\alpha_e$ to zero yields:

$$\alpha = (K + \lambda I)^{-1}Y$$

in the formula: I is the identity matrix and K is the kernel correlation matrix ($k_{XX}$ for the training sample set).

According to the property of circulant matrices in the Fourier domain:

$$K = F\,\mathrm{diag}(\hat{k})\,F^{H}$$

in the formula: K is the kernel correlation circulant matrix generated by cyclic sampling, k is the generating vector (first row) of K, $\hat{k}$ is its discrete Fourier transform, F is the discrete Fourier transform matrix, and $F^{H}$ is the conjugate transpose of F. Taking the Fourier transform and solving further for α gives:

$$\hat{\alpha} = \frac{\hat{Y}}{\hat{k}^{ZZ} + \lambda}$$

in the formula: $\hat{k}^{ZZ}$ is the Fourier transform of the first row of the kernel correlation matrix $k_{ZZ}$ (with $k_{XX}$ in its place when training on the initial training sample set).
The target model update in step S6 is the same as above.
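Reusing `gaussian_kernel_correlation` from the detection sketch above, the model update can be sketched as follows; the Gaussian-shaped label patch y is a common choice that we assume here, since the patent does not specify the label set's form:

```python
import numpy as np

def train(z, y, lam=1e-4, sigma=0.5):
    """Target-model update alpha_hat = Y_hat / (k_hat^{ZZ} + lambda).

    z is the base patch of the (detection) sample set, y the label
    patch (a 2-D Gaussian peaked on the target: an assumed choice).
    """
    k_zz = gaussian_kernel_correlation(z, z, sigma)
    return np.fft.fft2(y) / (np.fft.fft2(k_zz) + lam)
```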
The correlation degree formula of step S3, $\hat{F}(Z) = \hat{k}^{XZ} \odot \hat{\alpha}$, is derived as follows. In the linear case:

$$f(Z) = w^{T}Z$$

The nonlinear problem is again handled by mapping the features, with the same mapping function φ(·), so that f(Z) becomes:

$$f(Z) = \sum_{e}\alpha_{e}\,\kappa(z_{e}, Z)$$

Again using the kernel function $\kappa(z, z') = \varphi(z)^{T}\varphi(z')$, this is, in matrix form:

$$F(Z) = k_{XZ}\,\alpha$$

in the formula: $k_{XZ}$ is the kernel correlation matrix of the training sample set and the detection sample set.

From the property of circulant matrices in the Fourier domain it likewise follows that:

$$\hat{F}(Z) = \hat{k}^{XZ} \odot \hat{\alpha}$$
specifically, the technical effects of the application and the prior art are compared and analyzed through two factors, namely the pixel error success rate and the overlapping degree success rate.
The four sequences are set to perform four experiments for specific analysis, and the prior art in this embodiment includes three algorithms, namely, a Tracking Learning Detection (TLD) algorithm, a Kernel Correlation Filtering (KCF) algorithm, and a constrained sample sparse particle filtering (CSRPF) algorithm.
Fig. 6-9 are graphs showing the comparison results of four sequences in sequence, wherein a is a graph showing the success rate of the error of the center pixel, and b is a graph showing the success rate of the overlapping degree. As can be seen from the figure, the performance of the technical scheme of the application is better than that of the other three algorithms.
In this embodiment, the boundary effect is eliminated for each detection sample in the detection sample set and each prediction sample in the prediction sample set through a preselected window.
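A sketch of such a preselected window; the 2-D cosine (Hann) taper is the usual choice in kernelized-correlation-filter trackers and is an assumption here, since the patent does not name the window:

```python
import numpy as np

def apply_preselected_window(patch):
    """Taper a sample patch toward zero at its borders to suppress the
    boundary effect of cyclic sampling (Hann window assumed)."""
    wy = np.hanning(patch.shape[0])
    wx = np.hanning(patch.shape[1])
    return patch * np.outer(wy, wx)
```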
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (7)

1. An infrared point-like target tracking method is characterized by comprising the following steps:
step S1, obtaining a target model through training in the current frame image of the captured target;
step S2, determining a detection sample set in the next frame image of the current frame; extracting the characteristics of all detection samples in the detection sample set to obtain a characteristic matrix corresponding to each detection sample;
step S3, calculating the correlation degree of the feature matrix corresponding to each detection sample and the target model to obtain the correlation degree value corresponding to each detection sample, and selecting the maximum correlation degree value as the optimal detection correlation degree value;
step S4, judging whether the jitter of the optimal detection correlation degree value is larger than or equal to a set threshold value; if the value is greater than or equal to the set threshold value, executing step S6; if the value is less than the set threshold value, executing step S5;
step S5, determining the target position in the detection sample corresponding to the optimal detection correlation degree value, updating the target model in step S3 through training, and repeating the steps S2-S4 to realize target tracking;
step S6, determining a target sample set in the next frame image of the current frame; performing feature extraction on each target sample in the target sample set to obtain a feature matrix corresponding to each target sample, performing correlation degree calculation on the feature matrix corresponding to each target sample and the target model to obtain a correlation degree value corresponding to each target sample, and selecting the maximum correlation degree value as an optimal target correlation degree value; determining the target position in the target sample corresponding to the optimal target correlation degree value, and repeating the steps S2-S4 by training and updating the target model in the step S3 to realize target tracking;
step S4 specifically includes:
judging whether the jitter between the optimal detection correlation degree value of the next frame of the current frame and the optimal correlation degree values of the previous b frames is greater than or equal to a set threshold value, the optimal correlation degree value being the optimal detection correlation degree value or the optimal target correlation degree value; if the jitter is less than the set threshold value, executing step S5; if the jitter is greater than or equal to the set threshold value, executing step S6; b being a positive integer greater than or equal to 1; if the number of frames preceding the current frame is less than b, taking b as the number of frames preceding the current frame; if the number of frames preceding the current frame is greater than or equal to b, taking b as a set value;
the jitter between the optimal detection correlation degree value of the next frame of the current frame and the optimal correlation degree values of the previous b frames being calculated according to the following formula:

$$P = \frac{\left|R_{\max}^{t} - \operatorname{mean}(R_{\max})\right|}{\sigma(R_{\max})}$$

in the formula: P is the jitter; t is the frame number of the next frame of the current frame; $R_{\max}^{t}$ is the optimal detection correlation degree value of the t-th frame; and $\operatorname{mean}(R_{\max})$ and $\sigma(R_{\max})$ are the mean and the standard deviation of the optimal correlation degree values of the previous b frames.
2. The method according to claim 1, wherein step S1 includes:
step S11, determining a target area in the current frame image where the target is captured;
step S12, based on the target area, taking an area 2 times the size of the target area as a training sample acquisition area;
step S13, obtaining a training sample set in the training sample acquisition area by adopting a cyclic sampling method;
step S14, extracting the features of each training sample in the training sample set to obtain a feature matrix corresponding to each training sample;
and step S15, training the feature matrix corresponding to each training sample to obtain the target model.
3. The method according to claim 2, wherein step S2 includes:
step S21, taking an area 2 times the size of the target area in the next frame image of the current frame as a detection sample acquisition area;
step S22, acquiring the detection sample set in the sample acquisition area by adopting a cyclic sampling method;
step S23, performing feature extraction on each detection sample in the detection sample set to obtain a feature matrix corresponding to each detection sample.
4. The method according to claim 1, wherein step S6 includes:
step S61, determining a target area according to a target in a next frame image of a current frame, further determining a prediction sample acquisition area, and searching on a prediction track in the prediction sample acquisition area by taking the size of the target area as a window to obtain a prediction sample set;
step S62, calculating to obtain the significance value of each prediction sample in the prediction sample set, and selecting the prediction sample corresponding to the top n maximum significance values as a target sample set;
step S63, performing feature extraction on each target sample in the target sample set to obtain a feature matrix corresponding to each target sample;
step S64, calculating the correlation degree between the feature matrix corresponding to the target sample and the target model to obtain the corresponding correlation degree value of each target sample; selecting the maximum correlation degree value as an optimal target correlation degree value;
step S65, determining the target position and the target area in the target sample corresponding to the optimal target correlation degree value, updating the target model in step S3, and repeating steps S2-S4 to realize target tracking.
5. The method of claim 1, wherein the correlation degree value corresponding to each of the detection samples is obtained according to the following formula:

$$\hat{F}(Z) = \hat{k}^{XZ} \odot \hat{\alpha}$$

in the formula: $\hat{\alpha}$ is the target model; $\hat{k}^{XZ}$ is the Fourier transform of the first row of the kernel correlation matrix generated from the training sample set and the detection sample set; Z is the detection sample set; X is the training sample set; each element $f(z)$ of $F(Z)$ is a correlation degree value; and $F(Z)$ is the set of correlation degree values of the detection sample set.
6. The method according to claim 1, wherein the formula for updating the target model in step S3 through training is:

$$\hat{\alpha} = \frac{\hat{Y}}{\hat{k}^{ZZ} + \lambda}$$

in the formula: $\hat{\alpha}$ is the target model; λ is a regularization parameter; Y is the label set corresponding to the detection sample set; $\hat{k}^{ZZ}$ is the Fourier transform of the first row of the kernel correlation matrix generated from the detection sample set; and Z is the detection sample set.
7. The method of claim 1, wherein b is 4.
CN202010522691.7A 2020-06-10 2020-06-10 Infrared point-like target tracking method Active CN111739055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010522691.7A 2020-06-10 2020-06-10 Infrared point-like target tracking method
Publications (2)

Publication Number Publication Date
CN111739055A 2020-10-02
CN111739055B 2022-07-05

Family ID: 72648661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010522691.7A Infrared point-like target tracking method 2020-06-10 2020-06-10

Country Status (1)

Country Link
CN (1) CN111739055B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10043284B2 (en) * 2014-05-07 2018-08-07 Varian Medical Systems, Inc. Systems and methods for real-time tumor tracking

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778645A (en) * 2014-01-16 2014-05-07 Nanjing University of Aeronautics and Astronautics Image-based real-time tracking method for circular targets
CN104331902A (en) * 2014-10-11 2015-02-04 Shenzhen Super Perfect Optics Ltd. Target tracking method, target tracking device, 3D display method and 3D display device
CN106650668A (en) * 2016-12-27 2017-05-10 Shanghai Putao Technology Co., Ltd. Method and system for detecting a movable target object in real time
CN107341820A (en) * 2017-07-03 2017-11-10 Zhengzhou University of Light Industry Moving target tracking method fusing Cuckoo search and KCF mutation
CN108765458A (en) * 2018-04-16 2018-11-06 Shanghai University Correlation-filtering-based scale-adaptive tracking of sea-surface targets for unmanned surface vehicles in high sea states
CN109325446A (en) * 2018-09-19 2019-02-12 University of Electronic Science and Technology of China Infrared dim and small target detection method based on weighted truncated nuclear norm
CN109461166A (en) * 2018-10-26 2019-03-12 Zhengzhou University of Light Industry Fast moving target tracking method based on KCF combined with MFO
CN109410254A (en) * 2018-11-05 2019-03-01 Graduate School at Shenzhen, Tsinghua University Target tracking method based on modeling of target and camera motion

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Xu Ningwen et al., "Object Tracking via Deep Multi-View Compressive Model for Visible and Infrared Sequences," 2018 IEEE International Conference on Information Fusion (FUSION), 2018-07-13, pp. 941-948 *
Chen Qinxia et al., "A detection algorithm based on multi-scale point-like target modeling," Laser Technology, vol. 44, no. 04, 2019-11-14, pp. 520-524 *
Yu Weijun et al., "Research on a TBD-based particle filter tracking algorithm for infrared point-like moving targets," Application of Electronic Technique, vol. 04, 2008-04-06, pp. 81-83 *
Ding Zhihua et al., "Research on visual target tracking based on machine learning," China Master's Theses Full-text Database, Information Science and Technology, no. 02, 2018-02-15, I138-1901 *
Zhu Bing, "Research on infrared small target detection and tracking algorithms against a sky background," China Master's Theses Full-text Database, Engineering Science and Technology II, no. 05, 2017-05-15, C032-4 *

Also Published As

Publication number Publication date
CN111739055A (en) 2020-10-02

Similar Documents

Publication Publication Date Title
CN108765458B (en) Sea surface target scale self-adaptive tracking method of high-sea-condition unmanned ship based on correlation filtering
CN109816641B (en) Multi-scale morphological fusion-based weighted local entropy infrared small target detection method
CN110120064B (en) Depth-related target tracking algorithm based on mutual reinforcement and multi-attention mechanism learning
CN108109162B (en) Multi-scale target tracking method using self-adaptive feature fusion
CN112149591B (en) SSD-AEFF automatic bridge detection method and system for SAR image
CN110555870A (en) DCF tracking confidence evaluation and classifier updating method based on neural network
CN110827262A (en) Weak and small target detection method based on continuous limited frame infrared image
Song et al. Feature extraction and target recognition of moving image sequences
Der et al. Probe-based automatic target recognition in infrared imagery
Chang et al. New metrics for clutter affecting human target acquisition
CN116310837A (en) SAR ship target rotation detection method and system
CN111105444A (en) Continuous tracking method suitable for underwater robot target grabbing
CN113033356B (en) Scale-adaptive long-term correlation target tracking method
CN111739055B (en) Infrared point-like target tracking method
CN108985216B (en) Pedestrian head detection method based on multivariate logistic regression feature fusion
CN117152486A (en) Image countermeasure sample detection method based on interpretability
CN116953702A (en) Rotary target detection method and device based on deduction paradigm
CN110751671B (en) Target tracking method based on kernel correlation filtering and motion estimation
CN112614158B (en) Sampling frame self-adaptive multi-feature fusion online target tracking method
CN113160271B (en) High-precision infrared target tracking method integrating correlation filtering and particle filtering
Sun et al. A method of infrared image pedestrian detection with improved yolov3 algorithm
Sun et al. Moving object extraction based on saliency detection and adaptive background model
Hu et al. Aircraft Targets Detection in Remote Sensing Images with Feature Optimization
CN112991390B (en) Multi-tracker fusion target tracking method based on background perception
CN105787961B (en) The Camshift motion target tracking method of goal histogram based on Background color information weighting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant