Disclosure of Invention
In view of this, the invention aims to provide an unmanned aerial vehicle video multi-target tracking method for application scenes such as video behavior analysis, scene understanding, traffic management, and safety prevention and control. It mainly addresses the problems of target omission, false detection and position drift that arise in unmanned aerial vehicle video multi-target tracking under complex conditions such as illumination change and airflow interference. The method realizes the application of an unmanned aerial vehicle multi-target tracking system to environment perception and provides a multi-target tracking solution for unmanned aerial vehicles in complex environments; by integrating registration with association, it simplifies the tracking process and improves tracking accuracy and tracking speed.
In order to achieve the purpose, the invention provides the following technical scheme:
an unmanned aerial vehicle video multi-target tracking method specifically comprises the following steps:
S1: acquiring an image detection result, and initializing tracks by using the position and size information of targets between consecutive frames;
S2: predicting the position state of each track;
S3: integrating registration and association between tracks and measurements by adopting a thin-plate spline interpolation registration algorithm, and iteratively solving the correspondence between tracks and measurements;
S4: introducing the track-measurement correspondence and the transformed tracks into the Kalman state equation, and performing a second round of matching between unmatched tracks and measurements;
S5: establishing a track management method: updating track states, constructing track curve expressions, and handling track initiation and termination, false-detection deletion, missed-detection recovery and track smoothing.
Further, step S1 specifically includes: performing track initialization on the detection results of the first 5 frames of the video, computing a similarity cost matrix from the position proximity and size information between the measurements of consecutive frames, and judging whether targets are similar by setting corresponding thresholds, so as to associate them with each other.
Further, step S2 specifically includes: linearly fusing the state prediction based on one-step (Kalman) prediction with the state prediction based on curve fitting, weighted by the curve-prediction confidence obtained from track management, to obtain the predicted track position.
Further, step S3 specifically includes the following steps:
S31: obtain the tracks to be matched and the measurements; taking the centre point of each target's rectangular frame, define the predicted track lattice as x = {x_i, i = 1, …, n} and the measurement lattice as y = {y_j, j = 1, …, m};
S32: calculate the appearance difference matrix between the predicted track x_i and the measurement y_j: using the pre-trained deep feature network VGG19 with its fully connected layers removed, extract the deep convolutional feature maps conv3-4, conv4-4 and conv5-4 of the track and the measurement, extract the histogram-of-oriented-gradients feature of the target, and fuse the features by linear combination to obtain the image feature vectors of the target track and the target measurement; compute the appearance similarity between track and measurement with a bilinear similarity metric, and take the negative logarithm of the similarity matrix to obtain the appearance difference matrix;
S33: calculate the size difference matrix between the predicted track x_i and the measurement y_j: obtain the width and height of the track and the measurement and the area of each, compute the width, height and area similarities between track and measurement, combine them linearly into a size similarity matrix, and take the negative logarithm of the similarity matrix to obtain the size difference matrix;
S34: calculate the motion difference matrix between the predicted track x_i and the measurement y_j: the motion difference matrix is obtained by computing the Euclidean distance between the track and the measurement;
S35: after the appearance difference matrix, the size difference matrix and the motion difference matrix are obtained, fuse them by linear combination into a mixed feature difference matrix; a spatial transformation function f(x_i) between the track and the measurements is then constructed;
S36: constructing a registration and association integrated target function between the track and the measurement based on the mixed feature difference matrix and the spatial transformation function;
S37: iteratively solve the registration-association integrated objective function using a linear assignment algorithm and orthogonal-triangular (QR) decomposition until the average distance between track and measurement falls below the corresponding threshold, yielding the optimal correspondence matrix between tracks and measurements.
Further, in step S35, the spatial transformation function between the track and the measurement is constructed as:

f(x_i) = x_i · d + φ(x_i) · ω

wherein d represents the affine coefficient matrix, ω represents the non-rigid deformation coefficient matrix, and φ(x_i) is the kernel vector of the thin-plate spline interpolation function, an n-dimensional row vector whose elements are defined on the two-dimensional image as:

φ_c(x_i) = ||x_c − x_i||² log ||x_c − x_i||

where x_c is a set of control points selected from the lattice x.
Further, in step S36, the registration-association integrated objective function between the track and the measurement is constructed from the mixed feature difference matrix and the spatial transformation function, wherein P(L_ij) is the association probability between track and measurement: when the track x_i corresponds to the measurement y_j, P(L_ij) takes the value 1, otherwise P(L_ij) takes the value 0; the regularization parameter λ is used to constrain the non-rigid deformation coefficient matrix ω.
Further, step S37 specifically includes: after the registration-association integrated objective function between track and measurement is constructed, the mixed feature difference matrix L_ij of track and measurement is solved with a linear assignment algorithm to obtain the association probability P(L_ij); the objective function is then solved by orthogonal-triangular (QR) decomposition to obtain the affine coefficients d and the non-rigid deformation coefficients ω. This step is iterated until the average distance D between track and measurement falls below the corresponding threshold Dt, yielding the optimal track-measurement association result.
Further, step S4 specifically includes: the transformed tracks obtained by the registration-association integrated method are substituted into the Kalman state prediction equation, and the Kalman state of each track is updated according to the obtained correspondence; for tracks unmatched at the previous time, the current-time state is predicted from each track's own historical motion state, and the unmatched tracks and the unmatched measurements at the current time are then fed back into the registration-association integration step for association.
Further, step S5 specifically includes the following steps:
S51: update the track position according to the correspondence via the Kalman filter update equation, update the track size as the average of the width and height of the track frame and those of the corresponding measurement frame, and update the track appearance from the corresponding measurement's appearance;
S52: store the historical track motion information, fit the historical track positions to construct a mathematical expression of the curve, and assign the expression a confidence: the confidence of the curve expression is set high if the curve has been continuously fitted over more than a certain number of frames and is smooth;
S53: establish a method for judging track and measurement states: a track is judged terminated if its position exceeds the image boundary or it fails to match a corresponding measurement in three consecutive frames; a track that obtains no correspondence at all within three consecutive frames is judged a false detection and deleted. If only some frames of a track are unmatched, the track contains missed detections, and each missed position is recovered as the average of the positions in the frames immediately before and after it. Finally, tracks judged terminated are smoothed with a polynomial smoothing algorithm.
The invention has the beneficial effects that:
1) the method reduces implementation cost and avoids the massive data processing required when a lidar is used; it places no restriction on target type and has a wide range of applicability;
2) compared with other existing target tracking methods, the method accounts for missed detections and false detections during target tracking, for the wide field of view and large number of targets seen by an unmanned aerial vehicle, and fully considers the unpredictable camera motion of the unmanned aerial vehicle in flight, reflecting the target tracking problem of an unmanned aerial vehicle in the air more comprehensively;
3) compared with traditional target tracking algorithms that adopt a single target feature, the method uses a deep learning network to extract appearance features and combines them with position, size and motion features, improving target tracking accuracy;
4) the invention applies registration to data association and provides a registration-association integration method, which simplifies the tracking flow and improves tracking accuracy and tracking speed.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
Referring to fig. 1 to 4, in the present embodiment, a low-light-level camera is used as an image data acquisition sensor, and an algorithm is written in a VS2019 environment to implement a multi-target tracking method for an unmanned aerial vehicle.
An unmanned aerial vehicle video multi-target tracking method is shown in fig. 1 and specifically comprises the following steps:
S1: initialize tracks according to the position and size information of the measurements in consecutive frames, specifically comprising:
track initialization is performed on the detection results of the first 5 frames of images; the correlation between targets is judged by thresholding the position and size information between the detections of consecutive frames. The size-similarity test mainly compares the difference between the two frames' measured sizes against the sum of those sizes, with a corresponding threshold; the position-similarity test is shown in formula (1):
in the above formula, ysIndicating the position of the measurement target at the previous time, ykAnd when the positions among the targets meet the formula, judging that the positions of the targets are similar, and finally respectively corresponding the targets of the previous 5 frames one by one through the judgment of similarity, thereby initializing the track and preparing for the association of the subsequent track and measured data.
S2: predict the position state of the track, specifically comprising:
the state prediction based on one-step (Kalman) prediction and the state prediction based on curve fitting are linearly fused, weighted by the curve-prediction confidence obtained from track management, to obtain the predicted track position.
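A minimal sketch of this fusion step, assuming the Kalman one-step prediction is supplied externally and the curve prediction is a per-coordinate polynomial fit over recent positions (the polynomial degree and function names are illustrative):

```python
import numpy as np

def curve_fit_predict(history, degree=1):
    # Predict the next position by fitting a low-order polynomial to the
    # track's recent positions (one polynomial per coordinate).
    history = np.asarray(history, dtype=float)  # shape (T, 2)
    t = np.arange(len(history))
    preds = [np.polyval(np.polyfit(t, history[:, c], degree), len(history))
             for c in range(history.shape[1])]
    return np.array(preds)

def predict_position(kalman_pred, curve_pred, curve_conf):
    # Linear fusion of the one-step (Kalman) prediction and the curve-fitting
    # prediction, weighted by the curve confidence from track management
    # (curve_conf in [0, 1]).
    kalman_pred = np.asarray(kalman_pred, dtype=float)
    curve_pred = np.asarray(curve_pred, dtype=float)
    return curve_conf * curve_pred + (1.0 - curve_conf) * kalman_pred
```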
S3: the method for carrying out registration and association integration on the track and measurement specifically comprises the following steps:
S31: obtain the tracks to be matched and the measurements; taking the centre point of each target's rectangular frame, define the predicted track lattice as x = {x_i, i = 1, …, n} and the measurement lattice as y = {y_j, j = 1, …, m};
S32: calculate the appearance difference matrix between the predicted track x_i and the measurement y_j: using the pre-trained deep feature network VGG19 with its fully connected layers removed, extract the deep convolutional feature maps conv3-4, conv4-4 and conv5-4 of the track and the measurement, extract the histogram-of-oriented-gradients feature of the target, and fuse the features by linear combination to obtain the image feature vectors of the target track and the target measurement; compute the appearance similarity between track and measurement with a bilinear similarity metric, and take the negative logarithm of the similarity matrix to obtain the appearance difference matrix;
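The final two operations of S32 can be sketched as follows, purely for illustration. The VGG19/HOG feature extraction is assumed to have been done upstream and is not shown; the bilinear matrix M defaults to the identity (reducing to an inner product), and the clipping constant is an assumption to keep the logarithm defined.

```python
import numpy as np

def bilinear_similarity(f_track, f_meas, M=None):
    # Bilinear similarity s = f_track^T M f_meas between two image feature
    # vectors (e.g. fused VGG19 conv + HOG features, assumed extracted and
    # L2-normalised upstream). M defaults to the identity matrix.
    f_track = np.asarray(f_track, dtype=float)
    f_meas = np.asarray(f_meas, dtype=float)
    if M is None:
        M = np.eye(len(f_track))
    return float(f_track @ M @ f_meas)

def appearance_difference_matrix(track_feats, meas_feats, eps=1e-12):
    # Appearance difference = -log(similarity), elementwise over all
    # track/measurement pairs; similarities are clipped to (0, 1].
    n, m = len(track_feats), len(meas_feats)
    D = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = np.clip(bilinear_similarity(track_feats[i], meas_feats[j]), eps, 1.0)
            D[i, j] = -np.log(s)
    return D
```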
S33: calculate the size difference matrix between the predicted track x_i and the measurement y_j: obtain the width and height of the track and the measurement and the area of each, compute the width, height and area similarities between track and measurement, combine them linearly into a size similarity matrix, and take the negative logarithm of the similarity matrix to obtain the size difference matrix;
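As an illustrative sketch only: the patent does not give the exact width/height/area similarity formulas, so min/max ratios and equal combination weights are assumed here.

```python
import numpy as np

def size_difference_matrix(track_boxes, meas_boxes, weights=(1/3, 1/3, 1/3), eps=1e-12):
    # Size difference between track and measurement boxes given as (w, h).
    # Width, height and area similarities are taken as min/max ratios
    # (an assumption), linearly combined, then mapped through -log.
    n, m = len(track_boxes), len(meas_boxes)
    D = np.zeros((n, m))
    for i, (tw, th) in enumerate(track_boxes):
        for j, (mw, mh) in enumerate(meas_boxes):
            s_w = min(tw, mw) / max(tw, mw)
            s_h = min(th, mh) / max(th, mh)
            s_a = min(tw * th, mw * mh) / max(tw * th, mw * mh)
            s = weights[0] * s_w + weights[1] * s_h + weights[2] * s_a
            D[i, j] = -np.log(max(s, eps))
    return D
```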
S34: calculate the motion difference matrix between the predicted track x_i and the measurement y_j: the motion difference matrix is obtained by computing the Euclidean distance between the track x and the measurement y; since the position of the track x changes with each iteration of the spatial transformation, the motion difference matrix is updated accordingly;
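The Euclidean-distance step above, vectorised over all track/measurement pairs (an illustrative sketch; the function name is not from the disclosure):

```python
import numpy as np

def motion_difference_matrix(track_pts, meas_pts):
    # Pairwise Euclidean distances between the (transformed) track centre
    # points and the measurement centre points. Recomputed each iteration,
    # since the track positions move with the spatial transformation.
    track_pts = np.asarray(track_pts, dtype=float)  # (n, 2)
    meas_pts = np.asarray(meas_pts, dtype=float)    # (m, 2)
    diff = track_pts[:, None, :] - meas_pts[None, :, :]
    return np.linalg.norm(diff, axis=2)
```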
S35: linearly combine the appearance, size and motion difference matrices of the track and the measurement into the mixed feature difference matrix L_ij; the more similar the features of track x_i and measurement y_j, the smaller L_ij and the greater the likelihood that they originate from the same target. Subsequently, the spatial transformation function between the track lattice and the measurement lattice is constructed:
f(x_i) = x_i · d + φ(x_i) · ω

wherein d represents the affine coefficient matrix, ω represents the non-rigid deformation coefficient matrix, and φ(x_i) is the kernel vector of the thin-plate spline interpolation function, an n-dimensional row vector whose elements are defined on the two-dimensional image as:

φ_c(x_i) = ||x_c − x_i||² log ||x_c − x_i||

where x_c is a group of control points selected from the lattice x; a schematic diagram of the spatial transformation is shown in fig. 2;
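A thin-plate spline transformation of this kind can be sketched as below. This is a generic TPS evaluation, not the patent's exact implementation; the homogeneous-coordinate form of the affine part and the convention phi(0) = 0 at the kernel singularity are assumptions.

```python
import numpy as np

def tps_kernel(points, control_points):
    # Thin-plate spline kernel: phi_c(x) = r^2 * log(r), with r the
    # Euclidean distance from x to control point x_c (phi = 0 at r = 0).
    pts = np.asarray(points, dtype=float)
    ctrl = np.asarray(control_points, dtype=float)
    r = np.linalg.norm(pts[:, None, :] - ctrl[None, :, :], axis=2)
    K = np.zeros_like(r)
    mask = r > 0
    K[mask] = r[mask] ** 2 * np.log(r[mask])
    return K  # shape (num_points, num_control)

def tps_transform(points, control_points, d, w):
    # Apply f(x) = [1, x] . d + phi(x) . w, where d (3x2) holds the affine
    # coefficients in homogeneous form and w (c x 2) the non-rigid
    # deformation coefficients.
    pts = np.asarray(points, dtype=float)
    ones = np.ones((len(pts), 1))
    affine = np.hstack([ones, pts]) @ np.asarray(d, dtype=float)
    bending = tps_kernel(pts, control_points) @ np.asarray(w, dtype=float)
    return affine + bending
```

With d set to the identity affine map and w = 0, the transform leaves the lattice unchanged, which is the natural initialisation for the iterative solve.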
S36: based on the mixed feature difference matrix and the spatial transformation function, construct the registration-association integrated objective function between track and measurement, as shown in formula (2): wherein P(L_ij) is the association probability between track and measurement: when track x_i corresponds to measurement y_j, P(L_ij) takes the value 1, otherwise P(L_ij) takes the value 0; the regularization parameter λ is used to constrain the non-rigid deformation coefficient matrix ω;
S37: after the registration-association integrated objective function between track and measurement is constructed, solve the mixed feature difference matrix L_ij with a linear assignment algorithm to obtain the association probability P(L_ij); then solve the objective function by orthogonal-triangular (QR) decomposition to obtain the affine coefficients d and the non-rigid deformation coefficients ω; iterate until the average distance D between track and measurement falls below the corresponding threshold Dt, yielding the optimal track-measurement association result. A flow chart of the registration-association integration step is shown in fig. 3.
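The alternating structure of this solve (assignment, then transform fitting, repeated until the mean distance drops below Dt) can be sketched as follows. This is a deliberately simplified stand-in: the cost here is Euclidean distance only rather than the mixed feature difference, only the affine part of the transformation is fitted (via `numpy.linalg.lstsq`, which is a QR/SVD-based least-squares solve), and the regularization term λ is omitted.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def solve_registration_association(track_pts, meas_pts, max_iter=20, dist_thresh=1e-3):
    # Alternate (1) linear assignment on the current cost matrix and
    # (2) a least-squares fit of an affine transform moving tracks toward
    # their assigned measurements, until the mean track-measurement
    # distance D falls below the threshold Dt.
    x = np.asarray(track_pts, dtype=float)
    y = np.asarray(meas_pts, dtype=float)
    rows = cols = None
    for _ in range(max_iter):
        cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)          # association step
        A = np.hstack([np.ones((len(rows), 1)), x[rows]])  # homogeneous coords
        d, *_ = np.linalg.lstsq(A, y[cols], rcond=None)    # registration step
        x = np.hstack([np.ones((len(x), 1)), x]) @ d       # transform all tracks
        if cost[rows, cols].mean() < dist_thresh:          # mean distance D < Dt
            break
    return list(zip(rows.tolist(), cols.tolist())), x
```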
S4: introduce the association result and the transformed tracks into the Kalman state equation and re-match the unmatched tracks and measurements, specifically comprising: the tracks transformed by the registration-association integrated spatial transformation are introduced into the Kalman filtering method to construct a new Kalman state prediction equation, which is brought into the Kalman filtering equations to update the track states; the motion covariance matrix is generally set large so as to tolerate unpredictable camera motion. Each matched track's Kalman state is updated through the Kalman filter update equation according to the obtained correspondence; for tracks unmatched at the previous time, the current-time state is predicted from the track's own historical motion state, and the unmatched tracks and the unmatched measurements of the current time are then brought into the registration-association integration step for association.
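A minimal constant-velocity Kalman filter illustrating the predict/update cycle referred to above, with a deliberately large process covariance Q to absorb unpredictable camera motion as this step suggests. The state layout and numeric values are illustrative assumptions, not the patent's specific equations.

```python
import numpy as np

class CVKalman:
    # Constant-velocity Kalman filter over the state [cx, cy, vx, vy].
    def __init__(self, cx, cy, q=10.0, r=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, 1, 0],
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q   # large: tolerate unpredictable camera motion
        self.R = np.eye(2) * r

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        z = np.asarray(z, dtype=float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

With Q large relative to R, the update leans heavily on the measurement, which is the intended behaviour when the camera itself may have moved between frames.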
S5: establish a track management method: update track states, construct track curve expressions, and handle track initiation and termination, false-detection deletion, missed-detection recovery and track smoothing, specifically comprising the following steps:
S51: update the track position according to the correspondence via the Kalman filter update equation, update the track size as the average of the width and height of the track frame and those of the corresponding measurement frame, and update the track appearance from the corresponding measurement's appearance.
S52: store the historical track motion information, fit the historical track positions to construct a mathematical expression of the curve, and assign the expression a confidence: the confidence of the curve expression is set high if the curve has been continuously fitted over more than a certain number of frames and is smooth;
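One illustrative way to realise this confidence rule (the disclosure does not specify the frame count, smoothness measure or confidence values, so `min_frames`, `resid_thresh` and the binary confidence are placeholders):

```python
import numpy as np

def fit_track_curve(positions, degree=2, min_frames=10, resid_thresh=2.0):
    # Fit a polynomial per coordinate to the stored track positions and
    # derive a confidence for the curve expression: high only when the
    # track has been fitted over enough consecutive frames AND the fit is
    # smooth (small mean residual).
    pos = np.asarray(positions, dtype=float)   # shape (T, 2)
    t = np.arange(len(pos))
    coeffs = [np.polyfit(t, pos[:, c], degree) for c in range(2)]
    resid = np.mean([np.abs(np.polyval(coeffs[c], t) - pos[:, c]).mean()
                     for c in range(2)])
    confidence = 1.0 if (len(pos) >= min_frames and resid < resid_thresh) else 0.0
    return coeffs, confidence
```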
S53: establish a method for judging track and measurement states: a track is judged terminated if its position exceeds the image boundary or it fails to match a corresponding measurement in three consecutive frames; a track that obtains no correspondence at all within three consecutive frames is judged a false detection and deleted. If only some frames of a track are unmatched, the track contains missed detections, and each missed position is recovered as the average of the positions in the frames immediately before and after it. Finally, tracks judged terminated are smoothed with a polynomial smoothing algorithm. A schematic diagram of unmanned aerial vehicle multi-target tracking is shown in fig. 4.
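The missed-detection recovery and terminal smoothing described in S53 can be sketched as follows (illustrative only; the polynomial degree and the handling of multi-frame gaps are assumptions beyond the disclosure):

```python
import numpy as np

def recover_missed(positions):
    # Fill missed-detection frames (None entries) with the average of the
    # neighbouring frames' positions, as step S53 describes.
    out = list(positions)
    for k, p in enumerate(out):
        if (p is None and 0 < k < len(out) - 1
                and out[k - 1] is not None and out[k + 1] is not None):
            out[k] = tuple((np.asarray(out[k - 1], float)
                            + np.asarray(out[k + 1], float)) / 2)
    return out

def smooth_track(positions, degree=3):
    # Polynomial smoothing of a terminated track: refit each coordinate with
    # a low-order polynomial and replace the positions by the fitted values.
    pos = np.asarray(positions, dtype=float)
    t = np.arange(len(pos))
    deg = min(degree, len(pos) - 1)
    return np.stack([np.polyval(np.polyfit(t, pos[:, c], deg), t)
                     for c in range(pos.shape[1])], axis=1)
```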
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.