CN112634325A - Unmanned aerial vehicle video multi-target tracking method - Google Patents

Unmanned aerial vehicle video multi-target tracking method

Info

Publication number: CN112634325A (application CN202011455322.7A; granted as CN112634325B)
Authority: CN (China)
Prior art keywords: track; measurement; matrix; unmanned aerial vehicle
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 朱浩 (Zhu Hao), 余仁伟 (Yu Renwei), 蔡昌恺 (Cai Changkai), 胡满琳 (Hu Manlin), 陈正新 (Chen Zhengxin), 鲁尔沐 (Lu Ermu)
Current and original assignee: Chongqing University of Post and Telecommunications
Application filed by Chongqing University of Post and Telecommunications
Priority date and filing date: 2020-12-10
Publication of CN112634325A: 2021-04-09; application granted and CN112634325B published: 2022-09-09

Classifications

    • G06T7/246 — Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/277 — Image analysis; analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06N3/045 — Neural networks; combinations of networks
    • G06N3/08 — Neural networks; learning methods
    • G06T2207/10016 — Image acquisition modality: video; image sequence
    • G06T2207/20024 — Special algorithmic details: filtering details
    • G06T2207/20076 — Special algorithmic details: probabilistic image processing
    • G06T2207/20081 — Special algorithmic details: training; learning
    • G06T2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06T2207/30241 — Subject of image: trajectory

Abstract

The invention relates to an unmanned aerial vehicle video multi-target tracking method and belongs to the field of unmanned aerial vehicle environment perception. The method comprises the following steps: first, an image detection result is obtained, tracks are initialized using the position and size information between targets, and the track states are predicted; then, the tracks and the measurements are registered and associated in an integrated step to obtain the correspondence between them; next, the association result and the transformed tracks are introduced into a Kalman state equation, and the unmatched tracks and measurements are matched a second time; finally, track management is performed to realize track initiation and termination, track updating, false-detection deletion and missed-detection recovery. The invention can track targets accurately and stably using only a camera.

Description

Unmanned aerial vehicle video multi-target tracking method
Technical Field
The invention belongs to the field of unmanned aerial vehicle environment perception, and relates to an unmanned aerial vehicle video multi-target tracking method.
Background
Driven by smart cities, intelligent transportation has become a continuing focus of public attention. In recent years, with the development of unmanned aerial vehicle (UAV) technology, low-cost and highly stable commercial UAVs have become increasingly widespread and are applied in many aspects of production and daily life, such as traffic monitoring, public security patrol, weather early warning, agricultural production, disaster relief and aerial photography. With the development of artificial intelligence and the continuous progress of computer image processing, UAV-based video target tracking has become a research hotspot, and UAV target tracking plays an extremely important role in assisting the automatic driving of intelligent vehicles, campus safety monitoring, group behavior analysis, disaster relief and the like.
At present, a large number of UAV multi-target tracking algorithms have been studied at home and abroad, and tracking-by-detection has become the dominant tracking framework, although its performance is limited by the quality of the detection results. Most tracking algorithms consider only a single appearance feature or motion feature and cannot combine them effectively, and most do not fully account for target scale changes, illumination changes and the like in UAV target tracking. Moreover, most tracking algorithms do not consider that, when a UAV flies, external factors such as air currents cause unpredictable camera motion, which in turn causes target drift and directly degrades the tracking result.
Disclosure of Invention
In view of this, the invention aims to provide an unmanned aerial vehicle video multi-target tracking method for application scenarios such as video behavior analysis, scene understanding, traffic management and security prevention and control. It mainly addresses the problems of target missed detection, false detection and position drift in UAV video multi-target tracking under complex conditions such as illumination changes and airflow interference. The method realizes the application of UAV multi-target tracking to environment perception and provides a multi-target tracking solution for UAVs in complex environments; a registration-association integrated method simplifies the tracking pipeline and improves tracking accuracy and speed.
In order to achieve the purpose, the invention provides the following technical scheme:
an unmanned aerial vehicle video multi-target tracking method specifically comprises the following steps:
s1: acquiring an image detection result, and initializing tracks using the position and size information of targets in consecutive frames;
s2: predicting the position state of the track;
s3: carrying out registration and association integration on the track and the measurement by adopting a thin plate spline interpolation registration algorithm, and iteratively solving the corresponding relation between the track and the measurement;
s4: introducing the corresponding relation between the track and the measurement and the transformed track into a Kalman state equation, and performing secondary matching on the unmatched track and the measurement;
s5: establishing a track management method: updating track states, constructing a track curve expression, and handling track initiation and termination, false-detection deletion, missed-detection recovery and track smoothing.
Further, step S1 specifically includes: performing track initialization on the detection results of the first 5 frames of images of the video; calculating a similarity cost matrix from the positional proximity and size information between the measurements of two frames; and judging whether targets are similar by setting corresponding thresholds, so that they can be associated with each other.
Further, step S2 specifically includes: according to the curve function expression of the track obtained from track management, fitting the curve expression to predict the state at the next moment, and combining this with a one-step prediction that assumes uniform linear target motion and uses the corresponding Kalman filtering state transition matrix; finally, the one-step prediction and the curve-fitting prediction are linearly fused through the confidence of the curve prediction obtained from track management to obtain the predicted track position.
Further, step S3 specifically includes the following steps:
s31: obtaining the tracks to be matched and the measurements, and, taking the center position of each target's rectangular bounding box, defining the predicted track point set as x = {x_i, i = 1, …, n} and the measurement point set as y = {y_j, j = 1, …, m};
S32: calculating the appearance difference matrix L^a_ij between predicted track x_i and measurement y_j: using the pre-trained deep feature network VGG19 with its fully connected layers removed, extracting the deep convolutional feature maps of layers conv3-4, conv4-4 and conv5-4 for the track and the measurement, and extracting the histogram-of-oriented-gradients feature of the target; fusing these features by linear combination to obtain the image feature vectors of the target track and the target measurement; calculating the appearance similarity of track and measurement with a bilinear similarity measure; and taking the negative logarithm of the similarity matrix to obtain the appearance difference matrix L^a_ij;
S33: calculating the size difference matrix L^s_ij between predicted track x_i and measurement y_j: obtaining the widths and heights of the track and the measurement and the areas of the target track and the target measurement; calculating the width, length and area similarities between track and measurement; obtaining a size similarity matrix by linear combination; and taking the negative logarithm of the similarity matrix to obtain the size difference matrix L^s_ij;
S34: calculating the motion difference matrix L^m_ij between predicted track x_i and measurement y_j as the Euclidean distance between track and measurement, L^m_ij = ‖f(x_i) − y_j‖;
S35: after obtaining the appearance, size and motion difference matrices, fusing them by linear combination into a mixed feature difference matrix L_ij, and then constructing a spatial transformation function f(x_i) between the track and the measurements;
S36: constructing a registration-association integrated objective function between the track and the measurement based on the mixed feature difference matrix and the spatial transformation function;
s37: iteratively solving the registration-association integrated objective function using a linear assignment algorithm and orthogonal-triangular (QR) decomposition until the average distance between track and measurement is smaller than the corresponding threshold, obtaining the optimal correspondence matrix between track and measurement.
Further, in step S35, the spatial transformation function between the track and the measurement is constructed as:

f(x_i) = x_i·d + φ(x_i)·ω

where d denotes the affine coefficient matrix, ω denotes the non-rigid deformation coefficient matrix, and φ(x_i) is the kernel vector of the thin-plate spline interpolation function, an n-dimensional row vector whose component for control point x_c is defined on the two-dimensional image as:

φ_c(x_i) = ‖x_c − x_i‖² log ‖x_c − x_i‖²

where x_c is a set of control points selected from the point set x.
Further, in step S36, the registration-association integrated objective function between the track and the measurement, constructed based on the mixed feature difference matrix and the spatial transformation function, is:

E(d, ω, P) = Σ_{i=1..n} Σ_{j=1..m} P(L_ij)·L_ij + λ‖ω‖²

where P(L_ij) is the association probability of track and measurement: when track x_i corresponds to measurement y_j, P(L_ij) takes the value 1, otherwise 0; the regularization parameter λ adjusts the non-rigid deformation coefficient matrix ω.
Further, step S37 specifically includes: after constructing the registration-association integrated objective function of track and measurement, solving the mixed feature difference matrix L_ij of track and measurement with a linear assignment algorithm to obtain the association probabilities P(L_ij); then applying orthogonal-triangular (QR) decomposition to the objective function to obtain the affine coefficients d_i and the non-rigid deformation coefficients ω_i; and iterating this step until the average distance D between track and measurement is smaller than the corresponding threshold Dt, obtaining the optimal association result of track and measurement.
Further, step S4 specifically includes: substituting the transformed tracks solved by the registration-association integrated method into the Kalman state prediction equation; updating the Kalman states of the corresponding tracks according to the obtained correspondence; for tracks unmatched at the previous moment, predicting the current state from each track's own historical motion state; and then feeding the unmatched tracks and the unmatched measurements of the current moment back into the registration-association integration step for association.
Further, step S5 specifically includes the following steps:
s51: updating the track position according to the correspondence based on the Kalman filtering update equation, updating the track size using the average of the width and height of the track box and of the corresponding measurement box, and updating the track appearance according to the corresponding measurement appearance;
s52: storing historical track motion information, linearly fitting the historical track positions, constructing a mathematical curve expression and establishing a corresponding confidence for it; if the curve has been fitted continuously for more than a certain number of frames and is smooth, a high confidence is set for the curve expression;
s53: establishing a method for judging track and measurement states: if the track position exceeds the image boundary or the track has had no matching measurement for three consecutive frames, the track is judged terminated; if the track position has no correspondence in three consecutive frames, the track is judged a false detection and deleted; if only some frames of the track are unmatched, the track contains missed detections, and the missed positions are recovered using the average of the positions in the frames before and after the missed frame; finally, tracks judged terminated are smoothed with a polynomial smoothing algorithm.
The invention has the beneficial effects that:
1) the method reduces implementation cost, avoids the huge data volumes that must be processed when lidar is used, imposes no limit on the target type, and has a wide range of applicability;
2) compared with other existing target tracking methods, the method considers missed detections and false detections during UAV target tracking, the wide field of view and large number of targets seen from a UAV, and the unpredictable camera motion of a UAV in flight, thus reflecting the airborne target tracking problem more comprehensively;
3) whereas traditional target tracking algorithms use a single target feature, the method combines appearance features extracted by a deep learning network with position, size and motion features, improving target tracking accuracy;
4) the invention applies registration to data association and proposes a registration-association integrated method, which simplifies the tracking pipeline and improves tracking accuracy and speed.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a main flow chart of the unmanned aerial vehicle video multi-target tracking method of the present invention;
FIG. 2 is a schematic diagram of a spatial transformation according to an embodiment of the present invention;
FIG. 3 is a flowchart of an embodiment of the invention for registration and association integration;
fig. 4 is a schematic diagram of multi-target tracking of the unmanned aerial vehicle in the embodiment of the invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
Referring to fig. 1 to 4, in the present embodiment a low-light-level camera is used as the image data acquisition sensor, and the algorithm is written in a VS2019 environment to implement the unmanned aerial vehicle multi-target tracking method.
An unmanned aerial vehicle video multi-target tracking method is shown in fig. 1 and specifically comprises the following steps:
s1: initializing tracks from the position and size information of measurements in consecutive frames, specifically comprising:
performing track initialization on the detection results of the first 5 frames of images; the association between targets is judged from the position and size information between the detections of two frames with set thresholds. For the size similarity, the difference between the sizes of the two frames' measurements is compared against the sum of those sizes under a corresponding threshold; the position similarity is judged by formula (1):

‖y_k − y_s‖ ≤ τ_p    (1)

where y_s denotes the position of the measurement target at the previous moment, y_k the position at the current moment, and τ_p the position threshold. When the positions of two targets satisfy the formula, they are judged positionally similar. Through these similarity judgements the targets of the first 5 frames are matched one to one, thereby initializing the tracks and preparing for the subsequent association of tracks with measurement data.
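As a concrete illustration of this initialization step, a minimal Python sketch follows. The box representation (cx, cy, w, h), the helper names and the threshold values are assumptions made for the sketch; the patent only states that corresponding thresholds are set.

import numpy as np

def similar(prev_box, cur_box, pos_thresh=50.0, size_thresh=0.5):
    """Judge whether two detections (cx, cy, w, h) may be the same target."""
    (cx1, cy1, w1, h1), (cx2, cy2, w2, h2) = prev_box, cur_box
    # Position proximity, formula (1): Euclidean distance between centers.
    pos_ok = np.hypot(cx2 - cx1, cy2 - cy1) <= pos_thresh
    # Size proximity: size difference compared against the size sum.
    size_ok = abs(w1 * h1 - w2 * h2) / (w1 * h1 + w2 * h2) <= size_thresh
    return pos_ok and size_ok

def initialize_tracks(first_frames):
    """Chain the detections of the first 5 frames into initial tracks."""
    tracks = [[det] for det in first_frames[0]]
    for k, frame in enumerate(first_frames[1:], start=1):
        for det in frame:
            for tr in tracks:
                # Extend a track at most once per frame (one detection per frame).
                if len(tr) == k and similar(tr[-1], det):
                    tr.append(det)
                    break
    # Keep only tracks observed in every one of the first frames.
    return [tr for tr in tracks if len(tr) == len(first_frames)]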
S2: predicting the position state of the track, specifically comprising:
according to the curve function expression of the track obtained from track management, fitting the curve expression to predict the state at the next moment, and combining this with a one-step prediction that assumes uniform linear target motion and uses the corresponding Kalman filtering state transition matrix; finally, the one-step prediction and the curve-fitting prediction are linearly fused through the confidence of the curve prediction obtained from track management to obtain the predicted track position.
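A minimal sketch of this fused prediction follows, assuming a constant-velocity Kalman state [x, y, vx, vy] with a unit frame interval and a quadratic curve fit; the confidence weighting shown is one plausible reading of the linear fusion described above.

import numpy as np

# Kalman state transition matrix for uniform linear motion of the state
# [x, y, vx, vy], assuming a unit time step between frames.
F = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])

def predict_position(state, history, curve_conf, fit_degree=2):
    """Fuse the one-step Kalman prediction with a curve-fit prediction.

    state:      current Kalman state [x, y, vx, vy]
    history:    list of past (frame_index, x, y) track positions
    curve_conf: curve-prediction confidence in [0, 1] from track management
    """
    one_step = (F @ state)[:2]          # uniform-linear-motion prediction
    if len(history) <= fit_degree or curve_conf == 0.0:
        return one_step
    t, xs, ys = (np.asarray(c, dtype=float) for c in zip(*history))
    px = np.polyfit(t, xs, fit_degree)  # fitted curve expression, x(t)
    py = np.polyfit(t, ys, fit_degree)  # fitted curve expression, y(t)
    curve = np.array([np.polyval(px, t[-1] + 1), np.polyval(py, t[-1] + 1)])
    # Linear fusion weighted by the curve-prediction confidence.
    return curve_conf * curve + (1.0 - curve_conf) * one_step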
S3: the method for carrying out registration and association integration on the track and measurement specifically comprises the following steps:
s31: obtaining the tracks to be matched and the measurements, and, taking the center position of each target's rectangular bounding box, defining the predicted track point set as x = {x_i, i = 1, …, n} and the measurement point set as y = {y_j, j = 1, …, m};
S32: calculating the appearance difference matrix L^a_ij between predicted track x_i and measurement y_j: using the pre-trained deep feature network VGG19 with its fully connected layers removed, extracting the deep convolutional feature maps of layers conv3-4, conv4-4 and conv5-4 for the track and the measurement, and extracting the histogram-of-oriented-gradients feature of the target; fusing these features by linear combination to obtain the image feature vectors of the target track and the target measurement; calculating the appearance similarity of track and measurement with a bilinear similarity measure; and taking the negative logarithm of the similarity matrix to obtain the appearance difference matrix L^a_ij;
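The appearance term could be sketched as follows. The torchvision layer indices assumed for conv3-4, conv4-4 and conv5-4, the global average pooling of the feature maps, the equal fusion weights and the identity bilinear form M are all illustrative assumptions; the patent does not specify them, and patches are assumed resized to a common shape beforehand.

import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from skimage.feature import hog

# VGG19 backbone without the fully connected (classifier) part.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
# Assumed positions of conv3-4, conv4-4 and conv5-4 in torchvision's VGG19.
CONV_LAYERS = (16, 25, 34)

def appearance_vector(patch):
    """Fused deep-convolutional + HOG feature vector for one target patch."""
    x = TF.to_tensor(patch).unsqueeze(0)                  # 1x3xHxW in [0, 1]
    feats = []
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in CONV_LAYERS:
                v = x.mean(dim=(2, 3)).squeeze(0).numpy() # pool each feature map
                feats.append(v / (np.linalg.norm(v) + 1e-12))
    h = hog(np.asarray(patch, dtype=float).mean(axis=2))  # HOG on the gray patch
    feats.append(h / (np.linalg.norm(h) + 1e-12))
    return np.concatenate(feats)                          # equal-weight linear fusion

def appearance_difference(track_vecs, meas_vecs, M=None):
    """L^a_ij = -log s_ij, with bilinear similarity s_ij = u_i^T M v_j."""
    U, V = np.stack(track_vecs), np.stack(meas_vecs)
    if M is None:
        M = np.eye(U.shape[1])                            # identity form assumed
    S = U @ M @ V.T
    S = (S - S.min()) / (S.max() - S.min() + 1e-12)       # map similarities to [0, 1]
    return -np.log(S + 1e-12)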
S33: calculating the size difference matrix L^s_ij between predicted track x_i and measurement y_j: obtaining the widths and heights of the track and the measurement and the areas of the target track and the target measurement; calculating the width, length and area similarities between track and measurement; obtaining a size similarity matrix by linear combination; and taking the negative logarithm of the similarity matrix to obtain the size difference matrix L^s_ij;
S34: calculating the motion difference matrix L^m_ij between predicted track x_i and measurement y_j as the Euclidean distance between track and measurement, L^m_ij = ‖f(x_i) − y_j‖; note that the position of the track x changes with each iteration of the spatial transformation, so the motion difference matrix is recomputed at every iteration;
S35: linearly combining the appearance, size and motion difference matrices of track and measurement into the mixed feature difference matrix L_ij = α₁·L^a_ij + α₂·L^s_ij + α₃·L^m_ij, where α₁, α₂, α₃ are the combination weights. The more similar the features of track x_i and measurement y_j, the smaller L_ij and the greater the likelihood that they originate from the same target. Subsequently, the spatial transformation function between the track point set and the measurement point set is constructed:

f(x_i) = x_i·d + φ(x_i)·ω

where d denotes the affine coefficient matrix, ω denotes the non-rigid deformation coefficient matrix, and φ(x_i) is the kernel vector of the thin-plate spline interpolation function, an n-dimensional row vector whose component for control point x_c is defined on the two-dimensional image as:

φ_c(x_i) = ‖x_c − x_i‖² log ‖x_c − x_i‖²

where x_c is a set of control points selected from the point set x; the schematic diagram of the spatial transformation is shown in fig. 2;
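A minimal NumPy sketch of this thin-plate spline transformation follows; representing the affine part in homogeneous coordinates and choosing the control points directly from the track point set are assumptions of the sketch.

import numpy as np

def tps_kernel(points, controls):
    """phi(x_i): thin-plate spline kernel, U(r) = r^2 * log(r^2), in 2D."""
    d2 = ((points[:, None, :] - controls[None, :, :]) ** 2).sum(axis=-1)
    return np.where(d2 > 0.0, d2 * np.log(d2 + 1e-12), 0.0)

def tps_transform(x, d, omega, controls):
    """f(x_i) = x_i * d + phi(x_i) * omega for a 2D point set.

    x:        (n, 2) track points
    d:        (3, 2) affine coefficients acting on homogeneous [1, x, y]
    omega:    (n_c, 2) non-rigid deformation coefficients
    controls: (n_c, 2) control points selected from x
    """
    xh = np.hstack([np.ones((len(x), 1)), x])  # homogeneous coordinates
    return xh @ d + tps_kernel(x, controls) @ omega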
s36: based on the mixed feature difference matrix and the spatial transformation function, constructing the registration-association integrated objective function between track and measurement, as shown in formula (2):

E(d, ω, P) = Σ_{i=1..n} Σ_{j=1..m} P(L_ij)·L_ij + λ‖ω‖²    (2)

where P(L_ij) is the association probability of track and measurement: when track x_i corresponds to measurement y_j, P(L_ij) takes the value 1, otherwise 0; the regularization parameter λ adjusts the non-rigid deformation coefficient matrix ω;
s37: after constructing the registration-association integrated objective function of track and measurement, solving the mixed feature difference matrix L_ij with a linear assignment algorithm to obtain the association probabilities P(L_ij); then applying orthogonal-triangular (QR) decomposition to the objective function to obtain the affine coefficients d_i and the non-rigid deformation coefficients ω_i; iterating until the average distance D between track and measurement is smaller than the corresponding threshold Dt, obtaining the optimal association result of track and measurement; the flow of the registration-association integration step is shown in fig. 3.
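The alternating solution of step S37 might look like the sketch below, reusing tps_kernel and tps_transform from the sketch above. It assumes SciPy's Hungarian solver for the linear assignment step and solves a λ-regularized thin-plate spline system by QR (orthogonal-triangular) decomposition; the threshold and iteration limits are illustrative values.

import numpy as np
from scipy.optimize import linear_sum_assignment

def fit_tps(xa, ya, lam):
    """Fit (d, omega) so that the TPS maps associated points xa to ya.

    Solves the standard regularized TPS block system by QR decomposition;
    lam is the regularizer acting on the non-rigid coefficients omega.
    """
    k = len(xa)
    K = tps_kernel(xa, xa) + lam * np.eye(k)
    P = np.hstack([np.ones((k, 1)), xa])
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.vstack([ya, np.zeros((3, 2))])
    Q, R = np.linalg.qr(A)                 # orthogonal-triangular decomposition
    sol = np.linalg.solve(R, Q.T @ b)
    return sol[k:], sol[:k]                # d (3, 2), omega (k, 2)

def register_and_associate(x, y, mixed_cost, lam=0.1, d_t=1.0, max_iter=20):
    """Alternate linear assignment and TPS re-fitting until convergence.

    x, y:       (n, 2) track points and (m, 2) measurement points
    mixed_cost: function mapping warped track points to the (n, m) matrix L_ij
    """
    x_warped = x.copy()
    rows = cols = None
    for _ in range(max_iter):
        L = mixed_cost(x_warped)
        rows, cols = linear_sum_assignment(L)          # association step
        d, omega = fit_tps(x[rows], y[cols], lam)      # registration step
        x_warped = tps_transform(x, d, omega, x[rows])
        dist = np.linalg.norm(x_warped[rows] - y[cols], axis=1).mean()
        if dist < d_t:                                 # average-distance threshold Dt
            break
    return rows, cols, x_warped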
S4: introducing the association result and the transformed tracks into the Kalman state equation and re-matching the unmatched tracks and measurements, specifically comprising: the tracks transformed by the registration-association integrated spatial transformation are introduced into the Kalman filtering method to construct a new Kalman state prediction equation, which is brought into the Kalman filtering equations to update the track states; the motion covariance matrix is generally set large in order to handle unpredictable camera motion. The Kalman state of each matched track is updated from its correspondence with the Kalman filtering update equation; for tracks unmatched at the previous moment, the current state is predicted from the track's own historical motion state; the unmatched tracks and the unmatched measurements of the current moment are then fed back into the registration-association integration step for association.
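A sketch of this prediction and update follows, reusing the transition matrix F from the earlier prediction sketch; the enlarged process-noise covariance Q reflects the advice above to set the motion covariance large, and all noise values are illustrative.

import numpy as np

H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])   # observe position only
Q = np.eye(4) * 10.0               # large process noise absorbs camera motion
R = np.eye(2) * 1.0                # measurement noise (illustrative)

def kalman_predict(state, P, warped_pos=None):
    """Predict with F; if registration supplies a transformed track
    position, use it as the predicted position (new prediction equation)."""
    state = F @ state
    P = F @ P @ F.T + Q
    if warped_pos is not None:
        state[:2] = warped_pos     # registration-corrected position
    return state, P

def kalman_update(state, P, z):
    """Standard Kalman update with the associated measurement z = (x, y)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    state = state + K @ (z - H @ state)
    P = (np.eye(4) - K @ H) @ P
    return state, P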
S5: establishing the track management method: updating track states, constructing track curve expressions, and handling track initiation and termination, false-detection deletion, missed-detection recovery and track smoothing; specifically comprising the following steps:
s51: updating the track position according to the correspondence based on the Kalman filtering update equation, updating the track size using the average of the width and height of the track box and of the corresponding measurement box, and updating the track appearance according to the corresponding measurement appearance.
s52: storing historical track motion information, linearly fitting the historical track positions, constructing a mathematical curve expression and establishing a corresponding confidence for it; if the curve has been fitted continuously for more than a certain number of frames and is smooth, a high confidence is set for the curve expression;
s53: establishing a method for judging track and measurement states: if the track position exceeds the image boundary or the track has had no matching measurement for three consecutive frames, the track is judged terminated; if the track position has no correspondence in three consecutive frames, the track is judged a false detection and deleted; if only some frames of the track are unmatched, the track contains missed detections, and the missed positions are recovered using the average of the positions in the frames before and after the missed frame; finally, tracks judged terminated are smoothed with a polynomial smoothing algorithm; the multi-target tracking schematic diagram of the unmanned aerial vehicle is shown in fig. 4.
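The track bookkeeping of step S5 could be sketched as follows; the Track fields, the three-frame miss limit and the Savitzky-Golay filter standing in for the polynomial smoothing algorithm are assumptions made for illustration.

import numpy as np
from scipy.signal import savgol_filter

class Track:
    def __init__(self, tid):
        self.tid = tid
        self.positions = []      # (frame, x, y); None marks a missed frame
        self.misses = 0
        self.terminated = False

def manage(track, match, frame, img_w, img_h, max_miss=3):
    """Update one track after association; handle termination and misses."""
    if match is not None:
        track.positions.append((frame, *match))
        track.misses = 0
    else:
        track.positions.append(None)   # missed detection in this frame
        track.misses += 1
    valid = [p for p in track.positions if p is not None]
    out_of_image = valid and not (0 <= valid[-1][1] < img_w
                                  and 0 <= valid[-1][2] < img_h)
    if track.misses >= max_miss or out_of_image:
        track.terminated = True
        recover_and_smooth(track)

def recover_and_smooth(track, window=7, order=3):
    """Fill missed frames with the mean of their neighbours, then smooth."""
    pts = track.positions
    for i, p in enumerate(pts):
        if p is None and 0 < i < len(pts) - 1 and pts[i - 1] and pts[i + 1]:
            pts[i] = (pts[i - 1][0] + 1,
                      (pts[i - 1][1] + pts[i + 1][1]) / 2.0,
                      (pts[i - 1][2] + pts[i + 1][2]) / 2.0)
    xy = np.array([(p[1], p[2]) for p in pts if p is not None])
    if len(xy) >= window:              # polynomial (Savitzky-Golay) smoothing
        xy = savgol_filter(xy, window, order, axis=0)
    track.smoothed = xy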
Finally, the above embodiments are only intended to illustrate the technical solutions of the invention, not to limit them; although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solutions without departing from their spirit and scope, and all such modifications should be covered by the claims of the invention.

Claims (9)

1. An unmanned aerial vehicle video multi-target tracking method is characterized by specifically comprising the following steps:
s1: acquiring an image detection result, and initializing tracks using the position and size information of targets in consecutive frames;
s2: predicting the position state of the track;
s3: carrying out registration and association integration on the track and the measurement by adopting a thin plate spline interpolation registration algorithm, and iteratively solving the corresponding relation between the track and the measurement;
s4: introducing the corresponding relation between the track and the measurement and the transformed track into a Kalman state equation, and performing secondary matching on the unmatched track and the measurement;
s5: establishing a track management method: updating track states, constructing a track curve expression, and handling track initiation and termination, false-detection deletion, missed-detection recovery and track smoothing.
2. The unmanned aerial vehicle video multi-target tracking method according to claim 1, wherein step S1 specifically includes: performing track initialization on the detection results of the first 5 frames of images of the video; calculating a similarity cost matrix from the positional proximity and size information between the measurements of two frames; and judging whether targets are similar by setting corresponding thresholds, so that they can be associated with each other.
3. The unmanned aerial vehicle video multi-target tracking method according to claim 1, wherein step S2 specifically includes: according to the curve function expression of the track obtained in track management, fitting the curve expression to predict the state at the next moment, and combining this with a one-step prediction of the track position state, wherein the one-step prediction assumes uniform linear target motion and defines the corresponding Kalman filtering state transition matrix; and finally linearly fusing the one-step prediction and the curve-fitting prediction through the confidence of the curve prediction obtained in track management to obtain the predicted track position.
4. The unmanned aerial vehicle video multi-target tracking method according to claim 1, wherein the step S3 specifically comprises the following steps:
s31: obtaining the tracks to be matched and the measurements, and, taking the center position of each target's rectangular bounding box, defining the predicted track point set as x = {x_i, i = 1, …, n} and the measurement point set as y = {y_j, j = 1, …, m};
S32: calculating the appearance difference matrix L^a_ij between predicted track x_i and measurement y_j: using the pre-trained deep feature network VGG19 with its fully connected layers removed, extracting the deep convolutional feature maps of layers conv3-4, conv4-4 and conv5-4 for the track and the measurement, and extracting the histogram-of-oriented-gradients feature of the target; fusing these features by linear combination to obtain the image feature vectors of the target track and the target measurement; calculating the appearance similarity of track and measurement with a bilinear similarity measure; and taking the negative logarithm of the similarity matrix to obtain the appearance difference matrix L^a_ij;
S33: calculating the size difference matrix L^s_ij between predicted track x_i and measurement y_j: obtaining the widths and heights of the track and the measurement and the areas of the target track and the target measurement; calculating the width, length and area similarities between track and measurement; obtaining a size similarity matrix by linear combination; and taking the negative logarithm of the similarity matrix to obtain the size difference matrix L^s_ij;
S34: calculating the motion difference matrix L^m_ij between predicted track x_i and measurement y_j as the Euclidean distance between track and measurement, L^m_ij = ‖f(x_i) − y_j‖;
S35: after obtaining the appearance, size and motion difference matrices, fusing them by linear combination into a mixed feature difference matrix L_ij, and then constructing a spatial transformation function f(x_i) between the track and the measurements;
S36: constructing a registration-association integrated objective function between the track and the measurement based on the mixed feature difference matrix and the spatial transformation function;
s37: iteratively solving the registration-association integrated objective function using a linear assignment algorithm and orthogonal-triangular (QR) decomposition until the average distance between track and measurement is smaller than the corresponding threshold, obtaining the optimal correspondence matrix between track and measurement.
5. The unmanned aerial vehicle video multi-target tracking method according to claim 4, wherein in step S35 the spatial transformation function between the track and the measurement is constructed as:

f(x_i) = x_i·d + φ(x_i)·ω

wherein d denotes the affine coefficient matrix, ω denotes the non-rigid deformation coefficient matrix, and φ(x_i) is the kernel vector of the thin-plate spline interpolation function, an n-dimensional row vector whose component for control point x_c is defined on the two-dimensional image as:

φ_c(x_i) = ‖x_c − x_i‖² log ‖x_c − x_i‖²

wherein x_c is a set of control points selected from the point set x.
6. The unmanned aerial vehicle video multi-target tracking method according to claim 4, wherein in step S36 the registration-association integrated objective function between the track and the measurement, constructed based on the mixed feature difference matrix and the spatial transformation function, is:

E(d, ω, P) = Σ_{i=1..n} Σ_{j=1..m} P(L_ij)·L_ij + λ‖ω‖²

wherein P(L_ij) is the association probability of track and measurement: when track x_i corresponds to measurement y_j, P(L_ij) takes the value 1, otherwise 0; and the regularization parameter λ adjusts the non-rigid deformation coefficient matrix ω.
7. The unmanned aerial vehicle video multi-target tracking method according to claim 4, wherein step S37 specifically includes: after constructing the registration-association integrated objective function of track and measurement, solving the mixed feature difference matrix L_ij of track and measurement with a linear assignment algorithm to obtain the association probabilities P(L_ij); then applying orthogonal-triangular (QR) decomposition to the objective function to obtain the affine coefficients d_i and the non-rigid deformation coefficients ω_i; and iterating until the average distance D between track and measurement is smaller than the corresponding threshold Dt, obtaining the optimal association result of track and measurement.
8. The unmanned aerial vehicle video multi-target tracking method according to claim 1, wherein step S4 specifically includes: substituting the transformed tracks solved by the registration-association integrated method into the Kalman state prediction equation; updating the Kalman states of the corresponding tracks according to the obtained correspondence; for tracks unmatched at the previous moment, predicting the current state from each track's own historical motion state; and then feeding the unmatched tracks and the unmatched measurements of the current moment back into the registration-association integration step for association.
9. The unmanned aerial vehicle video multi-target tracking method according to claim 1, wherein the step S5 specifically comprises the following steps:
s51: updating the track position according to the correspondence based on the Kalman filtering update equation, updating the track size using the average of the width and height of the track box and of the corresponding measurement box, and updating the track appearance according to the corresponding measurement appearance;
s52: storing historical track motion information, linearly fitting the historical track positions, constructing a mathematical curve expression and establishing a corresponding confidence for it; if the curve has been fitted continuously for more than a certain number of frames and is smooth, setting a high confidence for the curve expression;
s53: establishing a method for judging track and measurement states: if the track position exceeds the image boundary or the track has had no matching measurement for three consecutive frames, judging the track terminated; if the track position has no correspondence in three consecutive frames, judging the track a false detection and deleting it; if only some frames of the track are unmatched, the track contains missed detections, and recovering the missed positions using the average of the positions in the frames before and after the missed frame; and finally smoothing tracks judged terminated with a polynomial smoothing algorithm.
CN202011455322.7A (priority date 2020-12-10; filed 2020-12-10) — Unmanned aerial vehicle video multi-target tracking method — Active — granted as CN112634325B

Priority Applications (1)

Application Number — Priority Date — Filing Date — Title
CN202011455322.7A — 2020-12-10 — 2020-12-10 — Unmanned aerial vehicle video multi-target tracking method (granted as CN112634325B)


Publications (2)

Publication Number — Publication Date
CN112634325A — 2021-04-09
CN112634325B — 2022-09-09

Family

ID=75309966

Family Applications (1)

Application Number — Title — Priority Date — Filing Date
CN202011455322.7A — Unmanned aerial vehicle video multi-target tracking method — 2020-12-10 — 2020-12-10

Country Status (1)

Country Link
CN (1) CN112634325B (en)



Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156863A (en) * 2011-05-16 2011-08-17 天津大学 Cross-camera tracking method for multiple moving targets
US20130321583A1 (en) * 2012-05-16 2013-12-05 Gregory D. Hager Imaging system and method for use of same to determine metric scale of imaged bodily anatomy
CN103761745A (en) * 2013-07-31 2014-04-30 深圳大学 Estimation method and system for lung motion model
CN106909164A (en) * 2017-02-13 2017-06-30 清华大学 A kind of unmanned plane minimum time smooth track generation method
CN107316321A (en) * 2017-06-22 2017-11-03 电子科技大学 Multiple features fusion method for tracking target and the Weight number adaptively method based on comentropy
CN109859245A (en) * 2019-01-22 2019-06-07 深圳大学 Multi-object tracking method, device and the storage medium of video object
CN111862145A (en) * 2019-04-24 2020-10-30 四川大学 Target tracking method based on multi-scale pedestrian detection
CN110490901A (en) * 2019-07-15 2019-11-22 武汉大学 The pedestrian detection tracking of anti-attitudes vibration
CN111932580A (en) * 2020-07-03 2020-11-13 江苏大学 Road 3D vehicle tracking method and system based on Kalman filtering and Hungary algorithm
CN112033429A (en) * 2020-09-14 2020-12-04 吉林大学 Target-level multi-sensor fusion method for intelligent automobile

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
蔡昌恺 (CAI Changkai) et al.: "基于航迹全局和局部混合特征的航迹关联算法" [Track association algorithm based on global and local hybrid track features], 《仪器仪表学报》 (Chinese Journal of Scientific Instrument) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113674317A (en) * 2021-08-10 2021-11-19 深圳市捷顺科技实业股份有限公司 Vehicle tracking method and device of high-order video
CN113674317B (en) * 2021-08-10 2024-04-26 深圳市捷顺科技实业股份有限公司 Vehicle tracking method and device for high-level video
CN113421288A (en) * 2021-08-23 2021-09-21 杭州云栖智慧视通科技有限公司 Static track fragment improvement method in multi-target real-time track tracking
CN113589848A (en) * 2021-09-28 2021-11-02 西湖大学 Multi-unmanned aerial vehicle detection, positioning and tracking system and method based on machine vision
CN113589848B (en) * 2021-09-28 2022-02-08 西湖大学 Multi-unmanned aerial vehicle detection, positioning and tracking system and method based on machine vision
CN113784055A (en) * 2021-11-15 2021-12-10 北京中星时代科技有限公司 Anti-unmanned aerial vehicle image communication system based on shimmer night vision technology
CN114821812A (en) * 2022-06-24 2022-07-29 西南石油大学 Deep learning-based skeleton point action recognition method for pattern skating players
CN115063454A (en) * 2022-08-16 2022-09-16 浙江所托瑞安科技集团有限公司 Multi-target tracking matching method, device, terminal and storage medium
CN116523970A (en) * 2023-07-05 2023-08-01 之江实验室 Dynamic three-dimensional target tracking method and device based on secondary implicit matching
CN116523970B (en) * 2023-07-05 2023-10-20 之江实验室 Dynamic three-dimensional target tracking method and device based on secondary implicit matching
CN117495916A (en) * 2023-12-29 2024-02-02 苏州元脑智能科技有限公司 Multi-target track association method, device, communication equipment and storage medium
CN117495916B (en) * 2023-12-29 2024-03-01 苏州元脑智能科技有限公司 Multi-target track association method, device, communication equipment and storage medium

Also Published As

Publication number Publication date
CN112634325B (en) 2022-09-09


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant