CN111681260A - Multi-target tracking method and tracking system for aerial images of unmanned aerial vehicle
- Publication number: CN111681260A (application CN202010544149.1A)
- Authority: CN (China)
- Prior art keywords: tracking, frame, track, target, IOU
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06T7/223 - Image analysis; analysis of motion using block-matching
- G06F18/254 - Pattern recognition; fusion techniques of classification results, e.g. of results related to same input data
- G06F18/259 - Pattern recognition; fusion by voting
- G06T2207/10016 - Image acquisition modality: video; image sequence
- G06T2207/20081 - Special algorithmic details: training; learning
- G06T2207/20084 - Special algorithmic details: artificial neural networks [ANN]
- G06T2207/30241 - Subject of image: trajectory
Abstract
The invention discloses a multi-target tracking method and a multi-target tracking system for aerial images of unmanned aerial vehicles, comprising the following steps: a target detection step, in which a Faster RCNN detection network performs target detection on the input image sequence; an initial tracking track forming step, in which an IOU Tracker performs multi-target tracking on the detected image sequence to form initial tracking tracks; an update compensation step, in which a KCF tracker performs update compensation on the initial tracking tracks, executing a forward update and a backward update on each initial track in sequence; and a classification and fusion step, in which a designed tracklet vote (trajectory voting) algorithm uses a greedy algorithm and the IOU (intersection over union) to classify and fuse the multiple track segments of the KCF-updated tracks. The method achieves robust, accurate and efficient multi-target tracking even when the targets in the drone aerial image are numerous and small.
Description
Technical Field
The invention relates to the field of image processing and computer vision, in particular to a multi-target tracking method for aerial images of unmanned aerial vehicles.
Background
Multi-target tracking (MOT), which aims to track all objects of interest in a video sequence, plays a crucial role in applications such as video surveillance and automatic driving. In recent years, camera-equipped drones have been rapidly deployed in a wide range of applications, including agriculture, aerial photography, rapid delivery and surveillance. Images captured by unmanned aerial vehicles typically contain many classes and instances of targets, most of them small. Performing multi-target tracking on visual data collected from these platforms is therefore a challenging problem worth studying.
Multi-target tracking algorithms are divided into online tracking and offline tracking. Online methods use only the previous and current frames and are therefore suitable for real-time applications. The SORT algorithm proposed in reference [1] (Alex Bewley, Zongyuan Ge, Lionel Ott, Fabio Ramos, and Ben Upcroft. Simple online and realtime tracking. In 2016 IEEE International Conference on Image Processing (ICIP), pages 3464-3468. IEEE, 2016.) is an online real-time tracking algorithm. It uses a Kalman filter to predict the new position of each bounding box and then performs data association using the intersection over union (IOU) to compute a cost matrix. Although SORT achieves both good speed and accuracy, it suffers from heavy ID switching because it relies only on short-term motion information. Deep SORT, proposed in reference [2] (Nicolai Wojke, Alex Bewley, and Dietrich Paulus. Simple online and realtime tracking with a deep association metric. In 2017 IEEE International Conference on Image Processing (ICIP), pages 3645-3649. IEEE, 2017.), introduces object re-identification (REID) as appearance information to handle long-term occlusion, yielding a more robust and efficient algorithm. In contrast, an offline approach has access to the entire sequence and can perform global optimization of the data associations. These batch methods typically formulate MOT as a network flow problem, whose best solution can be found using K-shortest paths (KSP), successive shortest paths (SSP), or dynamic programming (DP). Offline methods can correct early errors made by online methods and generally show better performance, but they are not suitable for time-critical applications. Reference [3] (Northwestern Polytechnical University. Aerial video target tracking method based on correlation filtering and saliency detection: China, 201710310244.3 [P]. 2017-05-05.) combines the histogram of oriented gradients with gray-scale features in a tracking method based on correlation filtering and saliency detection, making the tracking result robust to factors such as illumination change, noise and occlusion; when tracking fails due to severe occlusion of the target, the target can be re-detected through a re-detection mechanism, enabling continuous long-term tracking. Most of these algorithms rely on traditional hand-crafted target features; when the targets in an aerial video are numerous and small, they frequently produce false detections during re-detection, causing crosstalk between similar targets. The present method instead extracts features with a deep neural network, which discriminates targets better than traditional features under such extreme conditions.
Disclosure of Invention
1. Objects of the invention
Aiming at the defect that existing multi-target tracking algorithms are difficult to apply to drone aerial photography scenes, the invention provides a multi-target tracking method and a tracking system for aerial images of unmanned aerial vehicles. The tracking algorithm extracts features with a deep neural network, uses the IOU Tracker for initial tracking, uses the KCF tracker to compensate tracks lost by the IOU Tracker, and uses the tracklet vote algorithm to classify and splice multiple track segments, thereby realizing a multi-target tracking algorithm with high precision, high robustness and high efficiency.
2. The technical scheme adopted by the invention
The invention provides a multi-target tracking method for aerial images of unmanned aerial vehicles, which comprises the following steps:
a target detection step, namely performing target detection on the input image sequence by Faster RCNN;
an initial tracking track forming step, wherein an IOU Tracker executes multi-target tracking on the detected image sequence to form an initial tracking track;
an update compensation step, namely updating and compensating the initial tracking tracks by the KCF tracker, sequentially executing a forward update and a backward update on each initial track, wherein the forward update and the backward update each cover P frames;
and a classification and fusion step, namely, for the tracks after KCF update, classifying and fusing the multiple track segments according to the tracklet vote algorithm, using a greedy algorithm and IOU calculation.
Further, the target detection step: data enhancement is performed on the drone aerial dataset, including image translation, scaling, channel transformation and illumination change; the image sequence I is then input into the deep neural network Faster RCNN for target detection, yielding the detected result sequence I'. The Faster RCNN backbone is a ResNet50 model pre-trained on ImageNet.
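For illustration only, a minimal sketch of the four augmentations named above, assuming OpenCV and NumPy; the function and parameter names are illustrative and not taken from the patent:

```python
import cv2
import numpy as np

def augment(image, tx=10, ty=5, scale=1.1, gain=1.2):
    """Produce the four augmented variants named above for one image."""
    h, w = image.shape[:2]
    # Image translation: shift by (tx, ty) pixels.
    shift = np.float32([[1, 0, tx], [0, 1, ty]])
    translated = cv2.warpAffine(image, shift, (w, h))
    # Scaling (zoom).
    scaled = cv2.resize(image, None, fx=scale, fy=scale)
    # Channel transformation: e.g. reverse the BGR channel order.
    channels = image[:, :, ::-1]
    # Illumination change: multiply intensities by a gain factor.
    lit = np.clip(image.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    return translated, scaled, channels, lit
```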
Further, the initial tracking track forming step: I' is fed into the IOU Tracker of the multi-target tracking network. The IOU Tracker adds all detection boxes of the first frame of I' to a tracking queue Q_t; then the second frame is taken as the current frame F_c, an IOU operation is performed between all detection boxes bbox_now of F_c and all detection boxes bbox_pre in Q_t, and each bbox_now obtains its corresponding maximum IOU value iou_max.
Further, the method specifically comprises the following steps: a threshold σ_IOU is set; if iou_max > σ_IOU, the corresponding detection box of F_c is added to the tracking queue Q_t as the tracking box matched in F_c, thereby extending a continuous track;
thresholds σ_h and t_min are set and the number of frames t in which the target appeared before the current frame is counted; if σ_h ≤ iou_max ≤ σ_IOU and t > t_min, the target is a normally tracked object that has left the picture in the current frame, so it is removed from the tracking queue Q_t;
if iou_max and t satisfy neither of the above constraints, a new target has appeared in the current frame F_c; it is taken as an object to be tracked and its detection box is added to the tracking queue Q_t;
the above operations are repeated over the input sequence I' in order until the last frame, yielding multiple track segments T_1, T_2, T_3, ..., T_n. The IOU Tracker records a unique index I'_1, I'_2, I'_3, ..., I'_n for each processed frame.
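As a concrete reference, the matching loop above can be sketched as follows. This follows the generic IOU Tracker formulation; boxes are (x1, y1, x2, y2) tuples, the σ_h quality condition is simplified to the track-length test alone, and all names and default thresholds are illustrative assumptions:

```python
def iou(a, b):
    """Intersection over union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def iou_tracker(frames, sigma_iou=0.5, t_min=3):
    """frames: per-frame lists of detection boxes. Returns track segments."""
    active, finished = [], []  # track: {"start": first frame index, "boxes": [...]}
    for f, dets in enumerate(frames):
        kept = []
        for track in active:
            if dets:
                # Greedy match: detection with the highest IOU to the track head.
                best = max(dets, key=lambda d: iou(track["boxes"][-1], d))
                if iou(track["boxes"][-1], best) > sigma_iou:
                    track["boxes"].append(best)
                    dets.remove(best)
                    kept.append(track)
                    continue
            # Unmatched track: the target has left the picture; keep it only
            # if it was tracked for more than t_min frames.
            if len(track["boxes"]) > t_min:
                finished.append(track)
        # Remaining unmatched detections become new targets to be tracked.
        kept += [{"start": f, "boxes": [d]} for d in dets]
        active = kept
    return finished + active  # track segments T_1 ... T_n
```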
Further, the update compensation step (step S300) includes:
for each track segment T_i (i = 1 ... n), a KCF tracker performs one forward tracking pass; the initial detection box of the KCF tracker is the detection box bbox_last of the last frame of the track. Starting from that frame, P frames are predicted continuously in the forward direction and appended to the corresponding track, finally yielding the update-compensated tracks T_1_f, T_2_f, T_3_f, ..., T_n_f.
Further, for each update-compensated track T_i_f (i = 1 ... n), KCF performs one backward tracking pass; the initial detection box of the KCF tracker is the detection box bbox_first of the first frame of the track. Starting from that frame, P frames are predicted continuously in the reverse direction and prepended to the corresponding track, finally yielding the update-compensated tracks T_1_fb, T_2_fb, T_3_fb, ..., T_n_fb.
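A sketch of one forward and one backward compensation pass, assuming the opencv-contrib build of OpenCV (newer versions expose the factory as cv2.legacy.TrackerKCF_create); the helpers and the track layout are the illustrative ones from the tracker sketch above:

```python
import cv2

def to_xywh(b):   # (x1, y1, x2, y2) -> (x, y, w, h), as OpenCV trackers expect
    return (int(b[0]), int(b[1]), int(b[2] - b[0]), int(b[3] - b[1]))

def to_xyxy(b):   # (x, y, w, h) -> (x1, y1, x2, y2)
    return (b[0], b[1], b[0] + b[2], b[1] + b[3])

def kcf_extend(images, track, p, forward=True):
    """Extend one track by up to p frames with a KCF tracker."""
    tracker = cv2.TrackerKCF_create()  # cv2.legacy.TrackerKCF_create() in newer OpenCV
    if forward:
        seed = track["start"] + len(track["boxes"]) - 1  # frame of bbox_last
        box, step = track["boxes"][-1], 1
    else:
        seed = track["start"]                            # frame of bbox_first
        box, step = track["boxes"][0], -1
    tracker.init(images[seed], to_xywh(box))
    for k in range(1, p + 1):
        idx = seed + step * k
        if not 0 <= idx < len(images):
            break
        ok, new_box = tracker.update(images[idx])
        if not ok:
            break  # stop compensating once KCF loses the target
        if forward:
            track["boxes"].append(to_xyxy(new_box))
        else:
            track["boxes"].insert(0, to_xyxy(new_box))
            track["start"] -= 1
    return track
```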
Still further, the classification and fusion step includes: a variable tIoI and a threshold tIoI_σ are defined, and the tracklet vote algorithm is designed. Because every frame image is uniquely indexed in the IOU Tracker, for the resulting tracks T_1_fb, T_2_fb, T_3_fb, ..., T_n_fb a greedy algorithm computes the degree of temporal and spatial overlap between T_i_fb (i = 1 ... n) and T_j_fb (j = 1 ... n, j ≠ i).
Further, the number of frames in which two tracks overlap in index is denoted N. For each of these N frames, the spatial IOU of the two tracks is computed; if the IOU in a frame is greater than the threshold σ, the two boxes are regarded as the same object. If N' of the N frames satisfy this condition, then tIoI = N'/N. If tIoI > tIoI_σ, the two tracks are considered the same track and are merged. The merged track is then treated as a whole and compared with the remaining tracks until the tracklet vote algorithm has processed all tracks; the final tracks are denoted T_1_e, T_2_e, T_3_e, ..., T_m_e.
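The vote can be sketched as below, reusing iou() and the track layout from the tracker sketch; comparing a new track against every member of an already merged group is an illustrative approximation of treating the merged track "as a whole", and each returned group stands for one fused track T_i_e:

```python
def tioi(a, b, sigma=0.5):
    """tIoI = N'/N over the frames where tracks a and b both have a box."""
    common = sorted(set(range(a["start"], a["start"] + len(a["boxes"])))
                    & set(range(b["start"], b["start"] + len(b["boxes"]))))
    if not common:
        return 0.0
    hits = sum(iou(a["boxes"][f - a["start"]],
                   b["boxes"][f - b["start"]]) > sigma for f in common)
    return hits / len(common)

def tracklet_vote(tracks, tioi_sigma=0.6):
    """Greedily group tracks whose tIoI exceeds tIoI_sigma into one identity."""
    groups = []
    for t in tracks:
        for g in groups:
            if any(tioi(m, t) > tioi_sigma for m in g):
                g.append(t)   # same object: fuse this segment into the group
                break
        else:
            groups.append([t])
    return groups  # each group corresponds to one fused track T_i_e
```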
The invention also provides a multi-target tracking system for aerial images of unmanned aerial vehicles, which performs tracking using the above multi-target tracking method.
3. Advantageous effects adopted by the present invention
(1) The invention provides a novel multi-target tracking framework. First, target detection is performed with a neural network, and then initial multi-target tracking is performed with the IOU Tracker. Since the IOU Tracker is susceptible to missed detections, the invention adds a KCF tracking network after the IOU Tracker to update and compensate each track. The tracklet vote algorithm is then applied to the compensated tracks, further fusing and connecting tracks that belong to the same target. The proposed framework achieves high-precision, highly robust and highly efficient multi-target tracking in aerial images of unmanned aerial vehicles.
(2) The invention provides an improved KCF method. Even after the IOU Tracker, the problem of track fragmentation (a single track is not predicted as a whole but broken into several sub-segments) is inevitable and greatly reduces the final score. The invention uses KCF to update the IOU Tracker result: all picture sequence groups output by the IOU Tracker are input into the KCF tracker, which performs forward update compensation and then backward update compensation on each track, yielding the updated tracks.
(3) The invention provides the tracklet vote algorithm. KCF, a conventional algorithm, predicts the next several frames from the existing result, which amounts to restoring some of the lost information. After the KCF update, tracks belonging to the same object usually overlap; the invention therefore uses an IOU-based voting method: if the voting result over the overlapping part of two tracks exceeds a certain threshold, the two tracks are regarded as the same object and merged, and the merged track is then compared with the remaining tracks.
Drawings
Fig. 1 is a general flow chart of the present invention.
Fig. 2 is a schematic diagram of the target detection network in this embodiment.
Fig. 3 is the overall network framework diagram in this embodiment.
Fig. 4 is the KCF update compensation diagram in this embodiment.
Fig. 5 is the tracklet vote trajectory fusion diagram in this embodiment.
Fig. 6 shows the tracking effect in this embodiment.
Detailed Description
The technical solutions in the examples of the present invention are clearly and completely described below with reference to the drawings in the examples of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without inventive step, are within the scope of the present invention.
The present invention will be described in further detail with reference to the accompanying drawings.
Example 1
The invention provides a multi-target tracking method for aerial images of unmanned aerial vehicles, which comprises the following specific steps:
(1) and performing data enhancement (image translation, zooming, channel transformation and illumination change) on the unmanned aerial vehicle aerial data set.
(2) And inputting the image sequence I into a deep neural network fast RCNN for target detection to obtain a detected result sequence I'.
(3) I' is fed into the IOU Tracker of the multi-target tracking network. The IOU Tracker adds all detection boxes of the first frame of I' to a tracking queue Q_t; then the second frame is taken as the current frame F_c, an IOU operation is performed between all detection boxes bbox_now of F_c and all detection boxes bbox_pre in Q_t, and each bbox_now obtains its corresponding maximum IOU value iou_max.
(4) A threshold σ_IOU is set; if iou_max > σ_IOU, the corresponding detection box of F_c is added to the tracking queue Q_t as the tracking box matched in F_c, thereby extending a continuous track.
(5) Thresholds σ_h and t_min are set and the number of frames t in which the target appeared before the current frame is counted; if σ_h ≤ iou_max ≤ σ_IOU and t > t_min, the target is a normally tracked object that has left the picture in the current frame, so it is removed from the tracking queue Q_t.
(6) If iou_max and t satisfy neither the constraint of (4) nor that of (5), a new target has appeared in the current frame F_c; it is taken as an object to be tracked and its detection box is added to the tracking queue Q_t.
(7) The operations of (4), (5) and (6) are repeated over the input sequence I' in order until the last frame, yielding multiple track segments T_1, T_2, T_3, ..., T_n. The IOU Tracker records a unique index I'_1, I'_2, I'_3, ..., I'_n for each processed frame.
(8) For each track segment T_i (i = 1 ... n), perform one forward tracking pass with a KCF tracker; the initial detection box of the KCF tracker is the detection box bbox_last of the last frame of the track. Starting from that frame, P frames are predicted continuously in the forward direction and appended to the corresponding track, finally yielding the update-compensated tracks T_1_f, T_2_f, T_3_f, ..., T_n_f.
(9) On the basis of (8), for each track T_i_f (i = 1 ... n), perform one backward tracking pass with KCF; the initial detection box of the KCF tracker is the detection box bbox_first of the first frame of the track. Starting from that frame, P frames are predicted continuously in the reverse direction and prepended to the corresponding track, finally yielding the update-compensated tracks T_1_fb, T_2_fb, T_3_fb, ..., T_n_fb.
(10) A variable tIoI and a threshold tIoI_σ are defined, and the tracklet vote algorithm is designed. Since every frame image is uniquely indexed in (7), for the resulting tracks T_1_fb, T_2_fb, T_3_fb, ..., T_n_fb a greedy algorithm computes the degree of temporal and spatial overlap between T_i_fb (i = 1 ... n) and T_j_fb (j = 1 ... n, j ≠ i).
(11) The number of frames in which two tracks overlap in index is denoted N. For each of these N frames, the spatial IOU of the two tracks is computed; if the IOU in a frame is greater than the threshold σ, the two boxes are regarded as the same object. If N' of the N frames satisfy this condition, then tIoI = N'/N. If tIoI > tIoI_σ, the tracks are considered the same track and are merged. The merged track is then treated as a whole and compared with the remaining tracks, until the tracklet vote algorithm has processed all tracks. A sketch of the whole pipeline follows.
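Tying the numbered steps together, a hypothetical driver might read as follows; detect() stands in for the Faster RCNN inference of step (2) and is not implemented here, and the helper functions are the illustrative sketches given earlier:

```python
def track_sequence(images, detect, p=10):
    """End-to-end sketch: detect -> IOU track -> KCF compensate -> vote."""
    detections = [detect(img) for img in images]   # steps (1)-(2)
    tracks = iou_tracker(detections)               # steps (3)-(7)
    for t in tracks:                               # steps (8)-(9)
        kcf_extend(images, t, p, forward=True)
        kcf_extend(images, t, p, forward=False)
    return tracklet_vote(tracks)                   # steps (10)-(11)
```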
In Fig. 4, the blue rectangles in the first panel represent the result after IOU Tracker tracking; the second panel shows the result after prediction tracking in the +t direction (forward), with orange marking the update-compensated part; the third panel shows the result after prediction tracking in the -t direction (backward), with green marking the update-compensated part.
In Fig. 5, blue and orange in the first panel represent two tracks whose frame indices coincide (overlap) for N frames along the t direction; the second panel shows the spatial IOU computed for the two tracks in each of the N frames, where the IOU of N' frames is greater than the threshold σ, i.e., tIoI > tIoI_σ; the third panel therefore shows that the two tracks belong to one complete track and may be merged into one.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (9)
1. A multi-target tracking method for aerial images of unmanned aerial vehicles, characterized by comprising:
a target detection step, namely performing target detection on the input image sequence by Faster RCNN;
an initial tracking track forming step, wherein an IOU Tracker executes multi-target tracking on the detected image sequence to form an initial tracking track;
an update compensation step, namely updating and compensating the initial tracking tracks by the KCF tracker, sequentially executing a forward update and a backward update on each initial track, wherein the forward update and the backward update each cover P frames;
and a classification and fusion step, namely, for the tracks after KCF update, classifying and fusing the multiple track segments according to the tracklet vote algorithm, using a greedy algorithm and IOU calculation.
2. The multi-target tracking method for aerial images of unmanned aerial vehicles according to claim 1, characterized in that in the target detection step: data enhancement is performed on the drone aerial dataset, including image translation, scaling, channel transformation and illumination change; the image sequence I is input into the deep neural network Faster RCNN for target detection, yielding the detected result sequence I', wherein the Faster RCNN feature extraction network is a ResNet50 model pre-trained on ImageNet.
3. The multi-target tracking method for aerial images of unmanned aerial vehicles according to claim 1, characterized in that in the initial tracking track forming step: I' is fed into the IOU Tracker of the multi-target tracking network; the IOU Tracker adds all detection boxes of the first frame of I' to a tracking queue Q_t; then the second frame is taken as the current frame F_c, an IOU operation is performed between all detection boxes bbox_now of F_c and all detection boxes bbox_pre in Q_t, and each bbox_now obtains its corresponding maximum IOU value iou_max.
4. The multi-target tracking method for aerial images of unmanned aerial vehicles according to claim 3, characterized in that: a threshold σ_IOU is set; if iou_max > σ_IOU, the corresponding detection box of F_c is added to the tracking queue Q_t as the tracking box matched in F_c, thereby extending a continuous track;
thresholds σ_h and t_min are set and the number of frames t in which the target appeared before the current frame is counted; if σ_h ≤ iou_max ≤ σ_IOU and t > t_min, the target is a normally tracked object that has left the picture in the current frame, so it is removed from the tracking queue Q_t;
if iou_max and t satisfy neither of the above constraints, a new target has appeared in the current frame F_c; it is taken as an object to be tracked and its detection box is added to the tracking queue Q_t;
the above operations are repeated over the input sequence I' in order until the last frame, yielding multiple track segments T_1, T_2, T_3, ..., T_n; the IOU Tracker records a unique index I'_1, I'_2, I'_3, ..., I'_n for each processed frame.
5. The multi-target tracking method for aerial images of unmanned aerial vehicles according to claim 1, characterized in that the update compensation step (step S300) comprises:
for each track segment T_i (i = 1 ... n), a KCF tracker performs one forward tracking pass; the initial detection box of the KCF tracker is the detection box bbox_last of the last frame of the track; starting from that frame, P frames are predicted continuously in the forward direction and appended to the corresponding track, finally yielding the update-compensated tracks T_1_f, T_2_f, T_3_f, ..., T_n_f.
6. The multi-target tracking method for aerial images of unmanned aerial vehicles according to claim 5, characterized in that for each update-compensated track T_i_f (i = 1 ... n), KCF performs one backward tracking pass; the initial detection box of the KCF tracker is the detection box bbox_first of the first frame of the track; starting from that frame, P frames are predicted continuously in the reverse direction and prepended to the corresponding track, finally yielding the update-compensated tracks T_1_fb, T_2_fb, T_3_fb, ..., T_n_fb.
7. The multi-target tracking method for aerial images of unmanned aerial vehicles according to claim 1, characterized in that the classification and fusion step comprises: a variable tIoI and a threshold tIoI_σ are defined, and the tracklet vote algorithm is designed; because every frame image is uniquely indexed in the IOU Tracker, for the resulting tracks T_1_fb, T_2_fb, T_3_fb, ..., T_n_fb a greedy algorithm computes the degree of temporal and spatial overlap between T_i_fb (i = 1 ... n) and T_j_fb (j = 1 ... n, j ≠ i).
8. The multi-target tracking method for aerial images of unmanned aerial vehicles according to claim 7, characterized in that the number of frames in which two tracks overlap in index is denoted N; for each of these N frames, the spatial IOU of the two tracks is computed; if the IOU in a frame is greater than the threshold σ, the two boxes are regarded as the same object; if N' of the N frames satisfy this condition, then tIoI = N'/N; if tIoI > tIoI_σ, the two tracks are considered the same track and are merged; the merged track is then treated as a whole and compared with the remaining tracks until the tracklet vote algorithm has processed all tracks; the final tracks are denoted T_1_e, T_2_e, T_3_e, ..., T_m_e.
9. A multi-target tracking system for aerial images of unmanned aerial vehicles, characterized in that it performs tracking using the multi-target tracking method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010544149.1A CN111681260A (en) | 2020-06-15 | 2020-06-15 | Multi-target tracking method and tracking system for aerial images of unmanned aerial vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010544149.1A CN111681260A (en) | 2020-06-15 | 2020-06-15 | Multi-target tracking method and tracking system for aerial images of unmanned aerial vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111681260A (en) | 2020-09-18
Family
ID=72436097
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010544149.1A Pending CN111681260A (en) | 2020-06-15 | 2020-06-15 | Multi-target tracking method and tracking system for aerial images of unmanned aerial vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111681260A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160342837A1 (en) * | 2015-05-19 | 2016-11-24 | Toyota Motor Engineering & Manufacturing North America, Inc. | Apparatus and method for object tracking |
CN108053427A (en) * | 2017-10-31 | 2018-05-18 | 深圳大学 | A kind of modified multi-object tracking method, system and device based on KCF and Kalman |
CN110929560A (en) * | 2019-10-11 | 2020-03-27 | 杭州电子科技大学 | Video semi-automatic target labeling method integrating target detection and tracking |
CN111145215A (en) * | 2019-12-25 | 2020-05-12 | 北京迈格威科技有限公司 | Target tracking method and device |
Non-Patent Citations (2)
Title |
---|
ERIK BOCHINSKI et al.: "High-Speed Tracking-by-Detection Without Using Image Information" *
JOAO F. HENRIQUES et al.: "High-Speed Tracking with Kernelized Correlation Filters" *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114217626A (en) * | 2021-12-14 | 2022-03-22 | 集展通航(北京)科技有限公司 | Railway engineering detection method and system based on unmanned aerial vehicle inspection video |
CN114217626B (en) * | 2021-12-14 | 2022-06-28 | 集展通航(北京)科技有限公司 | Railway engineering detection method and system based on unmanned aerial vehicle routing inspection video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200918 |