CN108108697B - Real-time unmanned aerial vehicle video target detection and tracking method

Info

Publication number: CN108108697B
Authority: CN (China)
Prior art keywords: target, list, weight, value, matching
Legal status: Active (granted)
Application number: CN201711415848.0A
Other languages: Chinese (zh)
Other versions: CN108108697A (en)
Inventors: 文义红, 刘春华, 马健, 王津, 马晖, 于君娜, 刘让国
Assignee (current and original): CETC 54 Research Institute
Application filed by CETC 54 Research Institute; priority to CN201711415848.0A
Publication of application CN108108697A, followed by grant publication of CN108108697B

Classifications

    • G06V 20/13: Physics > Computing; calculating or counting > Image or video recognition or understanding > Scenes; scene-specific elements > Terrestrial scenes > Satellite images
    • G06F 18/22: Physics > Computing; calculating or counting > Electric digital data processing > Pattern recognition > Analysing > Matching criteria, e.g. proximity measures
    • G06V 10/22: Physics > Computing; calculating or counting > Image or video recognition or understanding > Arrangements for image or video recognition or understanding > Image preprocessing > Selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G06V 10/44: Physics > Computing; calculating or counting > Image or video recognition or understanding > Arrangements for image or video recognition or understanding > Extraction of image or video features > Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components

Abstract

The invention discloses a method for real-time detection and tracking of specific targets (mainly moving targets such as tanks, automobiles, ships and warships) in unmanned aerial vehicle video, drawing on computer vision, machine learning and photogrammetry. Starting from previously learned target models, targets are detected in the current video frame and matched against the targets of the previous frame; successfully matched targets are updated, old targets are tracked, newly appeared targets are confirmed, and the target set of the current frame is finally obtained. The geographic position of each target can then be calculated from its coordinates in the camera and output. In unmanned aerial vehicle applications, the invention can discover, automatically track and accurately position objects of military value, such as tanks, automobiles and ships, in real time.

Description

Real-time unmanned aerial vehicle video target detection and tracking method
Technical Field
The invention relates to the technical field of target detection and tracking in computer vision. Specifically, it concerns a method for automatically finding and tracking a specific target in the imagery of an onboard imaging device during unmanned aerial vehicle flight, and draws on the technical fields of computer vision, machine learning and photogrammetry.
Background
Detection and tracking of specific targets is a key technology of digital video analysis and machine vision and is urgently needed in unmanned aerial vehicle applications, especially the detection, tracking and positioning of targets of military value (tanks, vehicles, warships, oil depots and the like) during real-time flight, with target position information returned in real time to support subsequent precision strikes. Automating this task reduces the workload of operators and enhances system reliability; it also reduces communication with the operator, preventing exposure through signal interception.
Existing video target detection methods are designed mainly for static-camera scenes such as video surveillance and comprise the frame difference method, the optical flow method and the background difference method. The frame difference and background difference methods assume a static background and are therefore unsuitable for unmanned aerial vehicle video. The optical flow method establishes a motion field from two adjacent frames so as to associate each pixel of the current frame with the corresponding pixel of the next frame, but the basic equation of optical flow rests on the assumption of brightness constancy, a condition that is difficult to satisfy in real scenes.
Target tracking has long been a hotspot of computer vision, and new methods continue to emerge; the main current approaches are tracking algorithms based on kernel density estimation theory, on probability statistics, and on machine learning. The tracking algorithm based on kernel density estimation theory is a classical algorithm and played an important role in early target tracking, but the mean-shift quantity must be derived in analytic form, which restricts the features that can be used and imposes certain limitations. Tracking algorithms based on probability statistics perform well on nonlinear problems, but their heavy computation likewise limits tracking speed, so real-time performance is poor. Learning-based tracking algorithms generally need no complex computation and adapt well, but drift caused by template updating remains a problem during tracking. No existing algorithm fully solves the problem of long-term target tracking in unmanned aerial vehicle video.
Disclosure of Invention
The invention aims to provide a method for automatically finding and tracking a specific target with an onboard camera during unmanned aerial vehicle flight, supporting scenarios in which an unmanned aerial vehicle must accurately observe a moving ground target, such as reconnaissance and strike of ground moving targets by a tactical unmanned aerial vehicle, or directional tracking of a moving target by a police unmanned aerial vehicle.
In order to achieve the purpose, the invention adopts the following technical scheme:
a real-time unmanned aerial vehicle video target detection and tracking method comprises the following steps:
(1) initialization: dividing a video frame image into a detection area and a tracking area, and defining the appearance attributes of the target; the tracking area is the overlap area of the current video frame image and the previous video frame image, and the other areas are the detection area; in the limit case, the area of the tracking area is 0;
(2) detecting a background area of a current video frame image;
(3) if the current video frame image is the first frame, performing target detection in the whole current video frame image; otherwise, performing target detection in the intersection area of the detection area of the current video frame image and the background area; extracting relevant features of the target after target detection, matching the relevant features with appearance attributes of the target, and putting the target with the feature attributes meeting requirements into a current frame target list L;
(4) obtaining the previous frame target list Lp, matching the targets in the current frame target list L and the previous frame target list Lp one by one: pairwise comparison according to a matching algorithm forms a target matching matrix, from which optimal values are extracted in sequence to form an optimal matching list; after matching, three cases are distinguished: if the target T is in the optimal matching list, i.e. it exists in both the current frame target list L and the previous frame target list Lp, go to step (5); if the target T exists in the current frame target list L but not in the previous frame target list Lp, go to step (6); if the target T does not exist in the current frame target list L but exists in the previous frame target list Lp, go to step (7);
(5) setting the weight of the target T to its weight in the previous frame list Lp plus a set weight step S, deleting the target T from the previous frame target list Lp, and going to step (8);
(6) marking the target T as a suspected target, setting its initial weight to Vorg, and going to step (8);
(7) judging whether the weight of the target T exceeds the set threshold V1; if so, tracking with a target tracking algorithm to obtain the target in the current frame, setting its weight to Vmax, adding it to the current frame target list L, deleting the target T from Lp, and going to step (8); otherwise reducing the weight of the target T by the weight step S; if the reduced weight is still greater than the specified threshold Vmin, adding the target to the current frame target list L and deleting it from Lp, otherwise deleting the target T from Lp directly; then going to step (8);
(8) merging Lp into L, finding all targets in the current frame target list L whose weights exceed the set threshold V2, calculating the positions of these targets from the attitude, speed and position of the unmanned aerial vehicle, and outputting them to the ground or to a weapon through a wireless system;
(9) taking the current frame target list L as the previous frame target list Lp, acquiring the next video frame image, taking it as the current video frame image, and returning to step (2).
Wherein, the step (2) comprises the following steps:
(2.1) filtering the current video frame image;
(2.2) obtaining the edge of the current video frame image by using a canny operator;
(2.3) calculating the circumscribed rectangles of all the edges;
(2.4) finding the 20 longest of all the edges and dividing them into 2 groups: the 10 longest edges and the other 10 edges;
(2.5) selecting 5 edges from each of the 2 groups to form a combination of 10 edges such that the area of the circumscribed rectangle of the selected 10 edges is minimal;
(2.6) finding the minimum bounding rectangle as the background area.
Wherein, in step (4), matching the targets in the current frame target list L and the previous frame target list Lp one by one, performing pairwise comparison according to a matching algorithm to form a target matching matrix, and then extracting optimal values from the target matching matrix in sequence to form an optimal matching list specifically comprises the following steps:
(4.1) establishing a matching matrix M, where the number of columns is the number of entries in the current frame target list L and the number of rows is the number of entries in the previous frame target list Lp, or vice versa;
(4.2) initializing all entries of the matrix to a sentinel weight S that cannot occur as a real matching value;
(4.3) matching all targets in the current frame target list L and all targets in the previous frame target list Lp one by one, taking the matching value V of any target Tl in list L and any target Tp in list Lp as the weight of the corresponding position in the matching matrix M, to obtain the target matching matrix;
(4.4) taking from the target matching matrix the optimal value better than the threshold T1, extracting its row and column numbers, regarding the match as correct, and resetting all other weights in that row and that column to S;
(4.5) repeating step (4.4) until no weight in the matrix is better than the threshold T1; the correctly matched targets form the optimal matching list.
Wherein the matching value V of any target Tl in list L and any target Tp in list Lp in step (4.3) is calculated as follows:
(4.3.1) position comparison: if the position distance is greater than the maximum distance d defined in the target appearance attributes, the weight is a set value V1; if the position distance equals the minimum distance defined in the appearance attributes, the weight is a set value V2; if the position distance is between the maximum and minimum distances, assign linearly;
(4.3.2) area comparison: if the area ratio equals the minimum area defined in the appearance attributes, the weight is a set value V3; if the area ratio is greater than the maximum area defined in the appearance attributes, the weight is a set value V4; if the area ratio is between the minimum and maximum areas, assign linearly;
(4.3.3) aspect ratio comparison: if the aspect ratios are the same, the weight is a set value V5; if the aspect ratio is greater than the maximum aspect ratio defined in the appearance attributes, the weight is a set value V6; if the aspect ratio is between 1 and the maximum aspect ratio, assign linearly;
(4.3.4) mean comparison: using the brightness mean, if the mean difference is 0, the weight is a set value V7; if the mean difference exceeds the maximum brightness mean defined in the appearance attributes, the weight is a set value V8; if the mean difference is between 0 and the maximum brightness mean, assign linearly;
(4.3.5) variance comparison: using the brightness variance, if the variance difference is 0, the weight is a set value V9; if the variance difference exceeds the maximum brightness variance defined in the appearance attributes, the weight is a set value V10; if the variance difference is between 0 and the maximum brightness variance, assign linearly;
(4.3.6) comprehensive weight: average the above weights to obtain the pairwise-comparison target matching matrix.
Compared with the background art, the invention has the following advantages:
1. The invention uses an algorithm framework that combines automatic detection and tracking of a specific target, realizing automatic discovery and tracking of the target.
2. In the target detection stage, the invention first detects the background and then detects targets within it, reducing the search space.
3. The invention achieves real-time target detection and tracking of airborne imagery.
Drawings
FIG. 1 is a flow chart of the operation of an embodiment of the present invention.
Fig. 2 is a schematic diagram illustrating a normal situation of the detection area and the tracking area.
Fig. 3 is a schematic view of a lens tilting situation.
Fig. 4 is a schematic diagram of the case of no tracking area.
Detailed Description
The working flow of the embodiment of the present invention is shown in fig. 1, and each detailed problem involved in the technical solution of the present invention is further described below with reference to fig. 1. It should be noted that the described embodiments are only intended to facilitate the understanding of the invention and do not have any limiting effect.
(1) Initialization.
(1.1) Define rectangular frames for the detection area and the tracking area of the video frame image. Assuming the unmanned aerial vehicle speed is v1 and the target moving speed is v2, the maximum distance a target can move within the picture is d = (v1 + v2) × t, where t is the video frame image sampling interval. Given the view range d1 × d2 of the video frame image, the position of the detection region in the image can be calculated from the unmanned aerial vehicle parameters, as shown in fig. 2; when the view angle is tilted, the rectangular region in fig. 3 can be used as the tracking area. In principle the configured tracking area should be smaller than the actual overlap region; when the above parameters are uncertain, or when the onboard equipment has sufficient computing power, the whole frame can be defined as the detection area, as shown in fig. 4.
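All code sketches in this description use Python. As a minimal sketch of this region split, assuming a nadir-looking camera and an illustrative ground resolution gsd in metres per pixel (a parameter not fixed by the patent):

```python
def split_regions(frame_w, frame_h, v1, v2, t, gsd):
    """Split a frame into a tracking area and a detection margin, step (1.1).

    d = (v1 + v2) * t is the maximum ground distance a target can move
    between two frames; gsd (metres per pixel, an assumed input) converts
    it into the pixel-wide border strip that must be re-detected.
    """
    d_px = int(round((v1 + v2) * t / gsd))
    x0, y0 = d_px, d_px
    x1, y1 = frame_w - d_px, frame_h - d_px
    if x1 <= x0 or y1 <= y0:
        return None, d_px    # limit case: the tracking area has zero size
    return (x0, y0, x1, y1), d_px
```

For example, with a 1920 × 1080 frame, gsd = 0.1 m/px, v1 = 30 m/s, v2 = 20 m/s and t = 0.04 s, the detection margin is 20 pixels.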
(1.2) Define target appearance attributes. The purpose of defining target appearance attributes is mainly to reduce target false alarms; this implementation uses the target's minimum length, maximum length, minimum width, maximum width, minimum area, maximum area, minimum aspect ratio, maximum brightness variance, maximum brightness mean and so on in the image. In this example the attributes are obtained through pre-training, with all maximum values enlarged to 1.2 times and all minimum values reduced to 0.8 times.
(2) Detect the background area. The target tracked in this embodiment is a vehicle, so the background area is a road. The fast road detection algorithm is as follows (a code sketch follows the steps):
(2.1) filtering the image;
(2.2) obtaining the edge of the image by using a canny operator;
(2.3) calculating the circumscribed rectangles of all the edges;
(2.4) finding the 20 longest of all the edges and dividing them into 2 groups: the 10 longest edges and the other 10 edges;
(2.5) selecting 5 edges from each of the 2 groups to form a combination of 10 edges such that the area of the circumscribed rectangle of the selected 10 edges is minimal;
(2.6) taking the minimum circumscribed rectangle found as the road area.
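A sketch of steps (2.1)-(2.6) under assumed Canny thresholds and blur kernel (the patent fixes neither); the exhaustive 5-from-each-group search is written literally, so it is slow but faithful to the description:

```python
import cv2
import numpy as np
from itertools import combinations

def detect_road_region(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)             # (2.1) filtering
    edges = cv2.Canny(blurred, 50, 150)                     # (2.2) canny edges
    found = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    contours = found[0] if len(found) == 2 else found[1]    # OpenCV 3/4 compat
    # (2.3)-(2.4): rank edges by length, keep the 20 longest, split 10/10
    contours = sorted(contours, key=lambda c: cv2.arcLength(c, False),
                      reverse=True)[:20]
    group_a, group_b = contours[:10], contours[10:]
    # (2.5)-(2.6): choose 5 edges from each group so that the circumscribed
    # rectangle of the chosen 10 edges has the minimum area
    best_rect, best_area = None, float("inf")
    for pick_a in combinations(group_a, 5):
        for pick_b in combinations(group_b, 5):
            pts = np.vstack([c.reshape(-1, 2) for c in pick_a + pick_b])
            x, y, w, h = cv2.boundingRect(pts)
            if w * h < best_area:
                best_rect, best_area = (x, y, w, h), w * h
    return best_rect                                        # road bounding box
```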
(3) If the current video frame image is the first frame, perform target detection in the whole current video frame image; otherwise, perform target detection in the intersection of the detection area of the current video frame image and the background area. After a target is detected, its statistical characteristics are extracted, including shape attributes, feature points, color attributes, gray variance, background attributes and so on; through attribute comparison, the targets whose attributes meet the requirements are placed in the current frame target list L.
(3.1) Target detection algorithm: the CascadeClassifier bundled with OpenCV 2.4.13 is used (the algorithm is learning-based and requires prior sample training according to the tools and documentation provided by OpenCV).
(3.2) Extract the features of the target, including point features (SIFT points), length, width, area, aspect ratio, the highest and lowest brightness values, and the positions of the corresponding pixels.
(3.3) Compare the extracted statistics against the target appearance attributes defined in (1.2), filter out targets whose attributes clearly fail the requirements, and place the targets that meet the requirements into the current frame target list L as correct targets.
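A hedged sketch of (3.1)-(3.3); the cascade file name and the attribute-bounds dictionary stand in for the pre-trained model and the (1.2) attributes, both of which are deployment-specific:

```python
import cv2

detector = cv2.CascadeClassifier("vehicle_cascade.xml")  # placeholder model path

def detect_targets(frame_gray, roi, attrs):
    """Run cascade detection inside roi (detection area ∩ road area) and
    keep only candidates whose attributes satisfy the (1.2) bounds."""
    x0, y0, x1, y1 = roi
    patch = frame_gray[y0:y1, x0:x1]
    boxes = detector.detectMultiScale(patch, scaleFactor=1.1, minNeighbors=3)
    targets = []
    for (x, y, w, h) in boxes:
        win = frame_gray[y0 + y:y0 + y + h, x0 + x:x0 + x + w]
        cand = {
            "rect": (x0 + x, y0 + y, w, h),
            "center": (x0 + x + w / 2.0, y0 + y + h / 2.0),
            "area": w * h,
            "aspect": w / float(h),
            "mean": float(win.mean()),       # brightness mean
            "var": float(win.var()),         # brightness variance
            "weight": 0.0,
        }
        if (attrs["min_area"] <= cand["area"] <= attrs["max_area"]
                and cand["aspect"] >= attrs["min_aspect"]
                and cand["var"] <= attrs["max_var"]
                and cand["mean"] <= attrs["max_mean"]):
            targets.append(cand)             # (3.3) correct target
    return targets
```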
(4) Obtain the previous frame target list Lp, match the targets in the current frame target list L and the previous frame target list Lp one by one, perform pairwise comparison according to the matching algorithm to form a target matching matrix, then extract optimal values from the target matching matrix in sequence to form an optimal matching list, and distinguish three cases after matching: if the target T is in the optimal matching list, i.e. it exists in both the current frame target list L and the previous frame target list Lp, go to step (5); if the target T exists in the current frame target list L but not in the previous frame target list Lp, go to step (6); if the target T does not exist in the current frame target list L but exists in the previous frame target list Lp, go to step (7). The method specifically comprises the following steps:
(4.1) Establish a matching matrix: the number of columns is the number of entries in the current frame target list L and the number of rows is the number of entries in the previous frame target list Lp, or vice versa. The initial weights of the matrix are all -1. Pairwise comparison then begins; let the target index in L be c and the target index in Lp be r.
(4.2) Position comparison: if the position distance is greater than d from (1.1), the weight is 0; if the position distance is 0, the weight is 1.5; between 0 and d, assign linearly (linear assignment uses the linear function through the two endpoints (0, 1.5) and (d, 0));
(4.3) Area comparison: if the area ratio is 1, the weight is 1; above 10, the weight is 0; between 1 and 10, assign linearly;
(4.4) Aspect ratio comparison: if the aspect ratios are the same, the weight is 1; if one aspect ratio exceeds the other by more than 10 times, the weight is 0; between 1 and 10 times, assign linearly;
(4.5) Mean comparison: using the brightness mean, if the mean difference is 0, the weight is 1; if the mean difference exceeds 50, the weight is 0; between 0 and 50, assign linearly;
(4.6) Variance comparison: using the brightness variance, if the variance difference is 0, the weight is 1; if the variance difference exceeds 20, the weight is 0; between 0 and 20, assign linearly;
(4.7) Composite score: average the above weights to obtain the pairwise-comparison target matching matrix;
(4.8) Extract from the target matching matrix the maximum value greater than the threshold T1, extract its row and column numbers, regard the match as correct, and reset the other values of that row and that column to -1;
(4.9) Repeat step (4.8) until no weight in the matrix exceeds the threshold T1; the correctly matched targets form the optimal matching list.
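Steps (4.1) and (4.8)-(4.9) then amount to building the matrix and greedily retiring rows and columns, using match_score from the previous sketch; T1 = 0.5 below is an assumed threshold, since the patent leaves its value open:

```python
import numpy as np

def best_matches(list_l, list_lp, d_max, t1=0.5):
    """Greedy optimal-match extraction; returns (index in L, index in Lp) pairs."""
    if not list_l or not list_lp:
        return []
    m = np.full((len(list_lp), len(list_l)), -1.0)   # (4.1) init with sentinel -1
    for r, tp in enumerate(list_lp):
        for c, tl in enumerate(list_l):
            m[r, c] = match_score(tl, tp, d_max)
    pairs = []
    while True:
        r, c = np.unravel_index(np.argmax(m), m.shape)
        if m[r, c] <= t1:                            # (4.9) nothing above T1 left
            break
        pairs.append((c, r))                         # (4.8) accept the best match
        m[r, :] = -1.0                               # retire its row and column
        m[:, c] = -1.0
    return pairs
```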
(5) Set the weight of the target T to its weight in the previous frame list Lp plus the set weight step S, delete the target T from the previous frame target list Lp, and go to step (8).
(6) Mark the target T as a suspected target, set its initial weight to Vorg, and go to step (8).
(7) Judge whether the weight of the target T exceeds the set threshold V1. If so, track it with a target tracking algorithm to obtain the target in the current frame, set the weight of that target to Vmax, add it to the current frame target list L, delete the target T from Lp, and go to step (8). Otherwise reduce the weight of the target T by the weight step S; if the reduced weight is still greater than the specified threshold Vmin, add the target to the target list L and delete it from Lp, otherwise delete the target T from Lp directly; then go to step (8).
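A combined sketch of the weight bookkeeping in steps (5)-(7), returning the merged list that step (8) below operates on; the threshold values and the single-target tracker object (and its track method) are deployment choices left open by the patent:

```python
def update_lists(pairs, list_l, list_lp, S, V_org, V1, V_min, V_max, tracker):
    """Apply steps (5)-(7); return the merged current frame list for step (8)."""
    matched_l = {c for c, _ in pairs}
    matched_lp = {r for _, r in pairs}
    for c, r in pairs:                         # step (5): matched targets
        list_l[c]["weight"] = list_lp[r]["weight"] + S
    for c, t in enumerate(list_l):             # step (6): new, suspected targets
        if c not in matched_l:
            t["weight"] = V_org
    for r, t in enumerate(list_lp):            # step (7): disappeared targets
        if r in matched_lp:
            continue
        if t["weight"] > V1:                   # confident old target: keep tracking
            t["rect"] = tracker.track(t)       # any single-target tracker (assumed API)
            t["weight"] = V_max
            list_l.append(t)
        else:
            t["weight"] -= S                   # decay; keep only if above V_min
            if t["weight"] > V_min:
                list_l.append(t)
    return list_l                              # merged list used in step (8)
```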
(8) Merge Lp into L, find all targets in the current frame target list L whose weights exceed the set threshold V2, calculate the positions of these targets from the attitude, speed and position of the unmanned aerial vehicle, and output them to the ground or to a weapon through a wireless system.
(8.1) Calculate the coordinate transformation from the terrestrial coordinate system to the local geographic coordinate system.
The transformation matrix R1 is: R1 = Rx^T(-(90° - B0)) · Rz^T(90° + L0)
In this and the following equations, Rx(θ), Ry(θ) and Rz(θ) denote the coordinate rotation matrices for a rotation by angle θ around the X-, Y- and Z-axis respectively. B0 denotes the current latitude of the airplane and L0 its longitude.
(8.2) Calculate the coordinate transformation from the local geographic coordinate system to the body coordinate system.
The transformation matrix R2 is given as an equation image in the original (image BDA0001521936620000112): a rotation composed from ψ, θ and γ, where ψ is the course angle, θ the pitch angle and γ the roll angle.
(8.3) Calculate the coordinate transformation from the body coordinate system to the platform coordinate system.
The transformation matrix R3 is: R3 = Rz(ζ) · Ry(η)
where η is the platform azimuth angle and ζ the platform roll angle.
(8.4) Calculate the coordinate transformation from the platform coordinate system to the camera coordinate system.
The transformation matrix R4 is: R4 = Rz(ψc) · Rx(θc), where ψc and θc denote the camera's rotation angles about the Z-axis and the X-axis respectively.
(8.5) Calculate the conversion from the camera coordinate system to the terrestrial coordinate system by chaining the transformation matrices R1 through R4; the composed formula is given as an equation image in the original (image BDA0001521936620000111).
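The chain (8.1)-(8.5) can be sketched numerically as below. The elementary rotation matrices follow one common frame-rotation convention, and R2 is written as the usual course-pitch-roll composition Rx(γ) · Ry(θ) · Rz(ψ), since the original gives it only as an image; both choices are assumptions to be checked against the deployed avionics conventions:

```python
import numpy as np

def rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def camera_to_earth(B0, L0, psi, theta, gamma, eta, zeta, psi_c, theta_c):
    """Compose R1..R4 and invert the chain; all angles in radians."""
    r1 = rx(-(np.pi / 2 - B0)).T @ rz(np.pi / 2 + L0).T   # (8.1) earth -> local geographic
    r2 = rx(gamma) @ ry(theta) @ rz(psi)                  # (8.2) assumed composition
    r3 = rz(zeta) @ ry(eta)                               # (8.3) body -> platform
    r4 = rz(psi_c) @ rx(theta_c)                          # (8.4) platform -> camera
    # (8.5): a vector in camera axes maps back to earth axes with the
    # transpose (inverse) of the composed rotation
    return (r4 @ r3 @ r2 @ r1).T
```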
(9) Take the current frame target list L as the previous frame target list Lp, acquire the next video frame image, take it as the current video frame image, and go to step (2).
While the invention has been described in its preferred embodiments, it will be understood that certain modifications may be made without departing from the spirit of the invention as defined in the appended claims.

Claims (4)

1. A real-time unmanned aerial vehicle video target detection and tracking method is characterized by comprising the following steps:
(1) initialization: dividing a video frame image into a detection area and a tracking area, and defining the appearance attributes of the target; the tracking area is the overlap area of the current video frame image and the previous video frame image, and the other areas are the detection area; in the limit case, the area of the tracking area is 0;
(2) detecting a background area of a current video frame image;
(3) if the current video frame image is the first frame, performing target detection in the whole current video frame image; otherwise, performing target detection in the intersection area of the detection area of the current video frame image and the background area; extracting relevant features of the target after target detection, matching the relevant features with appearance attributes of the target, and putting the target with the feature attributes meeting requirements into a current frame target list L;
(4) obtaining the previous frame target list Lp, matching the targets in the current frame target list L and the previous frame target list Lp one by one: pairwise comparison according to a matching algorithm forms a target matching matrix, from which optimal values are extracted in sequence to form an optimal matching list; after matching, three cases are distinguished: if the target T is in the optimal matching list, i.e. it exists in both the current frame target list L and the previous frame target list Lp, go to step (5); if the target T exists in the current frame target list L but not in the previous frame target list Lp, go to step (6); if the target T does not exist in the current frame target list L but exists in the previous frame target list Lp, go to step (7);
(5) setting the weight of the target T to its weight in the previous frame list Lp plus a set weight step S, deleting the target T from the previous frame target list Lp, and going to step (8);
(6) marking the target T as a suspected target, setting its initial weight to Vorg, and going to step (8);
(7) judging whether the weight of the target T exceeds the set threshold V1; if so, tracking with a target tracking algorithm to obtain the target in the current frame, setting its weight to Vmax, adding it to the current frame target list L, deleting the target T from Lp, and going to step (8); otherwise reducing the weight of the target T by the weight step S; if the reduced weight is still greater than the specified threshold Vmin, adding the target to the current frame target list L and deleting it from Lp, otherwise deleting the target T from Lp directly; then going to step (8);
(8) merging Lp into L, finding all targets in the current frame target list L whose weights exceed the set threshold V2, calculating the positions of these targets from the attitude, speed and position of the unmanned aerial vehicle, and outputting them to the ground or to a weapon through a wireless system;
(9) taking the current frame target list L as the previous frame target list Lp, acquiring the next video frame image, taking it as the current video frame image, and returning to step (2).
2. The real-time unmanned aerial vehicle video target detection and tracking method of claim 1, characterized in that: the step (2) specifically comprises the following steps:
(2.1) filtering the current video frame image;
(2.2) obtaining the edge of the current video frame image by using a canny operator;
(2.3) calculating the circumscribed rectangles of all the edges;
(2.4) finding the 20 longest of all the edges and dividing them into 2 groups: the 10 longest edges and the other 10 edges;
(2.5) selecting 5 edges from each of the 2 groups to form a combination of 10 edges such that the area of the circumscribed rectangle of the selected 10 edges is minimal;
(2.6) finding the minimum bounding rectangle as the background area.
3. The real-time unmanned aerial vehicle video target detection and tracking method of claim 1, characterized in that: in step (4), matching the targets in the current frame target list L and the previous frame target list Lp one by one, performing pairwise comparison according to the matching algorithm to form a target matching matrix, and then extracting optimal values from the target matching matrix in sequence to form an optimal matching list specifically comprises the following steps:
(4.1) establishing a matching matrix M, where the number of columns is the number of entries in the current frame target list L and the number of rows is the number of entries in the previous frame target list Lp, or vice versa;
(4.2) setting all initial values of the matrix as a set weight S;
(4.3) matching all targets in the current frame target list L and all targets in the previous frame target list Lp one by one, taking the matching value V of any target Tl in list L and any target Tp in list Lp as the weight of the corresponding position in the matching matrix M, to obtain the target matching matrix;
(4.4) taking from the target matching matrix the optimal value better than the threshold T1, extracting its row and column numbers, regarding the match as correct, and resetting all other weights in that row and that column to S;
(4.5) repeating step (4.4) until no weight in the matrix is better than the threshold T1; the correctly matched targets form the optimal matching list.
4. The real-time unmanned aerial vehicle video target detection and tracking method of claim 3, characterized in that: the matching value V of any target Tl in list L and any target Tp in list Lp in step (4.3) is calculated as follows:
(4.3.1) position comparison: if the position distance is greater than the maximum distance d defined in the target appearance attributes, the weight is a set value V1; if the position distance equals the minimum distance defined in the appearance attributes, the weight is a set value V2; if the position distance is between the maximum and minimum distances, assign linearly;
(4.3.2) area comparison: if the area ratio equals the minimum area defined in the appearance attributes, the weight is a set value V3; if the area ratio is greater than the maximum area defined in the appearance attributes, the weight is a set value V4; if the area ratio is between the minimum and maximum areas, assign linearly;
(4.3.3) aspect ratio comparison: if the aspect ratios are the same, the weight is a set value V5; if the aspect ratio is greater than the maximum aspect ratio defined in the appearance attributes, the weight is a set value V6; if the aspect ratio is between 1 and the maximum aspect ratio, assign linearly;
(4.3.4) mean comparison: using the brightness mean, if the mean difference is 0, the weight is a set value V7; if the mean difference exceeds the maximum brightness mean defined in the appearance attributes, the weight is a set value V8; if the mean difference is between 0 and the maximum brightness mean, assign linearly;
(4.3.5) variance comparison: using the brightness variance, if the variance difference is 0, the weight is a set value V9; if the variance difference exceeds the maximum brightness variance defined in the appearance attributes, the weight is a set value V10; if the variance difference is between 0 and the maximum brightness variance, assign linearly;
(4.3.6) comprehensive weight: average the above weights to obtain the pairwise-comparison target matching matrix.
CN201711415848.0A 2017-12-25 2017-12-25 Real-time unmanned aerial vehicle video target detection and tracking method Active CN108108697B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711415848.0A CN108108697B (en) 2017-12-25 2017-12-25 Real-time unmanned aerial vehicle video target detection and tracking method

Publications (2)

Publication Number Publication Date
CN108108697A CN108108697A (en) 2018-06-01
CN108108697B (en) 2020-05-19

Family

ID=62212615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711415848.0A Active CN108108697B (en) 2017-12-25 2017-12-25 Real-time unmanned aerial vehicle video target detection and tracking method

Country Status (1)

Country Link
CN (1) CN108108697B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118523B (en) * 2018-09-20 2022-04-22 电子科技大学 Image target tracking method based on YOLO
CN110163889A (en) * 2018-10-15 2019-08-23 腾讯科技(深圳)有限公司 Method for tracking target, target tracker, target following equipment
CN113112505B (en) * 2018-10-15 2022-04-29 华为技术有限公司 Image processing method, device and equipment
CN109409283B (en) * 2018-10-24 2022-04-05 深圳市锦润防务科技有限公司 Method, system and storage medium for tracking and monitoring sea surface ship
CN110503663B (en) * 2019-07-22 2022-10-14 电子科技大学 Random multi-target automatic detection tracking method based on frame extraction detection
CN110826497B (en) * 2019-11-07 2022-12-02 厦门市美亚柏科信息股份有限公司 Vehicle weight removing method and device based on minimum distance method and storage medium
CN111311640B (en) * 2020-02-21 2022-11-01 中国电子科技集团公司第五十四研究所 Unmanned aerial vehicle identification and tracking method based on motion estimation
CN111361570B (en) * 2020-03-09 2021-06-18 福建汉特云智能科技有限公司 Multi-target tracking reverse verification method and storage medium
CN113096165B (en) * 2021-04-16 2022-02-18 无锡物联网创新中心有限公司 Target object positioning method and device

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408983A (en) * 2008-10-29 2009-04-15 南京邮电大学 Multi-object tracking method based on particle filtering and movable contour model
CN101714252A (en) * 2009-11-26 2010-05-26 上海电机学院 Method for extracting road in SAR image
CN101807303A (en) * 2010-03-05 2010-08-18 北京智安邦科技有限公司 Tracking device based on multiple-target mean shift
CN102316307A (en) * 2011-08-22 2012-01-11 安防科技(中国)有限公司 Road traffic video detection method and apparatus thereof
CN103500322A (en) * 2013-09-10 2014-01-08 北京航空航天大学 Automatic lane line identification method based on low-altitude aerial images
CN104657978A (en) * 2014-12-24 2015-05-27 福州大学 Road extracting method based on shape characteristics of roads of remote sensing images
CN105069407A (en) * 2015-07-23 2015-11-18 电子科技大学 Video-based traffic flow acquisition method
CN105261035A (en) * 2015-09-15 2016-01-20 杭州中威电子股份有限公司 Method and device for tracking moving objects on highway
CN105353772A (en) * 2015-11-16 2016-02-24 中国航天时代电子公司 Visual servo control method for unmanned aerial vehicle maneuvering target locating and tracking
CN105549614A (en) * 2015-12-17 2016-05-04 北京猎鹰无人机科技有限公司 Target tracking method of unmanned plane
CN105956527A (en) * 2016-04-22 2016-09-21 百度在线网络技术(北京)有限公司 Method and device for evaluating barrier detection result of driverless vehicle
CN106097388A (en) * 2016-06-07 2016-11-09 大连理工大学 In video frequency object tracking, target prodiction, searching scope adaptive adjust and the method for Dual Matching fusion
CN106228577A (en) * 2016-07-28 2016-12-14 西华大学 A kind of dynamic background modeling method and device, foreground detection method and device
CN106981073A (en) * 2017-03-31 2017-07-25 中南大学 A kind of ground moving object method for real time tracking and system based on unmanned plane

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7486827B2 (en) * 2005-01-21 2009-02-03 Seiko Epson Corporation Efficient and robust algorithm for video sequence matching

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Particle PHD filtering for multi-target visual tracking; Emilio Maggio et al.; 2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07); 2007-07-04; pp. 1101-1104 *
A two-step multi-target tracking method based on mean-shift and particle filtering (基于mean-shift和粒子滤波的两步多目标跟踪方法); 李红波 et al.; Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition); 2010-02-28; Vol. 22, No. 1; pp. 112-121 *
Research on road edge detection and road surface extraction methods for remote sensing images (遥感图像道路边缘检测与路面提取方法研究); 谭媛; China Masters' Theses Full-text Database, Information Science and Technology; 2016-04-15 (No. 4); p. I140-412 *

Similar Documents

Publication Publication Date Title
CN108108697B (en) Real-time unmanned aerial vehicle video target detection and tracking method
CN110674746B (en) Method and device for realizing high-precision cross-mirror tracking by using video spatial relationship assistance, computer equipment and storage medium
US8446468B1 (en) Moving object detection using a mobile infrared camera
CN108734103B (en) Method for detecting and tracking moving target in satellite video
EP2713308B1 (en) Method and system for using fingerprints to track moving objects in video
EP3224808B1 (en) Method and system for processing a sequence of images to identify, track, and/or target an object on a body of water
CN108446634B (en) Aircraft continuous tracking method based on combination of video analysis and positioning information
CN108038415B (en) Unmanned aerial vehicle automatic detection and tracking method based on machine vision
CN103824070A (en) Rapid pedestrian detection method based on computer vision
CN104881650A (en) Vehicle tracking method based on unmanned aerial vehicle (UAV) dynamic platform
CN105225251B (en) Over the horizon movement overseas target based on machine vision quickly identifies and positioner and method
CN107563370B (en) Visual attention mechanism-based marine infrared target detection method
CN112215074A (en) Real-time target identification and detection tracking system and method based on unmanned aerial vehicle vision
CN105374049B (en) Multi-corner point tracking method and device based on sparse optical flow method
US20190311209A1 (en) Feature Recognition Assisted Super-resolution Method
CN111009008B (en) Self-learning strategy-based automatic airport airplane tagging method
CN108198417A (en) A kind of road cruising inspection system based on unmanned plane
CN111738033B (en) Vehicle driving information determination method and device based on plane segmentation and vehicle-mounted terminal
CN114089786A (en) Autonomous inspection system based on unmanned aerial vehicle vision and along mountain highway
CN115346155A (en) Ship image track extraction method for visual feature discontinuous interference
FAN et al. Robust lane detection and tracking based on machine vision
CN112686921B (en) Multi-interference unmanned aerial vehicle detection tracking method based on track characteristics
CN114419444A (en) Lightweight high-resolution bird group identification method based on deep learning network
CN103778625A (en) Surface feature intelligent searching technique based on remote sensing image variation detecting algorithm
Cao et al. Visual attention accelerated vehicle detection in low-altitude airborne video of urban environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant