CN108108697A - A real-time UAV video target detection and tracking method - Google Patents

A real-time UAV video target detection and tracking method

Info

Publication number
CN108108697A
CN108108697A (application CN201711415848.0A; granted as CN108108697B)
Authority
CN
China
Prior art keywords
target
weights
value
object listing
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711415848.0A
Other languages
Chinese (zh)
Other versions
CN108108697B (en)
Inventor
文义红
刘春华
马健
王津
马晖
于君娜
刘让国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 54 Research Institute
Original Assignee
CETC 54 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 54 Research Institute
Priority to CN201711415848.0A
Publication of CN108108697A
Application granted
Publication of CN108108697B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for real-time detection and tracking of specific targets (chiefly moving targets such as tanks, automobiles, and warships) in UAV video, drawing on computer vision, machine learning, and photogrammetry. Building on prior learning of the target classes, the method detects targets in the current video frame and matches them against the targets of the previous frame: matched targets are updated, old targets are tracked, and newly appearing targets are confirmed, yielding the target set of the current frame. From each target's coordinates in the camera image, its geographic position can be computed and output. In UAV applications, the invention can automatically discover, track, and accurately locate enemy targets of military value in real time, such as tanks, automobiles, and warships.

Description

A real-time UAV video target detection and tracking method
Technical field
The present invention relates to the technical field of target detection and tracking in computer vision. More precisely, it is a method for automatically discovering a specific target in the imaging device's output during UAV flight and tracking it; the technical fields involved include computer vision, machine learning, and photogrammetry.
Background technology
Detection and tracking of specific targets are key technologies of digital video analysis and machine vision, and there is an urgent demand for them in UAV applications. In particular, during real-time flight, targets of military value (tanks, vehicles, warships, oil depots, etc.) must be detected, tracked, and located, with target position information returned in real time for subsequent precision strikes. Doing this automatically both reduces the operator's workload and strengthens system reliability; it also reduces communication with the operator, preventing signal reconnaissance from exposing the operator's position.
Video target detection methods are mostly designed for static-camera scenes such as video surveillance. The three main approaches are the frame-difference method, the optical-flow method, and background subtraction. The frame-difference and background-subtraction methods assume a stationary background and are therefore unsuitable for UAV video. The optical-flow method establishes a motion field from two adjacent frames, associating each pixel of the current frame with the corresponding pixel of the next frame, but the fundamental optical-flow equation rests on the assumption of brightness constancy, which is difficult to satisfy in real scenes.
Target tracking has always been a hot topic in computer vision, and new methods continue to emerge. The main families are tracking algorithms based on kernel density estimation, on probability statistics, and on machine learning. Kernel-density-based tracking, as a classic approach, played an important role in early target tracking, but it must derive the mean-shift variable in analytical form, which limits the features it can use. Probability-statistics-based tracking performs well on nonlinear problems, but its equally heavy computation slows tracking and hurts real-time performance. Learning-based tracking generally needs no heavy computation and adapts well, but it still suffers from drift caused by template updates. Moreover, none of the existing algorithms fully solves the problem of long-term target tracking from a UAV.
Summary of the invention
The object of the present invention is to provide a method that automatically discovers a specific target in the camera image during UAV flight and tracks it, serving scenarios in which a UAV must accurately observe ground targets, such as tactical UAV reconnaissance and strike against moving ground targets, or police UAV directional tracking of a moving target.
To achieve the above object, the present invention adopts the following technical solution:
A real-time UAV video target detection and tracking method comprises the following steps:
(1) Initialization: divide the video frame image into a detection region and a tracking region, and define the appearance attributes of the target. The tracking region is the region where the current video frame overlaps the previous video frame; all other regions form the detection region. In the limiting case the tracking region area is 0.
(2) Perform background-region detection on the current video frame.
(3) If the current video frame is the first frame, perform target detection over the whole frame; otherwise perform target detection in the intersection of the detection region and the background region of the current frame. After detection, extract the relevant features of each target and match them against the appearance attributes; put targets whose attributes meet the requirements into the current-frame target list L.
(4) Obtain the previous-frame target list Lp and match the targets of the current-frame list L against those of Lp one by one: compare every pair according to the matching algorithm to form a target matching matrix, then repeatedly extract the best value from the matrix to form an optimal match list. After matching there are three cases: if target T is in the optimal match list, i.e. it exists in both L and Lp, go to step (5); if T exists in L but not in Lp, go to step (6); if T exists in Lp but not in L, go to step (7).
(5) Set the weight of target T to its weight in Lp plus a set weight step S, delete T from the previous-frame list Lp, and go to step (8).
(6) Mark target T as a suspected target, set its initial weight to Vorg, and go to step (8).
(7) Judge whether the weight of target T exceeds the set threshold V1. If so, track it with the target tracking algorithm to obtain its position in the current frame, set the weight of the tracked target to Vmax, add it to the current-frame list L, delete T from Lp, and go to step (8). Otherwise reduce the weight of T by the step S: if the reduced weight is greater than the specified threshold Vmin, add T to the current-frame list L and delete it from Lp; otherwise simply delete T from Lp. Go to step (8).
(8) Merge Lp into L, then find all targets in the current-frame list L whose weight exceeds the set threshold V2, compute their positions from the UAV's attitude, speed, and position, and output them over the wireless link to the ground station or to a weapon.
(9) Take the current-frame list L as the new previous-frame list Lp, obtain the next video frame, treat it as the current frame, and go to step (2).
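The per-target weight bookkeeping of steps (5)-(7) can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; the names S, Vorg, V1, Vmin, and Vmax follow the text, but their concrete values here are assumptions.

```python
def update_target_lists(L, Lp, matches, S=1, V_org=1, V1=3, V_min=0, V_max=5):
    """One frame of the weight bookkeeping in steps (5)-(7).

    L, Lp: dicts mapping target id -> weight for the current and previous
    frame; matches: set of ids present in both lists (the optimal match
    list). All numeric parameters are illustrative placeholders.
    """
    out = {}
    # Step (5): a matched target gains one weight step and leaves Lp.
    for t in matches:
        out[t] = Lp[t] + S
    # Step (6): targets seen only in the current frame become suspects.
    for t in L:
        if t not in matches:
            out[t] = V_org
    # Step (7): targets seen only in the previous frame.
    for t, w in Lp.items():
        if t in matches:
            continue
        if w > V1:
            # Confirmed old target: keep tracking it at full weight.
            out[t] = V_max
        elif w - S > V_min:
            # Unconfirmed: decay its weight but keep it alive.
            out[t] = w - S
        # else: weight exhausted, the target is dropped.
    return out
```

Running one update shows the three cases: a matched target gains weight, a new detection enters as a suspect, and an unmatched old target decays or disappears.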
Specifically, step (2) comprises the following steps:
(2.1) Filter the current video frame.
(2.2) Extract the edges of the current frame with the Canny operator.
(2.3) Compute the bounding rectangle of every edge.
(2.4) Find the 20 longest edges and divide them into 2 groups: the 10 longest and the remaining 10.
(2.5) Select 5 edges from each group, 10 in all, such that the bounding rectangle of the 10 selected edges has minimum area.
(2.6) The minimum bounding rectangle so found is the background region.
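Step (2.5) is a small combinatorial search: choose 5 edges from each group of 10 so that the joint bounding rectangle of the chosen 10 has minimum area. A brute-force sketch (C(10,5)² = 63 504 combinations, cheap per frame) might look like the following; representing each edge by its bounding box is an assumption of this sketch, not something the patent specifies.

```python
from itertools import combinations

def union_box(boxes):
    """Axis-aligned bounding rectangle of a set of (x1, y1, x2, y2) boxes."""
    x1 = min(b[0] for b in boxes); y1 = min(b[1] for b in boxes)
    x2 = max(b[2] for b in boxes); y2 = max(b[3] for b in boxes)
    return (x1, y1, x2, y2)

def area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def pick_background_box(group_a, group_b, k=5):
    """Pick k edges from each group so the joint bounding rectangle has
    minimum area (step 2.5); return that rectangle (step 2.6)."""
    best = None
    for ca in combinations(group_a, k):
        for cb in combinations(group_b, k):
            box = union_box(ca + cb)
            if best is None or area(box) < area(best):
                best = box
    return best
```

The search prefers edges that cluster together, so isolated long edges far from the main road structure are excluded from the background rectangle.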
In step (4), matching the targets of the current-frame list L against those of the previous-frame list Lp one by one, comparing every pair by the matching algorithm to form the target matching matrix, and then repeatedly extracting the best value to form the optimal match list, specifically means:
(4.1) Establish a matching matrix M whose number of columns is the number of entries in the current-frame list L and whose number of rows is the number of entries in the previous-frame list Lp, or vice versa.
(4.2) Initialize every entry of the matrix to a weight S that cannot occur as a real matching value.
(4.3) Match all targets in L against all targets in Lp one by one, taking the matching value V of any target Tl in L and any target Tp in Lp as the weight at the corresponding position in M; this gives the target matching matrix.
(4.4) Take from the matching matrix the best value that beats the threshold T1, extract its row and column numbers, regard the match as correct, and assign S to the other weights in that row and column.
(4.5) Repeat step (4.4) until no weight in the matrix beats the threshold T1; the correctly matched targets form the optimal match list.
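Steps (4.4)-(4.5) form a greedy assignment: repeatedly take the best remaining entry of the matching matrix and invalidate its row and column. A sketch in plain Python, assuming (as in the embodiment below) that larger values are better and that S = -1 marks an impossible entry:

```python
def greedy_matches(M, T1, S=-1.0):
    """Extract matches from a score matrix M (rows: previous-frame
    targets, columns: current-frame targets) as in steps (4.4)-(4.5):
    repeatedly take the best score above threshold T1 and disable its
    row and column by overwriting them with the impossible weight S."""
    M = [row[:] for row in M]  # work on a copy
    pairs = []
    while True:
        # Find the position of the current maximum score.
        r, c = max(((i, j) for i in range(len(M)) for j in range(len(M[0]))),
                   key=lambda rc: M[rc[0]][rc[1]])
        if M[r][c] <= T1:
            break                      # nothing left above the threshold
        pairs.append((r, c))
        for j in range(len(M[0])):     # invalidate the matched row
            M[r][j] = S
        for i in range(len(M)):        # invalidate the matched column
            M[i][c] = S
    return pairs
```

Greedy extraction is not globally optimal assignment (the Hungarian algorithm would be), but it matches the procedure the text describes and is cheap enough for real-time use.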
The matching value V of any target Tl in list L and any target Tp in list Lp in step (4.3) is computed as follows:
(4.3.1) Position comparison: if the distance between the targets exceeds the maximum spacing d defined in the appearance attributes, the weight is the set value V1; if the distance equals the minimum spacing defined in the appearance attributes, the weight is the set value V2; between the minimum and maximum spacing, the weight is assigned by linear interpolation.
(4.3.2) Area comparison: if the area ratio equals the minimum defined in the appearance attributes, the weight is the set value V3; if it exceeds the maximum defined in the appearance attributes, the weight is the set value V4; between the minimum and maximum, linear interpolation.
(4.3.3) Aspect-ratio comparison: if the aspect ratios are identical, the weight is the set value V5; if the ratio of aspect ratios exceeds the maximum defined in the appearance attributes, the weight is the set value V6; between 1 and the maximum, linear interpolation.
(4.3.4) Mean comparison: using the luminance mean, if the difference of means is 0, the weight is the set value V7; if it exceeds the maximum luminance-mean difference defined in the appearance attributes, the weight is the set value V8; between 0 and the maximum, linear interpolation.
(4.3.5) Variance comparison: using the luminance variance, if the difference of variances is 0, the weight is the set value V9; if it exceeds the maximum luminance-variance difference defined in the appearance attributes, the weight is the set value V10; between 0 and the maximum, linear interpolation.
(4.3.6) Combined weight: average the above weights to obtain the entries of the pairwise target matching matrix.
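Every per-attribute weight in (4.3.1)-(4.3.5) is a linear interpolation between two endpoint values over a bounded range, and (4.3.6) averages them. A generic sketch, using the concrete endpoints the embodiment gives later (1.5/0 for position, 1/0 with bounds 10, 50, and 20 for the others) as illustrative numbers; the aspect-ratio term is analogous and omitted here for brevity:

```python
def linear_weight(x, x_best, x_worst, w_best, w_worst):
    """Linearly interpolate a weight: x == x_best gives w_best,
    x == x_worst gives w_worst; values between are interpolated and
    values at or beyond x_worst are clamped to w_worst."""
    if (x - x_worst) * (x_best - x_worst) <= 0:
        return w_worst                 # at or beyond the worst endpoint
    t = (x - x_worst) / (x_best - x_worst)
    return w_worst + min(t, 1.0) * (w_best - w_worst)

def match_value(dist, d_max, area_ratio, mean_diff, var_diff):
    """Combined matching value (4.3.6): average of the per-attribute
    weights. Endpoint numbers are taken from the embodiment and are
    illustrative, not normative."""
    w = [
        linear_weight(dist, 0.0, d_max, 1.5, 0.0),       # position (4.3.1)
        linear_weight(area_ratio, 1.0, 10.0, 1.0, 0.0),  # area (4.3.2)
        linear_weight(mean_diff, 0.0, 50.0, 1.0, 0.0),   # luminance mean
        linear_weight(var_diff, 0.0, 20.0, 1.0, 0.0),    # luminance variance
    ]
    return sum(w) / len(w)
```

A perfectly coincident pair scores (1.5 + 1 + 1 + 1) / 4 = 1.125, and a pair at every worst endpoint scores 0, so a threshold T1 between the two separates plausible matches from spurious ones.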
Compared with the background art, the present invention has the following advantages:
1. For specific targets, the invention combines automatic detection and tracking in one algorithmic framework, achieving automatic discovery and tracking of the targets.
2. In part of the target-detection process the invention first detects the background and then detects targets within it, reducing the search volume.
3. The invention achieves real-time onboard detection and tracking of targets in the airborne image.
Description of the drawings
Fig. 1 is the work flow diagram of the embodiment of the present invention.
Fig. 2 is a schematic diagram of the normal configuration of the detection region and tracking region.
Fig. 3 is a schematic diagram of the tilted-camera case.
Fig. 4 is a schematic diagram of the case without a tracking region.
Specific embodiments
The workflow of the embodiment of the present invention is shown in Fig. 1. The detailed issues involved in the technical solution of the present invention are further described below with reference to Fig. 1. It should be pointed out that the described embodiment is intended only to facilitate understanding of the present invention and does not limit it in any way.
(1) Initialization.
(1.1) Define the detection-region and tracking-region rectangles of the video frame. Suppose the UAV speed is v1 and the target speed is v2; then the maximum distance a target can move in the picture is d = (v1 + v2) * t, where t is the sampling interval between video frames. Given the angular field of view of the video frame, d1 × d2, together with the UAV parameters, the position of the detection region in the image can be computed as shown in Fig. 2. When the camera tilts, the rectangular area of Fig. 3 can be used as the tracking region. In principle, when computing capacity suffices, the configured tracking region should be smaller than the region actually processed; if the above parameters are unknown or the onboard computing capacity is ample, the entire picture can be defined as the detection region, as shown in Fig. 4.
(1.2) Define the target appearance attributes. Their main purpose is to reduce false alarms. The implementation uses the target's minimum length, maximum length, minimum width, maximum width, minimum area, maximum area, minimum aspect ratio, maximum aspect ratio, maximum luminance variance, maximum luminance mean, etc. in the image. In this embodiment they are obtained from pre-trained targets, with every upper bound enlarged to 1.2 times its statistical value and every lower bound shrunk to 0.8 times.
(2) Detect the background region. The target tracked in this embodiment is a vehicle, so the background region is the road. The fast road-detection algorithm is as follows:
(2.1) Filter the image.
(2.2) Extract the edges of the image with the Canny operator.
(2.3) Compute the bounding rectangle of every edge.
(2.4) Find the 20 longest edges and divide them into 2 groups: the 10 longest and the remaining 10.
(2.5) Select 5 edges from each group, 10 in all, such that the bounding rectangle of the 10 selected edges has minimum area.
(2.6) The minimum bounding rectangle so found is defined as the road region.
(3) If the current video frame is the first frame, perform target detection over the whole frame; otherwise perform target detection in the intersection of the target detection region and the background region of the current frame. After detection, extract the statistical properties of each target, including shape attributes, feature points, color attributes, gray variance, background attributes, etc.; compare the attributes, and put targets whose shape attributes meet the requirements into the current-frame target list L.
(3.1) The target-detection algorithm uses the CascadeClassifier shipped with OpenCV 2.4.13 (a learning-based algorithm; samples must be trained in advance with the tools and documentation provided by OpenCV).
(3.2) Extract the target's features, including point features (SIFT points); the length, width, area, and aspect ratio of the bounding rectangle; and the maximum and minimum luminance together with the positions of the corresponding pixels.
(3.3) Compare these against the target appearance attribute statistics defined in (1.2); targets whose attributes clearly fail the statistical criteria are filtered out, and targets that meet the requirements are put into the current-frame target list L as correct targets.
(4) Obtain the previous-frame target list Lp and match the targets of the current-frame list L against those of Lp one by one: compare every pair according to the matching algorithm to form a target matching matrix, then repeatedly extract the best value from the matrix to form an optimal match list. After matching there are three cases: if target T is in the optimal match list, i.e. it exists in both L and Lp, go to step (5); if T exists in L but not in Lp, go to step (6); if T exists in Lp but not in L, go to step (7). Specifically:
(4.1) Establish the matching matrix: its number of columns is the number of entries in the current-frame list L and its number of rows the number of entries in the previous-frame list Lp, or vice versa. The initial value of every matrix entry is -1. Then begin the pairwise comparison; let c be the index of a target in L and r the index of a target in Lp.
(4.2) Position comparison: if the distance exceeds the d of (1.1), the weight is 0; if the distance is 0, the weight is 1.5; between 0 and d the weight is assigned linearly (linear assignment means assignment by a linear function, here the line through the endpoints (0, 1.5) and (d, 0)).
(4.3) Area comparison: if the area ratio is 1, the weight is 1; above 10 the weight is 0; between 1 and 10, linear assignment.
(4.4) Aspect-ratio comparison: if the aspect ratios are identical, the weight is 1; if the ratio of aspect ratios exceeds 10, the weight is 0; between 1 and 10, linear assignment.
(4.5) Mean comparison: using the luminance mean, if the difference of means is 0, the weight is 1; above 50 the weight is 0; between 0 and 50, linear assignment.
(4.6) Variance comparison: using the luminance variance, if the difference of variances is 0, the weight is 1; at 20 or above the weight is 0; between 0 and 20, linear assignment.
(4.7) Combined score: average the above weights to obtain the pairwise target matching matrix.
(4.8) Take from the matching matrix the maximum value exceeding the threshold T1, extract its row and column numbers, regard the match as correct, and assign -1 to the other values in that row and column.
(4.9) Repeat step (4.8) until no value in the matrix exceeds the threshold T1; the correctly matched targets form the optimal match list.
(5) Set the weight of target T to its weight in Lp plus a set weight step S, delete T from the previous-frame list Lp, and go to step (8).
(6) Mark target T as a suspected target, set its initial weight to Vorg, and go to step (8).
(7) Judge whether the weight of target T exceeds the set threshold V1. If so, track it with the target tracking algorithm to obtain its position in the current frame, set the weight of the tracked target to Vmax, add it to the current-frame list L, delete T from Lp, and go to step (8). Otherwise reduce the weight of T by the step S: if the reduced weight is greater than the specified threshold Vmin, add T to the current-frame list L and delete it from Lp; otherwise simply delete T from Lp. Go to step (8).
(8) Merge Lp into L, then find all targets in the current-frame list L whose weight exceeds the set threshold V2, compute their positions from the UAV's attitude, speed, and position, and output them over the wireless link to the ground station or to a weapon.
(8.1) Compute the coordinate transformation from the Earth coordinate system to the local geographic coordinate system.
Its transformation matrix R1 is: R1 = RxT(-(90°-B0))·RzT(90°+L0)
In this formula and those that follow, Rx(θ), Ry(θ), and Rz(θ) denote the coordinate rotation matrices for a rotation by angle θ about the X, Y, and Z axes respectively, and the superscript T denotes the transpose. B0 is the aircraft's current latitude and L0 its longitude.
(8.2) Compute the coordinate transformation from the local geographic coordinate system to the body coordinate system.
Its transformation matrix R2 is:
Here ψ is the heading angle, θ the pitch angle, and γ the roll angle.
(8.3) Compute the coordinate transformation from the body coordinate system to the platform coordinate system.
Its transformation matrix R3 is: R3 = Rz(ζ)·Ry(η)
Here η is the azimuth angle and ζ the platform roll angle.
(8.4) Compute the coordinate transformation from the platform coordinate system to the camera coordinate system.
Its transformation matrix R4 is: R4 = Rz(·)·Rx(·), a rotation about the Z axis followed by a rotation about the X axis through the camera angles.
(8.5) Compute the transformation from the camera coordinate system to the Earth coordinate system, obtained by composing the above transformation matrices.
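The elementary rotation matrices Rx(θ), Ry(θ), Rz(θ) used in (8.1)-(8.5) can be sketched as below, with R1 and R3 composed exactly as the text gives them. This is an illustrative sketch: sign conventions for coordinate (passive) rotations differ between references, so treat the signs here as an assumption, and angles are in radians.

```python
import numpy as np

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def R1(B0, L0):
    """Earth frame -> local geographic frame, as in (8.1):
    R1 = Rx^T(-(90 deg - B0)) . Rz^T(90 deg + L0)."""
    return Rx(-(np.pi / 2 - B0)).T @ Rz(np.pi / 2 + L0).T

def R3(eta, zeta):
    """Body frame -> platform frame, as in (8.3): R3 = Rz(zeta) . Ry(eta)."""
    return Rz(zeta) @ Ry(eta)
```

Each factor is orthonormal with determinant 1, so any composition (and its inverse, the transpose) is again a pure rotation, which is what step (8.5) relies on when chaining R1 through R4.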
(9) Take the current-frame list L as the new previous-frame list Lp, obtain the next video frame, treat it as the current frame, and go to step (2).
Although the present invention has been illustrated by its preferred embodiment, it should be understood that modifications may be made to it without departing from the essence of the invention as defined by the claims.

Claims (4)

  1. A real-time UAV video target detection and tracking method, characterized by comprising the following steps:
    (1) initialization: divide the video frame image into a detection region and a tracking region, and define the appearance attributes of the target; the tracking region is the region where the current video frame overlaps the previous video frame, and all other regions form the detection region; in the limiting case the tracking region area is 0;
    (2) perform background-region detection on the current video frame;
    (3) if the current video frame is the first frame, perform target detection over the whole frame; otherwise perform target detection in the intersection of the detection region and the background region of the current frame; after detection, extract the relevant features of each target and match them against the appearance attributes of the target; put targets whose attributes meet the requirements into the current-frame target list L;
    (4) obtain the previous-frame target list Lp and match the targets of the current-frame list L against those of Lp one by one: compare every pair according to the matching algorithm to form a target matching matrix, then repeatedly extract the best value from the matrix to form an optimal match list; after matching there are three cases: if target T is in the optimal match list, i.e. it exists in both L and Lp, go to step (5); if T exists in L but not in Lp, go to step (6); if T exists in Lp but not in L, go to step (7);
    (5) set the weight of target T to its weight in Lp plus a set weight step S, delete T from the previous-frame list Lp, and go to step (8);
    (6) mark target T as a suspected target, set its initial weight to Vorg, and go to step (8);
    (7) judge whether the weight of target T exceeds the set threshold V1; if so, track it with the target tracking algorithm to obtain its position in the current frame, set the weight of the tracked target to Vmax, add it to the current-frame list L, delete T from Lp, and go to step (8); otherwise reduce the weight of T by the step S: if the reduced weight is greater than the specified threshold Vmin, add T to the current-frame list L and delete it from Lp; otherwise simply delete T from Lp; go to step (8);
    (8) merge Lp into L, then find all targets in the current-frame list L whose weight exceeds the set threshold V2, compute their positions from the UAV's attitude, speed, and position, and output them over the wireless link to the ground station or to a weapon;
    (9) take the current-frame list L as the new previous-frame list Lp, obtain the next video frame, treat it as the current frame, and go to step (2).
  2. The real-time UAV video target detection and tracking method according to claim 1, characterized in that step (2) specifically comprises the following steps:
    (2.1) filter the current video frame;
    (2.2) extract the edges of the current frame with the Canny operator;
    (2.3) compute the bounding rectangle of every edge;
    (2.4) find the 20 longest edges and divide them into 2 groups: the 10 longest and the remaining 10;
    (2.5) select 5 edges from each group, 10 in all, such that the bounding rectangle of the 10 selected edges has minimum area;
    (2.6) the minimum bounding rectangle so found is the background region.
  3. The real-time UAV video target detection and tracking method according to claim 1, characterized in that in step (4), matching the targets of the current-frame list L against those of the previous-frame list Lp one by one, comparing every pair by the matching algorithm to form the target matching matrix, and then repeatedly extracting the best value to form the optimal match list, specifically comprises the following steps:
    (4.1) establish a matching matrix M whose number of columns is the number of entries in the current-frame list L and whose number of rows is the number of entries in the previous-frame list Lp, or vice versa;
    (4.2) initialize every entry of the matrix to a set weight S;
    (4.3) match all targets in L against all targets in Lp one by one, taking the matching value V of any target Tl in L and any target Tp in Lp as the weight at the corresponding position in M, giving the target matching matrix;
    (4.4) take from the matching matrix the best value that beats the threshold T1, extract its row and column numbers, regard the match as correct, and assign S to the other weights in that row and column;
    (4.5) repeat step (4.4) until no weight in the matrix beats the threshold T1; the correctly matched targets form the optimal match list.
  4. 4. a kind of real-time UAV Video object detecting and tracking method according to claim 3, it is characterised in that:Step (4.3) the either objective T in list LlWith list LpIn either objective TpMatching value V circular it is as follows:
    (4.3.1) position comparison: if the positional distance exceeds the maximum interval d defined in the target's appearance attributes, the weight is the set value V1; if the positional distance equals the minimum interval defined in the appearance attributes, the weight is the set value V2; if the positional distance lies between the minimum and maximum intervals, the weight is assigned by linear interpolation;
    (4.3.2) area comparison: if the area ratio equals the minimum area defined in the appearance attributes, the weight is the set value V3; if the area ratio exceeds the maximum area defined in the appearance attributes, the weight is the set value V4; if the area ratio lies between the minimum and maximum areas, the weight is assigned by linear interpolation;
    (4.3.3) aspect-ratio comparison: if the aspect ratios are identical, the weight is the set value V5; if the ratio of aspect ratios exceeds the maximum aspect ratio defined in the appearance attributes, the weight is the set value V6; if it lies between 1 and the maximum aspect ratio, the weight is assigned by linear interpolation;
    (4.3.4) mean comparison: using the luminance mean; if the mean difference is 0, the weight is the set value V7; if the mean difference exceeds the maximum luminance-mean difference defined in the appearance attributes, the weight is the set value V8; if the mean difference lies between 0 and that maximum, the weight is assigned by linear interpolation;
    (4.3.5) variance comparison: using the luminance variance; if the variance difference is 0, the weight is the set value V9; if the variance difference exceeds the maximum luminance variance defined in the appearance attributes, the weight is the set value V10; if the variance difference lies between 0 and that maximum, the weight is assigned by linear interpolation;
    (4.3.6) combined weight: averaging the above weights to obtain the pairwise-compared object matching matrix.
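The five per-feature weights of steps (4.3.1)-(4.3.5) share one pattern: one set value at the best end of a range, another at the worst end, and linear interpolation in between, then an average in (4.3.6). A minimal sketch of that pattern follows; the dictionary key names (d_min, a_max, etc.) and the choice of which set value anchors which end are assumptions for illustration, not taken from the patent.

```python
def feature_weight(diff, lo, hi, w_best, w_worst):
    """Map a feature difference onto a weight: w_best at `lo`, w_worst
    at `hi`, linear in between -- the 'linear assignment' of the claim.
    """
    if diff <= lo:
        return w_best
    if diff >= hi:
        return w_worst
    t = (diff - lo) / (hi - lo)
    return w_best + t * (w_worst - w_best)

def matching_value(dist, area_ratio, aspect_ratio, mean_diff, var_diff, attrs):
    """Average the five feature weights as in step (4.3.6).

    `attrs` bundles the appearance-attribute bounds and the set values
    V1..V10; all key names here are hypothetical.
    """
    weights = [
        feature_weight(dist, attrs["d_min"], attrs["d_max"], attrs["V2"], attrs["V1"]),
        feature_weight(area_ratio, attrs["a_min"], attrs["a_max"], attrs["V3"], attrs["V4"]),
        feature_weight(aspect_ratio, 1.0, attrs["r_max"], attrs["V5"], attrs["V6"]),
        feature_weight(mean_diff, 0.0, attrs["m_max"], attrs["V7"], attrs["V8"]),
        feature_weight(var_diff, 0.0, attrs["s_max"], attrs["V9"], attrs["V10"]),
    ]
    return sum(weights) / len(weights)
```

Averaging equally weighted features is the simplest reading of (4.3.6); in practice the set values V1..V10 control how strongly each feature can veto a match.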
CN201711415848.0A 2017-12-25 2017-12-25 Real-time unmanned aerial vehicle video target detection and tracking method Active CN108108697B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711415848.0A CN108108697B (en) 2017-12-25 2017-12-25 Real-time unmanned aerial vehicle video target detection and tracking method

Publications (2)

Publication Number Publication Date
CN108108697A true CN108108697A (en) 2018-06-01
CN108108697B CN108108697B (en) 2020-05-19

Family

ID=62212615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711415848.0A Active CN108108697B (en) 2017-12-25 2017-12-25 Real-time unmanned aerial vehicle video target detection and tracking method

Country Status (1)

Country Link
CN (1) CN108108697B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060182368A1 (en) * 2005-01-21 2006-08-17 Changick Kim Efficient and robust algorithm for video sequence matching
CN101408983A (en) * 2008-10-29 2009-04-15 南京邮电大学 Multi-object tracking method based on particle filtering and movable contour model
CN101714252A (en) * 2009-11-26 2010-05-26 上海电机学院 Method for extracting road in SAR image
CN101807303A (en) * 2010-03-05 2010-08-18 北京智安邦科技有限公司 Tracking device based on multiple-target mean shift
CN102316307A (en) * 2011-08-22 2012-01-11 安防科技(中国)有限公司 Road traffic video detection method and apparatus thereof
CN103500322A (en) * 2013-09-10 2014-01-08 北京航空航天大学 Automatic lane line identification method based on low-altitude aerial images
CN104657978A (en) * 2014-12-24 2015-05-27 福州大学 Road extracting method based on shape characteristics of roads of remote sensing images
CN105069407A (en) * 2015-07-23 2015-11-18 电子科技大学 Video-based traffic flow acquisition method
CN105261035A (en) * 2015-09-15 2016-01-20 杭州中威电子股份有限公司 Method and device for tracking moving objects on highway
CN105353772A (en) * 2015-11-16 2016-02-24 中国航天时代电子公司 Visual servo control method for unmanned aerial vehicle maneuvering target locating and tracking
CN105549614A (en) * 2015-12-17 2016-05-04 北京猎鹰无人机科技有限公司 Target tracking method of unmanned plane
CN105956527A (en) * 2016-04-22 2016-09-21 百度在线网络技术(北京)有限公司 Method and device for evaluating barrier detection result of driverless vehicle
CN106097388A (en) * 2016-06-07 2016-11-09 大连理工大学 In video frequency object tracking, target prodiction, searching scope adaptive adjust and the method for Dual Matching fusion
CN106228577A (en) * 2016-07-28 2016-12-14 西华大学 A kind of dynamic background modeling method and device, foreground detection method and device
CN106981073A (en) * 2017-03-31 2017-07-25 中南大学 A kind of ground moving object method for real time tracking and system based on unmanned plane

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
EMILLO MAGGIO 等: "particle PHD filtering for multi-target visual tracking", 《2007 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING-ICASSP"07》 *
李红波 等: "基于mean-shift和粒子滤波的两步多目标跟踪方法", 《重庆邮电大学学报(自然科学版)》 *
谭媛: "遥感图像道路边缘检测与路面提取方法研究", 《中国优秀硕士学位论文全文数据库_信息科技辑》 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118523A (en) * 2018-09-20 2019-01-01 电子科技大学 A kind of tracking image target method based on YOLO
CN109118523B (en) * 2018-09-20 2022-04-22 电子科技大学 Image target tracking method based on YOLO
CN110163889A (en) * 2018-10-15 2019-08-23 腾讯科技(深圳)有限公司 Method for tracking target, target tracker, target following equipment
US12026863B2 (en) 2018-10-15 2024-07-02 Huawei Technologies Co., Ltd. Image processing method and apparatus, and device
CN112840376A (en) * 2018-10-15 2021-05-25 华为技术有限公司 Image processing method, device and equipment
CN109409283B (en) * 2018-10-24 2022-04-05 深圳市锦润防务科技有限公司 Method, system and storage medium for tracking and monitoring sea surface ship
CN109409283A (en) * 2018-10-24 2019-03-01 深圳市锦润防务科技有限公司 A kind of method, system and the storage medium of surface vessel tracking and monitoring
CN110503663A (en) * 2019-07-22 2019-11-26 电子科技大学 A kind of random multi-target automatic detection tracking based on pumping frame detection
CN110503663B (en) * 2019-07-22 2022-10-14 电子科技大学 Random multi-target automatic detection tracking method based on frame extraction detection
CN110826497A (en) * 2019-11-07 2020-02-21 厦门市美亚柏科信息股份有限公司 Vehicle weight removing method and device based on minimum distance method and storage medium
CN110826497B (en) * 2019-11-07 2022-12-02 厦门市美亚柏科信息股份有限公司 Vehicle weight removing method and device based on minimum distance method and storage medium
CN111311640B (en) * 2020-02-21 2022-11-01 中国电子科技集团公司第五十四研究所 Unmanned aerial vehicle identification and tracking method based on motion estimation
CN111311640A (en) * 2020-02-21 2020-06-19 中国电子科技集团公司第五十四研究所 Unmanned aerial vehicle identification and tracking method based on motion estimation
CN111361570B (en) * 2020-03-09 2021-06-18 福建汉特云智能科技有限公司 Multi-target tracking reverse verification method and storage medium
CN111361570A (en) * 2020-03-09 2020-07-03 福建汉特云智能科技有限公司 Multi-target tracking reverse verification method and storage medium
CN113096165B (en) * 2021-04-16 2022-02-18 无锡物联网创新中心有限公司 Target object positioning method and device
CN113096165A (en) * 2021-04-16 2021-07-09 无锡物联网创新中心有限公司 Target object positioning method and device

Also Published As

Publication number Publication date
CN108108697B (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN108108697A (en) A kind of real-time UAV Video object detecting and tracking method
CN107235044B (en) A kind of restoring method realized based on more sensing datas to road traffic scene and driver driving behavior
Caraffi et al. Off-road path and obstacle detection using decision networks and stereo vision
CN102682292B (en) Method based on monocular vision for detecting and roughly positioning edge of road
CN104820998B (en) A kind of human testing based on unmanned motor platform and tracking and device
US8446468B1 (en) Moving object detection using a mobile infrared camera
CN103778645B (en) Circular target real-time tracking method based on images
CN104881650A (en) Vehicle tracking method based on unmanned aerial vehicle (UAV) dynamic platform
CN105550692B (en) The homing vector landing concept of unmanned plane based on marker color and contour detecting
CN105373135A (en) Method and system for guiding airplane docking and identifying airplane type based on machine vision
CN105225251B (en) Over the horizon movement overseas target based on machine vision quickly identifies and positioner and method
KR101261409B1 (en) System for recognizing road markings of image
CN108198417B (en) A kind of road cruising inspection system based on unmanned plane
CN112488061B (en) Multi-aircraft detection and tracking method combined with ADS-B information
CN111968128A (en) Unmanned aerial vehicle visual attitude and position resolving method based on image markers
CN105718872A (en) Auxiliary method and system for rapid positioning of two-side lanes and detection of deflection angle of vehicle
CN109492525B (en) Method for measuring engineering parameters of base station antenna
CN104809433A (en) Zebra stripe detection method based on maximum stable region and random sampling
CN111598952A (en) Multi-scale cooperative target design and online detection and identification method and system
CN109165602A (en) A kind of black smoke vehicle detection method based on video analysis
CN114495068B (en) Pavement health detection method based on human-computer interaction and deep learning
CN110472628A (en) A kind of improvement Faster R-CNN network detection floating material method based on video features
CN103455815A (en) Self-adaptive license plate character segmentation method in complex scene
CN107506753B (en) Multi-vehicle tracking method for dynamic video monitoring
CN110517291A (en) A kind of road vehicle tracking based on multiple feature spaces fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant