CN106981073B - Real-time tracking method and system for ground moving objects based on an unmanned aerial vehicle (UAV) - Google Patents

Real-time tracking method and system for ground moving objects based on a UAV

Info

Publication number
CN106981073B
CN106981073B (application CN201710206676.XA)
Authority
CN
China
Prior art keywords
image
target
pixel
UAV
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710206676.XA
Other languages
Chinese (zh)
Other versions
CN106981073A (en)
Inventor
谭冠政
李波
刘西亚
陈佳庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201710206676.XA priority Critical patent/CN106981073B/en
Publication of CN106981073A publication Critical patent/CN106981073A/en
Application granted granted Critical
Publication of CN106981073B publication Critical patent/CN106981073B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30181 Earth observation
    • G06T 2207/30232 Surveillance

Abstract

The invention discloses a real-time tracking method and system for ground moving objects based on a UAV. The object detection and recognition module of the ground control station processes the image sequence returned by the onboard camera and obtains the size and center coordinates of the target rectangle on the ground station display screen. The target tracking module is then started and tracks the target using an algorithm fusion strategy: if tracking is valid, the target positioning result is output to the tracking command generation module; if the target cannot be located, the target search module is started, which finds the target and outputs its positioning result to the tracking command generation module. To keep the target image at the center of the ground station display screen, the tracking command generation module generates UAV position and attitude adjustment commands and uploads them through the radio transmission equipment to the flight control computer, which adjusts the UAV pose in real time. The matching of the invention is efficient and easy to implement; it performs effective target recognition and avoids the influence of background noise.

Description

Real-time tracking method and system for ground moving objects based on a UAV
Technical field
The invention belongs to the fields of UAV navigation and computer vision, and in particular relates to a method for automatically detecting and tracking a target using a UAV.
Background technique
UAVs offer high maneuverability, high-resolution imagery, good concealment, and flexible operation, which gives them a great advantage over traditional fixed cameras in target reconnaissance and tracking: their monitoring range is much larger. They are mainly applied to round-the-clock aerial reconnaissance, traffic monitoring, military mapping, and similar fields. Tracking and analyzing ground moving objects with a UAV-borne video sensor therefore has great practical significance in both civilian and military applications.
Most video surveillance systems use a static camera to monitor a region of special interest. In this case the background is static and only the foreground target moves, so target detection only needs background differencing to obtain good results. In many cases, however, such as object detection and tracking with a camera carried by a UAV, the background of the captured image sequence is constantly changing and is not fixed, which makes detection and tracking of the target unusually difficult.
Second, tracking a single target does not mean there is only a single moving object in the UAV's field of view; when multiple moving objects appear in the scene they interfere with the detection and tracking of the target of real interest, so effective target recognition cannot be carried out. Background noise is also present: the influence of shadows or illumination, for example, can leave the extracted target incomplete or with holes at its center. In these cases the detection and recognition of the target becomes even more difficult.
The terms used in the present invention are explained as follows:
UAV: an unmanned aircraft operated by a radio remote-control device and its own pre-programmed control device, including unmanned fixed-wing aircraft, unmanned helicopters, and multi-rotor UAVs.
Radio transmission equipment: communication equipment using the MAVLink protocol; the communication band is generally 2.4 GHz.
Shi-Tomasi corner: an image feature point representing the local features of an image; it is highly robust to brightness changes, blurring, rotation, and viewpoint changes.
FRI: the neighborhood image centered on a corner point; in the present invention a 30 × 30 square region is taken.
Bhattacharyya coefficient: a value measuring the degree of similarity between the target model and a candidate region; the larger the value, the more similar the two regions.
Summary of the invention
The present invention aims to provide a real-time tracking method and system for ground moving objects based on a UAV, solving the problem that target detection and recognition are difficult in the prior art.
To solve the above technical problem, the technical scheme adopted by the invention is a real-time tracking method for ground moving objects based on a UAV, comprising the following steps:
1) the UAV patrols along a predetermined flight path and transmits the captured image sequence to the ground control station, which detects the target of interest in the UAV's field of view;
2) the two-dimensional image rectangle size and center position information of the above target of interest are extracted;
3) using the rectangle size and center position information, the outputs of the mean shift algorithm and the Kalman filtering algorithm are fused, and the final target positioning result is output in the form of a data weighting.
After step 3), the UAV flight mode is adjusted according to the target positioning result so that the moving target lies in the central area of the ground station display screen.
The specific implementation of step 2) includes:
1) extracting the Shi-Tomasi corner sets of two adjacent frames of the image sequence captured by the UAV;
2) constructing a synthesis base descriptor for each corner set of the two frames;
3) performing feature matching on the Shi-Tomasi corner sets using the synthesis base descriptors to obtain the corner matching pairs of the two adjacent frames;
4) from the corner matching pairs obtained in step 3), estimating the background motion transformation matrix with the RANSAC method and performing image background motion compensation;
5) applying the frame difference operation to the motion-compensated adjacent frames to obtain a frame difference image, and binarizing the frame difference image;
6) applying a morphological filtering operation to the frame difference image, then separating and extracting the target information to obtain the target rectangle size and center position information.
The specific process for generating the synthesis base descriptors of all corners of two adjacent frames includes:
1) binarizing each feature point neighborhood image FRI in the two adjacent frames: the average gray value of the FRI is computed, and a pixel of the FRI is set to 1 if its value is greater than the average gray value, and to 0 otherwise;
2) dividing every 30 × 30 feature point neighborhood image FRI of the two adjacent frames into 6 × 6 subregions of size 5 × 5; a synthesis basic image is a 5 × 5 square composed of black and white elements, in which the number of black pixels is half the number of pixels of an FRI subregion; the number of synthesis basic images is M = ⌈K ln(N/K)⌉, where N is the number of pixels of an FRI subregion and K is the number of black pixels in a synthesis basic image;
3) for any feature point neighborhood image FRI of step 2), comparing all its subregions with the synthesis basic image set in order from left to right and top to bottom; each subregion generates a 9-dimensional vector, and the 9-dimensional vectors of the 36 subregions are combined to form a 324-dimensional synthesis base descriptor.
The 9-dimensional vector of one subregion of the feature point neighborhood image FRI is generated as follows: the comparison value between a subregion and one synthesis basic image of the set is the number of pixels at which both are black; the synthesis basic images are compared in order from left to right and top to bottom; following this comparison rule and order, the subregion is compared one by one with all synthesis basic images in the set, yielding 9 integer values that form the 9-dimensional vector.
The specific steps of target information separation and extraction include:
a) traversing each filtered frame difference image in order from top to bottom and left to right;
b) if a pixel satisfies both conditions, namely its binarized value is 1 and it has not been numbered, assigning a new number to the pixel;
c) traversing the eight-neighborhood of the newly numbered pixel; every pixel in the eight-neighborhood that satisfies the condition of step b) is given the same number; for pixels of the eight-neighborhood that do not satisfy the condition, returning to step b);
d) the operation ends when all pixels with value 1 in the frame difference image have been traversed and numbered.
The rectangle is determined as follows: after each filtered frame difference image has been scanned, every pixel with value 1 carries a number; pixels with the same number belong to the same object, and connected together they constitute a moving object. Supposing there are m moving objects, for the first moving object the rectangle is obtained as follows: traverse its labeled pixels from the first to the last, recording the minimum and maximum of the x and y coordinates, denoted x_min, y_min, x_max, y_max; the rectangle is then drawn with the points (x_min, y_min) and (x_max, y_max) as its diagonal corners.
The present invention also provides a system for real-time tracking of ground moving objects, comprising:
a UAV, which patrols along a predetermined flight path and transmits the captured image sequence to the ground control station;
radio transmission equipment, which provides the communication channel for data transmission between the UAV and the ground control station;
a ground control station, which detects the target of interest in the UAV's field of view, extracts the two-dimensional image rectangle size and center position information of the target, and, using this information, fuses the outputs of the mean shift algorithm and the Kalman filtering algorithm to produce the final target positioning result in the form of a data weighting.
Correspondingly, the system further includes a tracking command generation module, which adjusts the UAV flight mode according to the target positioning result so that the moving target lies in the central area of the ground station display screen.
The ground control station includes:
a detection and recognition module, which detects the target of interest in the UAV's field of view and extracts its two-dimensional image rectangle size and center position information;
a target tracking module, which uses the rectangle size and center position information to fuse the outputs of the mean shift algorithm and the Kalman filtering algorithm and outputs the final target positioning result in the form of a data weighting;
a target search module, which relocates the target with a sequential search method when the tracked target is lost;
a tracking command generation module, which generates the corresponding tracking command according to the imaging region of the tracked target on the ground station display screen, so that the target stays at the center of the screen.
Compared with the prior art, the beneficial effects of the present invention are as follows: the detection and tracking of the target require no manual intervention at any stage; the synthesis base descriptors used for feature point matching are robust to scale, rotation, illumination, and blurring; matching efficiency is high; and generating a synthesis base descriptor involves no floating-point operations, which makes it friendly to image-processing hardware platforms and easy to implement. The method effectively recognizes the target and avoids the influence of background noise.
Description of the drawings
Fig. 1 is the structural composition diagram of the UAV system;
Fig. 2 is the flow chart of the background motion model parameter estimation method of the UAV system based on synthesis base descriptors;
Fig. 3 is the target information separation and extraction diagram;
Fig. 4(a) is the synthesis basic image set; Fig. 4(b) is the binarized FRI; Fig. 4(c) is the comparison value between the first subregion of the FRI and the first synthesis basic image; Fig. 4(d) is the comparison value between the first subregion of the FRI and the second synthesis basic image;
Fig. 5 is the moving target separation and information extraction flow chart;
Fig. 6 is the algorithm fusion and search strategy flow chart of the UAV system;
Fig. 7 is the hierarchical search sequence strategy flow chart of the UAV system;
Fig. 8 is the schematic diagram of the region partition of the ground station display screen;
Fig. 9 shows two arbitrary adjacent frames of the UAV video sequence;
Fig. 10 is the corner matching image based on synthesis base descriptors;
Fig. 11 is the frame difference detection result image;
Fig. 12 is the target detection image after morphological filtering;
Fig. 13 is the target separation and information extraction image.
Specific embodiment
Fig. 1 shows the composition of the UAV system, which includes a UAV, a camera, radio transmission equipment, and a ground control station. The UAV serves as the carrier of the camera and expands the camera's coverage. The radio transmission equipment provides the communication means for sending down the image sequence acquired by the UAV and for uploading flight control commands from the ground station. The ground control station includes four modules: the object detection and recognition module, the target tracking module, the target search module, and the tracking command generation module.
The specific implementation of UAV system tracking is as follows:
1. The UAV patrols the flight range specified by the user along the pre-planned flight track; the image sequence captured by the camera is sent down through the radio transmission equipment to the object detection and recognition module of the ground control station, which obtains the imaging position and rectangle size of the target on the ground station display screen. Two arbitrary adjacent frames of the image sequence captured by the UAV are shown in Fig. 9.
2. The object detection and recognition module is started; it detects the target of interest in the UAV's field of view and extracts the rectangle size and center position of the target on the display screen. The module works in two stages: background motion model parameter estimation based on synthesis base descriptors, followed by target information separation and extraction. The first stage is implemented as follows; Fig. 2 is the flow chart of the background motion model parameter estimation method based on synthesis base descriptors:
1) Extract the feature points of the start frame. Shi-Tomasi corners are used because of their high extraction efficiency. Let the start frame be X and define an autocorrelation function F at pixel s as

F(δs) = Σ_W [X(s + δs) − X(s)]²  (1)

where δs denotes the displacement and W denotes the window centered on s.

Applying a first-order Taylor expansion to X(s + δs), the above formula can be rewritten as

F(δs) ≈ δsᵀ Λ δs, with Λ = Σ_W (∇X)(∇X)ᵀ  (2)

where ∇X is the first-order derivative of the image and Λ is the autocorrelation matrix. The feature point extraction criterion is that the minimum eigenvalue of Λ is greater than a constant, that is:

Q(s) = min{λ1, λ2} > K  (3)

where K is an empirical value, generally between 0.05 and 0.5.
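As a concrete illustration of this step, the sketch below uses OpenCV's built-in Shi-Tomasi detector (cv2.goodFeaturesToTrack), which implements the minimum-eigenvalue criterion of formula (3); the parameter values are illustrative assumptions, not values fixed by the patent:

    import cv2
    import numpy as np

    def detect_corners(gray, max_corners=200):
        # Shi-Tomasi corners; qualityLevel plays the role of the empirical
        # threshold K of formula (3), expressed relative to the best corner.
        pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                      qualityLevel=0.05, minDistance=10)
        return pts.reshape(-1, 2) if pts is not None else np.empty((0, 2))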
2) Binarize the corner neighborhood. The 30 × 30 square neighborhood of a feature point is generally a reasonable choice, balancing complexity and accuracy. To generate the descriptor, the FRI is binarized: the average gray value of the feature point neighborhood is computed as

g = (1/p) Σ_{(x,y)} I(x, y)  (4)

where p is the number of pixels of the FRI, here 900, and I(x, y) is the gray value of a pixel in the FRI.

Then every pixel of the feature point neighborhood whose value is greater than g is set to 1, and every other pixel is set to 0. This process yields the binarized FRI, which retains the structural information in the neighborhood of the key point and lays the foundation for generating the feature point descriptor in the next step.
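A minimal sketch of the FRI binarization, assuming the corner lies at least 15 pixels from the image border (border handling is omitted):

    import numpy as np

    def binarize_fri(gray, corner, half=15):
        # Cut the 30 x 30 neighborhood (FRI) centered on the corner and
        # threshold it at its own average gray value g, formula (4).
        x, y = int(corner[0]), int(corner[1])
        fri = gray[y - half:y + half, x - half:x + half]
        g = fri.mean()                      # average gray value of the FRI
        return (fri > g).astype(np.uint8)   # 1 where pixel > g, else 0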
3) Construct the corner descriptor. The 30 × 30 FRI is first divided into 6 × 6 subregions of size 5 × 5. So that a subregion of the FRI can be compared element by element with a synthesis basic image, each synthesis basic image has the same size as an FRI subregion. A synthesis basic image is a square region composed of black and white elements; the number of synthesis basic images is determined by the composite basis function

M = ⌈K ln(N / K)⌉  (5)

where N is the number of pixels of a subregion, K is the number of black pixels in a synthesis basic image, and M is the number of synthesis basic images (SBIs), which together uniquely characterize a feature point.

To improve the real-time performance of the algorithm, the fewer synthesis basic images the better; when K is half of N the function attains its minimum, and a fractional result is rounded up. For example, if the 30 × 30 FRI is divided into 6 × 6 subregions of 5 × 5, then N is 25, K is 13, and the number of synthesis basic images is 13 ln(25/13), i.e. 9; if the 30 × 30 FRI is divided into 2 × 2 subregions of 15 × 15, then N is 225, K is 113, and the number of synthesis basic images is 113 ln(225/113), i.e. 78. The algorithm is illustrated with the 5 × 5 subregion example of Fig. 4(a) to Fig. 4(d):
Fig. 4(a) is the synthesis basic image set, composed of 9 synthesis basic images. In each synthesis basic image 13 pixels are black and the rest are white; the 13 black points are distributed over the 5 × 5 region in a pseudo-random fashion, but the distribution pattern of every synthesis basic image must be different. Fig. 4(b) is the FRI after binarization, divided into 36 subregions of 5 × 5. In order from left to right and top to bottom, the first subregion is compared with each synthesis basic image; the comparison rule is to count the black points that coincide at the same pixel positions. Each subregion thus generates a 9-dimensional vector, which is the descriptor of the subregion, and each component lies in the range [0, 13].
Following the same comparison order, the descriptors of the remaining 35 subregions are obtained. Finally the descriptors of the 36 subregions are combined to form a 324-dimensional descriptor. Fig. 4(c) shows the comparison of the first subregion with the first synthesis basic image, giving the value 6; Fig. 4(d) shows the comparison of the first subregion with the second synthesis basic image, giving the value 7.
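The descriptor construction can be sketched as follows; the pseudo-random placement of the 13 black points (seeded so that both frames use the identical SBI set) is an implementation assumption:

    import numpy as np

    def make_sbi_set(rng, n_images=9, size=5, n_black=13):
        # Pseudo-randomly place 13 black elements in each 5 x 5 synthesis
        # basic image; every SBI pattern must be distinct.
        sbis, seen = [], set()
        while len(sbis) < n_images:
            idx = tuple(sorted(rng.choice(size * size, n_black, replace=False)))
            if idx in seen:
                continue
            seen.add(idx)
            img = np.zeros(size * size, np.uint8)
            img[list(idx)] = 1              # 1 marks a black element
            sbis.append(img.reshape(size, size))
        return sbis

    def describe_fri(fri_bin, sbis):
        # Compare each 5 x 5 subregion (left to right, top to bottom) with
        # every SBI; a comparison value counts coinciding black pixels.
        desc = []
        for r in range(0, 30, 5):
            for c in range(0, 30, 5):
                sub = fri_bin[r:r + 5, c:c + 5]
                desc.extend(int(((sub == 1) & (s == 1)).sum()) for s in sbis)
        return np.array(desc)               # 36 subregions x 9 SBIs = 324 dims

    rng = np.random.default_rng(7)          # one shared SBI set for both frames
    SBIS = make_sbi_set(rng)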
4) Corner matching based on synthesis base descriptors. A successful match between two feature points means that their "distance" is the shortest. The most common measures of this distance are the Euclidean distance, the Mahalanobis distance, and so on, but their computational complexity is unacceptable for high-dimensional vectors. For this reason the L1 norm is used to measure the feature point "distance". To describe the matching of the feature point sets, suppose the current frame of the video sequence has m feature points and the next frame has n feature points; the L1 norm distance between feature points of the two frames is

d(x_i, y_j) = Σ_{k=1}^{w} |x_i(k) − y_j(k)|  (6)

where x_i denotes the i-th synthesis base descriptor of the current frame, y_j denotes the j-th synthesis base descriptor of the next frame, and w is the dimension of the descriptor, which contains 324 components.

The computation is illustrated in Fig. 5, where each row represents the descriptor of one corner; using the L1 norm, the distance between corner 1 and corner 2 in Fig. 5 is 3. The previous step gives the distance between any pair of feature points in the two images. To reduce the probability of mismatches, a cross-matching method is used: compute the L1 norm distances between the i-th corner of the current frame and all corners of the next frame, obtaining n distance values, and select the minimum as the candidate match, denoted y_j; then, in the same way, compute the distances between the j-th corner of the next frame and all corners of the previous frame, obtaining m distance values, and label the index of the minimum as t. If t = i, then x_i and y_j are judged to be a correctly matched pair of feature points; otherwise the match is considered wrong. Fig. 10 shows the corner matching of aerial images obtained with the cross-matching method.
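A sketch of the L1-norm cross-matching; desc_a and desc_b are the stacked 324-dimensional descriptors of the current and next frame:

    import numpy as np

    def cross_match(desc_a, desc_b):
        # Pairwise L1 distances, formula (6): d[i, j] = sum_k |x_i(k) - y_j(k)|.
        d = np.abs(desc_a[:, None, :].astype(np.int32)
                   - desc_b[None, :, :].astype(np.int32)).sum(axis=2)
        fwd = d.argmin(axis=1)              # best candidate in the next frame
        bwd = d.argmin(axis=0)              # best candidate in the current frame
        # Keep (i, j) only when both searches agree (the cross check t = i).
        return [(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i]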
5) Exclude the corners on moving objects (outliers) with the RANSAC algorithm, then estimate the background transformation matrix. To estimate the motion parameters of the background, the corner matching pairs should come from background corners as far as possible, so the RANSAC algorithm is used to exclude the interference of matching pairs on moving targets, which makes the computed background motion compensation parameters more accurate. Since the image transformation used is the eight-parameter projective model, at least four matching pairs are needed to solve the background transformation matrix. The eight-parameter projective model is

[x′, y′, 1]ᵀ ∼ H [x, y, 1]ᵀ, with H = [[h1, h2, h3], [h4, h5, h6], [h7, h8, 1]]  (7)
The procedure by which RANSAC computes the background motion compensation matrix is as follows:
a) First define the set of all matching point pairs of the two images as the total sample D; arbitrarily choose four matching pairs as one sample J_i, and compute the background parameter model H(J_i) from the sample.
b) For the model H(J_i) computed in the previous step, determine the set of points of the total sample D whose geometric distance to H(J_i) is smaller than the threshold d; denote it S(H(J_i)) and call it the consensus set of H(J_i).
c) Compute another consensus set S(H(J_k)) by repeating steps a) and b). If |S(H(J_i))| > |S(H(J_k))|, retain the consensus set S(H(J_i)); otherwise retain the consensus set S(H(J_k)).
d) After K random samplings, take the matching pairs of the largest consensus set as the correct matches, that is, the background corner group.
e) From the background corner group so determined, compute the background motion transformation matrix H by the least squares method.
The parameters d and K are determined by formulas (8) and (9), respectively:

d = ‖x′_i − H x_i‖  (8)

K = ln(1 − p) / ln(1 − w⁴)  (9)

where x_i is a data point of the total sample and x′_i its match, w is the probability that a sampled point is a good sample (an inlier), and p is the required probability that at least one of the K samples contains only inliers.
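In practice the whole RANSAC loop of steps a) to e) is available as a single OpenCV call; the reprojection threshold of 3 pixels below is an illustrative assumption:

    import cv2
    import numpy as np

    def estimate_background_motion(pts_a, pts_b, matches, d=3.0):
        # Fit the eight-parameter projective model of formula (7) by RANSAC;
        # matches on moving objects are rejected as outliers, formulas (8)-(9).
        src = np.float32([pts_a[i] for i, _ in matches])
        dst = np.float32([pts_b[j] for _, j in matches])
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, d)
        return H, mask.ravel().astype(bool)  # matrix and inlier flags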
The second stage of object detection and recognition, target information separation and extraction, proceeds as shown in Fig. 3; the specific implementation is as follows:
1) Compute the frame difference image. Because there are multiple moving objects in the UAV's field of view, a double frame difference method over three frames is used to detect all moving objects. The computation is

E_{t−1} = |X_{t−1} − H1·X_{t−2}| ∧ |X_{t−1} − H2·X_t|  (10)

where X_{t−2}, X_{t−1}, X_t are any three consecutive frames of the video sequence; H1 and H2 are the background transformation matrices that align X_{t−2} and X_t with X_{t−1}; ∧ denotes the pixel-wise intersection of the two difference images; and E_{t−1} is the frame difference image. Fig. 11 shows an aerial UAV image processed by this step.
2) Binarize the frame difference image: the image obtained in step 1) is binarized with a suitable threshold.
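A sketch of the compensated double frame difference of formula (10) and its binarization; H1 and H2 are assumed to map X_{t−2} and X_t into the coordinate system of X_{t−1}, and the threshold of 25 gray levels is an illustrative assumption:

    import cv2

    def double_frame_difference(f0, f1, f2, H1, H2, thresh=25):
        # Warp the outer grayscale frames onto the middle frame, then keep
        # only pixels that differ in both compensated difference images.
        h, w = f1.shape
        w0 = cv2.warpPerspective(f0, H1, (w, h))
        w2 = cv2.warpPerspective(f2, H2, (w, h))
        _, b1 = cv2.threshold(cv2.absdiff(f1, w0), thresh, 1, cv2.THRESH_BINARY)
        _, b2 = cv2.threshold(cv2.absdiff(f1, w2), thresh, 1, cv2.THRESH_BINARY)
        return cv2.bitwise_and(b1, b2)      # binary frame difference image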
3) Morphological filtering. The binary image obtained in step 2) is filtered with morphological operations, which makes the segmentation of each moving object clearer. The morphological operation process is as follows:
a) Apply image erosion to reject isolated noise points.
b) Then apply image dilation, which expands the edges of the target, fills holes, and makes the contour smoother.
After the mathematical morphology processing, the detection result is fuller and the target region is clearer, which benefits the segmentation and information extraction of each moving object. Fig. 12 shows the aerial image after morphological filtering.
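The erosion-then-dilation filtering can be sketched as follows; the 3 x 3 structuring element and iteration counts are illustrative assumptions:

    import cv2
    import numpy as np

    KERNEL = np.ones((3, 3), np.uint8)

    def clean_mask(mask):
        # Erode first to reject isolated noise points, then dilate to fill
        # holes and smooth the object contours.
        mask = cv2.erode(mask, KERNEL, iterations=1)
        return cv2.dilate(mask, KERNEL, iterations=2)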
4) Separation and extraction of target information. To separate the multiple moving objects of each frame, the moving objects must first be connected and associated: each moving object of every frame is labeled with a different number, and finally the regions with identical labels are selected. To achieve this, the sequential labeling method is commonly used; it completes the labeling and separation of moving objects, usually scanning the pixels of each frame from top to bottom and left to right with a 3 × 3 pixel template. The specific steps are as follows:
a) Traverse the pixels of each frame from top to bottom and left to right.
b) If a pixel satisfies both conditions, namely its binarized value is 1 and it has not been numbered, assign it a new number.
c) Traverse the eight-neighborhood of the pixel found in b); neighbors satisfying the condition of b) are given the same number.
d) When the condition in c) is not satisfied, repeat operation b).
e) The operation ends when all pixels with value 1 in the image have been traversed and numbered.
After each frame has been scanned, every pixel with value 1 carries a number; pixels with identical numbers belong to the same object, and connected together they constitute a moving object. Suppose there are m objects. Taking the first moving object as an example, its rectangle is obtained as follows: traverse its labeled pixels from the first to the last, recording the minimum and maximum of the x and y coordinates, denoted x_min, y_min, x_max, y_max; the rectangle can then be drawn, usually with (x_min, y_min) and (x_max, y_max) as its diagonal corner points. The rectangles of the other moving objects are obtained in the same way. Fig. 13 shows the effect of this step on two arbitrary adjacent frames of the UAV image sequence.
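Rather than hand-coding the 3 x 3 sequential scan, the sketch below uses OpenCV's connected-components routine, which produces the same per-object labels and the bounding rectangles directly; the minimum-area filter is an illustrative assumption:

    import cv2

    def extract_objects(mask):
        # Label 8-connected regions of the binary frame difference image and
        # return one (x, y, w, h, cx, cy) record per moving object.
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(
            mask, connectivity=8)
        objects = []
        for i in range(1, n):               # label 0 is the background
            x, y, w, h, area = stats[i]
            if area < 20:                   # drop residual noise blobs
                continue
            cx, cy = centroids[i]
            objects.append((x, y, w, h, cx, cy))
        return objects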
3. Start the target tracking module. The position and size of the tracked target rectangle obtained in the previous step are input into the two tracking algorithms of the tracking module. The actual operation of this step is as follows:
1) First assume that the target motion obeys a constant velocity model; the Kalman filter outputs a positioning result, denoted the first target estimate y_kf.
The Kalman filter predicts the current state from the previously estimated state using the transition model, and updates it with the current measurement:

b⁻(t) = A·b(t − 1) + ω_t,  z(t) = M·b(t) + ε_t  (11)

The Kalman gain K is then used to compute the current state estimate b(t):

b(t) = b⁻(t) + K·(z(t) − M·b⁻(t))  (12)

Assuming the current target motion is uniform, A and M are set according to this model. A is the state transition matrix and ω_t the transition model error; M is the measurement matrix and ε_t the measurement error; V_ω and V_ε are the covariances of ω_t and ε_t, respectively. In our application the size and location of the detected object's bounding box are assigned as the state variable b(t), which initializes the Kalman filter.
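A minimal constant-velocity Kalman filter for the bounding box state, using cv2.KalmanFilter; the noise covariances are illustrative assumptions:

    import cv2
    import numpy as np

    def make_kalman(cx, cy, w, h):
        # State b = (cx, cy, w, h, vx, vy); measurement z = (cx, cy, w, h).
        kf = cv2.KalmanFilter(6, 4)
        A = np.eye(6, dtype=np.float32)
        A[0, 4] = A[1, 5] = 1.0             # position += velocity per frame
        kf.transitionMatrix = A             # the matrix A of formula (11)
        kf.measurementMatrix = np.eye(4, 6, dtype=np.float32)  # the matrix M
        kf.processNoiseCov = np.eye(6, dtype=np.float32) * 1e-2
        kf.measurementNoiseCov = np.eye(4, dtype=np.float32) * 1e-1
        kf.statePost = np.array([[cx], [cy], [w], [h], [0], [0]], np.float32)
        return kf

    # Per frame: y_kf = kf.predict()[:4]; after a trusted detection z,
    # call kf.correct(np.float32(z).reshape(4, 1)) as in formula (12).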
2) Using the mean shift tracking algorithm, with the target template position provided by the object detection and recognition module, a positioning result can be output, denoted the second target estimate y_ms. The detailed procedure of the mean shift algorithm is well established and is not repeated here.
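A sketch of one mean shift update with OpenCV, assuming roi_hist is the normalized hue histogram of the target template and window the previous (x, y, w, h) rectangle:

    import cv2

    def track_meanshift(frame, window, roi_hist):
        # Back-project the template histogram and run meanShift from the
        # previous window; the new window center is the estimate y_ms.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        prob = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
        _, window = cv2.meanShift(prob, window, crit)
        return window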
3) Using the weighted-sum data fusion method, output the positioning result while the target is not lost. If the target is lost, enable the search module to relocate it.
From the first target estimate y_kf output in the first step and the second target estimate y_ms output in the second step, the data are fused by the following strategy. The Bhattacharyya coefficient is used to measure the degree of similarity between the target model and the candidate region (the second target estimate). When the similarity is greater than 0.8, the second target estimate is considered fully trustworthy; when the similarity is greater than 0.5 but less than 0.8, the second target estimate is not fully trusted and a data-weighting fusion is performed; when the similarity is less than 0.5, the target is considered occluded or its state changed, that is, the target is considered lost, and the target search module must be started to relocate it. The three data fusion modes are determined by formulas (13), (14), and (15), respectively:

ρ ≥ 0.8, y = y_ms  (13)
0.5 ≤ ρ < 0.8, y = d·y_ms + (1 − d)·y_kf  (14)
ρ < 0.5, y = NULL  (15)

where ρ is the similarity, d is an empirical weighting value, and y_ms and y_kf are the target estimates of the mean shift algorithm and the Kalman filtering algorithm, respectively.
From the above, when the output value is NULL the fusion strategy considers the target lost due to occlusion or other causes; the UAV system automatically switches from the tracking module to the target search module and relocates the target within the region of the ground station display screen.
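The three-branch fusion strategy of formulas (13) to (15) can be sketched as follows; the weight d = 0.6 is an assumed empirical value:

    import numpy as np

    def fuse(y_ms, y_kf, hist_model, hist_candidate, d=0.6):
        # Bhattacharyya coefficient between normalized histograms.
        rho = np.sum(np.sqrt(hist_model * hist_candidate))
        if rho >= 0.8:
            return np.asarray(y_ms)         # formula (13): fully trusted
        if rho >= 0.5:                      # formula (14): weighted fusion
            return d * np.asarray(y_ms) + (1 - d) * np.asarray(y_kf)
        return None                         # formula (15): target lost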
4) Fig. 7 shows the search sequence flow chart. When the tracked target is lost, the target search module is started. The module uses a sequential search method divided into two levels, so that the search is better targeted at the lost target and more efficient.
First level: equidistant search over frame differences, y_{k+1} = y_k + Δy, where Δy = y_k − y_{k−1}.
a) Suppose the currently processed image is the k-th frame and y_k is the target center at time k; the tracked target centers of the image sequence are y_0, y_1, ..., y_{k−1}, y_k, y_{k+1}, ....
b) Using the equidistant frame-difference formula, the center of frame k + 1 is predicted from the position in frame k; a candidate target of the same size as the rectangle output by the object detection and recognition module is taken at that position, its color histogram is computed, and its similarity to the target template is calculated. If the similarity exceeds the threshold 0.75, the candidate template is trusted and the target is found; otherwise it is distrusted and the second-level search strategy is entered.
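The first-level prediction is a one-line extrapolation; a sketch, with the acceptance test against the 0.75 similarity threshold left to the caller:

    def equidistant_candidate(centers):
        # y_{k+1} = y_k + (y_k - y_{k-1}); place the candidate window here
        # and accept it if its histogram similarity exceeds 0.75.
        (x1, y1), (x2, y2) = centers[-2], centers[-1]
        return (2 * x2 - x1, 2 * y2 - y1)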
Second level: local/global search strategy. Local search is done first: in the subregion where the target was lost in the previous frame, the target is re-searched using the particle filter method. Specifically, if the target was lost in region 6 of the camera imaging field, N particles are first sprayed uniformly in that region to relocate the target. If the target still cannot be found within K frames, the subregion particle filter method is used: particle filter tracking is run separately in each of regions 1 to 9, each region filters out one tracking result, and the results of the regions are weighted and fused to finally retrieve the position of the target.
4. According to the target positioning result output by the previous step, the tracking command generation module is enabled to adjust the UAV flight mode so that the moving target lies in the central region of the picture. Fig. 8 shows the numbering of the picture regions; using this partition, the tracking command generation module sends commands through the wireless transmission module to the flight control system of the UAV, adjusting the flight mode so that the target's current imaging region moves toward the central region (region 5). Specifically, the adjustment modes of the tracking command generation module are as follows (a code sketch of the mapping follows the list):
Region 5: the central region of the picture. If the target center point lies in this region, the flight attitude of the UAV is kept unchanged and no tracking command is generated.
Region 1: if the target center point lies in this region, the tracking command module generates the front-left flight mode and controls the UAV flight attitude so that the target image center point returns to the central region of the picture.
Region 2: if the target center point lies in this region, the tracking command module generates the forward flight mode and controls the UAV flight attitude so that the target image center point returns to the central region of the picture.
Region 3: if the target center point lies in this region, the tracking command module generates the front-right flight mode and controls the UAV flight attitude so that the target image center point returns to the central region of the picture.
Region 4: if the target center point lies in this region, the tracking command module generates the leftward flight mode and controls the UAV flight attitude so that the target image center point returns to the central region of the picture.
Region 6: if the target center point lies in this region, the tracking command module generates the rightward flight mode and controls the UAV flight attitude so that the target image center point returns to the central region of the picture.
Region 7: if the target center point lies in this region, the tracking command module generates the rear-left flight mode and controls the UAV flight attitude so that the target image center point returns to the central region of the picture.
Region 8: if the target center point lies in this region, the tracking command module generates the backward flight mode and controls the UAV flight attitude so that the target image center point returns to the central region of the picture.
Region 9: if the target center point lies in this region, the tracking command module generates the rear-right flight mode and controls the UAV flight attitude so that the target image center point returns to the central region of the picture.
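Assuming the regions are numbered 1 to 9 row by row with region 5 at the center (Fig. 8), the command selection reduces to a table lookup; the command strings are illustrative placeholders for the actual MAVLink messages:

    REGION_COMMAND = {
        1: "fly front-left",  2: "fly forward",  3: "fly front-right",
        4: "fly left",        5: "hold pose",    6: "fly right",
        7: "fly rear-left",   8: "fly backward", 9: "fly rear-right",
    }

    def flight_command(cx, cy, screen_w, screen_h):
        # Map the target center to one of the 3 x 3 screen regions and
        # return the corresponding attitude adjustment command.
        col = min(2, int(3 * cx / screen_w))
        row = min(2, int(3 * cy / screen_h))
        return REGION_COMMAND[3 * row + col + 1]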

Claims (6)

1. A real-time tracking method for ground moving objects based on a UAV, characterized by comprising the following steps:
1) the UAV patrols along a predetermined flight path and transmits the image sequence captured of the ground field of view to the ground control station, which detects the target of interest in the UAV's field of view;
2) extracting the two-dimensional image rectangle size and center position information of the above target of interest;
3) using the rectangle size and center position information, fusing the outputs of the mean shift algorithm and the Kalman filtering algorithm, and outputting the final target positioning result in the form of a data weighting;
the specific implementation of step 2) comprising:
i) extracting the Shi-Tomasi corner sets of two adjacent frames of the image sequence captured by the UAV;
ii) constructing synthesis base descriptors for the Shi-Tomasi corner sets of the two frames;
iii) performing feature matching on the Shi-Tomasi corner sets using the synthesis base descriptors to obtain the corner matching pairs of the two adjacent frames;
iv) from the corner matching pairs obtained in step iii), estimating the background motion transformation matrix with the RANSAC method and performing image background motion compensation;
v) applying the frame difference operation to the motion-compensated adjacent frames to obtain a frame difference image, and binarizing the frame difference image;
vi) applying a morphological filtering operation to the frame difference image, then separating and extracting the target information to obtain the two-dimensional image rectangle size and center position information.
2. The real-time tracking method for ground moving objects based on a UAV according to claim 1, characterized in that after step 3), the UAV flight mode is adjusted according to the target positioning result so that the moving target lies in the central area of the ground control station display screen.
3. The real-time tracking method for ground moving objects based on a UAV according to claim 1, characterized in that the specific process for generating the synthesis base descriptors of all corners of two adjacent frames comprises:
1) binarizing each feature point neighborhood image FRI in the two adjacent frames: the average gray value of the FRI is computed, and a pixel of the FRI is set to 1 if its value is greater than the average gray value, and to 0 otherwise;
2) dividing every 30 × 30 feature point neighborhood image FRI of the two adjacent frames into 6 × 6 subregions of size 5 × 5; a synthesis basic image is a 5 × 5 square composed of black and white elements, in which the number of black pixels is half the number of pixels of an FRI subregion; the number of synthesis basic images is M = ⌈K ln(N/K)⌉, where N is the number of pixels of an FRI subregion and K is the number of black pixels in a synthesis basic image;
3) for any feature point neighborhood image FRI of step 2), comparing all its subregions with the synthesis basic image set in order from left to right and top to bottom; each subregion generates a 9-dimensional vector, and the 9-dimensional vectors of the 36 subregions are combined to form a 324-dimensional synthesis base descriptor.
4. The real-time tracking method for ground moving objects based on a UAV according to claim 3, characterized in that the 9-dimensional vector of one subregion of the feature point neighborhood image FRI is generated as follows: the comparison value between a subregion and one synthesis basic image of the synthesis basic image set is the number of pixels at which both are black; the synthesis basic images are compared in order from left to right and top to bottom; following this comparison rule and order, the subregion is compared one by one with all synthesis basic images in the set, yielding 9 integer values that form the 9-dimensional vector.
5. The real-time tracking method for ground moving objects based on a UAV according to claim 1, characterized in that the specific steps of target information separation and extraction comprise:
a) traversing each filtered frame difference image in order from top to bottom and left to right;
b) if a pixel satisfies both conditions, namely its binarized value is 1 and it has not been numbered, assigning a new number to the pixel;
c) traversing the eight-neighborhood of the newly numbered pixel; every pixel in the eight-neighborhood that satisfies the condition of step b) is given the same number; for pixels of the eight-neighborhood that do not satisfy the condition, returning to step b);
d) the operation ends when all pixels with value 1 in the frame difference image have been traversed and numbered.
6. The real-time tracking method for ground moving objects based on a UAV according to claim 5, characterized in that the rectangle is determined as follows: after each filtered frame difference image has been scanned, every pixel with value 1 carries a number; pixels with the same number belong to the same object, and connected together they constitute a moving object; supposing there are m moving objects, for the first moving object the rectangle is obtained as follows: traverse its labeled pixels from the first to the last, recording the minimum and maximum of the x and y coordinates, denoted x_min, y_min, x_max, y_max; the rectangle is then drawn with the points (x_min, y_min) and (x_max, y_max) as its diagonal corners.
CN201710206676.XA 2017-03-31 2017-03-31 Real-time tracking method and system for ground moving objects based on a UAV Expired - Fee Related CN106981073B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710206676.XA CN106981073B (en) 2017-03-31 2017-03-31 Real-time tracking method and system for ground moving objects based on a UAV

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710206676.XA CN106981073B (en) 2017-03-31 2017-03-31 Real-time tracking method and system for ground moving objects based on a UAV

Publications (2)

Publication Number Publication Date
CN106981073A CN106981073A (en) 2017-07-25
CN106981073B true CN106981073B (en) 2019-08-06

Family

ID=59339192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710206676.XA Expired - Fee Related CN106981073B (en) 2017-03-31 2017-03-31 Real-time tracking method and system for ground moving objects based on a UAV

Country Status (1)

Country Link
CN (1) CN106981073B (en)

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409354B (en) * 2017-08-18 2021-09-21 深圳市道通智能航空技术股份有限公司 Unmanned aerial vehicle intelligent following target determination method, unmanned aerial vehicle and remote controller
CN107505951B (en) * 2017-08-29 2020-08-21 深圳市道通智能航空技术有限公司 Target tracking method, unmanned aerial vehicle and computer readable storage medium
US10719087B2 (en) 2017-08-29 2020-07-21 Autel Robotics Co., Ltd. Target tracking method, unmanned aerial vehicle, and computer readable storage medium
CN107590450A (en) * 2017-09-01 2018-01-16 歌尔科技有限公司 A kind of labeling method of moving target, device and unmanned plane
CN107909600B (en) * 2017-11-04 2021-05-11 南京奇蛙智能科技有限公司 Unmanned aerial vehicle real-time moving target classification and detection method based on vision
CN108286959A (en) * 2017-12-14 2018-07-17 彩虹无人机科技有限公司 A kind of O-E Payload for UAV is detectd to be calculated and display methods according to region
CN108108697B (en) * 2017-12-25 2020-05-19 中国电子科技集团公司第五十四研究所 Real-time unmanned aerial vehicle video target detection and tracking method
WO2019127306A1 (en) * 2017-12-29 2019-07-04 Beijing Airlango Technology Co., Ltd. Template-based image acquisition using a robot
CN108573498B (en) * 2018-03-08 2019-04-26 上海申雪供应链管理有限公司 The instant tracking system of driving vehicle based on unmanned plane
CN109032166B (en) * 2018-03-08 2020-01-21 深圳中琛源科技股份有限公司 Method for immediately tracking running vehicle based on unmanned aerial vehicle
CN109902591B (en) * 2018-03-13 2023-10-27 北京影谱科技股份有限公司 Automobile searching system
CN108446634B (en) * 2018-03-20 2020-06-09 北京天睿空间科技股份有限公司 Aircraft continuous tracking method based on combination of video analysis and positioning information
CN108534797A (en) * 2018-04-13 2018-09-14 北京航空航天大学 A kind of real-time high-precision visual odometry method
CN109446901B (en) * 2018-09-21 2020-10-27 北京晶品特装科技有限责任公司 Embedded transplantation real-time humanoid target automatic identification algorithm
DE102018123411A1 (en) * 2018-09-24 2020-03-26 Autel Robotics Europe Gmbh Target observation method, associated device and system
CN109376660B (en) * 2018-10-26 2022-04-08 天宇经纬(北京)科技有限公司 Target monitoring method, device and system
CN109765939A (en) * 2018-12-21 2019-05-17 中国科学院自动化研究所南京人工智能芯片创新研究院 Cloud platform control method, device and the storage medium of unmanned plane
CN109828488A (en) * 2018-12-27 2019-05-31 北京航天福道高技术股份有限公司 The double optical detection tracking systems of acquisition transmission integration
CN109933087B (en) * 2019-03-18 2021-12-10 西安爱生技术集团公司 Unmanned aerial vehicle and ground maneuvering target virtual formation battle position keeping control method
CN110189297B (en) * 2019-04-18 2021-02-19 杭州电子科技大学 Magnetic material appearance defect detection method based on gray level co-occurrence matrix
CN110097586B (en) * 2019-04-30 2023-05-30 青岛海信网络科技股份有限公司 Face detection tracking method and device
CN110120077B (en) * 2019-05-06 2021-06-11 航天东方红卫星有限公司 Area array camera in-orbit relative radiation calibration method based on satellite attitude adjustment
CN110473229B (en) * 2019-08-21 2022-03-29 上海无线电设备研究所 Moving object detection method based on independent motion characteristic clustering
CN110930455B (en) * 2019-11-29 2023-12-29 深圳市优必选科技股份有限公司 Positioning method, positioning device, terminal equipment and storage medium
CN111160304B (en) * 2019-12-31 2022-03-29 华中科技大学 Local frame difference and multi-frame fusion ground moving target detection and tracking method
CN113496136A (en) * 2020-03-18 2021-10-12 中强光电股份有限公司 Unmanned aerial vehicle and image identification method thereof
CN111476116A (en) * 2020-03-24 2020-07-31 南京新一代人工智能研究院有限公司 Rotor unmanned aerial vehicle system for vehicle detection and tracking and detection and tracking method
CN111898434B (en) * 2020-06-28 2021-03-19 江苏柏勋科技发展有限公司 Video detection and analysis system
CN111798434A (en) * 2020-07-08 2020-10-20 哈尔滨体育学院 Martial arts competition area detection method based on Ranpac model
CN113906360A (en) * 2020-08-07 2022-01-07 深圳市大疆创新科技有限公司 Control method and device for movable platform and computer readable storage medium
CN112766103B (en) * 2021-01-07 2023-05-16 国网福建省电力有限公司泉州供电公司 Machine room inspection method and device
CN112927264B (en) * 2021-02-25 2022-12-16 华南理工大学 Unmanned aerial vehicle tracking shooting system and RGBD tracking method thereof
CN113034547B (en) * 2021-04-07 2024-02-06 中国科学院半导体研究所 Target tracking method, digital integrated circuit chip, electronic device, and storage medium
CN113298788A (en) * 2021-05-27 2021-08-24 南京航空航天大学 Vision-based marine mobile platform tracking and identifying method
CN115984335B (en) * 2023-03-20 2023-06-23 华南农业大学 Method for acquiring characteristic parameters of fog drops based on image processing


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201421423A (en) * 2012-11-26 2014-06-01 Pixart Imaging Inc Image sensor and operating method thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101324956A (en) * 2008-07-10 2008-12-17 上海交通大学 Method for tracking anti-shield movement object based on average value wander
CN103455797A (en) * 2013-09-07 2013-12-18 西安电子科技大学 Detection and tracking method of moving small target in aerial shot video
CN106023257A (en) * 2016-05-26 2016-10-12 南京航空航天大学 Target tracking method based on rotor UAV platform

Also Published As

Publication number Publication date
CN106981073A (en) 2017-07-25

Similar Documents

Publication Publication Date Title
CN106981073B (en) Real-time tracking method and system for ground moving objects based on a UAV
EP2917874B1 (en) Cloud feature detection
CN105352509B (en) Unmanned plane motion target tracking and localization method under geography information space-time restriction
CN109584213B (en) Multi-target number selection tracking method
He et al. Vision-based UAV flight control and obstacle avoidance
CN112488061B (en) Multi-aircraft detection and tracking method combined with ADS-B information
Sanfourche et al. Perception for UAV: Vision-Based Navigation and Environment Modeling.
CN108830286A (en) A kind of reconnaissance UAV moving-target detects automatically and tracking
Mondragón et al. Visual model feature tracking for UAV control
CN102722697A (en) Unmanned aerial vehicle autonomous navigation landing visual target tracking method
CN112927264B (en) Unmanned aerial vehicle tracking shooting system and RGBD tracking method thereof
CN110260866A (en) A kind of robot localization and barrier-avoiding method of view-based access control model sensor
CN110941996A (en) Target and track augmented reality method and system based on generation of countermeasure network
Wen et al. Hybrid semi-dense 3D semantic-topological mapping from stereo visual-inertial odometry SLAM with loop closure detection
CN113486697B (en) Forest smoke and fire monitoring method based on space-based multimode image fusion
CN111812978B (en) Cooperative SLAM method and system for multiple unmanned aerial vehicles
Xiang et al. UAV based target tracking and recognition
Chaudhary et al. Robust real-time visual tracking using dual-frame deep comparison network integrated with correlation filters
Zhou et al. Information-efficient 3-D visual SLAM for unstructured domains
CN116578035A (en) Rotor unmanned aerial vehicle autonomous landing control system based on digital twin technology
CN109544597A (en) A kind of quadrotor drone method for tracking target, system and the device of view-based access control model
Brown et al. Feature-aided multiple target tracking in the image plane
Espsoito et al. A hybrid approach to detection and tracking of unmanned aerial vehicles
Bovyrin et al. Human height prediction and roads estimation for advanced video surveillance systems
CN112241180B (en) Visual processing method for landing guidance of unmanned aerial vehicle mobile platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190806