CN106981073A - Ground moving target real-time tracking method and system based on unmanned aerial vehicle - Google Patents

Ground moving target real-time tracking method and system based on unmanned aerial vehicle

Info

Publication number
CN106981073A
Authority
CN
China
Prior art keywords
target
image
pixel
unmanned plane
moving object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710206676.XA
Other languages
Chinese (zh)
Other versions
CN106981073B (en)
Inventor
谭冠政
李波
刘西亚
陈佳庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201710206676.XA priority Critical patent/CN106981073B/en
Publication of CN106981073A publication Critical patent/CN106981073A/en
Application granted granted Critical
Publication of CN106981073B publication Critical patent/CN106981073B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance

Abstract

The invention discloses a ground moving target real-time tracking method and system based on an unmanned aerial vehicle (UAV). The target detection and recognition module of the ground control station processes the image sequence returned by the camera and obtains the size and center coordinates of the target rectangle on the ground station display screen. The target tracking module is then started and tracks the target using an algorithm fusion strategy; if tracking is valid, the target positioning result is output to the tracking command generation module. If the target cannot be located, the target search module is started to search for the target and output the target positioning result to the tracking command generation module. According to the requirement that the target image be brought to the center of the ground station display screen, the tracking command generation module generates UAV position and attitude adjustment commands, which are uploaded to the flight control system through the radio transmission device to adjust the UAV pose in real time. The present invention offers high matching efficiency, is easy to implement, can effectively perform target recognition, and avoids the influence of background noise.

Description

Ground moving target real-time tracking method and system based on unmanned aerial vehicle
Technical field
The invention belongs to the fields of UAV navigation and computer vision, and in particular relates to a method for automatic target detection and tracking using a UAV.
Background technology
UAVs have the advantages of high maneuverability, high resolution, good concealment, and flexible operation, and therefore hold a large advantage in the field of target reconnaissance and tracking, with a far greater monitoring range than traditional fixed cameras. They are mainly used in fields such as round-the-clock aerial reconnaissance, traffic monitoring, and military surveying and mapping. Tracking and analyzing ground moving targets with a video sensor carried by a UAV has great practical significance in both civilian and military applications.
Most video surveillance systems monitor regions requiring close attention with a static camera. In that case the background is static and the moving target is the foreground, so target detection can achieve good results with a simple background difference. In many situations, however, such as object detection and tracking under a UAV-borne camera, the background of the captured image sequence is constantly changing and non-stationary, and detecting and tracking the target becomes extremely difficult.
Second, tracking a single target does not mean there is only a single moving object in the UAV's field of view. Multiple moving objects in the scene interfere with the detection and tracking of the real target of interest, preventing effective target recognition. Background noise is also present; due to the influence of shadow or illumination, for example, the extracted target may be incomplete or have holes at its center. In these cases, target detection and recognition become considerably more difficult.
The explanation of nouns used in the present invention is as follows:
UAV: an unmanned aircraft operated by a radio remote-control device and its own program control device, including unmanned fixed-wing aircraft, unmanned helicopters, and multi-rotor UAVs.
Radio transmission device: a communication device using the MAVLink protocol; the communication band is generally 2.4 GHz.
Shi-Tomasi corner: an image feature point detected by a method that represents the local features of an image and has high robustness to changes in image brightness, blurring, rotation, and viewing angle.
FRI: the neighborhood image centered on a corner; in the present invention, a 30 × 30 square region.
Bhattacharyya coefficient: a value measuring the degree of similarity between the target model region and the candidate model region; the larger the value, the greater the similarity between the regions, and conversely, the smaller the value, the lower the similarity.
The content of the invention
The present invention is intended to provide a UAV-based ground moving target real-time tracking method and system, solving the problem of difficult target detection and recognition in the prior art.
To solve the above technical problems, the technical solution adopted in the present invention is a UAV-based ground moving target real-time tracking method comprising the following steps:
1) the UAV patrols along a predetermined flight path and transfers the captured image sequence to the ground control station, where the target of interest in the UAV's field of view is detected;
2) the two-dimensional image rectangle size and center position information of the above target of interest are extracted;
3) using the rectangle size and center position information, the output data of the mean shift algorithm and the Kalman filtering algorithm are fused, and the final target positioning result is output in the form of data weighting.
After step 3), the UAV flight mode is adjusted according to the target positioning result so that the moving target is located in the central area of the ground station display screen.
The implementation of step 2) includes:
1) extracting the Shi-Tomasi corner sets of two adjacent frames of the image sequence captured by the UAV;
2) constructing synthesis basis descriptors for the Shi-Tomasi corner sets of the two frames respectively;
3) performing feature matching on the Shi-Tomasi corner sets with the synthesis basis descriptors to obtain the corner matching pairs of the two adjacent frames;
4) estimating the background motion transformation matrix from the corner matching pairs obtained in step 3) using the RANSAC method, and performing image background motion compensation;
5) performing the frame difference operation on the two adjacent motion-compensated frames to obtain the frame difference image, and binarizing the frame difference image;
6) performing the morphological filtering operation on the frame difference image and carrying out target information separation and extraction to obtain the size and center position information of the target rectangle.
The generation of the synthesis basis descriptors of all corners of two adjacent frames includes:
1) binarizing each feature point neighborhood image FRI in the two adjacent frames: the average gray value of the FRI is calculated, and a pixel value in the FRI is set to 1 if it is greater than the average gray value and to 0 otherwise;
2) dividing each 30 × 30 FRI in the two adjacent frames into 6 × 6 subregions of size 5 × 5, a synthesis basis image being a 5 × 5 square composed of black and white elements; the number of black pixels of a synthesis basis image is half the number of pixels of an FRI subregion, and the number of synthesis basis images is M = K·ln(N/K), where N is the number of pixels of an FRI subregion and K is the number of black pixels in a synthesis basis image;
3) for any FRI in step 2), comparing all subregions of that FRI with the synthesis basis image set in order from left to right and top to bottom; each subregion generates a 9-dimensional vector, and the 9-dimensional vectors of the 36 subregions are combined to finally form a synthesis basis descriptor of 324 dimensions.
The 9-dimensional vector of an FRI subregion is generated as follows: the comparison value of a subregion with one synthesis basis image in the set is the number of positions at which both are black at the same pixel; the synthesis basis images are compared in order from left to right and top to bottom; a subregion is compared one by one with all synthesis basis images in the set according to this rule and order, giving 9 integer values that constitute the 9-dimensional vector.
The specific steps of target information separation and extraction include:
a) traversing each filtered frame difference image, the order of traversal being from top to bottom and from left to right;
b) if a pixel satisfies the condition that its binarized value is 1 and it is not yet numbered, assigning a new number to the pixel;
c) traversing the eight-neighborhood of the pixel given the new number and, according to the condition in step b), giving the qualifying pixels in the eight-neighborhood the same number as the pixel given the new number; for pixels in the eight-neighborhood that do not satisfy the condition, returning to step b);
d) when all pixels with value 1 in the frame difference image have been traversed and numbered, the operation ends.
The rectangle is determined as follows: after each filtered frame difference image has been scanned, every pixel with value 1 has a number, and pixels with the same number belong to the same object; connected together they constitute a moving object. Assuming there are m moving objects, the rectangle of the first moving object is obtained as follows: traverse from the first labeled pixel to the last labeled pixel, keep the minimum and maximum of the x and y coordinates of the labeled pixels, denoted xmin, ymin, xmax, ymax, and draw the rectangle with (xmin, ymin) and (xmax, ymax) as its diagonal corner points.
The present invention also provides a ground moving target real-time tracking system, including:
a UAV, for patrolling along a predetermined flight path and transferring the captured image sequence to the ground control station;
a radio transmission device, providing a communication means for data transfer between the UAV and the ground control station;
a ground control station, for detecting the target of interest in the UAV's field of view, extracting the two-dimensional image rectangle size and center position information of the target of interest, and, using that information, fusing the output data of the mean shift algorithm and the Kalman filtering algorithm and outputting the final target positioning result in the form of data weighting.
Correspondingly, the system also includes a tracking command generation module, for adjusting the UAV flight mode according to the target positioning result so that the moving target is located in the central area of the ground station display screen.
The ground control station includes:
a detection and recognition module, for detecting the target of interest in the UAV's field of view and extracting the two-dimensional image rectangle size and center position information of the target of interest;
a target tracking module, for fusing the output data of the mean shift algorithm and the Kalman filtering algorithm using the rectangle size and center position information, and outputting the final target positioning result in the form of data weighting;
a target search module, which relocates the target using a sequential search method when the tracking target is lost;
a tracking command generation module, which generates the corresponding tracking command according to the imaging region of the tracking target on the ground station display screen, so that the target is located at the center of the display screen.
Compared with the prior art, the beneficial effects of the present invention are: the target detection and tracking process requires no manual intervention throughout; the synthesis basis descriptor used for feature point matching is robust to scale rotation, illumination, and blurring, and its matching efficiency is high; and generation of the synthesis basis descriptor involves no floating-point operations, which makes it friendly to image processing hardware platforms and easy to implement. The method can effectively perform target recognition and avoids the influence of background noise.
Brief description of the drawings
Fig. 1 is the structural composition diagram of the UAV system;
Fig. 2 is the flow chart of the background motion model parameter estimation method of the UAV system based on the synthesis basis descriptor;
Fig. 3 is the target information separation and extraction diagram;
Fig. 4(a) is the synthesis basis image set; Fig. 4(b) is the binarized FRI; Fig. 4(c) is the comparison value of the first subregion of the FRI with the first synthesis basis image; Fig. 4(d) is the comparison value of the first subregion of the FRI with the second synthesis basis image;
Fig. 5 is the moving target separation and information extraction flow chart;
Fig. 6 is the algorithm fusion and search strategy flow chart of the UAV system;
Fig. 7 is the search sequence hierarchical strategy flow chart of the UAV system;
Fig. 8 is the schematic diagram of the ground station display screen subregions of the UAV system;
Fig. 9 shows two arbitrary adjacent frames of the UAV video sequence;
Fig. 10 is the corner matching image based on the synthesis basis descriptor;
Fig. 11 is the frame difference detection result image;
Fig. 12 is the target detection image after morphological filtering;
Fig. 13 is the target separation and information extraction image.
Embodiment
Fig. 1 shows the composition of the UAV system, which includes the UAV, the camera, the radio transmission device, and the ground control station. The UAV serves as the carrier of the camera and expands the camera's coverage. The radio transmission device provides the communication means for sending down the image sequence collected by the UAV and for uploading the flight control commands of the ground station. The ground control station includes four modules: the target detection and recognition module, the target tracking module, the target search module, and the tracking command generation module.
The specific implementation of UAV tracking is as follows:
1. The UAV patrols the flight area specified by the user along the flight track planned in advance; the camera sends the captured image sequence down through the radio transmission device to the target detection and recognition module of the ground control station for processing, obtaining the image position and rectangle size of the target on the ground station display screen. Two arbitrary adjacent frames of the image sequence captured by the UAV are shown in Fig. 9.
2. The target detection and recognition module is started to detect the target of interest in the UAV's field of view and to extract the rectangle size and center position information of the target on the display screen. The target detection and recognition module operates in two stages: background motion model parameter estimation based on the synthesis basis descriptor, and target information separation and extraction. The implementation of the first stage is explained below; Fig. 2 is the flow chart of the background motion model parameter estimation method based on the synthesis basis descriptor:
1) Extract the feature points of the starting frame; Shi-Tomasi corners are used because of their high efficiency. Let the starting frame be X; an autocorrelation function F at pixel s is defined as follows:
F(s, δs) = Σ_(x∈W) [X(x + δs) − X(x)]²  (1)
where δs represents the displacement and W represents the window centered on s.
Applying a first-order Taylor expansion to X(x + δs), the above formula can be rewritten as:
F(s, δs) ≈ δsᵀ Λ δs, with Λ = Σ_(x∈W) ∇X(x)·∇X(x)ᵀ  (2)
where ∇X is the first-order derivative of the image and Λ is the autocorrelation matrix. The feature point extraction criterion is that the minimum eigenvalue of the autocorrelation matrix exceeds a constant, i.e.:
Q(s) = min{λ₁, λ₂} > K  (3)
where λ₁ and λ₂ are the eigenvalues of Λ, and K is an empirical value, generally between 0.05 and 0.5.
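As an illustrative sketch rather than the patent's own implementation, the criterion of formula (3) is what OpenCV's goodFeaturesToTrack implements; its qualityLevel parameter plays the role of the empirical constant K, expressed relative to the strongest corner response, and all parameter values below are assumptions:

```python
import cv2

# Detect Shi-Tomasi corners in two adjacent frames (sketch; parameter
# values are illustrative, not taken from the patent).
def shi_tomasi_corners(gray, max_corners=200, quality=0.05, min_dist=10):
    # qualityLevel corresponds to the threshold K on min{lambda1, lambda2}
    # in formula (3), expressed relative to the strongest corner response.
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                  qualityLevel=quality, minDistance=min_dist)
    return pts.reshape(-1, 2) if pts is not None else []

frame0 = cv2.cvtColor(cv2.imread("frame0.png"), cv2.COLOR_BGR2GRAY)
frame1 = cv2.cvtColor(cv2.imread("frame1.png"), cv2.COLOR_BGR2GRAY)
corners0, corners1 = shi_tomasi_corners(frame0), shi_tomasi_corners(frame1)
```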
2) Binarization of the corner neighborhood. A 30 × 30 square neighborhood of the feature point is generally a reasonable choice, balancing complexity and accuracy. To generate the descriptor, the FRI is binarized, which requires the average gray value of the feature point neighborhood:
g = (1/p) Σ_((x,y)∈FRI) I(x, y)  (4)
where p is the number of pixels of the FRI, here 900, and I(x, y) is the gray value of a pixel in the FRI.
Then, when a pixel value in the feature point neighborhood is greater than g, the pixel value is set to 1; when it is less than g, the pixel value is set to 0. This yields the binarized FRI, which preserves the structural information in the neighborhood of the key point and lays the foundation for the generation of the feature point descriptor in the next step.
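A minimal sketch of the binarization of formula (4), assuming gray is a grayscale frame and (cx, cy) a detected corner lying safely inside the image border:

```python
import numpy as np

# Binarize a 30x30 feature-point neighborhood (FRI) against its mean
# gray value, as in formula (4); a sketch of the described step.
def binarize_fri(gray, cx, cy, half=15):
    fri = gray[cy - half:cy + half, cx - half:cx + half].astype(np.float32)
    g = fri.mean()                      # mean gray value over p = 900 pixels
    return (fri > g).astype(np.uint8)   # 1 where brighter than the mean
```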
3) Construct the corner descriptor. The 30 × 30 FRI is first divided into 6 × 6 subregions of size 5 × 5; so that an FRI subregion can be compared element by element with a synthesis basis image, each synthesis basis image has the same size as an FRI subregion. A synthesis basis image is a square region composed of black and white elements, and the number of synthesis basis images is determined by the following composite basis function:
M = K·ln(N/K)  (5)
where N is the number of pixels of a subregion, K is the number of black pixels in a synthesis basis image, and M is the number of SBIs, which can uniquely characterize a feature point.
To improve the real-time performance of the algorithm, the number of synthesis basis images should naturally be as small as possible; the function attains its minimum when K is half of N. If K is fractional, it is rounded up. For example, if the 30 × 30 FRI is divided into 6 × 6 subregions of size 5 × 5, then K is 13 and the number of synthesis basis images is 13·ln(25/13) ≈ 9; if the 30 × 30 FRI is divided into 2 × 2 subregions of size 15 × 15, then K is 113 and the number of synthesis basis images is 113·ln(225/113) ≈ 78. The algorithm is illustrated with the 5 × 5 subregion example of Fig. 4(a)-(d):
Fig. 4(a) is the synthesis basis image set, composed of 9 synthesis basis images; in each synthesis basis image, 13 pixels are black and the rest are white. The 13 black dots are distributed over the 5 × 5 region in a pseudo-random fashion, but the distribution pattern of each synthesis basis image must be different. Fig. 4(b) is the FRI after binarization, divided into 36 subregions of size 5 × 5. The first subregion is compared with each synthesis basis image in order from left to right and top to bottom; the comparison rule is to count the number of positions at which both are black at the same pixel. Each subregion thus generates a 9-dimensional vector, which is the descriptor of the subregion; the range of each component is 0 to 13.
Following the same comparison order, the descriptors of the remaining 35 subregions are obtained. Finally the descriptors of the 36 subregions are combined, forming a descriptor of 324 dimensions. Fig. 4(c) shows the comparison of the first subregion with the first synthesis basis image, giving the value 6; Fig. 4(d) shows the comparison of the first subregion with the second synthesis basis image, giving the value 7.
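The following sketch builds a synthesis basis image set and the 324-dimensional descriptor as described; the construction of the pseudo-random patterns (make_sbi_set) is an assumption, since the patent only requires 13 black pixels per 5 × 5 basis image and mutually distinct patterns. Note that only integer comparisons and counts are involved, consistent with the absence of floating-point operations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build 9 pseudo-random 5x5 synthesis basis images, each with exactly
# 13 black pixels (here black = 1) and distinct patterns.
def make_sbi_set(m=9, side=5, k=13):
    sbis = []
    while len(sbis) < m:
        flat = np.zeros(side * side, dtype=np.uint8)
        flat[rng.choice(side * side, size=k, replace=False)] = 1
        sbi = flat.reshape(side, side)
        if not any(np.array_equal(sbi, s) for s in sbis):
            sbis.append(sbi)
    return sbis

# 324-dim synthesis basis descriptor of a binarized 30x30 FRI: for each
# of the 36 subregions (left-to-right, top-to-bottom) count, for every
# basis image, the positions where both are black.
def synthesis_descriptor(fri_bin, sbis):
    desc = []
    for r in range(0, 30, 5):
        for c in range(0, 30, 5):
            sub = fri_bin[r:r + 5, c:c + 5]
            desc.extend(int(np.sum(sub & sbi)) for sbi in sbis)
    return np.array(desc, dtype=np.int32)   # 36 * 9 = 324 components
```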
4) Corner matching based on the synthesis basis descriptor. A successful feature point match means that the "distance" between the two feature points is the shortest. The most common measures of this distance are the Euclidean distance and the Mahalanobis distance, but their computational complexity is unacceptable for high-dimensional vectors. The L1 norm is therefore used to measure the feature point "distance". To illustrate the matching of the feature point sets, assume that the current frame of the video sequence has m feature points and the next frame has n feature points; the L1-norm distance between feature points in the two frames is:
d(xᵢ, yⱼ) = Σ_(k=1..w) |xᵢ(k) − yⱼ(k)|  (6)
where xᵢ is the i-th synthesis basis descriptor of the current frame, yⱼ is the j-th synthesis basis descriptor of the next frame, and w is the dimension of the descriptor, containing 324 components.
The computing principle of the synthesis basis descriptor distance is shown in Fig. 5, where each row represents the descriptor of one corner; computing the L1-norm distance, the distance between corner 1 and corner 2 in Fig. 5 is 3. The previous step yields the distance between any pair of feature points in the two images. To reduce the probability of mismatching, a cross-matching method is used: compute the L1-norm distances between the i-th corner of the current frame and all corners of the next frame, obtaining n distance values, and select the minimum as the candidate match, denoted yⱼ; then, by the same method, compute the distances between the j-th corner of the next frame and all corners of the previous frame, obtaining m distance values, and label the index of the minimum as t. If t = i, xᵢ and yⱼ are judged to be a correctly matched pair of feature points; otherwise the match is considered wrong. Fig. 10 shows the corner matching of aerial images obtained by the cross-matching method.
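A sketch of the cross-matching step under the L1 norm of formula (6); desc_cur and desc_next are assumed to be arrays holding the 324-dimensional descriptors of the two frames:

```python
import numpy as np

# Cross-matching with the L1 norm of formula (6): a pair (i, j) is kept
# only if j is the nearest descriptor to i in the next frame AND i is
# the nearest descriptor to j in the current frame.
def cross_match(desc_cur, desc_next):
    # pairwise L1 distances between all 324-dim descriptors
    dists = np.abs(desc_cur[:, None, :] - desc_next[None, :, :]).sum(axis=2)
    matches = []
    for i in range(dists.shape[0]):
        j = int(np.argmin(dists[i]))         # forward nearest neighbour
        t = int(np.argmin(dists[:, j]))      # backward nearest neighbour
        if t == i:                           # cross-check passed
            matches.append((i, j))
    return matches
```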
5) Corners on moving objects (outliers) are excluded using the RANSAC algorithm before the background transformation matrix is estimated. Estimating the motion parameters of the background requires that the matched corners come from the background as far as possible; for the corner matches obtained in the previous step, the RANSAC algorithm must be used to exclude the interference of corner matches on moving targets, so that the computed background motion compensation parameters are more accurate. Since the image transformation model used is the eight-parameter projective transformation model, at least four groups of matched points are needed to solve for the background transformation matrix. The eight-parameter projective transformation model is:
[x′, y′, 1]ᵀ ∝ H·[x, y, 1]ᵀ, with H = [h₁ h₂ h₃; h₄ h₅ h₆; h₇ h₈ 1]  (7)
The RANSAC algorithm computes the background motion compensation matrix as follows:
a) First define all matched point pairs of the two images as the total sample set D; arbitrarily choose four groups of matched points as one sample Jᵢ, and compute the background parameter model H(Jᵢ) from the sample data.
b) For the model H(Jᵢ) obtained in the previous step, determine the set of points in D whose geometric distance to H(Jᵢ) is less than the threshold d, and denote it S(H(Jᵢ)), called the consensus set of H(Jᵢ).
c) Repeat steps a) and b) to compute another consensus set S(H(Jₖ)). If |S(H(Jᵢ))| > |S(H(Jₖ))|, keep the consensus set S(H(Jᵢ)); otherwise, keep the consensus set S(H(Jₖ)).
d) After k random samplings, select the matched pairs in the consensus set of maximum size as the correct matches, i.e., the background corner set.
e) From the determined background corner set, compute the background motion transformation matrix H using the least squares method.
The values of d and k are determined by formulas (8) and (9) respectively:
d = ‖xᵢ′ − H·xᵢ‖  (8)
k = ln(1 − z) / ln(1 − w⁴)  (9)
where xᵢ is a data point of the total sample and xᵢ′ is its matched point; w is the probability that a sample point is a good sample (an inlier); and z is the required confidence that at least one of the k samples consists only of inliers.
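In practice the sampling loop of steps a)-e) is available as OpenCV's findHomography with the RANSAC flag, where the reprojection threshold plays the role of d in formula (8); a sketch with an assumed threshold value:

```python
import cv2
import numpy as np

# Estimate the eight-parameter background transformation matrix H from
# matched corners, rejecting moving-object matches (outliers) by RANSAC.
# cv2.findHomography bundles the sampling loop of steps a)-e).
def estimate_background_motion(pts_cur, pts_next, d=3.0):
    src = np.float32(pts_cur).reshape(-1, 1, 2)
    dst = np.float32(pts_next).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, d)
    return H, inlier_mask   # inliers = background corner set
```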
The flow of the second stage of target detection and recognition, target information separation and extraction, is shown in Fig. 3; the specific implementation is as follows:
1) Compute the frame difference image. Since there are multiple moving objects in the UAV's field of view, a forward-backward frame difference method is used to detect all moving objects, with the following formula:
E_(t−1) = |X_(t−1) − H_(t−2)·X_(t−2)| ∧ |X_(t−1) − H_t·X_t|  (10)
where X_(t−2), X_(t−1), X_t are any three consecutive frames of the video sequence; H_(t−2) and H_t are the background transformation matrices that warp X_(t−2) and X_t into the coordinate frame of X_(t−1); and E_(t−1) is the frame difference image. The aerial images of the UAV after this processing step are shown in Fig. 11.
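A sketch of formula (10) using OpenCV warps, under the assumption that the "∧" combination is realized as a pixelwise minimum of the two absolute differences:

```python
import cv2

# Forward-backward frame difference of formula (10): warp X_{t-2} and
# X_t into the frame of X_{t-1} with their background homographies,
# then combine the two absolute differences.
def frame_difference(x_prev2, x_prev1, x_cur, H_prev, H_cur):
    h, w = x_prev1.shape
    warped_prev2 = cv2.warpPerspective(x_prev2, H_prev, (w, h))
    warped_cur = cv2.warpPerspective(x_cur, H_cur, (w, h))
    d1 = cv2.absdiff(x_prev1, warped_prev2)
    d2 = cv2.absdiff(x_prev1, warped_cur)
    return cv2.min(d1, d2)   # assumed AND-like combination of both motions
```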
2) Binarization of the frame difference image: the image obtained in the previous step is binarized using a suitable threshold.
3) Morphological filtering. The binary image obtained in the previous step is filtered using morphological operations, so that the segmentation of each moving object becomes clearer. The morphological operation process is as follows:
a) Apply image erosion to reject isolated noise points.
b) Then apply image dilation, which expands the edges of the target, fills holes, and makes the contour smoother.
After the morphological processing, the detection result is fuller and the target region is clearer, which benefits the segmentation and information extraction of each moving object. Fig. 12 shows the aerial image after morphological filtering.
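A minimal sketch of steps 2) and 3), with the binarization threshold, kernel size, and iteration counts as assumed values:

```python
import cv2
import numpy as np

# Binarize the frame difference image, then erode (remove isolated
# noise) and dilate (fill holes, smooth contours), as described above.
def binarize_and_filter(diff, thresh=30, ksize=3, dilate_iter=2):
    _, binary = cv2.threshold(diff, thresh, 1, cv2.THRESH_BINARY)
    kernel = np.ones((ksize, ksize), np.uint8)
    binary = cv2.erode(binary, kernel, iterations=1)
    binary = cv2.dilate(binary, kernel, iterations=dilate_iter)
    return binary
```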
4) Separation and extraction of target information. To separate the multiple moving objects in each frame, connectivity association must first be carried out for each moving object: each moving object of every frame is labeled with a different number, and finally the regions with identical numbers are selected out. To achieve this, the sequential labeling method is commonly used; it can complete the labeling and separation of moving objects. Each frame is generally scanned pixel by pixel from top to bottom and from left to right, using a 3 × 3 pixel template. The specific steps are as follows:
a) Traverse the pixels of each frame, the order of traversal being from top to bottom and from left to right.
b) If a pixel satisfies two conditions, that its binarized value is 1 and that it is not yet numbered, assign a new number to the pixel.
c) Traverse the eight-neighborhood of the pixel found in b), and give the pixels satisfying the conditions in b) the same number.
d) When the conditions in c) are not satisfied, repeat operation b).
e) When all pixels with value 1 in the image have been traversed and numbered, the operation ends.
After each frame has been scanned, every pixel with value 1 has a number, and pixels with the same number belong to the same object; connected together they constitute a moving object. Assuming there are m objects, and taking the first moving object as an example, the rectangle is obtained as follows: traverse from the first labeled pixel to the last labeled pixel, and keep the minimum and maximum of the x and y coordinates of the labeled pixels, denoted xmin, ymin, xmax, ymax; the rectangle can then be drawn, generally with (xmin, ymin) and (xmax, ymax) as its diagonal corner points. The rectangles of the other moving objects are obtained in the same way. The result for two arbitrary adjacent frames of the UAV image sequence after this processing step is shown in Fig. 13.
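A sketch of the sequential labeling and rectangle extraction described above, implemented with an explicit 8-neighborhood flood fill (OpenCV's connectedComponentsWithStats would be an equivalent off-the-shelf choice):

```python
from collections import deque
import numpy as np

# Sequential 8-neighbourhood labeling of the binary frame difference
# image plus bounding-box extraction, following the scan order above.
def label_and_boxes(binary):
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    boxes, next_label = [], 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] == 1 and labels[y, x] == 0:
                next_label += 1
                labels[y, x] = next_label
                xmin = xmax = x
                ymin = ymax = y
                queue = deque([(y, x)])
                while queue:                      # flood the 8-neighbourhood
                    cy, cx = queue.popleft()
                    for ny in range(cy - 1, cy + 2):
                        for nx in range(cx - 1, cx + 2):
                            if (0 <= ny < h and 0 <= nx < w and
                                    binary[ny, nx] == 1 and labels[ny, nx] == 0):
                                labels[ny, nx] = next_label
                                queue.append((ny, nx))
                                xmin, xmax = min(xmin, nx), max(xmax, nx)
                                ymin, ymax = min(ymin, ny), max(ymax, ny)
                boxes.append((xmin, ymin, xmax, ymax))  # diagonal corners
    return labels, boxes
```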
3. The target tracking module is started, and the position and size of the tracking target rectangle obtained in the previous step are input into the two tracking algorithms of the tracking module. The actual operation of this step is as follows:
1) First assume that the target motion model obeys the uniform velocity model; the positioning result output by Kalman filtering is denoted the first target estimate y_kf.
The Kalman filter predicts the current state from the previously estimated state using the transition model, and updates the current state with the current measurement:
b⁻(t) = A·b(t−1) + ω_t  (11)
z(t) = M·b(t) + ε_t  (12)
The Kalman filtering gain K is then used to compute the current state estimate b(t):
b(t) = b⁻(t) + K·(z(t) − M·b⁻(t))
Assuming that the current moving target follows uniform motion, A and M are set according to this model, where A is the state transition matrix, ω_t controls the transition model error, M is the measurement matrix, and ε_t represents the measurement error; V_ω and V_ε are the covariances of ω_t and ε_t respectively. In our application, the size and location of the bounding box of the detected object are assigned as the state variable b(t) to initialize the Kalman filter.
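A sketch of this Kalman filter with cv2.KalmanFilter, assuming a state b = (cx, cy, w, h, vx, vy) under the uniform-velocity model; the noise covariances standing in for V_ω and V_ε are illustrative values, not taken from the patent:

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter over the bounding-box state.
def make_kalman(cx, cy, w, h):
    kf = cv2.KalmanFilter(6, 4)   # 6 state dims, 4 measured dims
    A = np.eye(6, dtype=np.float32)
    A[0, 4] = A[1, 5] = 1.0       # cx += vx, cy += vy each frame
    kf.transitionMatrix = A
    kf.measurementMatrix = np.eye(4, 6, dtype=np.float32)        # M
    kf.processNoiseCov = np.eye(6, dtype=np.float32) * 1e-2      # ~V_omega
    kf.measurementNoiseCov = np.eye(4, dtype=np.float32) * 1e-1  # ~V_eps
    kf.statePost = np.array([cx, cy, w, h, 0, 0],
                            dtype=np.float32).reshape(6, 1)
    return kf

# One tracking step: predict, then correct with the detected box.
def kalman_step(kf, measured_box):
    prediction = kf.predict()
    kf.correct(np.float32(measured_box).reshape(4, 1))
    return prediction[:4].ravel()   # y_kf: predicted (cx, cy, w, h)
```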
2) The mean shift tracking algorithm is used, with the position of its target template provided by the target detection and recognition module, so that it can output a positioning result, denoted the second target estimate y_ms. The mean shift algorithm is well established and is not repeated here.
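For completeness, a sketch of mean shift localization with cv2.meanShift over a hue-histogram back-projection; template_hist is assumed to be the normalized hue histogram of the target template provided by the detection module:

```python
import cv2

# Mean shift localization from a hue-histogram back-projection; the
# initial track_window comes from the detection module.
def mean_shift_track(frame_bgr, track_window, template_hist):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], template_hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, new_window = cv2.meanShift(backproj, track_window, criteria)
    return new_window   # y_ms: (x, y, w, h) of the relocated target
```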
3) The weighted-sum data fusion method is used to output the target positioning result when the target is not lost. If the target is lost, the search module is enabled to relocate the target.
Given the first target estimate y_kf output by the first step and the second target estimate y_ms output by the second step, the following strategy is used for the weighted fusion of the data. The Bhattacharyya coefficient is used to measure the degree of similarity between the target model and the candidate region (the second target estimate). When the similarity is greater than 0.8, the second target estimate is considered fully trustworthy; when the similarity is greater than 0.5 but less than 0.8, the second target estimate is not fully trusted and the data weighting fusion operation is performed; when the similarity is less than 0.5, the target is considered occluded or its state considered changed, i.e., the target is considered lost, and the target search module must be started to relocate the target. The three data fusion modes can be decided by formulas (13), (14), and (15) respectively:
ρ > 0.8: y = y_ms  (13)
0.5 ≤ ρ ≤ 0.8: y = d·y_ms + (1 − d)·y_kf  (14)
ρ < 0.5: y = NULL  (15)
where ρ is the similarity, d is an empirical value, and y_ms, y_kf are the target values of the mean shift algorithm and the Kalman filtering algorithm respectively.
As can be seen from the above, when the output value is NULL, the fusion strategy considers the target lost due to reasons such as occlusion; the UAV system then automatically switches from the tracking module to the target search module and relocates the position of the target within the region of the ground station display screen.
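The decision logic of formulas (13)-(15) can be sketched as follows; the weight d is an empirical value, and 0.7 below is only an assumed placeholder:

```python
# Weighted fusion of the mean shift and Kalman outputs per formulas
# (13)-(15); rho is the Bhattacharyya similarity.
def fuse(y_ms, y_kf, rho, d=0.7):
    if rho > 0.8:
        return y_ms                       # (13): trust mean shift fully
    if rho >= 0.5:
        return d * y_ms + (1 - d) * y_kf  # (14): weighted mixing
    return None                           # (15): target lost -> search module
```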
4) Fig. 7 is the search sequence flow chart. When the tracking target is lost, the target search module is started. This module uses a sequential searching method divided into two levels, so that the reasons for target loss are addressed more specifically and the search efficiency is higher.
The first level is the equidistant search based on the difference between preceding and following frames: y_(k+1) = y_k + Δy, where Δy = y_k − y_(k−1).
a) Assume the currently processed image sequence frame is the k-th frame and y_k is the center of the target at time k; the target centers of the tracked image sequence are successively y₀, y₁, …, y_(k−1), y_k, y_(k+1), ….
b) Using the frame-difference equidistant formula, the center position in frame k+1 is computed from the position in frame k; a candidate rectangle of the same size as the one output by the target detection and recognition module is then taken at that position, its target color histogram is computed, and its similarity with the target template is calculated. If the similarity is greater than the set threshold of 0.75, the candidate template is trusted and the target has been found; otherwise it is distrusted, and the second-level search strategy is entered.
The second level is the local/global search strategy: first a local search, i.e., in the subregion where the target was lost in the previous frame, the particle filter method is used to search again. Specifically, if the target is lost in region 6 of the camera imaging field of view, N particles are first uniformly scattered in that region to relocate the target. If the target still cannot be found within K frames, the subregion particle filter method is used: the particle filter tracking method is applied in each of regions 1-9, each region filters out one tracking result, and the results of the regions are then fused by weighting, finally retrieving the position of the target.
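A sketch of the first-level equidistant search: predict the center by y_(k+1) = y_k + (y_k − y_(k−1)), then accept the candidate window only if its histogram similarity to the template exceeds 0.75. The hue histogram compared via the Bhattacharyya distance is an assumed concrete choice:

```python
import cv2
import numpy as np

# First-level search: predict the centre by the frame-difference
# equidistant rule, then test the candidate window against the template.
def equidistant_search(frame_bgr, y_k, y_km1, box_wh, template_hist):
    cx, cy = 2 * np.array(y_k) - np.array(y_km1)   # predicted centre
    w, h = box_wh
    x0, y0 = int(cx - w / 2), int(cy - h / 2)
    patch = cv2.cvtColor(frame_bgr[y0:y0 + h, x0:x0 + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([patch], [0], None, [16], [0, 180])
    cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)
    # Bhattacharyya distance: 0 = identical; convert to a similarity.
    sim = 1.0 - cv2.compareHist(template_hist, hist,
                                cv2.HISTCMP_BHATTACHARYYA)
    return (x0, y0, w, h) if sim > 0.75 else None  # None -> second level
```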
4. According to the target positioning result output by the previous step, the tracking command generation module is enabled to adjust the UAV flight mode so that the moving target is located in the central region of the image. Fig. 8 shows the numbered subregions of the image; using this partition, the tracking command generation module sends commands through the wireless transmission module to the flight control system of the UAV and adjusts the flight mode, so that the imaging region of the target at the current moment moves toward the central region (region 5). Specifically, the adjustment modes of the tracking command generation module are as follows:
Zone 5: the central image region; if the target center point is located in this region, the flight attitude of the UAV is kept unchanged and no tracking command is generated.
Zone 1: if the target center point is located in this region, the tracking command module generates a front-left flight mode and controls the UAV flight attitude so that the target image center point moves to the central image region.
Zone 2: if the target center point is located in this region, the tracking command module generates a forward flight mode and controls the UAV flight attitude so that the target image center point moves to the central image region.
Zone 3: if the target center point is located in this region, the tracking command module generates a front-right flight mode and controls the UAV flight attitude so that the target image center point moves to the central image region.
Zone 4: if the target center point is located in this region, the tracking command module generates a leftward flight mode and controls the UAV flight attitude so that the target image center point moves to the central image region.
Zone 6: if the target center point is located in this region, the tracking command module generates a rightward flight mode and controls the UAV flight attitude so that the target image center point moves to the central image region.
Zone 7: if the target center point is located in this region, the tracking command module generates a rear-left flight mode and controls the UAV flight attitude so that the target image center point moves to the central image region.
Zone 8: if the target center point is located in this region, the tracking command module generates a rearward flight mode and controls the UAV flight attitude so that the target image center point moves to the central image region.
Zone 9: if the target center point is located in this region, the tracking command module generates a rear-right flight mode and controls the UAV flight attitude so that the target image center point moves to the central image region.
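The zone-to-command mapping above can be sketched as a simple lookup, assuming the nine regions of Fig. 8 are numbered row-major (1-3 top, 4-6 middle, 7-9 bottom); the command strings are illustrative:

```python
# Mapping from the 3x3 display-screen regions (Fig. 8) to flight
# adjustment commands, as enumerated above.
COMMANDS = {
    1: "fly front-left",  2: "fly forward",   3: "fly front-right",
    4: "fly left",        5: "hold attitude", 6: "fly right",
    7: "fly rear-left",   8: "fly rearward",  9: "fly rear-right",
}

def tracking_command(target_cx, target_cy, screen_w, screen_h):
    col = min(2, int(3 * target_cx / screen_w))   # 0, 1, 2
    row = min(2, int(3 * target_cy / screen_h))
    return COMMANDS[3 * row + col + 1]
```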

Claims (10)

1. A UAV-based ground moving target real-time tracking method, characterized by comprising the following steps:
1) the UAV patrols along a predetermined flight path and transfers the image sequence captured over the ground field of view to the ground control station, where the target of interest in the UAV's field of view is detected;
2) the two-dimensional image rectangle size and center position information of the above target of interest are extracted;
3) using the rectangle size and center position information, the output data of the mean shift algorithm and the Kalman filtering algorithm are fused, and the final target positioning result is output in the form of data weighting.
2. The UAV-based ground moving target real-time tracking method according to claim 1, characterized in that after step 3), the UAV flight mode is adjusted according to the target positioning result so that the moving target is located in the central area of the ground station display screen.
3. The UAV-based ground moving target real-time tracking method according to claim 1, characterized in that the implementation of step 2) comprises:
1) extracting the Shi-Tomasi corner sets of two adjacent frames of the image sequence captured by the UAV;
2) constructing synthesis basis descriptors for the Shi-Tomasi corner sets of the two frames respectively;
3) performing feature matching on the Shi-Tomasi corner sets with the synthesis basis descriptors to obtain the corner matching pairs of the two adjacent frames;
4) estimating the background motion transformation matrix from the corner matching pairs obtained in step 3) using the RANSAC method, and performing image background motion compensation;
5) performing the frame difference operation on the two adjacent motion-compensated frames to obtain the frame difference image, and binarizing the frame difference image;
6) performing the morphological filtering operation on the frame difference image and carrying out target information separation and extraction to obtain the size and center position information of the target rectangle.
4. The UAV-based ground moving target real-time tracking method according to claim 3, characterized in that the generation of the synthesis basis descriptors of all corners of two adjacent frames comprises:
1) binarizing each feature point neighborhood image FRI in the two adjacent frames: the average gray value of the FRI is calculated, and a pixel value in the FRI is set to 1 if it is greater than the average gray value and to 0 otherwise;
2) dividing each 30 × 30 FRI in the two adjacent frames into 6 × 6 subregions of size 5 × 5, a synthesis basis image being a 5 × 5 square composed of black and white elements; the number of black pixels of a synthesis basis image is half the number of pixels of an FRI subregion, and the number of synthesis basis images is M = K·ln(N/K), where N is the number of pixels of an FRI subregion and K is the number of black pixels in a synthesis basis image;
3) for any FRI in step 2), comparing all subregions of that FRI with the synthesis basis image set in order from left to right and top to bottom; each subregion generates a 9-dimensional vector, and the 9-dimensional vectors of the 36 subregions are combined to finally form a synthesis basis descriptor of 324 dimensions.
5. The UAV-based ground moving target real-time tracking method according to claim 4, characterized in that the 9-dimensional vector of an FRI subregion is generated as follows: the comparison value of a subregion with one synthesis basis image in the set is the number of positions at which both are black at the same pixel; the synthesis basis images are compared in order from left to right and top to bottom; a subregion is compared one by one with all synthesis basis images in the set according to the above comparison rule and order, giving 9 integer values that constitute the 9-dimensional vector.
6. The UAV-based ground moving target real-time tracking method according to claim 3, characterized in that the specific steps of target information separation and extraction comprise:
a) traversing each filtered frame difference image, the order of traversal being from top to bottom and from left to right;
b) if a pixel satisfies the condition that its binarized value is 1 and it is not yet numbered, assigning a new number to the pixel;
c) traversing the eight-neighborhood of the pixel given the new number and, according to the condition in step b), giving the qualifying pixels in the eight-neighborhood the same number as the pixel given the new number; for pixels in the eight-neighborhood that do not satisfy the condition, returning to step b);
d) when all pixels with value 1 in the frame difference image have been traversed and numbered, the operation ends.
7. The UAV-based ground moving target real-time tracking method according to claim 6, characterized in that the rectangle is determined as follows: after each filtered frame difference image has been scanned, every pixel with value 1 has a number, and pixels with the same number belong to the same object; connected together they constitute a moving object. Assuming there are m moving objects, the rectangle of the first moving object is obtained as follows: traverse from the first labeled pixel to the last labeled pixel, keep the minimum and maximum of the x and y coordinates of the labeled pixels, denoted xmin, ymin, xmax, ymax, and draw the rectangle with (xmin, ymin) and (xmax, ymax) as its diagonal corner points.
8. A ground moving target real-time tracking system, characterized by comprising:
a UAV, for patrolling along a predetermined flight path and transferring the captured image sequence to the ground control station;
a ground control station, for detecting the target of interest in the UAV's field of view, extracting the two-dimensional image rectangle size and center position information of the target of interest, and, using that information, fusing the output data of the mean shift algorithm and the Kalman filtering algorithm and outputting the final target positioning result in the form of data weighting.
9. The ground moving target real-time tracking system according to claim 8, characterized by further comprising:
a tracking command generation module, for adjusting the UAV flight mode according to the target positioning result so that the moving target is located in the central area of the ground station display screen.
10. The ground moving target real-time tracking system according to claim 8, characterized in that
the ground control station comprises:
a detection and recognition module, for detecting the target of interest in the UAV's field of view and extracting the two-dimensional image rectangle size and center position information of the target of interest;
a target tracking module, for fusing the output data of the mean shift algorithm and the Kalman filtering algorithm using the rectangle size and center position information, and outputting the final target positioning result in the form of data weighting;
a target search module, for relocating the target using a sequential search method when the tracking target is lost;
a tracking command generation module, for generating the corresponding tracking command according to the imaging region of the tracking target on the ground station display screen, so that the target is located at the center of the display screen.
CN201710206676.XA 2017-03-31 2017-03-31 Ground moving target real-time tracking method and system based on unmanned aerial vehicle Expired - Fee Related CN106981073B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710206676.XA CN106981073B (en) 2017-03-31 2017-03-31 Ground moving target real-time tracking method and system based on unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710206676.XA CN106981073B (en) 2017-03-31 2017-03-31 Ground moving target real-time tracking method and system based on unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN106981073A true CN106981073A (en) 2017-07-25
CN106981073B CN106981073B (en) 2019-08-06

Family

ID=59339192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710206676.XA Expired - Fee Related CN106981073B (en) 2017-03-31 2017-03-31 Ground moving target real-time tracking method and system based on unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN106981073B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101324956A (en) * 2008-07-10 2008-12-17 上海交通大学 Method for tracking anti-shield movement object based on average value wander
US20140232893A1 (en) * 2012-11-26 2014-08-21 Pixart Imaging, Inc. Image sensor and operating method thereof
CN103455797A (en) * 2013-09-07 2013-12-18 西安电子科技大学 Detection and tracking method of moving small target in aerial shot video
CN106023257A (en) * 2016-05-26 2016-10-12 南京航空航天大学 Target tracking method based on rotor UAV platform

Cited By (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762252A (en) * 2017-08-18 2021-12-07 深圳市道通智能航空技术股份有限公司 Unmanned aerial vehicle intelligent following target determination method, unmanned aerial vehicle and remote controller
CN113762252B (en) * 2017-08-18 2023-10-24 深圳市道通智能航空技术股份有限公司 Unmanned aerial vehicle intelligent following target determining method, unmanned aerial vehicle and remote controller
US10719087B2 (en) 2017-08-29 2020-07-21 Autel Robotics Co., Ltd. Target tracking method, unmanned aerial vehicle, and computer readable storage medium
WO2019041534A1 (en) * 2017-08-29 2019-03-07 深圳市道通智能航空技术有限公司 Target tracking method, unmanned aerial vehicle and computer-readable storage medium
WO2019041569A1 (en) * 2017-09-01 2019-03-07 歌尔科技有限公司 Method and apparatus for marking moving target, and unmanned aerial vehicle
CN107909600B (en) * 2017-11-04 2021-05-11 南京奇蛙智能科技有限公司 Unmanned aerial vehicle real-time moving target classification and detection method based on vision
CN107909600A (en) * 2017-11-04 2018-04-13 南京奇蛙智能科技有限公司 The unmanned plane real time kinematics target classification and detection method of a kind of view-based access control model
CN108286959A (en) * 2017-12-14 2018-07-17 彩虹无人机科技有限公司 A kind of O-E Payload for UAV is detectd to be calculated and display methods according to region
CN108108697A (en) * 2017-12-25 2018-06-01 中国电子科技集团公司第五十四研究所 A kind of real-time UAV Video object detecting and tracking method
CN108108697B (en) * 2017-12-25 2020-05-19 中国电子科技集团公司第五十四研究所 Real-time unmanned aerial vehicle video target detection and tracking method
WO2019127306A1 (en) * 2017-12-29 2019-07-04 Beijing Airlango Technology Co., Ltd. Template-based image acquisition using a robot
CN109032166B (en) * 2018-03-08 2020-01-21 深圳中琛源科技股份有限公司 Method for immediately tracking running vehicle based on unmanned aerial vehicle
CN108573498A (en) * 2018-03-08 2018-09-25 李绪臣 The instant tracking system of driving vehicle based on unmanned plane
CN108573498B (en) * 2018-03-08 2019-04-26 上海申雪供应链管理有限公司 The instant tracking system of driving vehicle based on unmanned plane
CN109032166A (en) * 2018-03-08 2018-12-18 李绪臣 Track the method for driving vehicle immediately based on unmanned plane
CN109902591A (en) * 2018-03-13 2019-06-18 北京影谱科技股份有限公司 A kind of automobile search system
CN109902591B (en) * 2018-03-13 2023-10-27 北京影谱科技股份有限公司 Automobile searching system
CN108446634A (en) * 2018-03-20 2018-08-24 北京天睿空间科技股份有限公司 The aircraft combined based on video analysis and location information continues tracking
CN108534797A (en) * 2018-04-13 2018-09-14 北京航空航天大学 A kind of real-time high-precision visual odometry method
CN109446901A (en) * 2018-09-21 2019-03-08 北京晶品特装科技有限责任公司 A kind of embedded-portable real-time humanoid target detection algorithm
CN110471442A (en) * 2018-09-24 2019-11-19 深圳市道通智能航空技术有限公司 A kind of target observation method, related device and system
CN109376660A (en) * 2018-10-26 2019-02-22 天宇经纬(北京)科技有限公司 A kind of target monitoring method, apparatus and system
CN109765939A (en) * 2018-12-21 2019-05-17 中国科学院自动化研究所南京人工智能芯片创新研究院 Gimbal control method and device for unmanned aerial vehicle, and storage medium
CN109828488A (en) * 2018-12-27 2019-05-31 北京航天福道高技术股份有限公司 Dual-optical detection and tracking system with integrated acquisition and transmission
CN109933087A (en) * 2019-03-18 2019-06-25 西安爱生技术集团公司 Unmanned aerial vehicle and ground maneuvering target virtual formation battle position keeping control method
CN109933087B (en) * 2019-03-18 2021-12-10 西安爱生技术集团公司 Unmanned aerial vehicle and ground maneuvering target virtual formation battle position keeping control method
CN110189297A (en) * 2019-04-18 2019-08-30 杭州电子科技大学 A kind of magnetic material appearance defect detection method based on gray-level co-occurrence matrix
CN110189297B (en) * 2019-04-18 2021-02-19 杭州电子科技大学 Magnetic material appearance defect detection method based on gray level co-occurrence matrix
CN110097586A (en) * 2019-04-30 2019-08-06 青岛海信网络科技股份有限公司 A kind of face detection and tracking method and device
CN110097586B (en) * 2019-04-30 2023-05-30 青岛海信网络科技股份有限公司 Face detection tracking method and device
CN110120077A (en) * 2019-05-06 2019-08-13 航天东方红卫星有限公司 A kind of in-orbit relative radiometric calibration method for area-array cameras based on satellite attitude adjustment
CN110473229B (en) * 2019-08-21 2022-03-29 上海无线电设备研究所 Moving object detection method based on independent motion characteristic clustering
CN110473229A (en) * 2019-08-21 2019-11-19 上海无线电设备研究所 A kind of moving target detection method based on independent-motion feature clustering
CN110930455A (en) * 2019-11-29 2020-03-27 深圳市优必选科技股份有限公司 Positioning method, positioning device, terminal equipment and storage medium
CN110930455B (en) * 2019-11-29 2023-12-29 深圳市优必选科技股份有限公司 Positioning method, positioning device, terminal equipment and storage medium
CN111160304A (en) * 2019-12-31 2020-05-15 华中科技大学 Local frame difference and multi-frame fusion ground moving target detection and tracking method
CN111160304B (en) * 2019-12-31 2022-03-29 华中科技大学 Local frame difference and multi-frame fusion ground moving target detection and tracking method
CN113496136A (en) * 2020-03-18 2021-10-12 中强光电股份有限公司 Unmanned aerial vehicle and image identification method thereof
CN111476116A (en) * 2020-03-24 2020-07-31 南京新一代人工智能研究院有限公司 Rotor unmanned aerial vehicle system for vehicle detection and tracking, and detection and tracking method
CN111898434A (en) * 2020-06-28 2020-11-06 江苏柏勋科技发展有限公司 Screen detection and analysis system
CN111898434B (en) * 2020-06-28 2021-03-19 江苏柏勋科技发展有限公司 Video detection and analysis system
CN111798434A (en) * 2020-07-08 2020-10-20 哈尔滨体育学院 Martial arts competition area detection method based on Ranpac model
WO2022027596A1 (en) * 2020-08-07 2022-02-10 深圳市大疆创新科技有限公司 Control method and device for mobile platform, and computer readable storage medium
CN112766103B (en) * 2021-01-07 2023-05-16 国网福建省电力有限公司泉州供电公司 Machine room inspection method and device
CN112766103A (en) * 2021-01-07 2021-05-07 国网福建省电力有限公司泉州供电公司 Machine room inspection method and device
CN112927264B (en) * 2021-02-25 2022-12-16 华南理工大学 Unmanned aerial vehicle tracking shooting system and RGBD tracking method thereof
CN112927264A (en) * 2021-02-25 2021-06-08 华南理工大学 Unmanned aerial vehicle tracking shooting system and RGBD tracking method thereof
CN113034547A (en) * 2021-04-07 2021-06-25 中国科学院半导体研究所 Target tracking method, digital integrated circuit chip, electronic device, and storage medium
CN113034547B (en) * 2021-04-07 2024-02-06 中国科学院半导体研究所 Target tracking method, digital integrated circuit chip, electronic device, and storage medium
CN113298788A (en) * 2021-05-27 2021-08-24 南京航空航天大学 Vision-based marine mobile platform tracking and identifying method
CN115984335A (en) * 2023-03-20 2023-04-18 华南农业大学 Method for acquiring droplet characteristic parameters based on image processing

Also Published As

Publication number Publication date
CN106981073B (en) 2019-08-06

Similar Documents

Publication Publication Date Title
CN106981073B (en) A kind of ground moving object method for real time tracking and system based on unmanned plane
Zhao et al. Detection, tracking, and geolocation of moving vehicle from uav using monocular camera
Cui et al. Drones for cooperative search and rescue in post-disaster situation
CN108229587B (en) Autonomous transmission tower scanning method based on hovering state of aircraft
CN109191504A (en) A kind of unmanned plane target tracking method
CN112505065B (en) Method for detecting surface defects of large part by indoor unmanned aerial vehicle
CN105352509B (en) UAV moving target tracking and localization method under geographic information spatio-temporal constraints
Sanfourche et al. Perception for UAV: Vision-Based Navigation and Environment Modeling.
Milford et al. Aerial SLAM with a single camera using visual expectation
CN112488061B (en) Multi-aircraft detection and tracking method combined with ADS-B information
CN108830286A (en) A kind of automatic moving-target detection and tracking method for reconnaissance UAVs
CN102722697A (en) Unmanned aerial vehicle autonomous navigation landing visual target tracking method
CN110260866A (en) A kind of vision-based robot localization and obstacle avoidance method
Holz et al. Continuous 3D sensing for navigation and SLAM in cluttered and dynamic environments
Xiang et al. UAV based target tracking and recognition
Li et al. Metric sensing and control of a quadrotor using a homography-based visual inertial fusion method
Zhou et al. Information-efficient 3-D visual SLAM for unstructured domains
CN116578035A (en) Rotor unmanned aerial vehicle autonomous landing control system based on digital twin technology
Esposito et al. A hybrid approach to detection and tracking of unmanned aerial vehicles
CN108469729A (en) A kind of human body target recognition and following method based on RGB-D information
Wang et al. Online drone-based moving target detection system in dense-obstructer environment
CN112241180B (en) Visual processing method for landing guidance of unmanned aerial vehicle mobile platform
Li-Chee-Ming et al. Augmenting visp’s 3d model-based tracker with rgb-d slam for 3d pose estimation in indoor environments
CN113781524A (en) Target tracking system and method based on two-dimensional label
Tang et al. Online camera-gimbal-odometry system extrinsic calibration for fixed-wing UAV swarms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20190806