CN108446634A - Aircraft continuous tracking method combining video analysis and position information - Google Patents


Info

Publication number
CN108446634A
Authority
CN
China
Prior art keywords
target
aircraft
target aircraft
tracking
image
Prior art date
Legal status
Granted
Application number
CN201810230742.1A
Other languages
Chinese (zh)
Other versions
CN108446634B (en)
Inventor
栗向滨
郑文涛
王国夫
Current Assignee
Beijing Terravision Technology Co Ltd
Original Assignee
Beijing Terravision Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Terravision Technology Co Ltd
Priority to CN201810230742.1A
Publication of CN108446634A
Application granted
Publication of CN108446634B
Legal status: Active


Classifications

    • G06V 20/13 — Terrestrial scenes: satellite images
    • G06T 7/11 — Image analysis; segmentation: region-based segmentation
    • G06T 7/251 — Analysis of motion using feature-based methods (e.g. tracking of corners or segments), involving models
    • G06T 7/75 — Determining position or orientation of objects or cameras using feature-based methods, involving models
    • G06V 10/507 — Extraction of image or video features: summing image-intensity values; histogram projection analysis
    • G06V 20/42 — Higher-level, semantic clustering, classification or understanding of video scenes: sport video content
    • G06T 2207/10016 — Image acquisition modality: video; image sequence
    • G06T 2207/20021 — Special algorithmic details: dividing image into blocks, subimages or windows
    • G06T 2207/20156 — Image segmentation details: automatic seed setting
    • G06V 2201/07 — Target detection

Abstract

The present invention relates to an aircraft continuous-tracking method combining video analysis with position information. The image coordinates of a target aircraft in the video image are computed from the aircraft's position information, while single-target tracking of the aircraft is performed on the surveillance video; the two results are compared, and if the distance between them persistently exceeds an effective tracking threshold, the single-target tracker is considered to have lost the target. Once the loss is confirmed, the projected image-coordinate point of the target aircraft is taken as the center, and a search box determined by the effective tracking range around that point is used to re-identify the target aircraft. Preferably, HOG features serve as the feature descriptor for target-aircraft detection, and the DSST algorithm is used to compute the position model and scale model of the tracking filter. The invention can detect the loss of the target aircraft promptly, requires little data processing to recover a lost target, and is mainly applicable to the continuous tracking of aircraft and similar targets.

Description

Aircraft continuous tracking method combining video analysis and position information
Technical field
The present invention relates to an aircraft continuous-tracking method combining video analysis with position information. It belongs to the fields of video surveillance and computer information technology, and is mainly applicable to the continuous tracking of aircraft and similar targets.
Background technology
Existing aircraft continuous-tracking techniques fall into two main classes. The first is based on existing aircraft position information: the aircraft longitude and latitude used for positioning are mapped to the image coordinates of an aircraft monitoring or tracking system, thereby achieving aircraft identification and tracking. The longitude-latitude information (position information) of an aircraft can be obtained by various means, such as scene surveillance radar (SSR), automatic dependent surveillance-broadcast (ADS-B), and multilateration. The second class is based on video analysis of the monitoring or tracking system, realizing aircraft identification and tracking through video analysis. Single-target tracking based on video analysis currently follows two main technical routes: methods based on generative models and methods based on discriminative models. Generative methods model the target region in the current video frame and take the region of the next frame most similar to the model as the predicted position of the tracked target; representative methods include Kalman filtering, particle filtering, mean-shift, and ASMS. Discriminative methods combine image features with machine learning: the target region of the current frame serves as a positive sample and the background as negative samples, a classifier is trained with a machine-learning method, and the optimal region is sought in the next frame with the trained classifier. Representative methods include classical pedestrian detection (HOG features with an SVM classifier), Struck (Haar features with a structured-output SVM), TLD, and DAT, as well as deep-learning methods such as MDNet, TCNN, SiamFC, GOTURN, and DLT.
Each of the above methods suits its own scenarios and requirements, but all retain shortcomings. Methods based on existing aircraft position information suffer from low position-update frequency and low positioning accuracy, so their application scenarios are severely restricted and they cannot meet practical application requirements for aircraft. Methods based on video analysis of monitoring or tracking systems face several major difficulties: target appearance deformation, illumination changes, fast motion and motion blur, interference from backgrounds similar to the target, in-plane and out-of-plane rotation, scale changes, occlusion, and the target moving out of the field of view. Any of these technical difficulties can cause the selected target to be lost, i.e. the tracking box separates from the real target in the video, hindering practical application and extension.
Facing the loss problem, the issue that must be addressed first is how to determine that tracking has been lost and the moment of loss; yet none of the existing algorithms can truly judge when a target is lost. The prior art typically treats the disappearance of the tracking box from the video as target loss, in which case target re-identification can be used to recover the lost target: features of the tracked target are recorded before it disappears, global target detection is performed on the video after the disappearance, and the detected candidates are matched against the recorded features; a successful match means the target has been recovered. Because existing re-identification algorithms all use sliding-window region selection over the whole video image, they lack specificity, have high time complexity, and produce redundant windows; even deep-learning-based target detection must be run over the entire video image and is rather slow. In particular, the most common loss situation in practice is that the tracking box remains in the picture but is no longer on the target's actual position, producing false tracking. This kind of loss is more harmful than the disappearance of the tracking box because it is hard to discover and correct, and the prior art still cannot judge or correct target loss while the tracking box remains present.
Summary of the invention
To solve the above technical problems, the present invention provides an aircraft continuous-tracking method combining video analysis and position information. The method can detect the loss of a target aircraft promptly, and requires little data processing to recover the lost target.
The technical scheme of the invention is as follows: an aircraft continuous-tracking method combining video analysis and position information. From the target aircraft's physical-world positioning coordinates contained in its position information, the image coordinates of the target aircraft in the video image are computed; single-target tracking of the aircraft is performed on the surveillance video; and the image coordinates of the single-target tracking box are compared with the computed image coordinates of the aircraft. Over a set number of consecutive video frames (for example 180 consecutive frames), if the distance between the two does not persistently exceed an effective tracking threshold (for example twice the width of the target-aircraft tracking box), single-target tracking is considered not to have lost the target; if the distance persistently exceeds the threshold, single-target tracking is considered to have lost the target. The image coordinate of the target-aircraft tracking box is that of its center point, which corresponds to the aircraft position given in the target-aircraft position information.
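As a sketch of the loss-detection rule just described (the 180-frame window and the distance threshold are the example values from the text; the function names are illustrative, not the patent's code):

```python
def box_center(box):
    """Center of a tracking box given as (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def is_target_lost(distances, threshold, window=180):
    """Target is declared lost only if the distance between the projected
    position and the tracking-box center exceeds the threshold in EVERY one
    of the last `window` frames (a persistent, not momentary, deviation)."""
    recent = distances[-window:]
    return len(recent) == window and all(d > threshold for d in recent)
```

A momentary deviation (one frame back inside the threshold) resets the decision, which matches the "persistently exceeds" condition.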
Preferably, the matching between the target aircraft and the target-aircraft tracking box is realized as follows: from the image coordinates of all aircraft in the video image, the distance between each aircraft's image coordinates and the tracking box's image coordinates is computed, and the aircraft nearest to the tracking box is found. If, over the set number of consecutive video frames (for example 180 consecutive frames), the distance between this aircraft and the tracking box never persistently exceeds the effective tracking threshold, the match is confirmed: the nearest aircraft is the target aircraft of the single-target tracking, i.e. the target aircraft corresponding to the tracking box. The aircraft identification information in its position information, for example the flight number, can then be used as the target-aircraft identification of the single-target tracking.
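The nearest-aircraft step of this matching can be sketched as follows (aircraft given as (identifier, image-coordinate) pairs; names illustrative):

```python
def nearest_aircraft(box_center, aircraft):
    """Aircraft whose projected image coordinates are closest to the
    tracking-box center; `aircraft` is a list of (flight_id, (x, y)).
    The returned candidate must then stay within the effective tracking
    threshold over the whole frame window before the match is confirmed."""
    cx, cy = box_center
    return min(aircraft, key=lambda a: (a[1][0] - cx) ** 2 + (a[1][1] - cy) ** 2)
```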
After confirming that single-target tracking has lost the target, the target aircraft is re-identified. A preferred way is: taking the image-coordinate point of the target aircraft in the video image (for example in the current frame where re-identification starts) as the center, a search box (the search region within the video image) is determined from the effective tracking range of that point (the whole region whose distance from the point does not exceed the effective tracking threshold), and re-identification of the target aircraft is performed within it. The new tracking box obtained by re-identification becomes the target-aircraft tracking box of the current frame, and subsequent single-target tracking continues from it.
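A minimal sketch of deriving the re-identification search box from the projected image-coordinate point, assuming the effective tracking range is expressed as a multiple k of the tracking-box size (k = 2 mirrors the example "twice the box width" threshold; the helper is hypothetical):

```python
def reacquisition_search_box(center, track_w, track_h, k=2.0):
    """Search window centered on the projected image coordinate of the
    target aircraft; k scales the effective tracking range (assumption).
    Returns (x_min, y_min, x_max, y_max)."""
    cx, cy = center
    half_w = k * track_w
    half_h = k * track_h
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```

Restricting detection to this window is what keeps the data-processing load of re-identification small compared with whole-image sliding-window search.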
The re-identification of the target aircraft preferably uses the model and/or model parameters employed by the tracker the last time the distance between the target aircraft and the tracking box was within the effective tracking threshold.
Preferably, HOG (Histogram of Oriented Gradients) features are used as the feature descriptor for target-aircraft detection.
Preferably, the DSST (Discriminative Scale Space Tracker) algorithm is used to compute the position model and scale model of the tracking filter.
Scale estimation can be performed with a one-dimensional filter, and location estimation with a two-dimensional filter.
Single-target tracking can be carried out according to the position model and scale model of the tracking filter. When the distance between the image coordinates of the target aircraft and those of the tracking box in the current frame does not exceed the effective tracking threshold, the filter's position model and scale model are updated with a preset learning-rate parameter η.
When the distance between the image coordinates of the target aircraft and those of the tracking box in the current frame exceeds the effective tracking threshold, the filter's position model and scale model are preferably not updated; that is, the most recent position model and scale model are kept.
Preferably, on the starting frame (also called the first frame), a starting seed point is selected manually within the outline of the target aircraft, and the contour region of the aircraft is obtained with a region-growing image-segmentation method. From the extreme abscissa and ordinate values x_min, x_max, y_min, y_max of the contour region, the rectangle with corner points (x_min, y_min), (x_max, y_min), (x_min, y_max), (x_max, y_max) is taken as the starting target-aircraft tracking box (P1(x_min, y_min), P2(x_max, y_max)).
For HOG feature extraction, the color image should usually first be converted to grayscale and Gamma-corrected; preferably γ = 0.5 is used for the Gamma correction.
The gradient-histogram statistics of a cell can be computed as follows: gradient directions are mapped into a 180-degree range and divided into a 9-dimensional feature vector; the gradient magnitude of each pixel is used as the projection weight, and the gradient direction determines which dimension it is projected onto.
The beneficial effects of the invention are: by comparing the positioned location of the target aircraft with the real-time tracking position obtained from video analysis, it is possible, even while the tracking box is still present, to discover in time that the box is no longer on the aircraft's actual position or within the effective tracking range. This avoids the false judgment caused by the mere existence of the tracking box, detects the loss of the tracked target promptly, and allows the aircraft lost by video tracking to be recovered in time from its actual position, reducing the data-processing load of re-identification and restoring effective tracking quickly.
Description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is a schematic diagram of the single-target tracking filter of the present invention.
Specific implementation mode
The present invention will be further described below in conjunction with the accompanying drawings.
As shown in Fig. 1, the present invention realizes continuous aircraft tracking by combining video analysis with position information. On the one hand, the position (image coordinates) of the target aircraft in the video image is computed from the aircraft position information; on the other hand, single-target tracking of the aircraft is carried out continuously by video analysis, yielding the aircraft tracking box. The two are compared, and when their distance exceeds a preset threshold, the aircraft tracking is judged to be lost. In that case, the invention uses the previously stored filter information for the tracked target aircraft: the aircraft position is computed from the aircraft position information supplied by the airport or airspace management system, filter-based detection is carried out in the region near the aircraft, and the tracked target aircraft is recovered and tracking continues.
This method mainly includes the following steps:
One, manually selecting the target aircraft by interaction
The application scenario of the invention is a real-time surveillance video stream, so the method of pausing the stream and drawing a tracking box around the target aircraft cannot be used. For the manual-selection step, an image-segmentation method based on region growing is therefore adopted: a seed point is first taken within the outline of the target aircraft appearing in the video, and the starting tracking box of the selected target aircraft is obtained by region-growing segmentation.
The image-segmentation method based on region growing is as follows:
1) A starting seed point c(x, y) is chosen in the video frame;
2) Centered on the seed point c(x, y), its neighborhood pixels are traversed recursively;
3) For each neighborhood pixel N(x', y'), a discriminant Q(c(x, y), N(x', y')) designed according to the prior art determines whether the pixel belongs to the aircraft;
4) If the discriminant is true, the neighborhood pixel N(x', y') is set as a new seed point, step 2) is entered, and the point is added to the result set (the same region as the seed point); otherwise this recursion is exited and step 3) is resumed with the next neighborhood pixel.
By recursive traversal with the above method, the contour region of the selected aircraft can be found. The minimum and maximum abscissa and ordinate x_min, x_max, y_min, y_max are determined on the boundary of the found region, giving the starting tracking box of the target aircraft (P1(x_min, y_min), P2(x_max, y_max)); single-target tracking can then begin from this starting box.
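The region-growing segmentation of steps 1)-4) can be sketched as follows (an iterative 4-neighbourhood variant to avoid recursion-depth limits; the discriminant Q is abstracted as a per-pixel predicate, and all names are illustrative):

```python
def region_grow(image, seed, predicate):
    """4-neighbourhood region growing from a single seed point. `image` is a
    2-D list of pixel values indexed as image[y][x]; `predicate(value)` plays
    the role of the discriminant Q deciding whether a pixel belongs to the
    aircraft. Returns the set of (x, y) points in the grown region."""
    h, w = len(image), len(image[0])
    region, stack = set(), [seed]
    while stack:
        x, y = stack.pop()
        if (x, y) in region or not (0 <= x < w and 0 <= y < h):
            continue
        if not predicate(image[y][x]):
            continue
        region.add((x, y))
        stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return region

def bounding_box(region):
    """Starting tracking box (x_min, y_min, x_max, y_max) of the region."""
    xs = [p[0] for p in region]
    ys = [p[1] for p in region]
    return (min(xs), min(ys), max(xs), max(ys))
```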
Two, single-target tracking of the target aircraft
Here, HOG features are used as the feature descriptor of the target aircraft.
The HOG feature-extraction process is as follows:
1) The video frame is converted to grayscale. HOG extracts texture features, for which color information plays no role, so the color image is first converted to a grayscale image;
2) The grayscale image is normalized. To improve the detector's robustness to disturbances such as illumination, Gamma correction is applied to the image to normalize the whole image; the aim is to adjust the contrast of the image and reduce the influence of local illumination and shadow, while also suppressing noise:
G(x, y) = F(x, y)^γ    formula (1)
where G(x, y) is the normalized image and F(x, y) is the grayscale image produced by step 1).
3) The gradient of each image pixel is computed: the horizontal and vertical gradients of each pixel are calculated according to formula (2), and the gradient magnitude and direction of each pixel position according to formula (3).
The horizontal and vertical gradients of the image at pixel (x, y) are:
G_x(x, y) = G(x + 1, y) − G(x − 1, y),  G_y(x, y) = G(x, y + 1) − G(x, y − 1)    formula (2)
Next the gradient magnitude and gradient direction at pixel (x, y) are calculated:
G(x, y) = sqrt( G_x(x, y)² + G_y(x, y)² ),  θ(x, y) = arctan( G_y(x, y) / G_x(x, y) )    formula (3)
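A direct transcription of this gradient step, assuming the standard HOG central differences (the unsigned direction is folded into [0, 180) as used by the histogram step; names illustrative):

```python
import math

def pixel_gradient(F, x, y):
    """Central-difference gradient, magnitude and unsigned direction (degrees,
    in [0, 180)) at interior pixel (x, y) of grayscale image F (2-D list,
    indexed F[y][x])."""
    gx = F[y][x + 1] - F[y][x - 1]
    gy = F[y + 1][x] - F[y - 1][x]
    magnitude = math.hypot(gx, gy)
    direction = math.degrees(math.atan2(gy, gx)) % 180.0
    return magnitude, direction
```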
4) The gradient-orientation histogram of each cell unit (cell) is computed: the image is divided into small cells, gradient directions are mapped into a 180-degree range, the gradient magnitude of each pixel is used as the projection weight, and the gradient direction determines which dimension receives the vote; typically the range is divided into 9 bins.
5) The gradient-orientation histogram of each block is computed: the gradient histograms of the cell units are collected to form a descriptor for each cell; cells are composed into larger units called blocks, and concatenating the feature vectors of the cells within a block yields the block's gradient-orientation histogram. Because of local illumination changes and variations in foreground-background contrast, the range of gradient intensities is large, so local contrast normalization of the gradients is needed. The strategy here is to normalize the contrast for each block, generally with L2-norm normalization.
This yields a descriptor for each block. For a given object feature the block may be rectangular or circular, determined by the object feature to be extracted. After the features are obtained, the target image is scanned with a step of one cell size to detect whether it contains matching object features; feature matching can be based on a similarity measure, for example the Euclidean distance.
6) The gradient-orientation histogram of each window is computed: concatenating the HOG feature vectors of all blocks in a window yields the HOG features of the window.
7) The gradient-orientation histogram of the whole image is computed: an image can be divided without overlap into multiple windows, and concatenating the feature vectors of all windows yields the HOG features of the whole image. If the window size equals the image size, the window's HOG features are those of the whole image; this is also the feature vector finally used for tracking.
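Steps 4) and 5) — the 9-bin magnitude-weighted orientation histogram of a cell and the L2 block normalization — can be sketched as follows (hard bin assignment for brevity; real HOG implementations usually interpolate votes between neighbouring bins):

```python
def cell_histogram(magnitudes, directions, bins=9):
    """9-bin orientation histogram for one cell: the unsigned direction in
    [0, 180) selects the bin, the gradient magnitude is the vote weight."""
    hist = [0.0] * bins
    width = 180.0 / bins
    for m, d in zip(magnitudes, directions):
        hist[int(d % 180.0 // width)] += m
    return hist

def l2_normalize(block, eps=1e-12):
    """L2 contrast normalization of a concatenated block descriptor."""
    norm = sum(v * v for v in block) ** 0.5
    return [v / (norm + eps) for v in block]
```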
Scale estimation is usually performed with a one-dimensional filter and location estimation with a two-dimensional filter; a three-dimensional filter can be used for joint scale-space location estimation of the target aircraft. The relationship between input, output, and filters is shown in Fig. 2.
Consider the d-dimensional HOG feature-map representation of the target aircraft. Let f be the target tracking rectangle (window) extracted from this HOG feature map; its feature dimensions are indexed by l ∈ {1, …, d}, with the l-th dimension written f^l. The goal is to find an optimal filter h containing one filter h^l per feature dimension, obtained by minimizing the cost function:
ε = ‖ Σ_{l=1..d} h^l ★ f^l − g ‖² + λ Σ_{l=1..d} ‖ h^l ‖²    formula (4)
Here g is the desired Gaussian response associated with f, with the peak of the response centered on the tracked target aircraft. The parameter λ ≥ 0 controls the influence of the regularization term. The solution of formula (4) can be obtained by transforming to the frequency domain:
H^l = Ḡ F^l / ( Σ_{k=1..d} F̄^k F^k + λ )    formula (5)
In this formula H, F and G are the frequency-domain representations of the spatial-domain variables h, f and g after the discrete Fourier transform, and the bar over a symbol denotes the complex conjugate of the corresponding frequency-domain representation.
The regularization parameter λ avoids the zero-frequency-component problem that arises after f is transformed to the frequency domain. The optimal filter could be obtained by minimizing the output error over all blocks, but that requires solving a d × d system of linear equations for every pixel, which is too time-consuming for online-learning applications. To obtain a robust approximation, the numerator and denominator of the correlation filter H^l of formula (5) at time t − 1 are written as:
A_{t−1}^l = Ḡ_{t−1} F_{t−1}^l    formula (6)
B_{t−1} = Σ_{k=1..d} F̄_{t−1}^k F_{t−1}^k    formula (7)
The filter at time t is then predicted by the updates:
A_t^l = (1 − η) A_{t−1}^l + η Ḡ_t F_t^l    formula (8)
B_t = (1 − η) B_{t−1} + η Σ_{k=1..d} F̄_t^k F_t^k    formula (9)
In formulas (8) and (9), η is the learning-rate parameter. Because many candidate tracking boxes with different centers and different sizes are sampled around the target tracking box, a scoring function has to be computed; the candidate with the highest score, i.e. the maximum value of y in formula (10), is the new target aircraft state:
y = F⁻¹ { Σ_{l=1..d} Ā^l Z^l / ( B + λ ) }    formula (10)
where Z^l is the frequency-domain HOG feature expression of a sampled tracking box.
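As a rough, single-feature-channel illustration of the update and scoring equations (formulas (8), (9) and (10)), the sketch below uses a naive O(n²) DFT in place of the FFT a real tracker would use; the function names are illustrative and this is not the patented implementation:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (stand-in for an FFT)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n) for k in range(n))
            for i in range(n)]

def idft(X):
    """Naive inverse DFT."""
    n = len(X)
    return [sum(X[i] * cmath.exp(2j * cmath.pi * i * k / n) for i in range(n)) / n
            for k in range(n)]

def update_model(A_prev, B_prev, F, G, eta):
    """Running average of the filter numerator A and denominator B with
    learning rate eta; with one channel the sum over k collapses to one
    term (formulas (8) and (9))."""
    A = [(1 - eta) * a + eta * g.conjugate() * f for a, f, g in zip(A_prev, F, G)]
    B = [(1 - eta) * b + eta * f.conjugate() * f for b, f in zip(B_prev, F)]
    return A, B

def response(A, B, Z, lam=0.01):
    """Correlation response of a candidate sample Z (formula (10)); the new
    target state is at the maximum of the real-valued response."""
    Y = [a.conjugate() * z / (b + lam) for a, b, z in zip(A, B, Z)]
    return [y.real for y in idft(Y)]
```

Training on an impulse target and scoring a shifted copy moves the response peak by the same shift, which is exactly the property the displacement estimation relies on.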
The above flow can be expressed as:
● Input:
Input image I_t;
Previous-frame position p_{t−1} and scale s_{t−1};
Position model A_{t−1}^{trans}, B_{t−1}^{trans} and scale model A_{t−1}^{scale}, B_{t−1}^{scale}.
● Output:
Estimated target position p_t and scale s_t;
Updated position model A_t^{trans}, B_t^{trans} and scale model A_t^{scale}, B_t^{scale}.
Where:
● Displacement estimation:
Sample a translation patch z^{trans} from image I_t at position p_{t−1} and scale s_{t−1};
Compute the response y^{trans} from z^{trans} with A_{t−1}^{trans} and B_{t−1}^{trans};
Find the maximum of y^{trans}, which gives the new target position p_t.
● Scale estimation:
Sample a scale patch z^{scale} from image I_t at position p_t and scale s_{t−1};
Compute the response y^{scale} from z^{scale} with A_{t−1}^{scale} and B_{t−1}^{scale};
Find the maximum of y^{scale}, which gives the new target scale s_t.
● Model update:
Extract samples f^{trans} and f^{scale} from image I_t at position p_t and scale s_t;
Update the position model to A_t^{trans}, B_t^{trans} and the scale model to A_t^{scale}, B_t^{scale} with formulas (8) and (9).
In this way, single-target tracking of the target aircraft is realized.
Three, computing image coordinates from each frame's aircraft longitude and latitude
1) Coordinate calibration based on selected points
First, j image points are manually selected on the raw video image as calibration points; their image pixel coordinates are obtained with an image-processing tool and recorded. Then, in a real-scene map calibration tool, the longitude-latitude coordinates of the real-scene markers or index points corresponding to the calibration points determined in the video image are found, and the longitude-latitude coordinate values of the calibration points are recorded.
The principles for selecting calibration points are:
(a) To make it easy to find correctly the longitude-latitude coordinates of the real-scene marker or index point corresponding to each video-image calibration point, points with distinctive features should be chosen, such as the corner or mounting point of a road sign, the corner of a traffic lane line, or the corner of a ground ornament; the aim is to determine the uniqueness of a calibration point easily and facilitate calibration;
(b) The number of calibration points j should be at least 4; to achieve good results, 8 or more points are preferable, for reasons described below;
(c) No three selected calibration points may lie on the same straight line;
(d) The selected calibration points should be distributed over the whole picture with even density, not dense in some local areas and sparse elsewhere;
(e) The larger the picture and the more calibration points used, the higher the computational accuracy.
2) Coordinate mapping calculation based on a nearest-neighbor nonlinear least-squares model
Through the coordinate mapping calculation based on the nearest-neighbor nonlinear least-squares model, the longitude and latitude coordinates (Sx, Sy) of the physical world can be mapped to the pixel coordinates (Dx, Dy) of the corresponding picture point, that is, (Sx, Sy) → (Dx, Dy).
Since an airport surface is highly flat, only the conversion of GPS information (longitude and latitude) to image coordinates need be considered, without involving height; the conversion is realized between the longitude-latitude coordinate system and the two-dimensional image coordinate system.
Converting longitude-latitude coordinates to video-image coordinates is thus a two-dimensional-to-two-dimensional mapping, and a one-to-one correspondence.
The least-squares method is a mathematical optimization technique that finds the best function match for data by minimizing the sum of squared errors. Unknown data can easily be obtained with least squares while minimizing the sum of squared errors between the estimated and real data. Since the mapping from physical-world longitude-latitude coordinates to video-image coordinates is not a simple linear one, the present invention uses a nonlinear function model of least squares to approximate the image coordinates of non-calibration points.
Because physical-world surface conditions vary from place to place, establishing a single nonlinear function model over the whole picture would introduce a large error. The invention therefore uses a nearest-neighbor approximation: for each longitude-latitude point, a nonlinear function model is established and its parameters are solved by least squares, from which the image coordinates corresponding to the selected longitude-latitude are obtained. That is, for each longitude-latitude coordinate (Sx, Sy), the squared-difference distances between this point and all calibration points are first computed, and then the calibration points nearest to (Sx, Sy) are found.
Let the coordinates of the calibration points be (Sx1, Sy1), (Sx2, Sy2), …, (Sxn, Syn) (n ≥ 8), and the coordinates of the selected point be (Sx, Sy). The squared-difference distances are then:

Dist1_p = (Sx_p − Sx)^2 + (Sy_p − Sy)^2  (p = 1, 2, …, n)    formula (11)
The distances Dist1_p thus obtained are then quicksorted by magnitude. The method is: one pass of partitioning divides the data to be sorted into two independent parts, such that every element of one part is smaller than every element of the other; each of the two parts is then quicksorted in the same way. The whole sorting process is carried out recursively, until the entire data set becomes an ordered sequence.
By the method for quicksort, 8 calibration points nearest apart from Chosen Point (Sx, Sy) are found out, then pass through this 8 Calibration point solves the nonlinear model shape parameter based on least square method.
There are many kinds of nonlinear models; the present invention uses a quadratic polynomial model, i.e.

ψ = a + b·m + c·n + d·m·n + e·m^2 + f·n^2    formula (12)

where ψ represents the image coordinate corresponding to the selected point (Sx, Sy), a, b, c, d, e, f are the model parameters, and (m, n) are the physical-world longitude and latitude values of the selected point (Sx, Sy).
Repeated verification and exploration show that after ignoring the m^2 and n^2 terms the fitting effect is still good, so the nonlinear model can be reduced to:

ψ = a + b·m + c·n + d·m·n    formula (13)
The method for solving the nonlinear model parameters is described below. Without loss of generality, assume the longitude/latitude coordinates of the 8 calibration points nearest to the selected point (Sx, Sy) are:
(Sx1, Sy1), (Sx2, Sy2), (Sx3, Sy3), (Sx4, Sy4), (Sx5, Sy5), (Sx6, Sy6), (Sx7, Sy7), (Sx8, Sy8)
and that their corresponding image abscissas and ordinates are ψ1, ψ2, …, ψ8 and φ1, φ2, …, φ8 respectively.
Assume there exist coefficients a, b, c, d satisfying the following equations:

ψ_i = a + b·Sx_i + c·Sy_i + d·Sx_i·Sy_i  (i = 1, 2, …, 8)    formula (14)

Formula (14) can be written as

C·u = v    formula (15)

where C is the 8×4 matrix whose i-th row is [1, Sx_i, Sy_i, Sx_i·Sy_i] (formula (16)), u = [a, b, c, d]^T (formula (17)), and v = [ψ1, ψ2, …, ψ8]^T (formula (18)).
Transforming formula (15) gives

u = C^(-1)·v    formula (19)

Since the matrix C is not square, C^(-1) here denotes the pseudo-inverse matrix (also called the generalized inverse matrix) of C, i.e.

C^(-1) = (C^T·C)^(-1)·C^T    formula (20)
So for the selected point (Sx, Sy), the abscissa value ψ is

ψ = [1, Sx, Sy, Sx·Sy]·u    formula (21)

Substituting formula (19) into formula (21) gives

ψ = [1, Sx, Sy, Sx·Sy]·(C^T·C)^(-1)·C^T·v    formula (22)
Similarly, if ω is defined as the column vector formed by the ordinate values of the calibration points, i.e.

ω = [φ1, φ2, …, φ8]^T    formula (23)

then the ordinate value φ of the selected point (Sx, Sy) is

φ = [1, Sx, Sy, Sx·Sy]·(C^T·C)^(-1)·C^T·ω
Using the above method, the image coordinate value corresponding to each longitude/latitude can be obtained.
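The solve of formulas (14)-(23) can be sketched as below. The calibration points and coefficients are synthetic, generated from known parameters so the recovery can be checked; `np.linalg.pinv` computes the pseudo-inverse of formula (20).

```python
import numpy as np

# Sketch of formulas (14)-(22): fit psi = a + b*m + c*n + d*m*n over the
# 8 nearest calibration points and evaluate it at a chosen point.
def fit_and_map(points, psi_values, m, n):
    """points: 8 (Sx, Sy) pairs; psi_values: their image coordinates.

    Builds the 8x4 matrix C with rows [1, Sx, Sy, Sx*Sy], solves
    u = (C^T C)^(-1) C^T v via the pseudo-inverse, and returns
    [1, m, n, m*n] @ u for the chosen point.
    """
    C = np.array([[1.0, sx, sy, sx * sy] for sx, sy in points])
    v = np.asarray(psi_values, dtype=float)
    u = np.linalg.pinv(C) @ v  # pseudo-inverse of formula (20)
    return np.array([1.0, m, n, m * n]) @ u

# Synthetic check: psi generated from known coefficients a=5, b=2, c=-3, d=0.5.
pts = [(1, 1), (1, 2), (2, 1), (2, 2), (1, 3), (3, 1), (3, 2), (2, 3)]
psis = [5 + 2 * sx - 3 * sy + 0.5 * sx * sy for sx, sy in pts]
mapped = fit_and_map(pts, psis, 1.5, 1.5)
```

Because the synthetic data comes exactly from the reduced model of formula (13), the mapped value at (1.5, 1.5) equals 5 + 3 − 4.5 + 1.125 = 4.625 up to floating-point error.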
To match the per-frame aircraft image coordinates with the target aircraft of the single-target tracking, the longitude and latitude of every aircraft in the frame are converted to obtain each aircraft's centre position in the image, and the distance between the centre point of the single-tracked target aircraft and each computed centre position is calculated:

Dist2_q = (Dx_q − Dx)^2 + (Dy_q − Dy)^2  (q = 1, 2, …, k)    formula (24)

where (Dx_q, Dy_q) is the computed image centre position of the q-th aircraft, (Dx, Dy) is the centre point of the single-tracked target aircraft, and there are k aircraft in total.
Quicksorting the computed distance values yields the aircraft nearest to the single-tracked aircraft. If for 180 consecutive frames the nearest aircraft is the same flight, and each time the distance value is within twice the width of the single-target tracking box, the match is considered successful, and the flight code of the single-tracked aircraft is obtained.
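A minimal sketch of this matching rule follows. The frame format, the flight codes, and exposing the 180-frame requirement as a parameter are illustrative assumptions, not the patent's data structures.

```python
# Sketch of the matching rule above: the same flight must stay nearest to
# the single-target tracking box, within twice the box width, for 180
# consecutive frames before the match is confirmed.
def match_flight(frames, track_width, required_frames=180):
    """frames: list of (flights, track_centre) per video frame, where
    flights is a list of (flight_code, (Dx_q, Dy_q)) image centres and
    track_centre is the (Dx, Dy) centre of the tracking box.
    """
    streak_code, streak = None, 0
    for flights, (tx, ty) in frames:
        # squared distances of formula (24); nearest aircraft this frame
        code, (dx, dy) = min(flights, key=lambda f: (f[1][0] - tx) ** 2 + (f[1][1] - ty) ** 2)
        dist = ((dx - tx) ** 2 + (dy - ty) ** 2) ** 0.5
        if dist <= 2 * track_width and code == streak_code:
            streak += 1
        elif dist <= 2 * track_width:
            streak_code, streak = code, 1
        else:
            streak_code, streak = None, 0
        if streak >= required_frames:
            return streak_code  # match confirmed: flight code found
    return None
```

With 180 frames in which the same flight hovers near the tracking-box centre, `match_flight` returns its code; with too few frames, or a drifting nearest flight, it returns `None`.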
From then on, every time the successfully matched target aircraft is detected, its image position is computed from its longitude and latitude, together with its distance to the centre point of the single-tracked aircraft. If the distance value is within twice the width of the single-target tracking box, the single-target tracking is considered not to have lost the target, and the position model and scale model of the single-target tracking filter are updated. If the distance value is beyond twice the width of the tracking box, the filter position model and scale model from the last time the distance was within twice the tracking-box width are preserved, and the filter position model and scale model are not updated. If this persists for 180 frames without recovery, the single-target tracking is considered to have lost the target, and the tracking-box width value from the last time the distance was within twice the tracking-box width is preserved.
Four. Recovering the lost single-tracked aircraft
By calculating the longitude and latitude of the aircraft matched in the previous step, the region where the lost aircraft currently appears can be obtained, and the target aircraft is certain to be within this region.
Centred on the aircraft centre point obtained from the longitude/latitude calculation, a search box is taken whose horizontal and vertical extents are twice the preserved tracking-box width value (the value saved the last time the distance was within twice the tracking-box width). This search box is used as the window of the HOG features to sample the tracking box z.
Using the single-target tracking filter position model and scale model preserved from the last time the distance value was within twice the tracking-box width, together with the sampled search box z, the response values y are calculated.
If at this point the maximum value differs only slightly from the second-largest value, the target may not yet have reappeared in the picture (for example, because of occlusion), and the calculation continues. If the maximum value differs considerably from the second-largest value, the target has appeared in the picture; taking the maximum value gives the position and scale of the previously lost aircraft, single-target tracking is restarted at this position and scale, and the lost aircraft is thereby recovered.
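The reacquisition decision can be sketched as a peak-dominance test on the correlation response. The response map and the 0.2 margin below are illustrative assumptions; the patent does not give a numeric threshold.

```python
import numpy as np

# Sketch of the reacquisition decision: the target is declared re-found
# only when the correlation response has a clearly dominant peak
# (maximum well above the second-largest value).
def reacquired(response, margin=0.2):
    """response: 2-D correlation-filter response over the search box.

    Returns the (row, col) of the peak if it exceeds the second-largest
    response by `margin`; otherwise None (keep searching: the target may
    still be occluded).
    """
    flat = np.sort(response, axis=None)  # flatten, then sort ascending
    peak, runner_up = flat[-1], flat[-2]
    if peak - runner_up <= margin:
        return None
    return tuple(int(i) for i in np.unravel_index(np.argmax(response), response.shape))

# Example: a response map with one dominant peak at row 1, column 2.
resp = np.array([[0.1, 0.2, 0.1],
                 [0.1, 0.3, 0.9],
                 [0.2, 0.1, 0.1]])
loc = reacquired(resp)
```

A flat response (all values equal) yields `None`, which matches the "keep calculating until the peak is dominant" behaviour described above.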
Each of the preferred and optional technical means disclosed in the present invention, unless stated otherwise or unless one preferred or optional means is a further limitation of another, may be combined arbitrarily to form several different technical solutions.

Claims (10)

1. A continuous aircraft tracking method based on the combination of video analysis and positioning information, wherein the image coordinates of a target aircraft in a video image are calculated from the physical-world positioning coordinates of the target aircraft contained in its positioning information; single-target tracking of the target aircraft is performed on the surveillance video; for the target aircraft under single-target tracking, the image coordinates of the tracking box are compared with the image coordinates of the target aircraft; if, over a set number of consecutive video frames, the distance between the two does not continuously exceed an effective tracking threshold, the single-target tracking is deemed not to have lost the target; if the distance continuously exceeds the effective tracking threshold, the single-target tracking is deemed to have lost the target.
2. The method of claim 1, characterized in that the matching between the target aircraft and the target aircraft tracking box is realized as follows: from the image coordinates of all aircraft in the video image, the distance between each aircraft's image coordinates and the image coordinates of the target aircraft tracking box is calculated, and the aircraft nearest to the target aircraft tracking box is found; if, over a set number of consecutive video frames, the distance between that aircraft and the target aircraft tracking box never continuously exceeds the effective tracking threshold, the match is confirmed, and the nearest aircraft is the target aircraft of the single-target tracking.
3. The method of claim 2, characterized in that, after confirming that the single-target tracking has lost the target, a search box is determined, centred on the image coordinate point of the target aircraft in the video image and sized according to the effective tracking range of that image coordinate point, and re-identification of the target aircraft is carried out; the new target aircraft tracking box obtained from the re-identification serves as the target aircraft tracking box of the current frame for subsequent single-target tracking.
4. The method of claim 3, characterized in that the re-identification of the target aircraft uses the model and/or model parameters of the tracking detection used the last time the distance between the target aircraft and the target aircraft tracking box did not exceed the effective tracking threshold.
5. The method of any one of claims 1-4, characterized in that HOG features are used as the feature descriptor for target aircraft detection.
6. The method of claim 5, characterized in that the position model and scale model of the tracking filter are computed with the DSST algorithm, using a one-dimensional filter for scale estimation and a two-dimensional filter for position estimation.
7. The method of claim 6, characterized in that single-target tracking is performed according to the position model and scale model of the tracking filter; when the distance between the image coordinates of the target aircraft in the current frame and the image coordinates of the target aircraft tracking box does not exceed the effective tracking threshold, the filter position model and scale model are updated according to a set learning-rate parameter η; when the distance exceeds the effective tracking threshold, the filter position model and scale model are not updated.
8. The method of claim 7, characterized in that, on the starting frame image, a starting seed point is manually selected within the contour of the target aircraft, the contour region of the target aircraft is obtained with the region-growing image segmentation method, and, according to the extreme abscissa and ordinate values xmin, xmax, ymin, ymax of the contour region of the target aircraft, the rectangle with corner points (xmin, ymin), (xmax, ymin), (xmin, ymax), (xmax, ymax) is taken as the starting target aircraft tracking box (P1(xmin, ymin), P2(xmax, ymax)).
9. The method of claim 8, characterized in that, for HOG feature extraction, the colour image is first converted to a greyscale image and Gamma correction is performed, with γ = 0.5 for the Gamma correction.
10. The method of claim 9, characterized in that the gradient-histogram statistics of each cell are computed as follows: the gradient directions are mapped into a range of 180 degrees and divided into a 9-dimensional feature vector; the gradient magnitude of each pixel is projected as a weight, and the gradient direction determines which dimension it is projected to.
CN201810230742.1A 2018-03-20 2018-03-20 Aircraft continuous tracking method based on combination of video analysis and positioning information Active CN108446634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810230742.1A CN108446634B (en) 2018-03-20 2018-03-20 Aircraft continuous tracking method based on combination of video analysis and positioning information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810230742.1A CN108446634B (en) 2018-03-20 2018-03-20 Aircraft continuous tracking method based on combination of video analysis and positioning information

Publications (2)

Publication Number Publication Date
CN108446634A true CN108446634A (en) 2018-08-24
CN108446634B CN108446634B (en) 2020-06-09

Family

ID=63195462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810230742.1A Active CN108446634B (en) 2018-03-20 2018-03-20 Aircraft continuous tracking method based on combination of video analysis and positioning information

Country Status (1)

Country Link
CN (1) CN108446634B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543634A (en) * 2018-11-29 2019-03-29 达闼科技(北京)有限公司 Data processing method, device, electronic equipment and storage medium in position fixing process
CN109670462A (en) * 2018-12-24 2019-04-23 北京天睿空间科技股份有限公司 Aircraft cross-panorama continuous tracking based on positioning information
CN111210457A (en) * 2020-01-08 2020-05-29 北京天睿空间科技股份有限公司 Aircraft listing method combining video analysis and positioning information
CN111275766A (en) * 2018-12-05 2020-06-12 杭州海康威视数字技术股份有限公司 Calibration method and device for image coordinate system and GPS coordinate system and camera
CN112084952A (en) * 2020-09-10 2020-12-15 湖南大学 Video point location tracking method based on self-supervision training
CN112686921A (en) * 2021-01-08 2021-04-20 西安羚控电子科技有限公司 Multi-interference unmanned aerial vehicle detection tracking method based on track characteristics
CN112949588A (en) * 2021-03-31 2021-06-11 苏州科达科技股份有限公司 Target detection tracking method and target detection tracking device
CN113641685A (en) * 2021-10-18 2021-11-12 中国民用航空总局第二研究所 Data processing system for guiding aircraft
CN113763416A (en) * 2020-06-02 2021-12-07 璞洛泰珂(上海)智能科技有限公司 Automatic labeling and tracking method, device, equipment and medium based on target detection
CN115100293A (en) * 2022-06-24 2022-09-23 河南工业大学 ADS-B signal blindness-compensating method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299243A (en) * 2014-09-28 2015-01-21 南京邮电大学 Target tracking method based on Hough forests
CN105488815A (en) * 2015-11-26 2016-04-13 北京航空航天大学 Real-time object tracking method capable of supporting target size change
CN106097391A (en) * 2016-06-13 2016-11-09 浙江工商大学 A multi-object tracking method with recognition assistance based on deep neural networks
CN106127807A (en) * 2016-06-21 2016-11-16 中国石油大学(华东) A real-time video multi-class multi-object tracking method
US20160379486A1 (en) * 2015-03-24 2016-12-29 Donald Warren Taylor Apparatus and system to manage monitored vehicular flow rate
CN106981073A (en) * 2017-03-31 2017-07-25 中南大学 A UAV-based real-time ground moving object tracking method and system
CN107292911A (en) * 2017-05-23 2017-10-24 南京邮电大学 A multi-object tracking method based on multi-model fusion and data association
CN107705324A (en) * 2017-10-20 2018-02-16 中山大学 A video object detection method based on machine learning
US20180061209A1 (en) * 2016-08-31 2018-03-01 Tile, Inc. Tracking Device Location and Management


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ALAN LUKEZIC et al.: "Discriminative Correlation Filter with Channel and Spatial Reliability", 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
YI WU et al.: "Online Object Tracking: A Benchmark", 2013 IEEE Conference on Computer Vision and Pattern Recognition *
Liu Xiaoming: "Research on a moving-target tracking system based on an improved Hausdorff distance matching algorithm", China Masters' Theses Full-text Database, Information Science and Technology Series *
Wang Qing et al.: "Fast calibration method of locator coordinates based on a sequential quadratic programming algorithm", Journal of Zhejiang University (Engineering Science) *
Ge Zhilei et al.: "Principles of Missile Guidance Systems", 31 March 2016, National Defense Industry Press *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543634A (en) * 2018-11-29 2019-03-29 达闼科技(北京)有限公司 Data processing method, device, electronic equipment and storage medium in position fixing process
CN111275766A (en) * 2018-12-05 2020-06-12 杭州海康威视数字技术股份有限公司 Calibration method and device for image coordinate system and GPS coordinate system and camera
CN111275766B (en) * 2018-12-05 2023-09-05 杭州海康威视数字技术股份有限公司 Calibration method and device for image coordinate system and GPS coordinate system and camera
CN109670462A (en) * 2018-12-24 2019-04-23 北京天睿空间科技股份有限公司 Aircraft cross-panorama continuous tracking based on positioning information
CN109670462B (en) * 2018-12-24 2019-11-01 北京天睿空间科技股份有限公司 Aircraft cross-panorama continuous tracking based on positioning information
CN111210457A (en) * 2020-01-08 2020-05-29 北京天睿空间科技股份有限公司 Aircraft listing method combining video analysis and positioning information
CN113763416A (en) * 2020-06-02 2021-12-07 璞洛泰珂(上海)智能科技有限公司 Automatic labeling and tracking method, device, equipment and medium based on target detection
CN112084952B (en) * 2020-09-10 2023-08-15 湖南大学 Video point location tracking method based on self-supervision training
CN112084952A (en) * 2020-09-10 2020-12-15 湖南大学 Video point location tracking method based on self-supervision training
CN112686921A (en) * 2021-01-08 2021-04-20 西安羚控电子科技有限公司 Multi-interference unmanned aerial vehicle detection tracking method based on track characteristics
CN112686921B (en) * 2021-01-08 2023-12-01 西安羚控电子科技有限公司 Multi-interference unmanned aerial vehicle detection tracking method based on track characteristics
CN112949588A (en) * 2021-03-31 2021-06-11 苏州科达科技股份有限公司 Target detection tracking method and target detection tracking device
CN113641685A (en) * 2021-10-18 2021-11-12 中国民用航空总局第二研究所 Data processing system for guiding aircraft
CN113641685B (en) * 2021-10-18 2022-04-08 中国民用航空总局第二研究所 Data processing system for guiding aircraft
CN115100293A (en) * 2022-06-24 2022-09-23 河南工业大学 ADS-B signal blindness-compensating method

Also Published As

Publication number Publication date
CN108446634B (en) 2020-06-09

Similar Documents

Publication Publication Date Title
CN108446634A (en) The aircraft combined based on video analysis and location information continues tracking
CN108510467B (en) SAR image target identification method based on depth deformable convolution neural network
Alvarez et al. Combining priors, appearance, and context for road detection
US9846946B2 (en) Objection recognition in a 3D scene
CN107767400B (en) Remote sensing image sequence moving target detection method based on hierarchical significance analysis
CN109670462B (en) Aircraft cross-panorama continuous tracking based on positioning information
CN105389550B A remote sensing target detection method based on sparse guidance and saliency driving
KR100506095B1 (en) Method and apparatus of landmark detection in intelligent system
CN109635789B (en) High-resolution SAR image classification method based on intensity ratio and spatial structure feature extraction
Aeschliman et al. Tracking vehicles through shadows and occlusions in wide-area aerial video
Fernandez et al. A comparative analysis of decision trees based classifiers for road detection in urban environments
CN109543694A A visual simultaneous localization and mapping method based on a sparse point-feature strategy
CN111666860A (en) Vehicle track tracking method integrating license plate information and vehicle characteristics
WO2022271963A1 (en) Automated computer system and method of road network extraction from remote sensing images using vehicle motion detection to seed spectral classification
CN116245949A (en) High-precision visual SLAM method based on improved quadtree feature point extraction
Liu et al. Vehicle detection from aerial color imagery and airborne LiDAR data
CN108573280A A method for an unmanned boat to autonomously pass under a bridge
Liu et al. A novel trail detection and scene understanding framework for a quadrotor UAV with monocular vision
CN104613928A (en) Automatic tracking and air measurement method for optical pilot balloon theodolite
Kim et al. Object Modeling with Color Arrangement for Region‐Based Tracking
CN108388854A A localization method based on an improved FAST-SURF algorithm
Jain et al. Airborne vehicle detection with wrong-way drivers based on optical flow
CN107886505B A synthetic aperture radar airfield detection method based on line-primitive aggregation
Karimi et al. Techniques for Automated Extraction of Roadway Inventory Features from High‐Resolution Satellite Imagery
Volkova et al. Aerial wide-area motion imagery registration using automated multiscale feature selection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Li Xiangbin

Inventor after: Lin Shuhan

Inventor after: Zheng Wentao

Inventor after: Wang Guofu

Inventor before: Li Xiangbin

Inventor before: Zheng Wentao

Inventor before: Wang Guofu

GR01 Patent grant
GR01 Patent grant