CN108694724A - A long-term target tracking method - Google Patents

A long-term target tracking method Download PDF

Info

Publication number
CN108694724A
CN108694724A (application CN201810450292.7A / CN201810450292A)
Authority
CN
China
Prior art keywords
target
candidate
frame image
tracking
max
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810450292.7A
Other languages
Chinese (zh)
Inventor
李宁鸟
王文涛
韩雪云
李�权
魏璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XIAN TIANHE DEFENCE TECHNOLOGY Co Ltd
Original Assignee
XIAN TIANHE DEFENCE TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XIAN TIANHE DEFENCE TECHNOLOGY Co Ltd filed Critical XIAN TIANHE DEFENCE TECHNOLOGY Co Ltd
Priority to CN201810450292.7A priority Critical patent/CN108694724A/en
Publication of CN108694724A publication Critical patent/CN108694724A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the field of target tracking, and in particular to a long-term target tracking method. The method is as follows: obtain the tracking-target information in an initial image and the current frame image; in the current frame image, select a candidate region centred on the tracking target's position in the previous frame image; use a classifier model to obtain the target location corresponding to the candidate target within the candidate region; and judge whether the candidate target is the tracking target. By selecting the candidate region in the current frame centred on the tracking target's position in the previous frame image, the invention obtains the target location corresponding to the candidate target and accurately judges whether the target state is abnormal; and when the target in the current frame image is abnormal, the selection range is enlarged around the position in the previous frame image and the target is retrieved again, achieving the purpose of long-term target tracking.

Description

A long-term target tracking method
Technical field
The present invention relates to the field of target tracking, and in particular to a long-term target tracking method.
Background technology
Target tracking technology is widely applied in both military and civilian fields. Fully automatic or semi-automatic target tracking in battlefield reconnaissance, low-altitude defence, traffic monitoring, homeland security and similar tasks can greatly reduce staffing and working time. However, although many effective video target tracking algorithms have been proposed, they still face many difficulties in practical applications: factors such as illumination variation in the environment, non-linear deformation of the target, camera shake and noise interference in the background pose great challenges to target tracking.
Meanwhile, most existing target tracking methods can only track a target over a relatively short period; research on long-term tracking methods is rare. In practical engineering applications, however, long-term stable tracking of the target is of greater concern. Designing an accurate, reliable, long-term target tracking algorithm therefore remains a very challenging task. In view of the above problems, this patent proposes a method that can track a video target over a long period. The method sets a tracking-loss judgment condition based on the oscillation of the target response map, so that occlusion, loss or blurring of the target can be judged accurately; it uses a deep-learning target detection model to accurately detect the lost target, solving the traditional problems of difficult and inaccurate target detection against complex backgrounds; and it effectively combines tracking and detection to ensure long-term stable tracking of the target. Experimental results show that the proposed method can accurately judge whether the target is occluded, lost or blurred during tracking, and can continue tracking after accurately re-detecting the target when it is lost, achieving the purpose of long-term target tracking.
Invention content
In view of the above problems, the present invention proposes a long-term target tracking method.
The technical solution of the present invention is as follows:
A long-term target tracking method, comprising:
Obtain the tracking-target information in an initial image and the current frame image; in the current frame image, select a candidate region centred on the tracking target's position in the previous frame image; use a classifier model to obtain the target location corresponding to the candidate target within the candidate region;
Judge whether the candidate target is the tracking target:
If it is the tracking target, track using the coordinate information of the tracking target in the current frame image and update the classifier model, completing long-term tracking of the target in the video image;
If it is not the tracking target, judge the type of anomaly of the candidate target, establish a search region in the current frame image centred on the tracking target's position in the previous frame image and re-detect the tracking target; carry out a target consistency judgment between each detected candidate target and the tracking target in its previous frame image; select the candidate target satisfying the judgment condition as the tracking target; and update the classifier model, completing long-term tracking of the target in the video image.
A long-term target tracking method, comprising:
Obtain the tracking-target information in an initial image and the current frame image;
In the current frame image, select a candidate region within a range of 2 to 5 times the target size, centred on the tracking target's position in the previous frame image;
Compute the response map of the candidate region with the classifier model and obtain the maximum response in the response map; the position of the maximum response is the target location corresponding to the candidate target;
Judge whether the candidate target is the tracking target. If it is the tracking target, track using the coordinate information of the tracking target in the current frame image, update the classifier model, and complete detection and tracking of the target in the video image. If it is not the tracking target, judge whether the candidate target is occluded, lost or blurred, and establish a search region in the current frame image centred on the tracking target's position in the previous frame image to re-detect the tracking target;
Perform target detection within the search region, carry out a target consistency judgment between each detected candidate target and the tracking target in its previous frame image, select the candidate target satisfying the judgment condition as the tracking target, and update the classifier model, completing long-term tracking of the target in the video image.
The above steps are repeated to continuously achieve long-term tracking of the target.
3-7 candidate regions are selected within a range of 2-5 times the target size, as follows:
Centred on the centre point of the detected target position, a first candidate region is selected in the current frame image; the width and height of the first candidate region are 2-2.5 times the width and height of the tracking target in the previous frame image;
Based on the size of the first candidate region and centred on its centre point, 1-3 candidate regions are selected with scale factor k, where 1 < k ≤ 1.5;
Based on the size of the first candidate region and centred on its centre point, 1-3 candidate regions are selected in the current frame image with scale factor 1/k.
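As a concrete illustration, the multi-scale candidate-region selection above can be sketched as follows. This is a hypothetical helper, not code from the patent: the padding of 2.5 and scale factor k = 1.05 follow the embodiment described later, and all names are our own.

```python
def candidate_regions(cx, cy, w, h, pad=2.5, k=1.05, n_scales=1):
    """Return candidate boxes (cx, cy, width, height): the first region is
    the padded target box; each further scale step adds a k-enlarged and a
    1/k-shrunk copy, all sharing the same centre point."""
    base_w, base_h = pad * w, pad * h
    regions = [(cx, cy, base_w, base_h)]
    for i in range(1, n_scales + 1):
        regions.append((cx, cy, base_w * k ** i, base_h * k ** i))  # scale k
        regions.append((cx, cy, base_w / k ** i, base_h / k ** i))  # scale 1/k
    return regions

# three regions for a 40x30 target centred at (100, 100)
regions = candidate_regions(100, 100, 40, 30)
```

With `n_scales=3` this yields the seven regions at the upper end of the 3-7 range described above.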
The response map of a candidate region is obtained with the classifier model as follows:
Before training the classifier model, the tracking target in the initial image is expanded, i.e. the target area in the initial image is expanded to a range of 2-2.5 times, and the HOG feature vector corresponding to the expanded target area is extracted;
The classifier model is trained on the HOG feature vector corresponding to the expanded target area;
The training formula of the classifier model is as follows:

α̂ = ŷ / (k̂^{xx} + λ)

where α̂ denotes the Fourier transform of α and is the classifier model obtained by training, y is the label corresponding to the training samples in the initial image, k^{xx} is the kernel correlation (with kernel function k) of the feature vector x with itself, x is the HOG feature vector of the expanded region, and λ is a regularisation parameter, a constant with value 0.000001;
During classifier training, the training samples are marked with continuous labels: each sample is assigned a value in the range 0-1 according to the distance of its centre from the target centre, following a Gaussian distribution; the closer the sample is to the target, the closer its value is to 1, and the farther from the target, the closer its value is to 0;
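A minimal sketch of such continuous Gaussian labelling (the bandwidth `sigma` is our assumption; the text does not specify it):

```python
import numpy as np

def gaussian_labels(h, w, sigma=2.0):
    """Label map in [0, 1]: value 1 at the centre sample, decaying with
    distance from the target centre following a Gaussian, as described above."""
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = (yy - h // 2) ** 2 + (xx - w // 2) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

labels = gaussian_labels(7, 7)  # 7x7 label map, peak at the centre
```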
Using the target classifier model, the response maps corresponding to the candidate regions at multiple scales in the current frame are obtained:

f̂(z) = k̂^{xz} ⊙ α̂

where f̂(z) denotes the Fourier transform of f(z), f(z) is the response map corresponding to candidate region z, z is the HOG feature vector of one of the candidate regions in the current frame, x is the HOG feature vector of the expanded target area, k^{xz} is the kernel correlation of x and z, and α̂ is the classifier model obtained by training.
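The circulant/FFT structure behind these two formulas can be illustrated with a minimal single-channel, linear-kernel sketch. The patent uses multi-channel HOG features and a general kernel function k; this toy version uses a raw 2-D array and a linear kernel so the training and detection steps are easy to see, and all names are our own.

```python
import numpy as np

def train(x, y, lam=1e-6):
    """alpha_hat = y_hat / (kxx_hat + lam), where k^xx is the (linear-kernel)
    correlation of x with itself, computed via the FFT."""
    X = np.fft.fft2(x)
    kxx = np.real(np.fft.ifft2(X * np.conj(X))) / x.size
    return np.fft.fft2(y) / (np.fft.fft2(kxx) + lam)

def respond(alpha_hat, x, z):
    """f(z) = ifft( kxz_hat * alpha_hat ): the response map over all cyclic
    shifts of the candidate patch z against the learned template x."""
    kxz = np.real(np.fft.ifft2(np.fft.fft2(z) * np.conj(np.fft.fft2(x)))) / x.size
    return np.real(np.fft.ifft2(np.fft.fft2(kxz) * alpha_hat))

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))
y = np.zeros((16, 16))
y[0, 0] = 1.0                     # desired peak at the zero-shift position
alpha_hat = train(x, y)
resp = respond(alpha_hat, x, x)   # searching with z = x: peak should sit at (0, 0)
```

Because circulant matrices are diagonalised by the Fourier transform, both training and detection cost only a few FFTs, which is the speed advantage the description below refers to.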
The target location corresponding to the candidate target is determined as follows:
The maximum response in the response map of each of the 3-7 candidate regions is computed separately with the classifier model. The maximum response of the first candidate region is denoted FmaxA; with k as scale factor, the maximum response of the selected candidate region is denoted FmaxA'; with 1/k as scale factor, the maximum response of the selected candidate region is denoted FmaxA''. Here A is the first candidate region, A' is the candidate region selected with scale factor k, and A'' is the candidate region selected with scale factor 1/k;
Meanwhile, a scale weight factor scale_weight is introduced, with its value range set between 0.9 and 1;
Judge whether FmaxA is greater than scale_weight × FmaxA'. When FmaxA > scale_weight × FmaxA', FmaxA is taken as the maximum response Fmax' and the method proceeds to the next judgment; otherwise FmaxA' is taken as Fmax', the method proceeds to the next judgment, and the candidate-region information is updated;
Judge whether Fmax' is greater than scale_weight × FmaxA''. When Fmax' > scale_weight × FmaxA'', Fmax' is taken as the maximum response Fmax and the method proceeds directly to the next step; otherwise FmaxA'' is taken as Fmax and the candidate-region information is updated;
The candidate region containing the maximum response Fmax is the position where the target is most likely to appear in the current frame.
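The two-step comparison above can be written compactly (a sketch; the function name and the returned scale tags are ours):

```python
def select_scale(f_a, f_a_up, f_a_down, scale_weight=0.95):
    """Two-step scale selection as described above: the base response FmaxA
    wins over FmaxA' only if it exceeds scale_weight * FmaxA'; the winner
    Fmax' is then compared against FmaxA'' the same way."""
    if f_a > scale_weight * f_a_up:
        f_best, scale = f_a, "base"          # keep the original scale
    else:
        f_best, scale = f_a_up, "k"          # switch to the enlarged region
    if f_best > scale_weight * f_a_down:
        return f_best, scale
    return f_a_down, "1/k"                   # switch to the shrunk region
```

Because scale_weight < 1, a marginally larger response at a changed scale is not enough to switch scales, which damps scale jitter between frames.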
Whether the candidate target is the tracking target is judged as follows:
Judge whether the candidate region's maximum response Fmax is greater than a preset response value, where the preset response value is the minimum acceptable maximum response of a candidate region; its value range is between 0 and 1, preferably 0.3;
When the maximum response Fmax is greater than the preset response value, compute the APCE value of the current frame, which reflects the degree of oscillation of the candidate region's response map, denoted APCEcurrent, together with the average APCE value of the tracking target from the second frame image up to the previous frame image, denoted APCEaverage;
The APCE value is computed as follows:

APCE = |Fmax − Fmin|² / mean((F_{w,h} − Fmin)²)

where Fmax is the maximum response in the response map, Fmin is the minimum response in the response map, F_{w,h} is the response at position (w, h) of the response map, and mean denotes averaging over all positions. A sudden drop in the APCE value indicates that the target has been occluded, lost or even blurred.
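The APCE computation, written out as a sketch:

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy:
    |Fmax - Fmin|^2 / mean((F_{w,h} - Fmin)^2).
    A sharp single peak gives a large APCE; a flat or oscillating map a small one."""
    fmax, fmin = response.max(), response.min()
    return (fmax - fmin) ** 2 / np.mean((response - fmin) ** 2)

sharp = np.zeros((10, 10))
sharp[5, 5] = 1.0                                       # clean single peak
noisy = np.abs(np.sin(np.arange(100.0))).reshape(10, 10)  # oscillating map
```

Running `apce` on the two maps shows the contrast the judgment relies on: the single-peak map scores far higher than the oscillating one.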
Judge whether APCEcurrent of the current frame's candidate region is greater than the preset oscillation ratio times APCEaverage;
When APCEcurrent is greater than the preset oscillation ratio times APCEaverage, the candidate target in the current frame image is considered to be the tracking target and the classifier model is updated; otherwise, the candidate target is judged to be occluded, lost or blurred, and target detection is performed on the next frame image. The preset oscillation ratio is between 0 and 1, preferably 0.4.
The classifier model is updated as follows:
The tracking-target information in the previous frame image is updated with the information of the tracking target in the current frame image, and the APCEaverage of the tracking target in the current frame image is computed;
Judge whether Fmax of the tracking target is greater than the preset response ratio times Fmax-average; the preset ratio is set between 0 and 1, preferably 0.7;
When Fmax of the tracking target is greater than the preset response ratio times Fmax-average, proceed directly to the next judgment; otherwise, no classifier model update is performed for the current frame image;
Judge whether the APCEaverage value of the tracking target is greater than the preset average-oscillation ratio times the average APCE value; the preset average-oscillation ratio is set between 0 and 1, preferably 0.45;
When the APCE value of the tracking target is greater than the preset average-oscillation ratio times the average APCE value, the classifier model is updated for the current frame image; otherwise, no classifier model update is performed for the current frame image;
The model is updated for the current frame image according to the classifier model update formula;
Here Fmax-average is the average of the maximum response Fmax of the response map in the current frame image and the maximum response Fmax of the response map in the previous frame image;
The preset response ratio describes how far the maximum response of the current frame's tracking-target area may float relative to the historical average response of the tracking target; its value range is between 0 and 1, preferably 0.7;
The preset average-oscillation ratio describes how severe the oscillation value obtained from the current frame's candidate-region response map may be relative to the historical average oscillation value of the tracking target's response maps; its value range is between 0 and 1, preferably 0.45;
The classifier model update formula is as follows:

α̂_n = (1 − η) · α̂_{n−1} + η · α̂

where α̂_n is the classifier model parameter of the nth frame image, α̂_{n−1} is the classifier model parameter of the (n−1)th frame image, α̂ is the model newly trained on the current frame, and η is the learning rate parameter, with value 0.015.
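The update is a simple linear interpolation between the old and newly trained model parameters:

```python
def update_model(alpha_prev, alpha_new, eta=0.015):
    """alpha_n = (1 - eta) * alpha_{n-1} + eta * alpha_new, with learning
    rate eta = 0.015 as in the formula above (works elementwise on arrays)."""
    return (1.0 - eta) * alpha_prev + eta * alpha_new
```

With η this small, the model changes slowly from frame to frame; combined with the Fmax and APCE gates above, occasional bad frames barely contaminate it.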
The tracking target is re-detected as follows:
Centred on the tracking target's position in the previous frame image, a search region of 5 times the original tracking-target size is established in the current frame image;
Within the search region, detection is performed with a deep-learning target detection method; when detection is complete, all detected candidate targets are saved;
A target consistency judgment is carried out between all detected candidate targets and the tracking target of the previous frame to determine whether the tracking target still exists;
The condition of the target consistency judgment is: among all candidate targets there must be one that simultaneously satisfies the position criterion and the similarity criterion; otherwise, target detection is performed again on the next frame image until the target consistency condition is met;
Position criterion: take the centre point of the candidate target and the centre-point coordinates of the tracking target in the previous frame; when the differences between the candidate target and the tracking target in both the x direction and the y direction are smaller than the position threshold, the two targets are judged consistent;
Similarity criterion: if exactly one candidate target is preliminarily consistent with the tracking target, that candidate target is taken as the current frame's tracking target; if more than one candidate target is preliminarily consistent, the NCC value between the previous frame's tracking target and each preliminarily consistent candidate target is computed over the corresponding image regions, where the NCC value is the normalised cross-correlation between the two targets; the candidate target with the largest NCC value relative to the previous frame's tracking target is selected as the current frame's tracking target;
NCC is calculated as follows:

NCC(I1, I2) = Σ(I1 ⊙ I2) / √(Σ I1² · Σ I2²)

where I1 and I2 are the image regions corresponding to the two targets and ⊙ denotes element-wise multiplication;
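A sketch of the NCC computation on two equal-sized patches. The translation does not reproduce the patent's exact normalisation, so we assume the standard form Σ(I1 ⊙ I2) / √(Σ I1² · Σ I2²); the function name is ours.

```python
import numpy as np

def ncc(i1, i2):
    """Normalised cross-correlation of two equally sized image regions:
    sum(i1 * i2) / sqrt(sum(i1**2) * sum(i2**2)); identical (or merely
    rescaled) patches give 1.0."""
    num = np.sum(i1 * i2)
    den = np.sqrt(np.sum(i1 ** 2) * np.sum(i2 ** 2))
    return float(num / den)

patch = np.arange(16.0).reshape(4, 4) + 1.0
```

Scaling one patch by a constant leaves the score unchanged, which makes the criterion robust to uniform brightness changes between frames.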
If none of the detected candidate targets satisfies both criteria, detection proceeds directly to the next frame image and the judgment is repeated.
The technical effects of the invention are as follows:
The present invention selects a candidate region in the current frame centred on the tracking target's position in the previous frame image; uses a classifier model to obtain the target location corresponding to the candidate target within the candidate region; and sets a tracking-loss judgment condition based on the oscillation of the target response map, so that occlusion, loss or blurring of the target can be judged accurately. When the target is occluded, lost or blurred, the selection range is enlarged around the previous frame's position in the current frame image and the target is retrieved again; tracking continues after the target is accurately detected, achieving the purpose of long-term target tracking.
Description of the drawings
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 is the functional block diagram of the method of the invention;
Fig. 3 is the terminal display of the tracking-target information in the first frame image;
Fig. 4 is the tracking-target display of the 1st frame image;
Fig. 5 is a schematic diagram of the target entering continuous stable tracking;
Fig. 6 is the tracking-target display of the 28th frame image;
Fig. 7 is the tracking-target display of the 96th frame image;
Fig. 8 is the tracking-target display of the 365th frame image;
Fig. 9 is the tracking-target display of the 365th frame image.
Specific implementation mode
A long-term target tracking method of the present invention proceeds as follows:
Obtain the tracking target information and current frame image in initial pictures;
For the first tracking pass, an initial image containing the tracking-target information is needed, together with the tracking video containing the initial image. In the initial image, information such as the top-left coordinates, width and height of the tracking target is provided. The tracking-target information in the initial image can be supplied automatically by a detection algorithm, or selected manually by drawing a box in the initial image.
Centred on the centre point of the detected target position, a first candidate region is selected in the current frame image; the width and height of the first candidate region are 2 times or 2.5 times the width and height of the tracking target in the previous frame image;
Based on the size of the first candidate region and centred on its centre point, 1-3 candidate regions are selected with scale factor k, where 1 < k ≤ 1.5;
Based on the size of the first candidate region and centred on its centre point, 1-3 candidate regions are selected in the current frame image with scale factor 1/k.
In this embodiment, the first candidate region is a single candidate region selected centred on the centre point of the detected target position, with a range of 2.5 times the width and height of the tracking target in the previous frame image; the second candidate region is a single candidate region selected based on the size of the first candidate region, centred on its centre point, with a scale factor of 1.05; the third candidate region is a single candidate region selected based on the size of the first candidate region, centred on its centre point, with a scale factor of 1/1.05.
Considering that the target may change scale during motion, 2 or 3 scale factors (for example 1.1 and 1.5) may be chosen in the range 1 < k ≤ 1.5 based on the size of the first candidate region, determining multiple candidate regions; this helps the classifier model obtain the precise target location corresponding to the candidate target from more candidate regions. The response map of a candidate region is obtained with the classifier model as follows:
Before training the classifier model, the tracking target in the initial image is expanded: the target area in the initial image is expanded to 2 times or 2.5 times, so that the expanded target area contains some background information. This not only increases the number of training samples, but also lets the classifier learn part of the background, improving its precision. The HOG feature vector corresponding to the expanded target area is then extracted;
HOG features are multi-dimensional features that are robust to illumination and scale changes of the target; therefore HOG feature vectors are extracted from the expanded target area and used to train the classifier. In addition, the tracking problem is converted into solving a ridge-regression model: by building a circulant matrix of training samples and exploiting the fact that circulant matrices are diagonalised by the Fourier transform, the solution of the ridge-regression model parameters is greatly simplified, yielding the target classifier more quickly.
The training formula of the classifier model is as follows:

α̂ = ŷ / (k̂^{xx} + λ)

where α̂ denotes the Fourier transform of α and is the classifier model obtained by training, y is the label corresponding to the training samples in the initial image, k^{xx} is the kernel correlation (with kernel function k) of the feature vector x with itself, x is the HOG feature vector of the expanded region, and λ is a regularisation parameter, a constant with value 0.000001;
Furthermore, most current algorithms mark training samples in a binary positive/negative fashion, i.e. positive samples are labelled 1 and negative samples 0. The problem with this labelling is that it cannot reflect the weight of each negative sample well: samples far from the target centre and samples close to the target centre are treated equally.
Therefore, during classifier training, the training samples are marked with continuous labels: each sample is assigned a value in the range 0-1 according to the distance of its centre from the target centre, following a Gaussian distribution; the closer the sample is to the target, the closer its value is to 1, and the farther from the target, the closer its value is to 0;
Using the target classifier model, the response maps corresponding to the candidate regions at multiple scales in the current frame are obtained:

f̂(z) = k̂^{xz} ⊙ α̂

where f̂(z) denotes the Fourier transform of f(z), f(z) is the response map corresponding to candidate region z, z is the HOG feature vector of one of the candidate regions in the current frame, x is the HOG feature vector of the expanded target area, k^{xz} is the kernel correlation of x and z, and α̂ is the classifier model obtained by training.
To determine the target location corresponding to the candidate target, the response maps between the candidate regions at the three scales and the classifier model are computed first; the response peak of each response map is then found; finally, the candidate region with the maximum response is determined by conditional comparison, establishing that this candidate region is most likely to be the tracking target, i.e. its position is where the target is most likely to appear in the current frame.
The method is as follows:
The present invention selects three candidate regions at three scales: the first candidate region at 1 times, the second at 1.05 times and the third at 1/1.05 times; their maximum responses are denoted Fmax-1, Fmax-1.05 and Fmax-1/1.05 respectively;
The maximum response in the response map of each candidate region at the three scales is computed separately with the classifier model;
A scale weight factor scale_weight is introduced, with its value set to 0.95;
Judge whether Fmax-1 is greater than scale_weight × Fmax-1.05. When Fmax-1 > scale_weight × Fmax-1.05, Fmax-1 is taken as the maximum response Fmax' and the method proceeds directly to the next judgment; otherwise Fmax-1.05 is taken as Fmax', the method likewise proceeds to the next judgment, and the candidate-region information is updated;
Judge whether Fmax' is greater than scale_weight × Fmax-1/1.05. When Fmax' > scale_weight × Fmax-1/1.05, Fmax' is taken as the maximum response Fmax and the method proceeds directly to the next step; otherwise Fmax-1/1.05 is taken as Fmax and the candidate-region information is updated;
The candidate region containing the maximum response Fmax is finally determined; its position is the target location in the current frame.
Judge whether the candidate target is the tracking target. If it is the tracking target, track using the coordinate information of the tracking target in the current frame image, update the classifier model, and complete detection and tracking of the target in the video image. If it is not the tracking target, judge that the candidate target is occluded, lost or blurred, and establish a search region in the current frame image centred on the tracking target's position in the previous frame image to re-detect the tracking target;
How can the stability of the tracker be judged, or in other words, how can occlusion or even loss of the target in the current frame image be judged accurately? The present invention evaluates this through the quality of the tracking-loss judgment during tracking: once this can be judged, the accuracy of model updating improves markedly and the stability of tracking is strengthened.
During accurate tracking, the maximum value (peak) of the candidate target's response map is a distinct sharp peak, close to an ideal two-dimensional Gaussian distribution. When tracking is poor, or when occlusion, loss or blurring occurs, the candidate target's response map oscillates violently; the response map then shows multiple peaks, so the target centre cannot be determined from the response peak alone, but the degree of oscillation promptly reflects the target's current state, allowing occlusion, loss or blurring to be judged accurately. The present invention therefore uses APCE (average peak-to-correlation energy), a criterion reflecting the degree of oscillation of the response map, to make this judgment. The classifier model of the previous step yields the response map of the candidate region; the maximum response Fmax in the response map is found and compared with the preset response value 0.3. When Fmax > 0.3, the method proceeds directly to the next judgment; otherwise, the candidate target in the current frame image is judged not to be the tracking target, i.e. the current frame image has lost the target;
The method is as follows:
Judge whether the candidate region's maximum response Fmax is greater than the preset response value, where the preset response value is the minimum acceptable maximum response of a candidate region, with value range between 0 and 1, preferably 0.3;
When Fmax > 0.3, proceed directly to the next judgment; otherwise, the candidate target in the current frame image is judged not to be the tracking target, i.e. the current frame image has lost the target;
When the maximum response Fmax is greater than the preset response value, compute the APCE value of the current frame, which reflects the degree of oscillation of the candidate region's response map, denoted APCEcurrent, together with the average APCE value of the tracking target from the second frame image up to the previous frame image, denoted APCEaverage;
The APCE value is computed as follows:

APCE = |Fmax − Fmin|² / mean((F_{w,h} − Fmin)²)

where Fmax is the maximum response in the response map, Fmin is the minimum response in the response map, F_{w,h} is the response at position (w, h) of the response map, and mean denotes averaging over all positions. A sudden drop in the APCE value indicates that the target has been occluded, lost or even blurred.
From the candidate region's response map, the minimum response value Fmin is found and the APCE value of the candidate target is computed, denoted APCEcurrent. Meanwhile, the average APCE value of the tracking target from the second frame image up to the previous frame image is computed, denoted APCEaverage. Computation begins with the APCE of the tracking target in the second frame image, APCEcurrent-2; after stable tracking in the third frame image yields APCEcurrent-3, APCEaverage equals the average of APCEcurrent-2 and APCEcurrent-3; after APCEcurrent-4 of the fourth frame image is obtained, APCEaverage equals the average of APCEcurrent-4 and the APCEaverage obtained at the third frame image. And so on: during stable tracking, the APCEaverage of the tracking target in the nth frame image equals the average of APCEcurrent-n of the nth frame and the APCEaverage obtained at the (n−1)th frame.
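The recursion for APCEaverage described above halves the weight of each older frame, i.e. it is an exponentially decaying average (a sketch; names are ours):

```python
def update_apce_average(apce_avg_prev, apce_current):
    """APCEaverage_n = (APCEcurrent_n + APCEaverage_{n-1}) / 2: an
    exponentially decaying average over the tracked frames."""
    return 0.5 * (apce_avg_prev + apce_current)

# frames 2..4 of the scheme above: the average starts at APCEcurrent-2
avg = 10.0                              # APCEcurrent-2
avg = update_apce_average(avg, 12.0)    # after frame 3: (10 + 12) / 2
avg = update_apce_average(avg, 14.0)    # after frame 4: (11 + 14) / 2
```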
Judge whether the APCE_current of the current frame's candidate region exceeds the preset-ratio multiple of the average APCE value; in the present invention the preset oscillation ratio is 0.4.
When APCE_current > 0.4 × APCE_average, the candidate target in the current frame image is judged to be the tracking target and the classifier model is updated; otherwise, the candidate target in the current frame image is judged not to be the tracking target, i.e. tracking has been lost in the current frame image, and target detection is performed anew.
Through this reliability judgment of the tracking result, it is decided whether each frame's result is used for updating. When the target is occluded or the tracker has drifted, updating the classifier model anyway would only make the tracker increasingly unable to recognize the target, causing the classifier-model drift problem.
In addition, to guarantee tracking speed, a simple and effective model-update strategy is needed: the judgment should rely on quantities already obtained, without much extra computation.
Therefore, the present invention updates the classifier model using two criteria of the tracking target, the maximum response and the APCE value: the classifier model is updated only when both F_max and the APCE exceed their historical averages by the set proportions. On the one hand this greatly reduces classifier-model drift; on the other hand it reduces the number of classifier-model updates, achieving an acceleration effect.
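The dual-criterion update gate can be sketched as follows, using the 0.7 response ratio and 0.45 oscillation ratio that the description states below (the function name and argument order are illustrative):

```python
def should_update_model(f_max, f_max_avg, apce_curr, apce_avg,
                        resp_ratio=0.7, apce_ratio=0.45):
    """Gate the classifier update: update only when both the peak
    response and the APCE exceed their historical averages by the
    preset ratios (0.7 and 0.45 in the description)."""
    return (f_max > resp_ratio * f_max_avg
            and apce_curr > apce_ratio * apce_avg)
```

Skipping the update when either criterion fails both avoids contaminating the model during occlusion and saves the cost of retraining on unreliable frames.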
When the classifier model is updated, its parameters are updated according to a preset ratio.
Update the tracking-target information of the previous frame image with the tracking-target information of the current frame image, and compute the APCE_average of the tracking target in the current frame image;
Judge whether the F_max of the tracking target exceeds the preset-response-ratio multiple of the average F_max; this preset ratio is set to 0.7;
When the F_max of the tracking target exceeds the preset-response-ratio multiple of the average F_max, proceed directly to the next judgment; otherwise, no classifier-model update is performed for the current frame image;
Judge whether the APCE value of the tracking target exceeds the preset-average-oscillation-ratio multiple of the average APCE value; this preset ratio is set to 0.45;
When the APCE value of the tracking target exceeds the preset-average-oscillation-ratio multiple of the average APCE value, the classifier model is updated for the current frame image; otherwise, no classifier-model update is performed for the current frame image;
The model is updated for the current frame image according to the following classifier-model update formula:
$$\hat{\alpha}^{n} = (1-\eta)\,\hat{\alpha}^{n-1} + \eta\,\hat{\alpha}$$
where $\hat{\alpha}^{n}$ denotes the classifier-model parameters of the n-th frame image, $\hat{\alpha}^{n-1}$ denotes the classifier-model parameters of the (n-1)-th frame image, $\hat{\alpha}$ denotes the parameters newly trained on the current frame, and η denotes the learning-rate parameter, with value 0.015.
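The linear-interpolation update above can be sketched as follows, assuming the classifier parameters are stored as a NumPy array (names are illustrative):

```python
import numpy as np

def update_classifier(alpha_prev, alpha_new, eta=0.015):
    """Linear interpolation of the classifier parameters:
    alpha_n = (1 - eta) * alpha_{n-1} + eta * alpha_new,
    with learning rate eta = 0.015 as in the text."""
    return (1.0 - eta) * alpha_prev + eta * alpha_new
```

The small learning rate means each accepted frame nudges the model only slightly, so a single bad frame that slips past the reliability gate cannot corrupt the model outright.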
Target detection is then carried out; a goal-consistency judgment is performed between the detected candidate targets and the tracking target of the previous frame image, the candidate target satisfying the judgment conditions is selected as the tracking target, and the classifier model is updated, completing the long-time tracking of the target in the video image.
During tracking, to avoid losing long-term stable tracking because of sudden occlusion, blur and the like, after the target is judged lost, target detection must be performed in the region of the current frame image where the target was lost, so as to complete the long-time tracking task. In addition, re-detection of the target uses a deep-learning object-detection model, thereby ensuring detection accuracy. The object-detection method is as follows:
Centered on the position of the tracking target in the previous frame image, establish in the current frame image a search region five times the original tracking-target size;
Within the search region, perform detection using the deep-learning object-detection method; once detection is complete, save all detected candidate targets;
Perform a goal-consistency judgment between all detected candidate targets and the tracking target of the previous frame, to determine whether that tracking target still exists.
Target re-acquisition occurs only when both the position criterion and the similarity criterion are satisfied; candidate targets satisfying both criteria are judged, and otherwise target detection is performed again in the next frame image and the judgment is repeated. To achieve long-time tracking, the consistency judgment between all detected candidate targets and the previous frame's tracking target determines whether the tracking target still exists.
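The search-region construction in the step above might look like the following sketch (clipping to the image bounds is an added assumption, not stated in the text):

```python
def search_region(cx, cy, w, h, img_w, img_h, scale=5):
    """Search window of `scale` times the previous target size,
    centred on the last known position (cx, cy) and clipped to the
    image. Returns (x0, y0, x1, y1)."""
    half_w, half_h = scale * w / 2, scale * h / 2
    x0 = max(0, int(cx - half_w))
    y0 = max(0, int(cy - half_h))
    x1 = min(img_w, int(cx + half_w))
    y1 = min(img_h, int(cy + half_h))
    return x0, y0, x1, y1
```

The detector is then run only on this crop, which keeps re-detection fast while still covering plausible target motion since the loss.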
The goal-consistency judgment is as follows:
Position criterion: take the center point of the candidate target and the center-point coordinates of the tracking target in the previous frame; when the differences between the candidate target and the tracking target in both the x direction and the y direction are each less than 15, the two targets are judged consistent;
Similarity criterion: if exactly one candidate target is preliminarily consistent with the tracking target, that candidate target is taken as the current frame's tracking target; if more than one candidate is preliminarily consistent, compute the NCC value between the previous frame's tracking target and each preliminarily consistent candidate over the corresponding image regions, where the NCC value is the normalized cross-correlation between the two targets, and select the candidate target with the largest NCC value as the current frame's tracking target;
The NCC is computed as
$$NCC = \frac{\sum I_1 \odot I_2}{\sqrt{\left(\sum I_1^2\right)\left(\sum I_2^2\right)}}$$
where I_1 and I_2 denote the image regions corresponding to the two targets and ⊙ denotes element-wise multiplication.
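The two consistency criteria can be sketched together (patch shapes are assumed equal for NCC, and the small epsilon guarding against division by zero is an added assumption):

```python
import numpy as np

def ncc(patch1, patch2):
    """Normalised cross-correlation between two equally sized image
    regions: sum(I1*I2) / sqrt(sum(I1^2) * sum(I2^2)).
    Returns 1.0 for identical patches up to a positive scale."""
    p1 = patch1.astype(np.float64).ravel()
    p2 = patch2.astype(np.float64).ravel()
    denom = np.linalg.norm(p1) * np.linalg.norm(p2) + 1e-12
    return float(np.dot(p1, p2) / denom)

def position_consistent(c_cand, c_prev, tol=15):
    """Position criterion: centre offsets in x and y both under tol pixels."""
    return (abs(c_cand[0] - c_prev[0]) < tol
            and abs(c_cand[1] - c_prev[1]) < tol)
```

When several candidates pass `position_consistent`, the one maximising `ncc` against the previous frame's target patch is kept, exactly as the similarity criterion prescribes.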
If none of the detected candidate targets satisfies both criteria, detection proceeds directly in the next frame image and the judgment is repeated.
The above method is repeated, continuously achieving long-time tracking of the target.
The following verifies, with accompanying figures, the detection and tracking effect of the present invention on a moving target against a complex background:
The video is UAV footage acquired in the field, in which real-time detection and tracking of an unmanned-aerial-vehicle target is performed in a complex low-altitude scene (buildings, groves, distractors and the like).
At the start of the video, the tracking-target information is obtained from the first frame image. In this experiment the detection algorithm first sends the target information of the first frame image to the terminal, as shown in Figure 3, while the tracking box is displayed on the first frame image, as shown in Figure 4. The scene in the first frame is rather complex, and distractors surround the tracking target, making tracking considerably difficult.
To verify whether the method can ensure continuous and stable tracking, the terminal output shows that it can, as shown in Figure 5: from the 2nd frame through the 28th frame the tracker is in a continuously stable state, always returning the tracking-success flag 1. To verify whether the method has a certain anti-occlusion ability, the tracker is confirmed to remain in a stable state while the target passes behind an obstruction in the video, as shown in Figure 6. Combining the flight path of the target in Figures 4 and 6 with the continuously returned success flags of Figure 5, it can be concluded that the target twice successfully escaped the influence of the obstruction and remained continuously and stably locked in the tracking box.
The tracking results for this target in the video show that the proposed method has strong anti-interference ability: as Figure 7 shows, branches, an electric pole and wires surround the target, yet stable tracking is maintained. The method also tracks stably against a complex background: Figure 8, taken together with Figure 7, shows the tracking box following the target as it moves from the right end of the tree to its left end.
Finally, the tracking results also show that the method can accurately judge whether the target is occluded, lost or blurred, accurately detect the target in the current frame image with the detection algorithm, determine the target position after the goal-consistency judgment, and continue tracking. As shown in Figure 9, the target begins to blur at frame 618; the loss judgment takes effect; after the detected candidate targets pass the consistency judgment, the target coordinates are output and tracking resumes.
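The per-frame reliability judgment driving the behaviour verified above can be summarised in a small sketch; the thresholds 0.3 and 0.4 are the ones stated in the description, while the function name and return values are illustrative:

```python
def frame_decision(f_max, apce_curr, apce_avg,
                   resp_thresh=0.3, apce_ratio=0.4):
    """Per-frame reliability judgment: keep tracking when the peak
    response exceeds 0.3 and the current APCE exceeds 0.4 times its
    running average; otherwise declare the target lost and fall back
    to deep-learning re-detection."""
    if f_max > resp_thresh and apce_curr > apce_ratio * apce_avg:
        return "track"
    return "redetect"
```

In the frame-618 example above, blur drives both F_max and the APCE down, so this gate returns "redetect", the detector runs in the enlarged search region, and tracking resumes once a consistent candidate is found.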

Claims (9)

1. A long-time target tracking method, characterized in that the method is as follows:
obtaining the tracking-target information in the initial image and the current frame image;
choosing a candidate region in the acquired current frame image, centered on the position of the tracking target in the previous frame image;
obtaining, with a classifier model, the target position corresponding to the candidate target within the candidate region;
judging whether the candidate target is the tracking target: if it is, tracking with the coordinate information of the tracking target in the current frame image and updating the classifier model, completing the long-time tracking of the target in the video image; if it is not, judging the type of anomaly affecting the candidate target, establishing a search region on the current frame image centered on the position of the tracking target in the previous frame image and re-detecting the tracking target, performing a goal-consistency judgment between the detected candidate targets and the tracking target of the previous frame image, selecting the candidate target satisfying the judgment conditions as the tracking target, and updating the classifier model, completing the long-time tracking of the target in the video image.
2. The long-time target tracking method according to claim 1, characterized in that the method is as follows:
obtaining the tracking-target information in the initial image and the current frame image;
in the current frame image, choosing a candidate region centered on the position of the tracking target in the previous frame image, with a range of 2-5 times the target size;
computing the response map of the candidate region with the classifier model and obtaining the maximum response in the response map, the position of which is the target position corresponding to the candidate target;
judging whether the candidate target is the tracking target: if it is, tracking with the coordinate information of the tracking target in the current frame image and updating the classifier model, completing the detection and tracking of the target in the video image; if it is not, judging that the candidate target is occluded, lost or blurred, and establishing a search region on the current frame image centered on the position of the tracking target in the previous frame image and re-detecting the tracking target;
performing target detection, carrying out a goal-consistency judgment between the detected candidate targets and the tracking target of the previous frame image, selecting the candidate target satisfying the judgment conditions as the tracking target, and updating the classifier model, completing the long-time tracking of the target in the video image.
3. The long-time target tracking method according to claim 2, characterized in that the method of claim 2 is repeated, continuously achieving long-time tracking of the target.
4. The long-time target tracking method according to claim 2 or 3, characterized in that 3-7 candidate regions are chosen within 2-5 times the target size, as follows:
centered on the center point of the detected target position, a first candidate region is chosen in the current frame image, the width and height of which are respectively 2-2.5 times the width and height of the tracking target in the previous frame image;
based on the range of the first candidate region and centered on its center point, 1-3 candidate regions are chosen with k as the scale factor, where 1 < k ≤ 1.5;
based on the range of the first candidate region and centered on its center point, 1-3 candidate regions are chosen in the current frame image with 1/k as the scale factor.
5. The long-time target tracking method according to claim 4, characterized in that the response map of a candidate region is computed with the classifier model as follows:
before training the classifier model, the tracking target in the initial image is extended, i.e. extended to a range of 2-2.5 times the target region in the initial image, and the HOG feature vector corresponding to the extended target region is extracted;
the classifier model is trained from the HOG feature vector corresponding to the extended target region;
the training formula of the classifier model is
$$\hat{\alpha} = \frac{\hat{y}}{\hat{k}^{xx} + \lambda}$$
where $\hat{\alpha}$ denotes the Fourier transform of α and is the classifier model obtained by training, y denotes the label corresponding to a training sample in the initial image, k denotes the kernel function (with $k^{xx}$ the kernel auto-correlation of x), x denotes the HOG feature vector of the extended region, and λ is a regularization parameter;
during training of the classifier model, the training samples are labeled with continuous labels: values in the range 0-1 are assigned according to the distance of a sample center from the target center and follow a Gaussian distribution, so that the closer a sample is to the target the closer its value is to 1, and the farther away the closer to 0;
using the target classifier model, the response map corresponding to each candidate region of the multiple scales in the current frame is obtained as
$$\hat{f}(z) = \hat{k}^{xz} \odot \hat{\alpha}$$
where $\hat{f}(z)$ denotes the Fourier transform of f(z), f(z) denotes the response map corresponding to candidate region z, z denotes the HOG feature vector corresponding to one of the candidate regions in the current frame, x denotes the HOG feature vector corresponding to the extended target region, and $\hat{\alpha}$ denotes the classifier model obtained by training.
6. The long-time target tracking method according to claim 5, characterized in that the target position corresponding to the candidate target is determined as follows:
the maximum responses in the response maps corresponding to the 3-7 candidate regions are computed separately with the classifier model; the maximum response of the first candidate region is denoted F_max A, the maximum response of a candidate region chosen with scale factor k is denoted F_max A′, and the maximum response of a candidate region chosen with scale factor 1/k is denoted F_max A″, where A is the first candidate region, A′ a candidate region chosen with scale factor k, and A″ a candidate region chosen with scale factor 1/k;
meanwhile, a scale weight factor scale_weight is introduced, with a value range of 0.9-1;
judge whether F_max A exceeds the product of scale_weight and F_max A′;
when F_max A > scale_weight × F_max A′, F_max A is taken as the maximum response F_max′ and the next judgment is entered; otherwise F_max A′ is taken as F_max′, the next judgment is entered, and the candidate-region information is updated;
judge whether F_max′ exceeds the product of scale_weight and F_max A″;
when F_max′ > scale_weight × F_max A″, F_max′ is taken as the maximum response F_max and the next step is entered directly; otherwise F_max A″ is taken as F_max and the candidate-region information is updated;
the candidate region containing the maximum response F_max is the position where the target most probably appears in the current frame.
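As an illustrative sketch only (not part of the claimed subject matter), the scale selection above can be written with a single scaled candidate per scale factor; the 0.95 default is one value inside the stated 0.9-1 range:

```python
def pick_scale(f_a, f_a1, f_a2, scale_weight=0.95):
    """Scale selection: prefer the first (unit-scale) candidate region
    unless a scaled candidate beats it by more than the scale penalty.
    Returns (index, response): 0 = first region, 1 = scale k, 2 = scale 1/k."""
    best_idx, best = 0, f_a
    if f_a <= scale_weight * f_a1:   # scale-k candidate wins
        best_idx, best = 1, f_a1
    if best <= scale_weight * f_a2:  # scale-1/k candidate wins
        best_idx, best = 2, f_a2
    return best_idx, best
```

The penalty biases the tracker toward keeping the current scale, so the target box changes size only when a scaled response is decisively larger.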
7. The long-time target tracking method according to claim 6, characterized in that whether the candidate target is the tracking target is judged as follows:
judge whether the candidate-region maximum response F_max exceeds the preset response threshold, where the preset response threshold is the minimum acceptable value of the maximum response in a candidate region, with a value range of 0-1;
when the maximum response F_max exceeds the preset response threshold, compute the APCE value reflecting the oscillation degree of the current frame's candidate-region response map, denoted APCE_current, and the average APCE value of the tracking target from the second frame image to the previous frame image, denoted APCE_average;
where the APCE value is computed as
$$APCE = \frac{|F_{max} - F_{min}|^2}{\mathrm{mean}\left(\sum_{w,h}\left(F_{w,h} - F_{min}\right)^2\right)};$$
judge whether the APCE_current of the current frame's candidate region exceeds the preset oscillation ratio times APCE_average;
when APCE_current exceeds the preset oscillation ratio times APCE_average, the candidate target in the current frame image is considered the tracking target and the classifier model is updated; otherwise, the candidate target is judged occluded, lost or blurred, and target detection is performed in the next frame image; the preset oscillation ratio is between 0 and 1.
8. The long-time target tracking method according to claim 7, characterized in that the classifier model is updated as follows:
update the tracking-target information of the previous frame image with the tracking-target information of the current frame image, and compute the APCE_average of the tracking target in the current frame image;
judge whether the F_max of the tracking target exceeds the preset response ratio times the average F_max-average, the preset ratio being between 0 and 1;
when the F_max of the tracking target exceeds the preset response ratio times F_max-average, enter the next judgment directly; otherwise, no classifier-model update is performed for the current frame image;
judge whether the APCE value of the tracking target exceeds the preset average oscillation ratio times the average APCE value, the preset average oscillation ratio being between 0 and 1;
when the APCE value of the tracking target exceeds the preset average oscillation ratio times the average APCE value, the classifier model is updated for the current frame image; otherwise, no classifier-model update is performed for the current frame image;
the model is updated for the current frame image according to the classifier-model update formula;
where F_max-average is the average of the maximum response F_max of the response map in the current frame image and the maximum response F_max of the response map in the previous frame image;
the preset response ratio refers to the degree of fluctuation of the maximum response of the current frame's tracking-target region relative to the historical average response of the tracking target, with a value range of 0-1;
the preset average oscillation ratio refers to the severity of the oscillation value obtained from the current frame's candidate-region response map relative to the historical average oscillation value of the tracking target's response maps, with a value range of 0-1;
the classifier-model update formula is
$$\hat{\alpha}^{n} = (1-\eta)\,\hat{\alpha}^{n-1} + \eta\,\hat{\alpha}$$
where $\hat{\alpha}^{n}$ denotes the classifier-model parameters of the n-th frame image, $\hat{\alpha}^{n-1}$ denotes the classifier-model parameters of the (n-1)-th frame image, and η denotes the learning-rate parameter.
9. The long-time target tracking method according to claim 8, characterized in that the target is re-detected and tracked as follows:
centered on the position of the tracking target in the previous frame image, a search region of 5 times the original tracking-target size is established in the current frame image;
within the search region, detection is performed with the deep-learning object-detection method; once detection is complete, all detected candidate targets are saved;
a goal-consistency judgment is performed between all detected candidate targets and the tracking target of the previous frame, to determine whether that tracking target still exists;
the condition of the goal-consistency judgment is: among all candidate targets there must be one that simultaneously satisfies the position criterion and the similarity criterion; otherwise target detection is carried out again in the next frame image, until the goal-consistency judgment condition is met;
position criterion: take the center point of the candidate target and the center-point coordinates of the tracking target in the previous frame; when the differences between the candidate target and the tracking target in both the x direction and the y direction are each less than the position threshold, the two targets are judged consistent;
similarity criterion: if exactly one candidate target is preliminarily consistent with the tracking target, that candidate target is taken as the current frame's tracking target; if more than one candidate is preliminarily consistent, the NCC value between the previous frame's tracking target and each preliminarily consistent candidate is computed over the corresponding image regions, where the NCC value is the normalized cross-correlation between the two targets, and the candidate target with the largest NCC value is selected as the current frame's tracking target;
the NCC is computed as
$$NCC = \frac{\sum I_1 \odot I_2}{\sqrt{\left(\sum I_1^2\right)\left(\sum I_2^2\right)}}$$
where I_1 and I_2 denote the image regions corresponding to the two targets and ⊙ denotes element-wise multiplication;
if none of the detected candidate targets satisfies both criteria, detection proceeds directly in the next frame image and the judgment is repeated.
CN201810450292.7A 2018-05-11 2018-05-11 A kind of long-time method for tracking target Pending CN108694724A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810450292.7A CN108694724A (en) 2018-05-11 2018-05-11 A kind of long-time method for tracking target


Publications (1)

Publication Number Publication Date
CN108694724A true CN108694724A (en) 2018-10-23


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109461174A (en) * 2018-10-25 2019-03-12 北京陌上花科技有限公司 Video object area tracking method and video plane advertisement method for implantation and system
CN109472229A (en) * 2018-10-30 2019-03-15 福州大学 Shaft tower Bird's Nest detection method based on deep learning
CN110211158A (en) * 2019-06-04 2019-09-06 海信集团有限公司 Candidate region determines method, apparatus and storage medium
CN110363789A (en) * 2019-06-25 2019-10-22 电子科技大学 A kind of long-term visual tracking method towards practical engineering application
CN110490902A (en) * 2019-08-02 2019-11-22 西安天和防务技术股份有限公司 Method for tracking target, device, computer equipment applied to smart city
CN110516705A (en) * 2019-07-19 2019-11-29 平安科技(深圳)有限公司 Method for tracking target, device and computer readable storage medium based on deep learning
CN110807410A (en) * 2019-10-30 2020-02-18 北京百度网讯科技有限公司 Key point positioning method and device, electronic equipment and storage medium
CN111027376A (en) * 2019-10-28 2020-04-17 中国科学院上海微系统与信息技术研究所 Method and device for determining event map, electronic equipment and storage medium
CN111179343A (en) * 2019-12-20 2020-05-19 西安天和防务技术股份有限公司 Target detection method, target detection device, computer equipment and storage medium
CN111199179A (en) * 2018-11-20 2020-05-26 深圳市优必选科技有限公司 Target object tracking method, terminal device and medium
CN111223123A (en) * 2019-12-17 2020-06-02 西安天和防务技术股份有限公司 Target tracking method and device, computer equipment and storage medium
CN111369590A (en) * 2020-02-27 2020-07-03 北京三快在线科技有限公司 Multi-target tracking method and device, storage medium and electronic equipment
CN111784737A (en) * 2020-06-10 2020-10-16 中国人民解放军军事科学院国防科技创新研究院 Automatic target tracking method and system based on unmanned aerial vehicle platform
CN111832549A (en) * 2020-06-29 2020-10-27 深圳市优必选科技股份有限公司 Data labeling method and device
WO2021022643A1 (en) * 2019-08-08 2021-02-11 初速度(苏州)科技有限公司 Method and apparatus for detecting and tracking target in videos
CN113327272A (en) * 2021-05-28 2021-08-31 北京理工大学重庆创新中心 Robustness long-time tracking method based on correlation filtering
CN114554300A (en) * 2022-02-28 2022-05-27 合肥高维数据技术有限公司 Video watermark embedding method based on specific target
CN114663977A (en) * 2022-03-24 2022-06-24 龙港市添誉信息科技有限公司 Long-time span video image pedestrian monitoring accurate tracking method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140348390A1 (en) * 2013-05-21 2014-11-27 Peking University Founder Group Co., Ltd. Method and apparatus for detecting traffic monitoring video
CN106204638A (en) * 2016-06-29 2016-12-07 西安电子科技大学 A kind of based on dimension self-adaption with the method for tracking target of taking photo by plane blocking process
CN107424171A (en) * 2017-07-21 2017-12-01 华中科技大学 A kind of anti-shelter target tracking based on piecemeal
CN107491742A (en) * 2017-07-28 2017-12-19 西安因诺航空科技有限公司 Stable unmanned plane target tracking when a kind of long
CN107886048A (en) * 2017-10-13 2018-04-06 西安天和防务技术股份有限公司 Method for tracking target and system, storage medium and electric terminal
CN107992790A (en) * 2017-10-13 2018-05-04 西安天和防务技术股份有限公司 Target long time-tracking method and system, storage medium and electric terminal


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIAN YANG 等: "An object detection and tracking system for unmanned surface vehicles", 《PROC.SPIE 10432, TARGET AND BACKGROUND SIGNATURES III》 *
MENGMENG WANG 等: "Large Margin Object Tracking with Circulant Feature Maps", 《ARXIV》 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20181023)