CN108734177A - Two-step correlation filtering method for tracking target - Google Patents

Two-step correlation filtering method for tracking target

Info

Publication number
CN108734177A
Authority
CN
China
Prior art keywords
target
image
scale
tracking
filter template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810471517.7A
Other languages
Chinese (zh)
Other versions
CN108734177B (en)
Inventor
胡永江
葛宝义
左宪章
李永科
褚丽娜
李爱华
赵月飞
Current Assignee
Army Engineering University of PLA
Original Assignee
Army Engineering University of PLA
Priority date
Filing date
Publication date
Application filed by Army Engineering University of PLA
Priority to CN201810471517.7A
Publication of CN108734177A
Application granted
Publication of CN108734177B
Active legal status
Anticipated expiration legal status

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/40 — Extraction of image or video features
    • G06V 10/46 — Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V 10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/21 — Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F 18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/20 — Image preprocessing
    • G06V 10/30 — Noise filtering
    • G06V 10/56 — Extraction of image or video features relating to colour
    • G06V 2201/00 — Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 — Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a two-step correlation filtering target tracking method, relating to the field of computer vision and, in particular, to target tracking in image processing. The algorithm focuses on strengthening the robustness of correlation-filter target tracking and improving its ability to handle target occlusion. The method comprises the following steps: (1) HOG and CN features are fused to obtain a more comprehensive characterization of the target, improving tracking robustness; (2) a two-step correlation filtering strategy combines coarse positioning with fine positioning, improving tracking accuracy while preserving tracking speed; (3) a new tracking-confidence indicator adaptively gates filter-template updates, preventing the filter template from being contaminated while the target is occluded. Experimental results show that, compared with the baseline methods, the two-step correlation filtering target tracking method improves tracking precision and robustness while remaining real-time, constituting an important improvement over the prior art.

Description

Two-step correlation filtering method for tracking target
Technical field
The present invention relates to the field of computer vision, in particular to target tracking in image processing.
Background technology
Target tracking determines the position of a target in a known image and predicts its position in the next frame, thereby tracking the target continuously. Tracking is difficult because the target is subject to illumination variation, scale variation, occlusion, deformation, motion blur, fast motion, rotation, leaving the field of view, background clutter, and low resolution, while the available target information is limited and the target is hard to model. As a discriminative tracking approach, correlation filtering offers very high tracking speed together with strong tracking robustness, so research on correlation filtering target tracking is of great significance for tracking targets in real time and for improving tracking robustness.
Researchers have studied this field in depth. Bolme et al. proposed the Minimum Output Sum of Squared Error (MOSSE) method, which trains the filter template by minimizing the error between the filtered output and the desired response. This was the first application of correlation filtering to target tracking; it reaches 669 fps with good tracking robustness. Henriques et al. proposed the Kernelized Correlation Filters (KCF) tracking method, which uses the properties of circulant matrices to make the sampling of the correlation filter training process equivalent to cyclic shifts of the target feature matrix, achieving dense sampling for template training, and uses the kernel trick to handle non-linearity. This greatly accelerates the training of the target filter template and improves tracking robustness. Bibi proposed the Scale Adaptive Kernel Correlation Filter (SAMF) method, which builds a scale pool for the target and runs correlation filtering on the images at each scale to estimate scale changes. It handles target-size variation, but the tracking-speed constraint limits the size of the scale pool, leaving the tracker insensitive to scale change. Danelljan proposed the Discriminative Scale Space Tracking (DSST) method, which adds a separate one-dimensional scale correlation filter to pick the best scale from the target scale pool; it is simple, efficient, and accurate in scale estimation. Ma Chao proposed the Hierarchical Convolutional Features for Visual Tracking (HCFT) method, which exploits the powerful feature-extraction ability of convolutional neural networks to extract more robust target features, improving tracking robustness and precision at the cost of slower tracking. Liu Ting proposed Real-time Part-based tracking via adaptive correlation filters (RPAC), which decomposes the target into local parts, tracks each part with a correlation filter, and combines the parts with occlusion information into the final tracking result. It handles occlusion well, but the part model is relatively complex, and tracking robustness is not high in some cases.
These algorithms innovate on and improve correlation filtering target tracking, but problems remain: the precision and robustness of target tracking need further improvement, and the handling of target occlusion and of template updates during occlusion remains to be optimized.
Summary of the invention
In view of this, an object of the present invention is to provide a two-step correlation filtering target tracking algorithm that improves the robustness of target tracking and effectively solves the filter-template contamination caused by problems such as target occlusion.
Based on the above objective, the technical solution provided by the present invention is as follows:
A two-step correlation filtering target tracking method, comprising the following steps:
Step 1: extract the target image features and train the coarse-positioning correlation filter template, the fine-positioning correlation filter template, and the target-scale correlation filter template, respectively;
Step 2: extract the HOG features of the predicted target image and determine the approximate target image position with the trained coarse-positioning correlation filter template;
Step 3: at the approximate target image position, extract the fused HOG and CN features of the target image block, compute the target fine-positioning response map with the trained fine-positioning correlation filter template, and determine the precise target image position;
Step 4: at the precise target image position, extract multi-scale target image features and determine the target scale with the trained scale correlation filter;
Step 5: compute the tracking-confidence indicator value from the target fine-positioning response map;
Step 6: at the precise target image position, retrain the coarse-positioning, fine-positioning, and target-scale correlation filter templates according to the method of step 1, and adaptively update the correlation filter templates according to the tracking-confidence threshold;
Step 7: update the tracking-confidence threshold according to the tracking-confidence indicator value;
Step 8: repeat steps 2 to 7
to complete target tracking.
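For orientation, the eight steps above can be sketched as a single tracking loop. This is a hypothetical outline, not code from the patent: the helper names (train_templates, coarse_locate, fine_locate, estimate_scale, confidence, update_templates) are placeholders for the operations defined in the detailed description, and only the control flow shown here comes from the steps above.

```python
# Hypothetical outline of the tracking loop in steps 1-8. The helper
# functions are placeholders for the operations defined later in the
# description; only the control flow comes from the patent.

def track(frames, init_box, kappa=0.05):
    state = train_templates(frames[0], init_box)      # step 1
    shd = 0.0                                         # threshold is 0 on the first frame
    boxes = [init_box]
    for frame in frames[1:]:
        rough = coarse_locate(frame, state)           # step 2: HOG, cell size 8
        box, resp = fine_locate(frame, rough, state)  # step 3: fused HOG + CN
        box = estimate_scale(frame, box, state)       # step 4: 33-entry scale pool
        ap = confidence(resp)                         # step 5
        if ap > shd:                                  # step 6: update only when confident
            state = update_templates(frame, box, state)
        shd = (1 - kappa) * shd + kappa * ap          # step 7
        boxes.append(box)                             # step 8: next frame
    return boxes
```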
Step 1 specifically includes the following steps:
(101) At the training target image position, expand the image to 1.6 times the target size and extract the corresponding 31-dimensional HOG features x_Ci of the target image block with a cell size of 8.
(102) Train the coarse-positioning correlation filter template w_C by minimizing the loss function
ε_C = Σ_{i=1}^{n} ‖ Σ_{d=1}^{D} ŵ_C^d ⊙ x̂_Ci^d − ŷ ‖² + λ Σ_{d=1}^{D} ‖ ŵ_C^d ‖²,
where ŷ is the DFT of the Gaussian-shaped desired filter response y(u, v), u = {1, 2, … W}, v = {1, 2, … H}, W and H are the width and height of the target image block, λ = 0.0001 is the regularization coefficient, and n is the number of training target images. Training yields the coarse-positioning correlation filter template
ŵ_C^d = ( Σ_{i=1}^{n} ŷ* ⊙ x̂_Ci^d ) / ( Σ_{i=1}^{n} Σ_{k=1}^{D} x̂_Ci^k ⊙ (x̂_Ci^k)* + λ ),
where ^ denotes the discrete Fourier transform (DFT) of each matrix, * denotes the complex conjugate, and ⊙ denotes element-wise multiplication.
(103) At the training target image position, expand the image by 1 times the target size and extract the corresponding 31-dimensional HOG features of the target image block with a cell size of 1; extract 11-dimensional CN features from an image block of the same size; fuse the extracted HOG features and CN features by channel into the 42-dimensional target feature x_Fi.
(104) Train the fine-positioning correlation filter template w_F by minimizing the loss function
ε_F = Σ_{i=1}^{n} ‖ Σ_{d=1}^{D} ŵ_F^d ⊙ x̂_Fi^d − ŷ ‖² + λ Σ_{d=1}^{D} ‖ ŵ_F^d ‖²,
which yields the fine-positioning correlation filter template
ŵ_F^d = ( Σ_{i=1}^{n} ŷ* ⊙ x̂_Fi^d ) / ( Σ_{i=1}^{n} Σ_{k=1}^{D} x̂_Fi^k ⊙ (x̂_Fi^k)* + λ ).
(105) At the training target image position, extract from the target-size image the corresponding 31-dimensional HOG features x_Si of the target image block with a cell size of 4.
(106) Train the scale correlation filter template w_S by minimizing the loss function
ε_S = Σ_{i=1}^{n} ‖ Σ_{d=1}^{D} ŵ_S^d ⊙ x̂_Si^d − ŷ ‖² + λ Σ_{d=1}^{D} ‖ ŵ_S^d ‖²,
which yields the target-scale correlation filter template
ŵ_S^d = ( Σ_{i=1}^{n} ŷ* ⊙ x̂_Si^d ) / ( Σ_{i=1}^{n} Σ_{k=1}^{D} x̂_Si^k ⊙ (x̂_Si^k)* + λ ).
Step 2 specifically includes the following steps:
(201) At the predicted target image position, expand the image to 1.6 times the target size and extract the corresponding 31-dimensional HOG features z_C of the target image block with a cell size of 8.
(202) Compute the target coarse-positioning response map with the coarse-positioning correlation filter:
y_C = F⁻¹( Σ_{d=1}^{D} (ŵ_C^d)* ⊙ ẑ_C^d ),
where F⁻¹ denotes the inverse DFT and D is the number of feature channels. The maximum of the coarse-positioning response map gives the approximate target image position.
Step 3 specifically includes the following steps:
(301) At the approximate target image position, expand the image by 1 times the target size and extract the corresponding 31-dimensional HOG features of the target image block with a cell size of 1; extract 11-dimensional CN features from a target image block of the same size; fuse the extracted HOG features and CN features by channel into the 42-dimensional target feature z_F.
(302) Compute the target fine-positioning response map with the fine-positioning correlation filter:
y_F = F⁻¹( Σ_{d=1}^{D} (ŵ_F^d)* ⊙ ẑ_F^d ).
The maximum of the fine-positioning response map gives the precise target image position.
Step 4 specifically includes the following steps:
(401) At the precise target image position, construct the target image scale pool sW × sH, s ∈ { a^t | t = −(N − 1)/2, …, (N − 1)/2 }, where W and H are the width and height of the target image block, N = 33 is the number of target scales, and a = 1.02 is the scale step.
(402) For every image at a different scale in the scale pool, extract the corresponding 31-dimensional HOG features z_S of the target image block with a cell size of 4.
(403) Compute the target scale response map with the target-scale correlation filter:
y_S = F⁻¹( Σ_{d=1}^{D} (ŵ_S^d)* ⊙ ẑ_S^d ).
The maximum of the scale response map gives the target scale.
Step 5 is specifically:
From the target fine-positioning response map, compute the tracking-confidence indicator
AP = |F_max − F_mean|² / ( (1 / (ω·k)) Σ_{u,v} (F_{u,v} − F_mean)² ),
where F_max and F_mean are the maximum and the mean of the fine-positioning response map, respectively, F_{u,v} is the target response value at location (u, v) of the map, y_F is the fine-positioning response map, and ω and k are the width and height of the fine-positioning response map.
Step 6 specifically includes the following steps:
(601) At the precise target image position, obtain the target features according to the methods of steps (101), (103), and (105).
(602) Judge whether the tracking-confidence indicator value AP satisfies the confidence-threshold condition shd. If AP is greater than the confidence threshold, update each filter template as
a = (1 − η) · a_old + η · ŷ* ⊙ x̂,
b = (1 − η) · b_old + η · Σ_{k=1}^{D} x̂^k ⊙ (x̂^k)*;
if AP is less than the confidence threshold, do not update. Here η is the template learning rate, a and b denote the numerator and denominator correlation variables of the different filter templates, a_old and b_old denote those variables before the update, and x denotes the target feature extracted for the corresponding filter template.
Step 7 is specifically:
According to the tracking-confidence indicator value AP, update the tracking-confidence threshold shd: shd = (1 − κ) · shd_old + κ · AP, where κ = 0.05 is the threshold update rate and shd_old is the tracking-confidence threshold before the update. The filter templates are trained on the first frame of the target image sequence, where shd is 0.
Compared with the background art, the present invention has the following advantages:
The algorithm focuses on strengthening the robustness and precision of correlation-filter target tracking, improves its ability to handle target occlusion, and avoids model contamination during tracking while preserving real-time performance. Compared with the baseline methods, the two-step correlation filtering target tracking method improves the precision and robustness of target tracking while achieving strongly real-time tracking, and is an important improvement over the prior art.
Description of the drawings
To explain the embodiments of the invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is the method flow diagram of an embodiment of the present invention.
Fig. 2 shows HOG feature visualizations and reconstructed images of the target image for different cell sizes.
Fig. 3 shows the tracking-confidence indicator value at different moments of tracking.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with specific embodiments and with reference to the accompanying drawings.
This embodiment elaborates the principle of the two-step correlation filtering target tracking algorithm along the lines of filter-template training, coarse target positioning, fine target positioning, target scale estimation, computation of the target-confidence indicator value, and target filter-template updating. It focuses on optimizing the structure of the correlation filtering tracking algorithm, the fusion of target features, and the filter-template update scheme, and compares the tracking robustness and accuracy of this algorithm with those of traditional algorithms to illustrate the advantages of the present invention. The details are as follows:
Step 1: extract the target image features and train the coarse-positioning correlation filter template, the fine-positioning correlation filter template, and the target-scale correlation filter template, respectively.
(101) At the training target image position, expand the image to 1.6 times the target size and extract the corresponding 31-dimensional HOG features x_Ci of the target image block with a cell size of 8.
(102) Train the coarse-positioning correlation filter template w_C by minimizing the loss function
ε_C = Σ_{i=1}^{n} ‖ Σ_{d=1}^{D} ŵ_C^d ⊙ x̂_Ci^d − ŷ ‖² + λ Σ_{d=1}^{D} ‖ ŵ_C^d ‖²,
where ŷ is the DFT of the Gaussian-shaped desired filter response y(u, v), u = {1, 2, … W}, v = {1, 2, … H}, W and H are the width and height of the target image block, λ = 0.0001 is the regularization coefficient, and n is the number of training target images. Training yields the coarse-positioning correlation filter template
ŵ_C^d = ( Σ_{i=1}^{n} ŷ* ⊙ x̂_Ci^d ) / ( Σ_{i=1}^{n} Σ_{k=1}^{D} x̂_Ci^k ⊙ (x̂_Ci^k)* + λ ),
where ^ denotes the discrete Fourier transform (DFT) of each matrix, * denotes the complex conjugate, and ⊙ denotes element-wise multiplication.
(103) At the training target image position, expand the image by 1 times the target size and extract the corresponding 31-dimensional HOG features of the target image block with a cell size of 1; extract 11-dimensional CN features from an image block of the same size; fuse the extracted HOG features and CN features by channel into the 42-dimensional target feature x_F.
(104) Train the fine-positioning correlation filter template w_F by minimizing the loss function
ε_F = Σ_{i=1}^{n} ‖ Σ_{d=1}^{D} ŵ_F^d ⊙ x̂_Fi^d − ŷ ‖² + λ Σ_{d=1}^{D} ‖ ŵ_F^d ‖²,
which yields the fine-positioning correlation filter template
ŵ_F^d = ( Σ_{i=1}^{n} ŷ* ⊙ x̂_Fi^d ) / ( Σ_{i=1}^{n} Σ_{k=1}^{D} x̂_Fi^k ⊙ (x̂_Fi^k)* + λ ).
The Histogram of Oriented Gradients (HOG) feature is a feature descriptor for object detection that describes the appearance of an object by computing gradient histograms of the image. It contains both local and global image information, describes target characteristics accurately, is insensitive to illumination, and is fast to compute.
As shown in Fig. 2, panel (a) is one frame of the tracked video; the left side of panel (b) shows HOG feature visualizations extracted at different cell sizes, and the right side shows target images reconstructed from the HOG features for the corresponding cell sizes. The reconstructions in panel (b) show that the larger the cell, the less detail the image retains and the richer its semantic information; conversely, smaller cells retain more detail and less semantics. A larger cell also makes target HOG feature computation faster. HOG feature extraction resembles the feature extraction of a convolutional neural network, but unlike a neural network it needs far fewer parameters and computes faster.
The semantic information of the image describes changes in target position well, while the detail information enables precise localization of the target. The strategy of combining coarse positioning with fine positioning improves tracking robustness, while large-cell HOG features are cheap to compute, so the loss in tracking speed is small and real-time tracking is preserved.
(105) At the training target image position, extract from the target-size image the corresponding 31-dimensional HOG features x_Si of the target image block with a cell size of 4.
(106) Train the scale correlation filter template w_S by minimizing the loss function
ε_S = Σ_{i=1}^{n} ‖ Σ_{d=1}^{D} ŵ_S^d ⊙ x̂_Si^d − ŷ ‖² + λ Σ_{d=1}^{D} ‖ ŵ_S^d ‖²,
which yields the target-scale correlation filter template
ŵ_S^d = ( Σ_{i=1}^{n} ŷ* ⊙ x̂_Si^d ) / ( Σ_{i=1}^{n} Σ_{k=1}^{D} x̂_Si^k ⊙ (x̂_Si^k)* + λ ).
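The closed-form template training of (102), (104), and (106) can be sketched as follows, assuming the standard Fourier-domain ridge-regression solution used by MOSSE/DSST-style trackers; the Gaussian response width sigma and the exact normalization are assumptions, not values given in the patent.

```python
import numpy as np

def train_filter(features, sigma=2.0, lam=1e-4):
    """Closed-form multi-channel correlation filter training (a sketch of the
    Fourier-domain ridge-regression solution; sigma is an assumed Gaussian
    response width, lam is the regularization coefficient lambda = 1e-4)."""
    H, W, D = features.shape
    # Gaussian-shaped desired response y with its peak shifted to (0, 0).
    ys, xs = np.mgrid[0:H, 0:W]
    y = np.exp(-(((ys - H // 2) ** 2 + (xs - W // 2) ** 2) / (2 * sigma ** 2)))
    y = np.roll(np.roll(y, -(H // 2), axis=0), -(W // 2), axis=1)
    yf = np.fft.fft2(y)
    xf = np.fft.fft2(features, axes=(0, 1))            # per-channel DFT
    # Numerator a and denominator b are kept separate so the running-average
    # update of step 6 can interpolate each of them independently.
    a = np.conj(yf)[..., None] * xf
    b = np.sum(xf * np.conj(xf), axis=2).real + lam
    return a, b

rng = np.random.default_rng(0)
a, b = train_filter(rng.standard_normal((32, 32, 31)))
print(a.shape, b.shape)  # (32, 32, 31) (32, 32)
```

Keeping the numerator and denominator separate, rather than storing their quotient, is what makes the confidence-gated interpolation of step (602) possible.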
Step 2: extract the HOG features of the predicted target image and determine the approximate target image position with the trained coarse-positioning correlation filter template.
(201) At the predicted target image position, expand the image to 1.6 times the target size and extract the corresponding 31-dimensional HOG features z_C of the target image block with a cell size of 8.
(202) Compute the target coarse-positioning response map with the coarse-positioning correlation filter:
y_C = F⁻¹( Σ_{d=1}^{D} (ŵ_C^d)* ⊙ ẑ_C^d ),
where F⁻¹ denotes the inverse DFT and D is the number of feature channels. The maximum of the coarse-positioning response map gives the approximate target image position.
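Detection with a trained template, as in (202), can be sketched as follows; the template is represented by its numerator a and denominator b, and cosine windowing and sub-pixel refinement are omitted. The toy demo builds a template directly from a patch and matches the patch against itself, so the response peaks at zero shift.

```python
import numpy as np

def detect(a, b, features):
    """Correlate a trained template (numerator a, denominator b) with a new
    feature window and return the response map and its peak location."""
    zf = np.fft.fft2(features, axes=(0, 1))
    resp_f = np.sum(np.conj(a) * zf, axis=2) / b    # sum over feature channels
    resp = np.real(np.fft.ifft2(resp_f))
    dy, dx = np.unravel_index(np.argmax(resp), resp.shape)
    return resp, (int(dy), int(dx))

# Toy check: a template built from a patch matches the patch itself best
# at zero shift.
rng = np.random.default_rng(1)
z = rng.standard_normal((16, 16, 3))
zf = np.fft.fft2(z, axes=(0, 1))
a = zf                                              # toy numerator: the patch itself
b = np.sum(np.abs(zf) ** 2, axis=2) + 1e-4
resp, peak = detect(a, b, z)
print(peak)  # (0, 0)
```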
Step 3: at the approximate target image position, extract the fused HOG and CN features of the target image block, compute the target fine-positioning response map with the trained fine-positioning correlation filter template, and determine the precise target image position.
(301) At the approximate target image position, expand the image by 1 times the target size and extract the corresponding 31-dimensional HOG features of the target image block with a cell size of 1; extract 11-dimensional CN features from a target image block of the same size; fuse the extracted HOG features and CN features by channel into the 42-dimensional target feature z_F.
The MOSSE correlation filtering tracker uses grayscale features as the target descriptor, but grayscale is easily affected by noise, illumination, and similar factors, so its tracking robustness is low. Later work adopted HOG features for target-feature extraction; because HOG expresses the target's gradient information and portrays its details while remaining insensitive to factors such as illumination, tracking robustness is higher.
The Color Names (CN) feature has been widely used in target recognition and detection, and applying it to tracking has given good results in color-based target tracking. CN features map the three RGB (red, green, blue) color channels of the target to eleven color channels (black, blue, brown, grey, green, orange, pink, purple, red, white, and yellow); compared with the RGB primaries they characterize the target more strongly and possess a degree of photometric invariance.
A single feature characterizes the target incompletely and discriminates weakly among the many factors that affect tracking, so feature-fusion-based tracking is more robust than single-feature tracking. In the tracker fusing HOG and CN features, CN features are somewhat invariant to target deformation and retain discriminative power when the target is blurred; CN's description of the target's appearance and HOG's portrayal of its details complement each other, so the target is described more completely, the features discriminate better in many scenes, and tracking robustness improves.
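The channel-wise fusion described above amounts to concatenating the two feature maps, assuming both are computed on the same spatial grid (cell size 1):

```python
import numpy as np

def fuse_features(hog, cn):
    """Channel-wise fusion of a 31-d HOG map and an 11-d CN map into one
    42-channel feature tensor; both maps must share the spatial grid."""
    assert hog.shape[:2] == cn.shape[:2]
    return np.concatenate([hog, cn], axis=2)

hog = np.zeros((24, 24, 31))   # placeholder 31-channel HOG map
cn = np.zeros((24, 24, 11))    # placeholder 11-channel CN map
fused = fuse_features(hog, cn)
print(fused.shape)  # (24, 24, 42)
```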
(302) Compute the target fine-positioning response map with the fine-positioning correlation filter:
y_F = F⁻¹( Σ_{d=1}^{D} (ŵ_F^d)* ⊙ ẑ_F^d ).
The maximum of the fine-positioning response map gives the precise target image position.
Step 4: at the precise target image position, extract multi-scale target image features and determine the target scale with the trained scale correlation filter.
(401) At the precise target image position, construct the target image scale pool sW × sH, s ∈ { a^t | t = −(N − 1)/2, …, (N − 1)/2 }, where W and H are the width and height of the target image block, N = 33 is the number of target scales, and a = 1.02 is the scale step.
(402) For every image at a different scale in the scale pool, extract the corresponding 31-dimensional HOG features z_S of the target image block with a cell size of 4.
(403) Compute the target scale response map with the target-scale correlation filter:
y_S = F⁻¹( Σ_{d=1}^{D} (ŵ_S^d)* ⊙ ẑ_S^d ).
The maximum of the scale response map gives the target scale.
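The scale pool of (401) can be sketched as follows, assuming the 33 scales are placed symmetrically around the current target size with step a = 1.02 (the symmetric exponent range is an assumption; the patent gives only the pool size and the step):

```python
import numpy as np

def scale_pool(w, h, num_scales=33, step=1.02):
    """Build the target image scale pool sW x sH around the current size;
    the symmetric exponent range is an assumed indexing."""
    exps = np.arange(num_scales) - (num_scales - 1) // 2    # -16 .. 16
    return [(float(step ** t), int(round(step ** t * w)), int(round(step ** t * h)))
            for t in exps]

pool = scale_pool(64, 48)
print(len(pool), pool[16])  # 33 (1.0, 64, 48)  -- middle entry is the unscaled size
```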
Step 5: compute the tracking-confidence indicator value from the target fine-positioning response map.
(501) From the target fine-positioning response map, compute the tracking-confidence indicator
AP = |F_max − F_mean|² / ( (1 / (ω·k)) Σ_{u,v} (F_{u,v} − F_mean)² ),
where F_max and F_mean are the maximum and the mean of the fine-positioning response map, respectively, F_{u,v} is the target response value at location (u, v) of the map, y_F is the fine-positioning response map, and ω and k are its width and height.
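The confidence indicator of (501) can be sketched as follows. The original formula was lost in extraction; this sketch assumes a squared peak-to-sidelobe-style ratio built from the response map's maximum and mean, which is what the surviving fragments describe.

```python
import numpy as np

def confidence(resp):
    """Tracking-confidence indicator from the fine-positioning response map,
    assuming a squared peak-to-sidelobe-style ratio of the map's maximum
    and mean (an assumption reconstructed from the garbled formula)."""
    f_max, f_mean = resp.max(), resp.mean()
    energy = np.mean((resp - f_mean) ** 2)      # variance over all w x k locations
    return float((f_max - f_mean) ** 2 / (energy + 1e-12))

sharp = np.zeros((8, 8)); sharp[4, 4] = 1.0     # one clear peak: high confidence
flat = np.full((8, 8), 0.5)                     # no peak at all: zero confidence
print(confidence(sharp) > confidence(flat))     # True
```

A sharp, isolated peak yields a large indicator value, while a flat or multi-peaked map (typical under occlusion) yields a small one, which is what makes the gating of step 6 work.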
Step 6: at the precise target image position, retrain the coarse-positioning correlation filter template, the fine-positioning correlation filter template, and the target-scale correlation filter template according to the method of step 1, and adaptively update the correlation filter templates according to the tracking-confidence threshold.
(601) At the precise target image position, obtain the target features x_C, x_F, x_S according to the methods of steps (101), (103), and (105).
(602) Judge whether the AP value from (501) satisfies the confidence-threshold condition shd. If AP is greater than the confidence threshold, update each filter template as
a = (1 − η) · a_old + η · ŷ* ⊙ x̂,
b = (1 − η) · b_old + η · Σ_{k=1}^{D} x̂^k ⊙ (x̂^k)*;
if AP is less than the confidence threshold, do not update. Here η is the template learning rate, a and b denote the numerator and denominator correlation variables of the different filter templates, and x denotes the target feature extracted for the corresponding filter template, i.e. x_C, x_F, or x_S.
In correlation filtering target tracking, the target is often occluded or leaves the field of view during tracking. The standard correlation filtering tracker updates the target filter template in real time during tracking in order to cope with target rotation, deformation, and similar situations; as a consequence, when the target is occluded, the template learns the occluding information and is contaminated, the model drifts, and tracking eventually fails.
Judging the tracking result and adaptively updating the filter template according to tracking confidence is a simple and effective strategy against occlusion-induced model drift. When the tracking result is good, the template is updated in real time to follow target deformation and similar changes; when the tracked object is occluded or out of view, the template is preserved unchanged until the target reappears, after which tracking and template updating resume. Besides avoiding model drift, this reduces computation to some extent and increases tracking speed.
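The confidence-gated update of (602) can be sketched as follows; the learning rate eta is an assumed constant, since the patent specifies the gating but not the rate, and a and b stand for a template's numerator and denominator variables.

```python
import numpy as np

def update_template(a_old, b_old, a_new, b_new, ap, shd, eta=0.025):
    """Confidence-gated running-average update of a template's numerator a
    and denominator b; eta is an assumed learning rate (the patent fixes
    the gating rule but not the rate)."""
    if ap <= shd:                       # low confidence: keep the old template
        return a_old, b_old
    return ((1 - eta) * a_old + eta * a_new,
            (1 - eta) * b_old + eta * b_new)

# High confidence: the template moves a small step toward the new sample.
a, b = update_template(np.ones(4), np.ones(4), np.zeros(4), np.zeros(4),
                       ap=10.0, shd=2.0)
print(float(a[0]), float(b[0]))  # 0.975 0.975
```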
Step 7: update the confidence threshold according to the tracking confidence.
(701) According to the tracking-confidence indicator value AP from (501), update the tracking-confidence threshold: shd = (1 − κ) · shd_old + κ · AP, where κ = 0.05 is the threshold update rate. Note that the filter templates are trained on the first frame of the target image sequence, where shd is 0.
A fixed confidence threshold must be set manually, from experience, for each tracking scene; its adaptability therefore declines across different scenes, and a fixed value cannot cover the variety of tracking scenes. An adaptive method for determining the confidence threshold is therefore proposed; as shown in Fig. 3, it adaptively determines the template-update threshold under different tracking scenes, and tracking is more robust in a variety of situations.
After each new frame is tracked, the confidence indicator value of the current frame is computed and the confidence threshold is updated at a fixed rate, combining the current tracking confidence with the accumulated confidence history, which makes tracking more robust. The indicator is computed from the already-available target response map, so it occupies few resources, adds little computation, and has little impact on tracking speed; the threshold-gated update strategy also reduces, to some extent, the number of template computations and updates, accelerating target tracking.
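The threshold update of (701) is a plain exponential moving average of the confidence indicator and can be sketched directly:

```python
def update_threshold(shd_old, ap, kappa=0.05):
    """Adaptive confidence-threshold update: shd = (1 - kappa) * shd_old + kappa * AP.
    shd starts at 0 on the first frame; kappa = 0.05 is the update rate."""
    return (1 - kappa) * shd_old + kappa * ap

shd = 0.0
for ap in (20.0, 20.0, 20.0):       # steady confidence pulls the threshold up
    shd = update_threshold(shd, ap)
print(round(shd, 4))  # 2.8525
```

Because κ is small, the threshold reacts slowly, so a single occluded frame with low AP lowers the bar only slightly while still blocking the template update for that frame.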
Step 8: repeat steps 2 to 7
to complete target tracking.
Those of ordinary skill in the art should understand that the discussion of any of the above embodiments is exemplary only and is not intended to imply that the scope of the present disclosure (including the claims) is limited to these examples. Any omissions, modifications, equivalent replacements, or improvements made to the above embodiments within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (8)

1. A two-step correlation filtering target tracking method, characterized in that the method comprises the following steps:
Step 1: extract the target image features and train the coarse-positioning correlation filter template, the fine-positioning correlation filter template, and the scale correlation filter template, respectively;
Step 2: extract the HOG features of the predicted target image and determine the approximate target image position with the trained coarse-positioning correlation filter template;
Step 3: at the approximate target image position, extract the fused HOG and CN features of the target image block, compute the target fine-positioning response map with the trained fine-positioning correlation filter template, and determine the precise target image position;
Step 4: at the precise target image position, extract multi-scale target image features and determine the target scale with the trained scale correlation filter;
Step 5: compute the tracking-confidence indicator value from the target fine-positioning response map;
Step 6: at the precise target image position, retrain the coarse-positioning, fine-positioning, and target-scale correlation filter templates according to the method of step 1, and adaptively update the correlation filter templates according to the tracking-confidence threshold;
Step 7: update the tracking-confidence threshold according to the tracking-confidence indicator value;
Step 8: repeat steps 2 to 7
to complete target tracking.
2. The two-step correlation filtering target tracking method according to claim 1, characterized in that Step 1 specifically comprises the following steps:
(101) At the training target image position, expand the image to 1.6 times the target size and extract from the corresponding target image block the 31-dimensional HOG features $x_{Ci}$ with a cell size of 8;
(102) Train the coarse-localization correlation filter template $w_C$ by minimizing the loss function
$$\varepsilon_C=\sum_{i=1}^{n}\Big\|\sum_{l=1}^{d}w_C^{l}\star x_{Ci}^{l}-y\Big\|^{2}+\lambda\sum_{l=1}^{d}\big\|w_C^{l}\big\|^{2},$$
where $y$ is the desired filter response defined over $u=\{1,2,\dots,W\}$, $v=\{1,2,\dots,H\}$, with $W$ and $H$ the width and height of the target image block, $\lambda=0.0001$ is the regularization coefficient, and $n$ is the number of training target images; training yields the coarse-localization correlation filter template
$$\hat{w}_C^{l}=\frac{\hat{y}^{*}\odot\hat{x}_C^{l}}{\sum_{k=1}^{d}\hat{x}_C^{k}\odot\big(\hat{x}_C^{k}\big)^{*}+\lambda},$$
where $\hat{\cdot}$ denotes the discrete Fourier transform (DFT) of each matrix, $(\cdot)^{*}$ denotes the complex conjugate, and $\odot$ denotes element-wise multiplication;
(103) At the training target image position, extract image blocks at 1 times the target size, and extract from the corresponding target image block the 31-dimensional HOG features with a cell size of 1; extract 11-dimensional CN features from an image block of the same size; fuse the extracted HOG and CN features by channel into the 42-dimensional target feature $x_{Fi}$;
(104) Train the fine-localization correlation filter template $w_F$ by minimizing the loss function
$$\varepsilon_F=\sum_{i=1}^{n}\Big\|\sum_{l=1}^{d}w_F^{l}\star x_{Fi}^{l}-y\Big\|^{2}+\lambda\sum_{l=1}^{d}\big\|w_F^{l}\big\|^{2};$$
training yields the fine-localization correlation filter template
$$\hat{w}_F^{l}=\frac{\hat{y}^{*}\odot\hat{x}_F^{l}}{\sum_{k=1}^{d}\hat{x}_F^{k}\odot\big(\hat{x}_F^{k}\big)^{*}+\lambda};$$
(105) At the training target image position, extract from the target-size image the 31-dimensional HOG features $x_{Si}$ of the corresponding target image block with a cell size of 4;
(106) Train the scale correlation filter template $w_S$ by minimizing the loss function
$$\varepsilon_S=\sum_{i=1}^{n}\Big\|\sum_{l=1}^{d}w_S^{l}\star x_{Si}^{l}-y\Big\|^{2}+\lambda\sum_{l=1}^{d}\big\|w_S^{l}\big\|^{2};$$
training yields the target scale correlation filter template
$$\hat{w}_S^{l}=\frac{\hat{y}^{*}\odot\hat{x}_S^{l}}{\sum_{k=1}^{d}\hat{x}_S^{k}\odot\big(\hat{x}_S^{k}\big)^{*}+\lambda}.$$
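The closed-form Fourier-domain training in steps (102), (104), and (106) can be sketched in NumPy as follows. This is a minimal illustration, not code from the patent: the function names, array shapes, and the Gaussian-label width `sigma` are assumptions.

```python
import numpy as np

def train_filter(x, y, lam=1e-4):
    """Ridge-regression correlation filter trained in the Fourier domain.

    x   -- (H, W, d) feature map of one training image block
    y   -- (H, W) desired Gaussian response, peaked at the target centre
    lam -- regularization coefficient (the patent uses lambda = 0.0001)

    Returns the per-channel filter
    w_hat^l = conj(y_hat) * x_hat^l / (sum_k x_hat^k * conj(x_hat^k) + lam).
    """
    x_hat = np.fft.fft2(x, axes=(0, 1))                        # per-channel DFT
    y_hat = np.fft.fft2(y)
    denom = np.sum(x_hat * np.conj(x_hat), axis=2).real + lam  # denominator shared by channels
    return (np.conj(y_hat)[..., None] * x_hat) / denom[..., None]

def gaussian_label(h, w, sigma=2.0):
    """Gaussian desired-response map centred on the block (sigma is assumed)."""
    gy, gx = np.mgrid[0:h, 0:w]
    return np.exp(-((gx - w // 2) ** 2 + (gy - h // 2) ** 2) / (2 * sigma ** 2))
```

Applying the trained filter back to its own training features reproduces the Gaussian label almost exactly, which is a quick sanity check on the closed form.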
3. The two-step correlation filtering target tracking method according to claim 2, characterized in that Step 2 specifically comprises the following steps:
(201) At the predicted target image position, expand the image to 1.6 times the target size and extract from the corresponding target image block the 31-dimensional HOG features $z_C$ with a cell size of 8;
(202) Use the coarse-localization correlation filter $\hat{w}_C$ to compute the target coarse-localization response map
$$R_C=\mathcal{F}^{-1}\Big(\sum_{l=1}^{d}\big(\hat{w}_C^{l}\big)^{*}\odot\hat{z}_C^{l}\Big),$$
where $d$ is the number of target feature channels and $\mathcal{F}^{-1}$ denotes the inverse DFT; the maximum of the coarse-localization response map gives the approximate target image position.
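Detection in step (202) is one element-wise multiplication per channel in the frequency domain followed by an inverse DFT. A minimal sketch (function and variable names are assumed, not from the patent):

```python
import numpy as np

def response_map(w_hat, z):
    """Correlate a frequency-domain filter w_hat (H, W, d) with features z (H, W, d).

    Returns the real-valued response map; its argmax gives the target position.
    """
    z_hat = np.fft.fft2(z, axes=(0, 1))
    return np.fft.ifft2(np.sum(np.conj(w_hat) * z_hat, axis=2)).real
```

When the filter is simply the DFT of a template, this reduces to circular cross-correlation, so correlating a template with itself peaks at zero shift.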
4. The two-step correlation filtering target tracking method according to claim 2, characterized in that Step 3 specifically comprises the following steps:
(301) At the approximate target image position, extract image blocks at 1 times the target size, and extract from the corresponding target image block the 31-dimensional HOG features with a cell size of 1; extract 11-dimensional CN features from a target image block of the same size; fuse the extracted HOG and CN features by channel into the 42-dimensional target feature $z_F$;
(302) Use the fine-localization correlation filter $\hat{w}_F$ to compute the target fine-localization response map
$$y_F=\mathcal{F}^{-1}\Big(\sum_{l=1}^{d}\big(\hat{w}_F^{l}\big)^{*}\odot\hat{z}_F^{l}\Big),$$
where $d$ is the number of target feature channels; the maximum of the fine-localization response map gives the precise target image position.
5. The two-step correlation filtering target tracking method according to claim 2, characterized in that Step 4 specifically comprises the following steps:
(401) At the precise target image position, construct the target image scale pool $sW\times sH$, $s\in\big\{a^{m}\mid m=-\tfrac{N-1}{2},\dots,\tfrac{N-1}{2}\big\}$, where $W$ and $H$ are the width and height of the target image block, $N=33$ is the number of target scales, and $a=1.02$ is the scale step;
(402) For all images of different scales in the target image scale pool, extract the 31-dimensional HOG features $z_S$ of the corresponding target image blocks with a cell size of 4;
(403) Use the target scale correlation filter $\hat{w}_S$ to compute the target scale response map
$$R_S=\mathcal{F}^{-1}\Big(\sum_{l=1}^{d}\big(\hat{w}_S^{l}\big)^{*}\odot\hat{z}_S^{l}\Big),$$
where $d$ is the number of target feature channels; the maximum of the scale response map gives the target scale.
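The scale pool of step (401) is a geometric ladder of N = 33 candidate sizes centred on the current target size. A small sketch (variable names assumed):

```python
import numpy as np

def scale_pool(w, h, n_scales=33, a=1.02):
    """Candidate (width, height) pairs s*W x s*H with s = a**m,
    m = -(N-1)/2 ... (N-1)/2, symmetric around the current size."""
    exps = np.arange(n_scales) - (n_scales - 1) / 2   # -16 ... 16 for N = 33
    scales = a ** exps
    return scales, [(s * w, s * h) for s in scales]
```

Because the exponents are symmetric around zero, the middle candidate (`scales[16]` for N = 33) is exactly 1.0, i.e. the unchanged target size.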
6. The two-step correlation filtering target tracking method according to claim 1, characterized in that Step 5 is specifically:
Compute the target tracking confidence indicator from the target fine-localization response map:
$$AP=\frac{\big(F_{\max}-F_{\mathrm{mean}}\big)^{2}}{\frac{1}{\omega k}\sum_{w=1}^{\omega}\sum_{h=1}^{k}\big(y_F(w,h)-F_{\mathrm{mean}}\big)^{2}},$$
where $F_{\max}$ and $F_{\mathrm{mean}}$ are the maximum and the mean of the target fine-localization response map, respectively, $y_F(w,h)$ is the target response value at each location of the fine-localization response map $y_F$, and $\omega$ and $k$ are the width and height of the target fine-localization response map.
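The confidence indicator of claim 6 belongs to the same family as the peak-to-sidelobe ratio: it compares the peak of the fine-localization response map against the spread of the whole map. The exact formula in the original is an image not reproduced in this extraction, so the sketch below is a reconstruction under that assumption, using only the quantities the claim names (maximum, mean, per-location responses, map size):

```python
import numpy as np

def confidence(resp):
    """Peak-to-fluctuation confidence of a response map:
    (F_max - F_mean)^2 over the mean squared deviation from F_mean."""
    f_max, f_mean = resp.max(), resp.mean()
    return (f_max - f_mean) ** 2 / np.mean((resp - f_mean) ** 2)
```

A sharp, isolated peak (a confident detection) scores far higher than a flat or noisy map, which is what occlusion or drift typically produces.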
7. The two-step correlation filtering target tracking method according to claim 2, characterized in that Step 6 specifically comprises the following steps:
(601) At the precise target image position, obtain the target features according to the methods of steps (101), (103), and (105);
(602) Judge whether the target tracking confidence indicator value $AP$ satisfies the confidence threshold $shd$ condition: if $AP$ is greater than the confidence threshold, update each filter template according to
$$a^{l}=(1-\eta)\,a_{old}^{l}+\eta\,\hat{y}^{*}\odot\hat{x}^{l},\qquad b=(1-\eta)\,b_{old}+\eta\sum_{k=1}^{d}\hat{x}^{k}\odot\big(\hat{x}^{k}\big)^{*};$$
if $AP$ is less than the confidence threshold, do not update. Here $a$ and $b$ are the numerator and denominator variables of the respective filter template, $a_{old}$ and $b_{old}$ are the corresponding variables before the update, $\eta$ is the template learning rate, and $x$ is the target feature extracted for the respective filter template.
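The adaptive update of step (602) interpolates the filter's numerator and denominator toward the newly trained values only when the confidence clears the threshold. A sketch with an assumed learning rate `eta` (the excerpt above does not give its value):

```python
import numpy as np

def update_template(a_new, b_new, a_old, b_old, ap, shd, eta=0.025):
    """Linearly blend numerator a and denominator b of a correlation filter
    when tracking confidence ap exceeds threshold shd; otherwise keep the old
    template unchanged (skipping the update saves computation on bad frames)."""
    if ap > shd:
        a = (1 - eta) * a_old + eta * a_new
        b = (1 - eta) * b_old + eta * b_new
        return a, b
    return a_old, b_old
```

Skipping low-confidence frames both avoids contaminating the template during occlusion and reduces per-frame work, which is the speed-up the description claims.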
8. The two-step correlation filtering target tracking method according to claim 1, characterized in that Step 7 is specifically:
Update the target tracking confidence threshold $shd$ according to the target tracking confidence indicator value $AP$: $shd=(1-\kappa)\,shd_{old}+\kappa\,AP$, where $\kappa=0.05$ is the threshold learning rate and $shd_{old}$ is the target tracking confidence threshold before the update; the filter templates are trained on the first frame of the target image, with initial $shd=0$.
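The threshold update of claim 8 is a plain exponential moving average of the confidence indicator, starting from shd = 0 on the first frame:

```python
def update_threshold(shd_old, ap, kappa=0.05):
    """shd = (1 - kappa) * shd_old + kappa * AP, with kappa = 0.05 per the claim."""
    return (1 - kappa) * shd_old + kappa * ap
```

Because kappa is small, a single bad frame barely moves the threshold, while a sustained change in confidence gradually shifts it toward the new level.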
CN201810471517.7A 2018-05-17 2018-05-17 Double-step correlation filtering target tracking method Active CN108734177B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810471517.7A CN108734177B (en) 2018-05-17 2018-05-17 Double-step correlation filtering target tracking method


Publications (2)

Publication Number Publication Date
CN108734177A (en) 2018-11-02
CN108734177B (en) 2021-06-29

Family

ID=63938327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810471517.7A Active CN108734177B (en) 2018-05-17 2018-05-17 Double-step correlation filtering target tracking method

Country Status (1)

Country Link
CN (1) CN108734177B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005008335A2 (en) * 2003-07-11 2005-01-27 Carl Zeiss Sms Gmbh Method for analysing objects in microlithography
CN107341820A * 2017-07-03 2017-11-10 Zhengzhou University of Light Industry Abrupt-motion target tracking method fusing Cuckoo search and KCF
CN107578423A * 2017-09-15 2018-01-12 Hangzhou Dianzi University Robust correlation filtering tracking method with multi-feature hierarchical fusion


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087337A * 2018-11-07 2018-12-25 Shandong University Long-term target tracking method and system based on hierarchical convolution features
CN109087337B * 2018-11-07 2020-07-14 Shandong University Long-term target tracking method and system based on hierarchical convolution features
CN109919982A * 2019-03-12 2019-06-21 Harbin Engineering University Improved multi-scale target tracking method based on particle filter
CN109919982B * 2019-03-12 2022-05-20 Harbin Engineering University Improved multi-scale target tracking method based on particle filter
CN110033472A * 2019-03-15 2019-07-19 University of Electronic Science and Technology of China Stable target tracking method in complex infrared ground environments
CN110033472B * 2019-03-15 2021-05-11 University of Electronic Science and Technology of China Stable target tracking method in complex infrared ground environments
CN110097575A * 2019-04-28 2019-08-06 University of Electronic Science and Technology of China Target tracking method based on local features and scale pool
CN110097575B * 2019-04-28 2022-07-01 University of Electronic Science and Technology of China Target tracking method based on local features and scale pool
CN112598011A * 2020-12-14 2021-04-02 Shenyang University of Chemical Technology Correlation filtering tracking method based on multi-feature fusion
CN112598011B * 2020-12-14 2024-07-02 Shenyang University of Chemical Technology Correlation filtering tracking method based on multi-feature fusion


Similar Documents

Publication Publication Date Title
CN108734177A (en) Two-step correlation filtering method for tracking target
CN111402209B (en) U-Net-based high-speed railway steel rail damage detection method
CN107424171A Block-based anti-occlusion target tracking method
CN104537841B (en) Unlicensed vehicle violation detection method and detection system thereof
EP3819859B1 (en) Sky filter method for panoramic images and portable terminal
CN108921089A Lane line detection method, device, system and storage medium
CN108986050A Image and video enhancement method based on multi-branch convolutional neural networks
CN112036254B (en) Moving vehicle foreground detection method based on video image
CN105550678A Human body motion feature extraction method based on global salient edge areas
CN103942560B High-resolution video vehicle detection method for intelligent traffic monitoring systems
Qu et al. A pedestrian detection method based on yolov3 model and image enhanced by retinex
CN107123131A Moving target detection method based on deep learning
CN108198201A Multi-object tracking method, terminal device and storage medium
CN109345474A Blind removal of image motion blur based on gradient field and deep learning
CN104134222A (en) Traffic flow monitoring image detecting and tracking system and method based on multi-feature fusion
CN106803248B Blur evaluation method for blurred license plate images
CN107944403A Method and device for detecting pedestrian attributes in images
CN109712247B (en) Live-action training system based on mixed reality technology
CN106780727B (en) Vehicle head detection model reconstruction method and device
CN109583295A Automatic detection method for switch machine notches based on convolutional neural networks
Li et al. Deep learning-based object tracking in satellite videos: A comprehensive survey with a new dataset
CN104700405A (en) Foreground detection method and system
CN115761563A (en) River surface flow velocity calculation method and system based on optical flow measurement and calculation
CN113435452A (en) Electrical equipment nameplate text detection method based on improved CTPN algorithm
CN116883868A (en) Unmanned aerial vehicle intelligent cruising detection method based on adaptive image defogging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant