CN109448024A - Visual tracking method and system for constructing a constrained correlation filter using depth data - Google Patents

Visual tracking method and system for constructing a constrained correlation filter using depth data

Info

Publication number
CN109448024A
Authority
CN
China
Prior art keywords
feature
target
correlation filter
depth
visual tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811313969.9A
Other languages
Chinese (zh)
Other versions
CN109448024B (en)
Inventor
黄磊
李冠群
张沛昌
孙维泽
李强
王波
王一波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN201811313969.9A
Publication of CN109448024A
Application granted
Publication of CN109448024B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a visual tracking method and system that construct a constrained correlation filter using depth data. The visual tracking method comprises the steps of: S1, performing depth-based target segmentation on the depth data at the estimated target location in the current depth image frame to obtain target spatial information; S2, extracting a feature vector from the color data at the estimated target location in the current color image frame; S3, constructing a Gaussian regression label function from the target spatial information; S4, constructing a spatially constrained correlation filter using the target spatial information, the feature vector extracted from the current color image frame, and the Gaussian regression label function; S5, performing feature extraction on the color data of the next color image frame within the search window to obtain a feature vector, and computing the target correlation response from the feature vector extracted from the next color image frame and the spatially constrained correlation filter; S6, determining the target position of the next color image frame from the location of the maximum response in the target correlation response.

Description

Visual tracking method and system for constructing a constrained correlation filter using depth data
Technical field
The present invention relates to the technical field of image processing, and more particularly to a visual tracking method and system that construct a constrained correlation filter using depth data.
Background technique
Visual tracking is widely used in fields such as artificial intelligence, transportation, security, robotics, and intelligent warehousing, and is an important means of achieving intelligent and unmanned systems. Visual tracking methods provide information such as the position, velocity, and trajectory of a target; with this information, higher-level behavior recognition can be performed, which in turn better serves production and daily life.
In recent years, correlation filtering has been widely applied in the visual tracking field, since it allows the target model to be learned and detected efficiently in the frequency domain. However, correlation filtering methods are severely limited by the boundary effect and by the fixed target shape, so their tracking performance cannot be improved further; this seriously restricts the application scenarios of visual tracking and poses a major obstacle to its practical use.
The correlation filtering method introduces a boundary effect: only the response of the correlation filter near the center of the search window is accurate, which reduces the search range. Moreover, the correlation filter uses a fixed-size filter template and requires the target to be a rectangle of fixed size; a non-rectangular target introduces background information that reduces the discriminative power, and the trained filter must be the same size as the template to be detected, which limits the range of applications of the algorithm.
Summary of the invention
The technical problem to be solved by the present invention is to provide a visual tracking method and system that construct a constrained correlation filter using depth data, which can adapt to targets of arbitrary shape and overcome the boundary effect.
To solve the above technical problem, the present invention adopts the following technical solution:
A visual tracking method that constructs a constrained correlation filter using depth data, the visual tracking method comprising the following steps:
S1: performing depth-based target segmentation on the depth data at the estimated target location in the current depth image frame to obtain target spatial information;
S2: performing feature extraction on the color data at the estimated target location in the current color image frame to obtain a feature vector;
S3: constructing a Gaussian regression label function from the target spatial information;
S4: constructing a spatially constrained correlation filter using the target spatial information, the feature vector extracted from the current color image frame, and the Gaussian regression label function;
S5: performing feature extraction on the color data of the next color image frame within the search window to obtain a feature vector, and computing the target correlation response from the feature vector extracted from the next color image frame and the spatially constrained correlation filter;
S6: determining the target position of the next color image frame from the location of the maximum response in the target correlation response.
Preferably, before step S6 the method further comprises:
computing the contribution of each feature channel from the feature vector extracted from the current color image frame;
computing the uniqueness of each feature channel from the correlation response of each feature channel;
computing the weight of each feature channel from the contribution and the uniqueness of the feature channel; and
weighting the correlation responses of the feature channels by their weights to obtain the target correlation response.
Preferably, after step S6 the method further comprises:
updating the spatially constrained correlation filter using a learning method; and
determining the target position of the next color image frame from the updated spatially constrained correlation filter using the method of steps S5-S6.
Preferably, the target spatial information includes a shape and a depth distribution histogram.
Preferably, the feature vector includes a histogram of oriented gradients feature, a color feature, and a gray feature.
A visual tracking system that constructs a constrained correlation filter using depth data, the visual tracking system comprising:
a spatial information acquisition module, configured to perform depth-based target segmentation on the depth data at the estimated target location in the current depth image frame to obtain target spatial information;
a target feature extraction module, configured to perform feature extraction on the color data at the estimated target location in the current color image frame to obtain a feature vector;
a function construction module, configured to construct a Gaussian regression label function from the target spatial information;
a filter construction module, configured to construct a spatially constrained correlation filter using the target spatial information, the feature vector extracted from the current color image frame, and the Gaussian regression label function;
a response computation module, configured to perform feature extraction on the color data of the next color image frame within the search window to obtain a feature vector, and to compute the target correlation response from the feature vector extracted from the next color image frame and the spatially constrained correlation filter; and
a target position determination module, configured to determine the target position of the current color image frame from the location of the maximum response in the target correlation response.
Preferably, the visual tracking system further comprises:
a contribution computation module, configured to compute the contribution of each feature channel from the feature vector;
a uniqueness computation module, configured to compute the uniqueness of each feature channel from the correlation response of each feature channel; and
a weight computation module, configured to compute the weight of each feature channel from the contribution and the uniqueness of the feature channel.
Preferably, the visual tracking system further comprises a filter update module, configured to update the spatially constrained correlation filter using a learning method.
The beneficial technical effects of the present invention are as follows. The present invention performs target segmentation on the depth data to obtain spatial information such as the shape and depth distribution of the target, and then uses the target shape to construct a correlation filter with a spatial constraint. Because the correlation filter contains spatial information such as the target shape, the filter can adapt to targets of arbitrary shape. At the same time, the spatial information of the target breaks the circulant assumption exploited by the correlation filter, thereby overcoming the boundary effect caused by that assumption, so that the computed correlation response is accurate over the entire search box, which enlarges the search range of visual tracking.
Detailed description of the invention
Fig. 1 is a flow diagram of the visual tracking method that constructs a constrained correlation filter using depth data in one embodiment of the present invention;
Fig. 2 is a structural diagram of the visual tracking system that constructs a constrained correlation filter using depth data in one embodiment of the present invention.
Specific embodiment
To make the objects, technical solutions, and advantages of the present invention clearer to those skilled in the art, the present invention is further elaborated below in conjunction with the accompanying drawings and embodiments.
As shown in Fig. 1, in one embodiment of the present invention, the visual tracking method that constructs a constrained correlation filter using depth data comprises:
S1: performing depth-based target segmentation on the depth data at the estimated target location in the current depth image frame to obtain target spatial information.
Specifically, depth-based target segmentation is performed on the depth data at the estimated target location of the current depth image frame to obtain spatial information such as the shape and depth distribution of the target. First, the depth distribution histogram h(d_j) of the depth data is computed with a bin width of 5 cm; if the current frame is not the first frame, the standard deviation δ of the target depth distribution is used as the bin width instead. h(d_j) has J bins indexed by j; the mean depth of the j-th bin is d_j, and h(d_j) is the number of depth values falling within the range of the j-th bin. One-dimensional non-maximum suppression of length 3 is applied to the depth distribution histogram, and the local maxima are selected as the starting points of K-Means clustering. The number of local maxima is used as the number of classes K of the K-Means clustering, which reduces the number of iterations required for convergence. If the histogram has more than 50 bins and there is only one local maximum, an additional cluster starting point is added at the end of the histogram as the starting point of the background cluster, so that the standard deviation of the target depth distribution is not dominated by the maximum value. Clustering starts from these initial points and the cluster centers are updated iteratively. After the clustering converges, the class with the smallest depth range is taken, according to the clustering result, as the plane on which the target is distributed; all depth values on the depth map that fall within this depth range are taken out, the image positions of these depth values are connected, and the connected region with the largest area is taken as the target shape m. All depth values that fall within the target shape are then used to compute the mean μ and standard deviation δ of the target depth distribution.
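The following Python sketch illustrates this depth-segmentation step. It is a minimal illustration under stated assumptions, not the patent's implementation: the 5 cm bin width, the length-3 non-maximum suppression, the extra background seed, the smallest-depth-range rule, and the largest-connected-region rule follow the text, while the function name `segment_target_depth` and the concrete K-Means loop are illustrative choices.

```python
import numpy as np
from scipy.ndimage import label

def segment_target_depth(depth_patch, bin_width=0.05, sigma=None):
    """Depth-histogram segmentation of the target region (illustrative sketch).

    depth_patch : (H, W) depth values in meters at the estimated target location.
    bin_width   : 5 cm bins on the first frame; later frames pass the previous
                  standard deviation of the target depth as `sigma`.
    Returns the binary target mask m and the mean / standard deviation of the
    target depth distribution.
    """
    step = sigma if sigma is not None else bin_width
    d = depth_patch.ravel()
    edges = np.arange(d.min(), d.max() + step, step)
    hist, _ = np.histogram(d, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])

    # length-3 1-D non-maximum suppression: keep bins that beat both neighbours
    padded = np.pad(hist, 1)
    is_max = (hist >= padded[:-2]) & (hist >= padded[2:]) & (hist > 0)
    seeds = centers[is_max]
    if len(hist) > 50 and is_max.sum() == 1:
        seeds = np.append(seeds, centers[-1])   # extra seed for the background

    # plain 1-D K-Means on the depth values, seeded at the histogram maxima
    c = seeds.copy()
    for _ in range(20):
        assign = np.argmin(np.abs(d[:, None] - c[None, :]), axis=1)
        c = np.array([d[assign == k].mean() if np.any(assign == k) else c[k]
                      for k in range(len(c))])

    # the target cluster is the one with the smallest depth range
    ranges = [np.ptp(d[assign == k]) if np.any(assign == k) else np.inf
              for k in range(len(c))]
    k_target = int(np.argmin(ranges))
    assign_map = np.argmin(np.abs(depth_patch[..., None] - c[None, None, :]), axis=-1)
    in_range = assign_map == k_target

    # keep only the largest connected region as the target shape m
    labelled, n_regions = label(in_range)
    if n_regions == 0:
        return in_range, float(d.mean()), float(d.std())
    sizes = np.bincount(labelled.ravel())[1:]
    m = labelled == (1 + int(np.argmax(sizes)))
    target_d = depth_patch[m]
    return m, float(target_d.mean()), float(target_d.std())
```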
S2: performing feature extraction on the color data at the estimated target location in the current color image frame to obtain a feature vector.
Specifically, Histogram of Oriented Gradients (HOG) features, Color Names (CN) features, and gray features are extracted from the color data at the estimated target location in the current color image frame; the data of these three kinds of features constitute the feature vector f.
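A minimal sketch of this feature-extraction step is given below, assuming HOG and grayscale channels only; the Color Names (CN) feature requires a learned 11-color lookup table that is not part of this text, so it is omitted here. The function name, the cell size, and the use of scikit-image are illustrative assumptions, not taken from the patent.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog

def extract_features(patch_rgb, cell=4):
    """Stack per-cell HOG and gray channels into a feature map f (CN omitted)."""
    gray = rgb2gray(patch_rgb)                      # (H, W) in [0, 1]
    # per-cell HOG kept as a spatial map of 9 orientation bins
    hog_map = hog(gray, orientations=9,
                  pixels_per_cell=(cell, cell), cells_per_block=(1, 1),
                  feature_vector=False)             # (H//cell, W//cell, 1, 1, 9)
    hog_map = hog_map[:, :, 0, 0, :]
    # down-sample the gray channel onto the same cell grid by cell averaging
    h_c, w_c = hog_map.shape[:2]
    gray_cells = gray[:h_c * cell, :w_c * cell].reshape(
        h_c, cell, w_c, cell).mean(axis=(1, 3))
    return np.dstack([hog_map, gray_cells])         # (H//cell, W//cell, 10)
```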
S3: constructing a Gaussian regression label function from the target spatial information.
Specifically, with the estimated template center as the origin, a Gaussian regression label function that decays gradually from the target center toward the edge is constructed, where m ≤ W, n ≤ H, and W and H are the width and height of the search window, respectively.
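As an illustration, a typical form of such a regression label is a two-dimensional Gaussian centered on the estimated template center (m_0, n_0); the patent's own expression is not reproduced in this text, so the formula below is an assumption rather than the patent's exact definition.

```latex
y(m, n) = \exp\!\left( -\frac{(m - m_0)^2 + (n - n_0)^2}{2\sigma^2} \right),
\qquad 1 \le m \le W, \; 1 \le n \le H,
```

where σ controls how quickly the label decays from the target center toward the edge of the search window.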
S4: constructing a spatially constrained correlation filter using the target spatial information, the feature vector extracted from the current color image frame, and the Gaussian regression label function.
Specifically, the constrained correlation filter is constructed from the target spatial information, the feature vector extracted from the current color image frame, and the Gaussian regression label function, and the filter model is then solved. The target shape serves as a constraint condition that prevents the filter from being contaminated by the non-target background inside the rectangular box. Solving the correlation filter is in fact the process of minimizing the squared L2 norm of the difference between the correlation response and the Gaussian regression label function. The unconstrained correlation filter minimizes this objective together with a regularization term weighted by λ, which prevents over-fitting and is usually set to 0.02; converted into the frequency domain, it has a closed-form solution, where the hat denotes the discrete Fourier transform and the superscript * denotes complex conjugation. Starting from the unconstrained correlation filter, the constrained correlation filter is built with the augmented Lagrangian method and expressed in the frequency domain, with the constraint condition h_c - h ⊙ m ≡ 0 and h_m = h ⊙ m, where a Lagrange multiplier enforces the constraint. h_c and h are solved by iterative approximation using their respective update expressions, and the Lagrange multiplier is updated accordingly. When the algorithm converges, h is the required filter.
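The sketch below shows one way to carry out such an iterative augmented-Lagrangian (ADMM-style) solution for a single feature channel. It is a hedged illustration, not the patent's implementation: the closed-form updates follow the standard masked-correlation-filter derivation, and the penalty parameter `mu`, its growth factor `beta`, and the iteration count are assumptions rather than values taken from the patent.

```python
import numpy as np

def train_constrained_filter(f, g, m, lam=0.02, mu=5.0, beta=3.0, n_iter=4):
    """ADMM-style solver for a spatially (mask-) constrained correlation filter.

    f : (H, W) one feature channel of the training patch (real-valued)
    g : (H, W) Gaussian regression label
    m : (H, W) binary target mask obtained from the depth segmentation
    lam is the regularization weight (0.02 in the text); mu, beta, and n_iter
    are assumed ADMM parameters.
    """
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    D = f.size

    h = np.zeros_like(f, dtype=float)        # masked (constrained) filter
    L = np.zeros_like(F)                     # Lagrange multiplier, frequency domain

    for _ in range(n_iter):
        Hm = np.fft.fft2(m * h)
        # closed-form update of the auxiliary (unconstrained) filter h_c
        Hc = (F * np.conj(G) + mu * Hm - L) / (np.conj(F) * F + mu)
        # update of the masked filter h in the spatial domain, then re-mask
        h = m * np.real(np.fft.ifft2(L + mu * Hc)) / (lam / (2 * D) + mu)
        # dual update of the Lagrange multiplier and penalty growth
        L = L + mu * (Hc - np.fft.fft2(m * h))
        mu = beta * mu
    return m * h
```

For multi-channel features the same loop can be run per channel; the returned filter is already zero outside the target shape, which is what allows it to adapt to non-rectangular targets.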
S5: performing feature extraction on the color data of the next color image frame within the search window to obtain a feature vector, and computing the target correlation response from the feature vector extracted from the next color image frame and the spatially constrained correlation filter.
Specifically, Histogram of Oriented Gradients (HOG) features, Color Names (CN) features, and gray features are extracted from the color image in the search box; the data of these three kinds of features constitute the target feature vector f. A correlation operation is then performed between this feature vector f and the trained filter h; the correlation response is computed as g(h) = Σ_{d=1}^{N_c} f_d ⋆ h, i.e. the sum over feature channels of the correlation between each feature channel f_d and the filter, where N_c is the number of feature channels.
S6: determining the target position of the next color image frame from the location of the maximum response in the target correlation response.
Specifically, after the image has been processed by the correlation filter, every feature point in the image obtains a corresponding response value. If a feature point belongs to the target of interest, its response value is large; if it belongs to the background, its response value is small. Therefore, in general, the response corresponding to the center of the target is the maximum response. Based on this principle, once the target correlation response g(h) has been obtained, the position corresponding to the maximum value of g(h) is the target center; combined with the shape of the target, the target position in the next color image frame can be determined.
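A short sketch of this detection step follows. It assumes a single filter h correlated with every feature channel (per-channel filters are also handled) and uniform channel weights unless per-channel weights, as computed in the channel-weighting steps described further below, are supplied; names and shapes are illustrative.

```python
import numpy as np

def detect(features, h, weights=None):
    """Correlate each feature channel with the filter, sum the responses,
    and return the peak location as the new target center.

    features : (Nc, H, W) feature channels of the search window
    h        : (H, W) shared filter, or (Nc, H, W) per-channel filters
    weights  : optional per-channel weights w_d; uniform if None
    """
    n_channels = features.shape[0]
    if weights is None:
        weights = np.ones(n_channels) / n_channels
    g = np.zeros(features.shape[1:])
    for d in range(n_channels):
        h_d = h if h.ndim == 2 else h[d]
        # circular correlation via the convolution theorem (frequency domain)
        g += weights[d] * np.real(
            np.fft.ifft2(np.fft.fft2(features[d]) * np.conj(np.fft.fft2(h_d))))
    peak = np.unravel_index(np.argmax(g), g.shape)
    return g, peak
```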
The visual tracking method that constructs a constrained correlation filter using depth data provided in this embodiment of the present invention performs target segmentation on the depth data to obtain spatial information such as the shape and depth distribution of the target, and then uses the target shape to construct a correlation filter with a spatial constraint. Because the correlation filter contains spatial information such as the target shape, the filter can adapt to targets of arbitrary shape. At the same time, the spatial information of the target breaks the circulant assumption exploited by the correlation filter, thereby overcoming the boundary effect caused by that assumption, so that the computed correlation response is accurate over the entire search box, which enlarges the search range of visual tracking.
Based on the visual tracking method that constructs a constrained correlation filter using depth data provided in the above embodiment, before step S6 the method further comprises the following steps (a code sketch of these steps is given below).
Computing the contribution of each feature channel from the feature vector. Specifically, using the feature vector f of the current color image frame extracted in step S2, the correlation response ρ_d = f_d ⋆ h of each feature channel is computed, from which the contribution of feature channel d is derived.
Computing the uniqueness of each feature channel from the correlation response of each feature channel. Specifically, the correlation response of feature channel d is ρ_d = f_d ⋆ h, and the uniqueness of the feature channel is obtained from the maximum value and the second-largest value of ρ_d after non-maximum suppression with a kernel of size 3 × 3.
Computing the weight of each feature channel from the contribution and the uniqueness of the feature channel. Specifically, the weight w_d of the feature channel is computed from its contribution and its uniqueness.
Weighting the correlation responses of the feature channels by their weights to obtain the target correlation response. Specifically, the correlation responses are weighted by the channel weights w_d. When computing the correlation response g(h), the correlation is evaluated in the frequency domain according to the convolution theorem, which reduces the computational complexity from O(n²) to O(n log(n)).
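The sketch below illustrates the contribution/uniqueness/weight computation. The patent's exact formulas are not reproduced in this text, so the concrete definitions used here are assumptions: contribution taken as the peak of ρ_d, uniqueness as one minus the ratio of the second-largest to the largest local maximum after 3 × 3 non-maximum suppression, and the weight as their normalized product.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def channel_weights(responses, eps=1e-8):
    """Per-channel weights from contribution and uniqueness (assumed formulas).

    responses : (Nc, H, W) per-channel correlation responses rho_d = f_d * h.
    """
    n_channels = responses.shape[0]
    contribution = np.empty(n_channels)
    uniqueness = np.empty(n_channels)
    for d, rho in enumerate(responses):
        # 3 x 3 non-maximum suppression: keep only local maxima of rho_d
        peaks = np.where(rho == maximum_filter(rho, size=3), rho, 0.0)
        top = np.sort(peaks.ravel())[::-1]
        rho_max1, rho_max2 = top[0], top[1]
        contribution[d] = rho_max1
        uniqueness[d] = 1.0 - rho_max2 / (rho_max1 + eps)
    w = contribution * uniqueness
    return w / (w.sum() + eps)       # normalized channel weights w_d
```

The returned weights can be passed directly to the `detect` sketch above as its `weights` argument.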
The visual tracking method that constructs a constrained correlation filter using depth data provided in this embodiment of the present invention computes the uniqueness and the contribution of each feature channel and, from them, its weight. The weight represents the discriminative power of a feature channel; by weighting each feature channel according to its discriminative power, a strongly discriminative feature channel is prevented from being drowned out by the other feature channels merely because its values are small. The introduction of the weights ensures that the discriminative power of every feature channel can be exploited, guarantees the accuracy of the correlation filter, enables the target to be tracked accurately and effectively, and improves the accuracy of visual tracking.
Based on the visual tracking method that constructs a constrained correlation filter using depth data provided in any of the above embodiments, after step S6 the method further comprises:
Updating the spatially constrained correlation filter using a learning method. Specifically, the filter is updated as h = (1 - η)h_{t-1} + ηh, where η takes the empirical value 0.02. It should be noted that η may also take other values in practice.
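As a concrete illustration, this linear update can be written as the following one-liner; `eta` defaults to the empirical value 0.02 from the text, and the function name is illustrative.

```python
def update_filter(h_prev, h_new, eta=0.02):
    # h_t = (1 - eta) * h_{t-1} + eta * h  (exponential moving average)
    return (1.0 - eta) * h_prev + eta * h_new
```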
Determining the target position of the next color image frame from the updated spatially constrained correlation filter using the method of steps S5-S6. Specifically, after the spatially constrained correlation filter has been updated, the target position in the color image frame following the next frame is determined from the updated spatially constrained correlation filter using the method of steps S5-S6 in any of the above embodiments. This specifically includes: performing feature extraction on the color data, within the search window, of the color image frame following the next frame to obtain a feature vector, and computing the target correlation response from that feature vector and the spatially constrained correlation filter; and determining the target position of the color image frame following the next frame from the location of the maximum response in the target correlation response.
The visual tracking method that constructs a constrained correlation filter using depth data provided in this embodiment of the present invention continuously updates the correlation filter and then, using the updated correlation filter, tracks the target position in all subsequent color image frames in turn until the entire visual tracking process is completed. This ensures the accuracy of the correlation filter, enables the target to be tracked accurately and effectively, and improves the accuracy of visual tracking.
As shown in Fig. 2, in one embodiment of the present invention, the visual tracking system that constructs a constrained correlation filter using depth data comprises a spatial information acquisition module 10, a target feature extraction module 20, a function construction module 30, a filter construction module 40, a response computation module 50, and a target position determination module 60.
The spatial information acquisition module 10 is configured to perform depth-based target segmentation on the depth data at the estimated target location in the current depth image frame to obtain target spatial information.
Specifically, the spatial information acquisition module 10 performs depth-based target segmentation on the depth data at the estimated target location of the current depth image frame to obtain spatial information such as the shape and depth distribution of the target. First, the depth distribution histogram h(d_j) of the depth data is computed with a bin width of 5 cm; if the current frame is not the first frame, the standard deviation δ of the target depth distribution is used as the bin width instead. h(d_j) has J bins indexed by j; the mean depth of the j-th bin is d_j, and h(d_j) is the number of depth values falling within the range of the j-th bin. One-dimensional non-maximum suppression of length 3 is applied to the depth distribution histogram, and the local maxima are selected as the starting points of K-Means clustering. The number of local maxima is used as the number of classes K of the K-Means clustering, which reduces the number of iterations required for convergence. If the histogram has more than 50 bins and there is only one local maximum, an additional cluster starting point is added at the end of the histogram as the starting point of the background cluster, so that the standard deviation of the target depth distribution is not dominated by the maximum value. Clustering starts from these initial points and the cluster centers are updated iteratively. After the clustering converges, the class with the smallest depth range is taken, according to the clustering result, as the plane on which the target is distributed; all depth values on the depth map that fall within this depth range are taken out, the image positions of these depth values are connected, and the connected region with the largest area is taken as the target shape m. All depth values that fall within the target shape are then used to compute the mean μ and standard deviation δ of the target depth distribution.
The target feature extraction module 20 is configured to perform feature extraction on the color data at the estimated target location in the current color image frame to obtain a feature vector.
Specifically, the target feature extraction module 20 extracts Histogram of Oriented Gradients (HOG) features, Color Names (CN) features, and gray features from the color data at the estimated target location in the current color image frame; the data of these three kinds of features constitute the feature vector f.
The function construction module 30 is configured to construct a Gaussian regression label function from the target spatial information.
Specifically, with the estimated template center as the origin, a Gaussian regression label function that decays gradually from the target center toward the edge is constructed, where m ≤ W, n ≤ H, and W and H are the width and height of the search window, respectively.
The filter construction module 40 is configured to construct a spatially constrained correlation filter using the target spatial information, the feature vector extracted from the current color image frame, and the Gaussian regression label function.
Specifically, the filter construction module 40 constructs the constrained correlation filter from the target spatial information, the feature vector extracted from the current color image frame, and the Gaussian regression label function, and then solves the filter model. The target shape serves as a constraint condition that prevents the filter from being contaminated by the non-target background inside the rectangular box. Solving the correlation filter is in fact the process of minimizing the squared L2 norm of the difference between the correlation response and the Gaussian regression label function. The unconstrained correlation filter minimizes this objective together with a regularization term weighted by λ, which prevents over-fitting and is usually set to 0.02; converted into the frequency domain, it has a closed-form solution, where the hat denotes the discrete Fourier transform and the superscript * denotes complex conjugation. Starting from the unconstrained correlation filter, the constrained correlation filter is built with the augmented Lagrangian method and expressed in the frequency domain, with the constraint condition h_c - h ⊙ m ≡ 0 and h_m = h ⊙ m, where a Lagrange multiplier enforces the constraint. h_c and h are solved by iterative approximation using their respective update expressions, and the Lagrange multiplier is updated accordingly. When the algorithm converges, h is the required filter.
The response computation module 50 is configured to perform feature extraction on the color data of the next color image frame within the search window to obtain a feature vector, and to compute the target correlation response from the feature vector extracted from the next color image frame and the spatially constrained correlation filter.
Specifically, the response computation module 50 extracts Histogram of Oriented Gradients (HOG) features, Color Names (CN) features, and gray features from the color image in the search box; the data of these three kinds of features constitute the target feature vector f. A correlation operation is then performed between this feature vector f and the trained filter h; the correlation response is computed as g(h) = Σ_{d=1}^{N_c} f_d ⋆ h, where N_c is the number of feature channels.
The target position determination module 60 is configured to determine the target position of the current color image frame from the location of the maximum response in the target correlation response.
Specifically, after the image has been processed by the correlation filter, every feature point in the image obtains a corresponding response value. If a feature point belongs to the target of interest, its response value is large; if it belongs to the background, its response value is small. Therefore, in general, the response corresponding to the center of the target is the maximum response. Based on this principle, once the target correlation response g(h) has been obtained, the position corresponding to the maximum value of g(h) is the target center; combined with the shape of the target, the target position in the next color image frame can be determined.
The visual tracking system that constructs a constrained correlation filter using depth data provided in this embodiment of the present invention performs target segmentation on the depth data to obtain spatial information such as the shape and depth distribution of the target, and then uses the target shape to construct a correlation filter with a spatial constraint. Because the correlation filter contains spatial information such as the target shape, the filter can adapt to targets of arbitrary shape. At the same time, the spatial information of the target breaks the circulant assumption exploited by the correlation filter, thereby overcoming the boundary effect caused by that assumption, so that the computed correlation response is accurate over the entire search box, which enlarges the search range of visual tracking.
Based on the visual tracking system that constructs a constrained correlation filter using depth data provided in the above embodiment, the visual tracking system further comprises a contribution computation module, a uniqueness computation module, and a weight computation module.
The contribution computation module is configured to compute the contribution of each feature channel from the feature vector. Specifically, using the feature vector f of the current color image frame extracted by the target feature extraction module 20, the correlation response ρ_d = f_d ⋆ h of each feature channel is computed, from which the contribution of feature channel d is derived.
The uniqueness computation module is configured to compute the uniqueness of each feature channel from the correlation response of each feature channel. Specifically, the correlation response of feature channel d is ρ_d = f_d ⋆ h, and the uniqueness of the feature channel is obtained from the maximum value and the second-largest value of ρ_d after non-maximum suppression with a kernel of size 3 × 3.
The weight computation module computes the weight of each feature channel from the contribution and the uniqueness of the feature channel. Specifically, the weight w_d of the feature channel is computed from its contribution and its uniqueness.
The response computation module 50 weights the correlation responses of the feature channels by their weights to obtain the target correlation response. Specifically, the correlation responses are weighted by the channel weights w_d. When computing the correlation response g(h), the correlation is evaluated in the frequency domain according to the convolution theorem, which reduces the computational complexity from O(n²) to O(n log(n)).
The visual tracking system that constructs a constrained correlation filter using depth data provided in this embodiment of the present invention computes the uniqueness and the contribution of each feature channel and, from them, its weight. The weight represents the discriminative power of a feature channel; by weighting each feature channel according to its discriminative power, a strongly discriminative feature channel is prevented from being drowned out by the other feature channels merely because its values are small. The introduction of the weights ensures that the discriminative power of every feature channel can be exploited, guarantees the accuracy of the correlation filter, enables the target to be tracked accurately and effectively, and improves the accuracy of visual tracking.
Based on the visual tracking system that constructs a constrained correlation filter using depth data provided in any of the above embodiments, the visual tracking system further comprises a filter update module, configured to update the spatially constrained correlation filter using a learning method. Specifically, the filter is updated as h = (1 - η)h_{t-1} + ηh, where η takes the empirical value 0.02. It should be noted that η may also take other values in practice.
After the spatially constrained correlation filter has been updated, the response computation module 50 and the target position determination module 60 determine the target position of the next color image frame from the updated spatially constrained correlation filter. Specifically, the response computation module 50 performs feature extraction on the color data, within the search window, of the color image frame following the next frame to obtain a feature vector, and computes the target correlation response from the feature vector extracted from that frame and the spatially constrained correlation filter; the target position determination module 60 determines the target position of the color image frame following the next frame from the location of the maximum response in the target correlation response.
The visual tracking system that constructs a constrained correlation filter using depth data provided in this embodiment of the present invention continuously updates the correlation filter and then, using the updated correlation filter, tracks the target position in all subsequent color image frames in turn until the entire visual tracking process is completed. This ensures the accuracy of the correlation filter, enables the target to be tracked accurately and effectively, and improves the accuracy of visual tracking.
The above is only a preferred embodiment of the present invention and does not limit the present invention in any form. Those skilled in the art may make various equivalent changes and improvements on the basis of the above embodiments; all equivalent variations or modifications made within the scope of the claims shall fall within the protection scope of the present invention.

Claims (10)

1. A visual tracking method that constructs a constrained correlation filter using depth data, characterized in that the visual tracking method comprises the following steps:
S1: performing depth-based target segmentation on the depth data at the estimated target location in the current depth image frame to obtain target spatial information;
S2: performing feature extraction on the color data at the estimated target location in the current color image frame to obtain a feature vector;
S3: constructing a Gaussian regression label function from the target spatial information;
S4: constructing a spatially constrained correlation filter using the target spatial information, the feature vector extracted from the current color image frame, and the Gaussian regression label function;
S5: performing feature extraction on the color data of the next color image frame within the search window to obtain a feature vector, and computing the target correlation response from the feature vector extracted from the next color image frame and the spatially constrained correlation filter;
S6: determining the target position of the next color image frame from the location of the maximum response in the target correlation response.
2. The visual tracking method that constructs a constrained correlation filter using depth data according to claim 1, characterized in that, before step S6, the method further comprises:
computing the contribution of each feature channel from the feature vector extracted from the current color image frame;
computing the uniqueness of each feature channel from the correlation response of each feature channel;
computing the weight of each feature channel from the contribution and the uniqueness of the feature channel; and
weighting the correlation responses of the feature channels by their weights to obtain the target correlation response.
3. The visual tracking method that constructs a constrained correlation filter using depth data according to claim 1, characterized in that, after step S6, the method further comprises:
updating the spatially constrained correlation filter using a learning method; and
determining the target position of the next color image frame from the updated spatially constrained correlation filter using the method of steps S5-S6.
4. The visual tracking method that constructs a constrained correlation filter using depth data according to claim 1, characterized in that the target spatial information includes a shape and a depth distribution histogram.
5. The visual tracking method that constructs a constrained correlation filter using depth data according to claim 1, characterized in that the feature vector includes a histogram of oriented gradients feature, a color feature, and a gray feature.
6. A visual tracking system that constructs a constrained correlation filter using depth data, characterized in that the visual tracking system comprises:
a spatial information acquisition module, configured to perform depth-based target segmentation on the depth data at the estimated target location in the current depth image frame to obtain target spatial information;
a target feature extraction module, configured to perform feature extraction on the color data at the estimated target location in the current color image frame to obtain a feature vector;
a function construction module, configured to construct a Gaussian regression label function from the target spatial information;
a filter construction module, configured to construct a spatially constrained correlation filter using the target spatial information, the feature vector extracted from the current color image frame, and the Gaussian regression label function;
a response computation module, configured to perform feature extraction on the color data of the next color image frame within the search window to obtain a feature vector, and to compute the target correlation response from the feature vector extracted from the next color image frame and the spatially constrained correlation filter; and
a target position determination module, configured to determine the target position of the current color image frame from the location of the maximum response in the target correlation response.
7. The visual tracking system that constructs a constrained correlation filter using depth data according to claim 6, characterized in that the visual tracking system further comprises:
a contribution computation module, configured to compute the contribution of each feature channel from the feature vector extracted from the current color image frame;
a uniqueness computation module, configured to compute the uniqueness of each feature channel from the correlation response of each feature channel; and
a weight computation module, configured to compute the weight of each feature channel from the contribution and the uniqueness of the feature channel.
8. The visual tracking system that constructs a constrained correlation filter using depth data according to claim 6, characterized in that the visual tracking system further comprises a filter update module, configured to update the spatially constrained correlation filter using a learning method.
9. The visual tracking system that constructs a constrained correlation filter using depth data according to claim 6, characterized in that the target spatial information includes a shape and a depth distribution histogram.
10. The visual tracking system that constructs a constrained correlation filter using depth data according to claim 6, characterized in that the feature vector includes a histogram of oriented gradients feature, a color feature, and a gray feature.
CN201811313969.9A 2018-11-06 2018-11-06 Visual tracking method and system for constructing constraint correlation filter by using depth data Active CN109448024B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811313969.9A CN109448024B (en) 2018-11-06 2018-11-06 Visual tracking method and system for constructing constraint correlation filter by using depth data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811313969.9A CN109448024B (en) 2018-11-06 2018-11-06 Visual tracking method and system for constructing constraint correlation filter by using depth data

Publications (2)

Publication Number Publication Date
CN109448024A (en) 2019-03-08
CN109448024B CN109448024B (en) 2022-02-11

Family

ID=65550909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811313969.9A Active CN109448024B (en) 2018-11-06 2018-11-06 Visual tracking method and system for constructing constraint correlation filter by using depth data

Country Status (1)

Country Link
CN (1) CN109448024B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101454815B1 (en) * 2013-05-29 2014-11-04 중앙대학교 산학협력단 Apparatus and method for detecting object using dual color filter aperture
CN106651913A (en) * 2016-11-29 2017-05-10 开易(北京)科技有限公司 Target tracking method based on correlation filtering and color histogram statistics and ADAS (Advanced Driving Assistance System)
US20180268559A1 (en) * 2017-03-16 2018-09-20 Electronics And Telecommunications Research Institute Method for tracking object in video in real time in consideration of both color and shape and apparatus therefor
CN107169994A (en) * 2017-05-15 2017-09-15 上海应用技术大学 Correlation filtering tracking based on multi-feature fusion
CN107784663A (en) * 2017-11-14 2018-03-09 哈尔滨工业大学深圳研究生院 Correlation filtering tracking and device based on depth information
CN108550126A (en) * 2018-04-18 2018-09-18 长沙理工大学 A kind of adaptive correlation filter method for tracking target and system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109798888A (en) * 2019-03-15 2019-05-24 京东方科技集团股份有限公司 Posture determining device, method and the visual odometry of mobile device
CN109798888B (en) * 2019-03-15 2021-09-17 京东方科技集团股份有限公司 Posture determination device and method for mobile equipment and visual odometer
WO2020228522A1 (en) * 2019-05-10 2020-11-19 腾讯科技(深圳)有限公司 Target tracking method and apparatus, storage medium and electronic device
JP2022516055A (en) * 2019-05-10 2022-02-24 テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド Goal tracking methods, computer programs, and electronic devices
JP7125562B2 (en) 2019-05-10 2022-08-24 テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド Target tracking method, computer program, and electronic device
US11610321B2 (en) 2019-05-10 2023-03-21 Tencent Technology (Shenzhen) Company Limited Target tracking method and apparatus, storage medium, and electronic device
CN111080675A (en) * 2019-12-20 2020-04-28 电子科技大学 Target tracking method based on space-time constraint correlation filtering

Also Published As

Publication number Publication date
CN109448024B (en) 2022-02-11

Similar Documents

Publication Publication Date Title
CN113506317B (en) Multi-target tracking method based on Mask R-CNN and apparent feature fusion
CN114972418B (en) Maneuvering multi-target tracking method based on combination of kernel adaptive filtering and YOLOX detection
CN108062574B (en) Weak supervision target detection method based on specific category space constraint
CN107633226B (en) Human body motion tracking feature processing method
CN111460968A (en) Video-based unmanned aerial vehicle identification and tracking method and device
Babu et al. Online adaptive radial basis function networks for robust object tracking
CN112052802B (en) Machine vision-based front vehicle behavior recognition method
CN111739053B (en) Online multi-pedestrian detection tracking method under complex scene
CN109448024A (en) Visual tracking method, the system of constraint correlation filter are constructed using depth data
CN109448023A (en) A kind of satellite video Small object method for real time tracking of combination space confidence map and track estimation
Wang et al. Low-altitude infrared small target detection based on fully convolutional regression network and graph matching
CN112329784A (en) Correlation filtering tracking method based on space-time perception and multimodal response
CN112052771A (en) Object re-identification method and device
CN116245949A (en) High-precision visual SLAM method based on improved quadtree feature point extraction
Wang et al. Multiple pedestrian tracking with graph attention map on urban road scene
CN110472607A (en) A kind of ship tracking method and system
CN113033356B (en) Scale-adaptive long-term correlation target tracking method
CN108280845B (en) Scale self-adaptive target tracking method for complex background
Cai et al. A target tracking method based on KCF for omnidirectional vision
Elbaşi Fuzzy logic-based scenario recognition from video sequences
Zhang et al. Target tracking for mobile robot platforms via object matching and background anti-matching
CN111291785A (en) Target detection method, device, equipment and storage medium
CN114627339A (en) Intelligent recognition and tracking method for border crossing personnel in dense jungle area and storage medium
Bai et al. Pedestrian Tracking and Trajectory Analysis for Security Monitoring
Chan et al. Online learning for classification and object tracking with superpixel

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant