CN106204647B - Visual target tracking method based on multiple features and group sparsity - Google Patents

Visual target tracking method based on multiple features and group sparsity

Info

Publication number
CN106204647B
CN106204647B (granted publication of application CN201610515653.2A; also published as CN106204647A)
Authority
CN
China
Prior art keywords
particle
template
target
sparse
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610515653.2A
Other languages
Chinese (zh)
Other versions
CN106204647A (en)
Inventor
莫博瑞
周芸
付光涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National News Publishes Broadcast Research Institute Of General Bureau Of Radio Film And Television
Original Assignee
National News Publishes Broadcast Research Institute Of General Bureau Of Radio Film And Television
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National News Publishes Broadcast Research Institute Of General Bureau Of Radio Film And Television filed Critical National News Publishes Broadcast Research Institute Of General Bureau Of Radio Film And Television
Priority to CN201610515653.2A priority Critical patent/CN106204647B/en
Publication of CN106204647A publication Critical patent/CN106204647A/en
Application granted granted Critical
Publication of CN106204647B publication Critical patent/CN106204647B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Abstract

The invention relates to a visual target tracking method based on multiple features and group sparsity, technically characterized by the following steps: extract multiple features from the target in the current video frame; construct a learned dictionary for each feature from the multi-feature information; sample particles in the new video frame; remove unqualified particles by bounded-particle resampling, then solve a sparse optimization problem for the remaining particles; update the templates by examining the cosine similarity between the current-frame result and the template with the largest coefficient, replacing the template with the smallest coefficient by the current result whenever the similarity falls below a threshold; if the video has not ended, resample. The invention fuses multiple features, particle filtering, and group sparse learning. By tracking several features of the object, the constructed dictionary contains richer target information, which increases the tracking accuracy of the overall algorithm and improves the stability of the tracking result, yielding good visual target tracking results.

Description

Visual target tracking method based on multiple features and group sparsity
Technical field
The invention belongs to the field of visual target tracking, in particular to a visual target tracking method based on multiple features and group sparsity.
Background technique
Moving target tracking in video is one of the core research topics in computer vision. Its main goal is to imitate the motion perception function of the physiological visual system: by analyzing the image sequence captured by a camera, the two-dimensional coordinates of the moving target are computed in each frame; then, according to the relevant feature values of the moving target, the same target is associated across consecutive frames of the image sequence, yielding the motion parameters of the target in every frame and the correspondence of the moving target between adjacent frames. This produces the complete motion trajectory of each moving target, i.e., it establishes the correspondence of moving targets across a continuous video sequence.
The core of visual target tracking methods based on sparse representation and dictionary learning is constructing a sparse representation model of the target. Xue Mei et al. (Xue Mei and Haibin Ling, "Robust visual tracking using l1 minimization," in Computer Vision, 2009 IEEE 12th International Conference on. IEEE, 2009, pp. 1436-1443.) proposed a sparse representation model optimized under the L1 norm. Its core idea is to use image features from the first frame and several recent frames as a dictionary; each new candidate target is projected onto this dictionary under an L1-regularized least-squares criterion to find the true target. Tianzhu Zhang et al. (Tianzhu Zhang, Bernard Ghanem, Si Liu, and Narendra Ahuja, "Robust visual tracking via multi-task sparse learning," in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012, pp. 2042-2049.) built on this by introducing multi-task sparse learning, further optimizing the candidate particles and the sparse solution. Xiangyuan Lan et al. (Xiangyuan Lan, Andy Jinhua Ma, and Pong Chi Yuen, "Multi-cue visual tracking using robust feature-level fusion based on joint sparse representation," in Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on. IEEE, 2014, pp. 1194-1201.) proposed a multi-feature-fusion target tracking method that increases tracking precision by introducing multiple features and feature fusion. Although the above methods can track targets to some extent, they still suffer from problems such as limited accuracy and poor stability.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and provide a reasonably designed, high-precision, and highly stable visual target tracking method based on multiple features and group sparsity.
The invention solves its technical problem by adopting the following technical solution:
A visual target tracking method based on multiple features and group sparsity, comprising the following steps:
Step 1: extract multiple features from the target in the current video frame; the extracted features include a gray feature, a color feature, and an LBP feature;
Step 2: construct a learned dictionary for each feature from the multi-feature information; the target image is mapped by a two-dimensional affine transformation into a rectangular region of fixed size;
Step 3: sample particles in the new video frame according to a Gaussian distribution, placing more particles near the target position of the previous frame and fewer particles in the wider region;
Step 4: remove unqualified particles by bounded-particle resampling, then solve the sparse optimization problem for the remaining particles;
Step 5: update the templates; examine the cosine similarity between the current-frame result and the template with the largest coefficient, and if the similarity is below a threshold, replace the template with the smallest coefficient by the current result;
Step 6: if the video has not ended, resample to generate the n particles needed for tracking the next frame, and return to Step 3.
The learned dictionary for each feature in Step 2 includes target templates and trivial templates. The dictionary is constructed in the following steps:
(1) An affine transformation converts the high-dimensional original target image features into low-dimensional image features; at the same time, when the target region deforms, target regions of different shapes are measured consistently. The affine transformation is realized by translation, scaling, flipping, rotation, and shearing, expressed as the matrix operation x' = a11·x + a12·y + a13, y' = a21·x + a22·y + a23,
where x, y are the coordinates of the original point, x', y' are the transformed coordinates, and a11, a12, a13, a21, a22, a23 are the six coefficients of the affine transformation;
(2) Features are extracted from the target image region after the affine transformation; the learned dictionary of each feature is formed as D = (T I), where T = (t1, t2, ..., tl) is the target template set and I is the trivial template.
Step 3 is implemented as follows: compute the observation likelihood probability p(z_t|x_t) of the sample state x_t. For the sample state set S = (x1, x2, ..., xn) and the candidate target set O = (y1, y2, ..., yn), the sparse optimization reconstructs each candidate from the dictionary with a sparse coefficient vector,
where i denotes the i-th particle and a = (a_T, a_I) is the sparse coefficient (a_T over the target templates, a_I over the trivial templates).
The bounded-particle resampling of Step 4 is realized by a t-test method, comprising the following steps:
(1) Compute the upper bound q_i of the observation probability of every particle,
and sort them: q1 ≥ q2 ≥ ... ≥ qn; set i = 1;
(2) If i ≤ n, use the observation likelihood probability of each sample state
to compute τ_{i+1}; if i > n, jump to step (4);
(3) If q_i ≥ τ_i, set i = i + 1 and return to step (2) to continue solving; otherwise the particles corresponding to q_i, q_{i+1}, ..., q_n are all rejected;
(4) Resample the remaining particles.
The method of solving the sparse optimization problem in Step 4 comprises the following steps:
(1) Construct the grouped dictionary D = (D1, D2, ..., Dg); under multiple features, the dictionary after adding the trivial templates is expressed as X_k = (D_k, I), where k indexes the K features used;
(2) Establish the group sparse optimization model,
in which the optimization over the grouped coefficients
assigns a weight q to each group and combines the groups in the objective.
The sparse model is solved with the alternating direction method of multipliers (ADMM), implemented in the following steps:
(1) Initialize the vectors z, λ1, λ2 and the positive factors β1, β2, γ1, γ2;
(2) While the iteration has not converged and the maximum number of iterations has not been reached, compute:
a ← (β1·I + β2·XᵀX)⁻¹ (β1·z − λ1 + β2·Xᵀy + Xᵀλ2)
λ1 ← λ1 − γ1·β1·(z − a)
λ2 ← λ2 − γ2·β2·(Xa − y)
Template updating in Step 5 uses a real-time update strategy, as follows:
(1) Let y be the target found in the new frame and a_k the coefficients of the target templates under the k-th feature. The feature is selected according to the observation probability equation;
the feature that maximizes the equation's value is chosen as the currently active feature;
(2) Compute the similarity between the new target y and each template in T = (t1, t2, ..., tl), measured by the cosine similarity sim(y, t) = yᵀt / (‖y‖·‖t‖).
Compute the average similarity,
and set an empirical threshold η; when the average similarity is below η, replace the template with the lowest similarity.
Resampling in Step 6 uses a sequential importance sampling algorithm, as follows: the transition probability density function p(x_k|x_{k−1}) of the state transition variable serves as the importance density function to give the weight of each particle.
During resampling, particles of low weight are discarded and particles of high weight are repeatedly replicated.
The advantages and positive effects of the invention are as follows:
The invention fuses multiple features, particle filtering, and group sparse learning. By tracking several features of the object, the constructed dictionary contains richer target information. A new group sparse solving method is used, which matches the potential target more accurately and increases the tracking accuracy of the overall algorithm. Under the particle filter framework the tracking result is highly stable; the mathematical model of the algorithm is solved with the alternating direction method of multipliers, yielding good visual target tracking results.
Detailed description of the invention
Fig. 1 is a comparison of the AUC curves of the tracking results obtained by the invention and by other algorithms;
Fig. 2 is a comparison of the tracking results of the invention and other algorithms.
Specific embodiment
Embodiments of the invention are further described below in conjunction with the drawings:
A visual target tracking method based on multiple features and group sparsity, comprising the following steps:
Step 1: extract multiple features from the target in the current video frame; the extracted features are a gray feature, a color feature, and an LBP feature.
The invention describes the tracked target with multiple features: a gray feature, a color feature, and an LBP feature. The gray feature is the most common global feature; after gray conversion the amount of information in the picture is greatly reduced, so the required computation is correspondingly much smaller. The color feature describes the surface properties of the scene corresponding to an image or image region and plays an important role in target tracking in color images. The LBP feature is an operator describing local texture; it has rotation invariance and gray-scale invariance, and what it extracts is the local texture feature.
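As an illustration, the three feature types above can be sketched in plain NumPy. The patch sizes, bin counts, and normalization are assumptions for the sketch; the patent does not specify them:

```python
import numpy as np

def gray_feature(patch):
    """Flatten a gray patch (H, W) into a unit-norm vector."""
    v = patch.astype(float).ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def color_histogram(rgb_patch, bins=8):
    """Per-channel color histogram of an (H, W, 3) patch, concatenated
    and normalized to sum to 1."""
    hists = [np.histogram(rgb_patch[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def lbp_feature(gray, bins=16):
    """Basic 8-neighbour LBP: each interior pixel gets an 8-bit code from
    comparing its neighbours against it; the codes are histogrammed."""
    g = gray.astype(int)
    c = g[1:-1, 1:-1]
    code = np.zeros_like(c)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    h, _ = np.histogram(code, bins=bins, range=(0, 256))
    h = h.astype(float)
    return h / h.sum()
```

In a tracker, each sampled patch would be passed through all three extractors, giving one feature vector per feature type for the dictionaries below.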
Step 2: construct the learned dictionary from the multi-feature information. The target image is mapped by a two-dimensional affine transformation into a rectangular region of fixed size, which reduces the feature dimension.
The learned dictionary is constructed in the following steps:
(1) An affine transformation converts the high-dimensional original target image features into low-dimensional image features; at the same time, when the target region deforms, target regions of different shapes are measured consistently. The affine transformation is realized as the composition of a series of atomic transformations: translation, scale, flip, rotation, and shear. As a matrix operation, x' = a11·x + a12·y + a13 and y' = a21·x + a22·y + a23,
where x, y are the coordinates of the original point, x', y' are the transformed coordinates, and a11, a12, a13, a21, a22, a23 are the six coefficients of the affine transformation.
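The six-coefficient transform above can be sketched as a homogeneous-coordinate matrix product; the helper name is ours, not the patent's:

```python
import numpy as np

def affine_warp_coords(coords, a):
    """Apply x' = a11*x + a12*y + a13, y' = a21*x + a22*y + a23
    to an (N, 2) array of (x, y) points.
    `a` is the 2x3 coefficient matrix [[a11, a12, a13], [a21, a22, a23]]."""
    xy1 = np.column_stack([coords, np.ones(len(coords))])  # homogeneous coords
    return xy1 @ np.asarray(a).T

# Pure translation by (2, 3): every point shifts accordingly.
a = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 3.0]]
pts = np.array([[0.0, 0.0], [1.0, 1.0]])
warped = affine_warp_coords(pts, a)  # [[2, 3], [3, 4]]
```

Scale, flip, rotation, and shear are obtained by choosing the 2x2 block of `a` accordingly; composing them multiplies the matrices.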
(2) Features are extracted from the target image region after the affine transformation; the learned dictionary of each feature becomes D = (T I), where T = (t1, t2, ..., tl) is the target template set and I is the identity matrix, i.e., the trivial templates.
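The D = (T I) construction is a simple column stack; a minimal sketch:

```python
import numpy as np

def build_dictionary(templates):
    """Stack l target templates (each a d-vector) as columns T, then append
    the d x d identity as trivial templates: D = [T  I]."""
    T = np.column_stack(templates)   # d x l target templates
    I = np.eye(T.shape[0])           # d x d trivial (identity) templates
    return np.hstack([T, I])

d, l = 6, 3
rng = np.random.default_rng(0)
D = build_dictionary([rng.random(d) for _ in range(l)])
assert D.shape == (d, l + d)
```

The trivial (identity) columns let the sparse code absorb occlusion and noise pixels that the target templates cannot explain.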
Step 3: sample particles in the new video frame according to a Gaussian distribution, placing more particles near the target position of the previous frame and fewer particles in the wider region. Each particle represents a candidate target. The concrete processing is as follows:
The target is searched over the whole image, guided by the particle distribution. In this tracking algorithm, the role of the sparse representation is to compute the observation likelihood probability p(z_t|x_t) of the sample state x_t. For the sample state set S = (x1, x2, ..., xn) and the candidate target set O = (y1, y2, ..., yn), the sparse optimization idea is to reconstruct each candidate from the dictionary with a sparse coefficient vector,
where i denotes the i-th particle and a = (a_T, a_I) is the sparse coefficient.
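A minimal sketch of this step: Gaussian sampling around the previous state, and a likelihood from the sparse reconstruction error. The exponential form and the scale `tau` are assumptions, since the patent's likelihood formula is an image lost from the source:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_particles(prev_state, n=100, sigma=(4.0, 4.0)):
    """Gaussian particle sampling around the previous target state (x, y):
    most particles fall near the old position, few in the wide region."""
    return prev_state + rng.normal(0.0, sigma, size=(n, 2))

def observation_likelihood(y, D, a, tau=10.0):
    """Candidate likelihood from the reconstruction error ||y - D a||^2
    (a common choice in sparse-representation trackers)."""
    residual = y - D @ a
    return np.exp(-tau * float(residual @ residual))

particles = sample_particles(np.array([50.0, 80.0]), n=200)
```

Each particle's image patch would then be encoded against D and scored with `observation_likelihood`.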
Step 4: bounded-particle resampling removes the unqualified particles, and the sparse optimization problem is solved for the remaining particles with a group sparse optimization strategy: the target templates are assigned a higher uniform weight and the trivial templates a lower uniform weight.
Resampling can use a t-test method, with the following steps:
(1) Compute the upper bound q_i of the observation probability of every particle,
and sort them: q1 ≥ q2 ≥ ... ≥ qn; set i = 1.
(2) If i ≤ n, use the observation likelihood probability of each sample state
to compute τ_{i+1}; if i > n, jump to step (4).
(3) If q_i ≥ τ_i, set i = i + 1 and return to step (2) to continue solving; otherwise the particles corresponding to q_i, q_{i+1}, ..., q_n are all rejected.
(4) Resample the remaining particles.
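The pruning logic of steps (1)-(3) can be sketched as follows. The patent's formulas for the bounds q_i and thresholds τ_i are images lost from the source, so both are taken here as precomputed inputs:

```python
import numpy as np

def prune_particles(upper_bounds, thresholds):
    """Bounded-particle pruning: sort particles by the upper bound q_i of
    their observation probability; once some q_i falls below its threshold
    tau_i, that particle and all later (smaller-bound) ones are rejected.
    Returns the surviving particle indices in bound order."""
    order = np.argsort(upper_bounds)[::-1]   # q1 >= q2 >= ... >= qn
    q = np.asarray(upper_bounds)[order]
    kept = []
    for i, idx in enumerate(order):
        if q[i] < thresholds[i]:
            break                            # reject this and all the rest
        kept.append(int(idx))
    return kept

kept = prune_particles([0.9, 0.2, 0.6, 0.05], thresholds=[0.1, 0.1, 0.1, 0.1])
# sorted bounds 0.9, 0.6, 0.2, 0.05 -> the 0.05 particle (index 3) is cut
```

Only the survivors go through the (expensive) sparse solve, which is the point of the bound: cheap rejection before optimization.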
To match the potential target more accurately, the invention constructs an optimization strategy based on group sparse learning, as follows:
(1) Construct the grouped dictionary D = (D1, D2, ..., Dg). Under multiple features, the dictionary after adding the trivial templates
is expressed as X_k = (D_k, I), where k indexes the K features used.
(2) Establish the group sparse optimization model,
in which the optimization over the grouped coefficients
assigns a weight q to each group and combines the groups in the objective.
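The per-group weighting can be illustrated with the proximal step of a standard weighted group-sparsity penalty (sum over groups of w_g·‖a_g‖₂). The patent's exact model is an image lost from the source, so standard group lasso is assumed here, with the group weight folded into `lam`:

```python
import numpy as np

def group_soft_threshold(a, groups, lam):
    """Proximal step of the weighted group-sparsity penalty: each group's
    coefficient block shrinks toward zero as a whole, so entire groups
    (e.g. all trivial templates) can be switched off together."""
    out = np.zeros_like(a, dtype=float)
    for g, idx in groups.items():
        block = a[idx]
        norm = np.linalg.norm(block)
        if norm > lam[g]:                      # group survives, shrunk
            out[idx] = (1.0 - lam[g] / norm) * block
    return out

a = np.array([3.0, 4.0, 0.1, 0.1])
groups = {"target": [0, 1], "trivial": [2, 3]}
out = group_soft_threshold(a, groups, lam={"target": 1.0, "trivial": 1.0})
# target block (norm 5) shrinks by factor 0.8; trivial block is zeroed
```

Giving the trivial group a larger `lam` than the target group realizes the weighting described above: trivial templates are penalized more heavily.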
This patent solves the multi-feature group sparse model with the alternating direction method of multipliers (ADMM), whose steps are as follows:
Initialize the vectors z, λ1, λ2 and the positive factors β1, β2, γ1, γ2.
While the iteration has not converged and the maximum number of iterations has not been reached, compute:
a ← (β1·I + β2·XᵀX)⁻¹ (β1·z − λ1 + β2·Xᵀy + Xᵀλ2)
λ1 ← λ1 − γ1·β1·(z − a)
λ2 ← λ2 − γ2·β2·(Xa − y)
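The a-update above is a linear solve; a sketch, reconstructed from the garbled source formulas. The z-update (a group-sparsity proximal step) is not given in the patent text and is therefore not shown:

```python
import numpy as np

def admm_a_update(X, y, z, lam1, lam2, beta1, beta2):
    """One a-update of the ADMM scheme:
    a <- (b1*I + b2*X^T X)^{-1} (b1*z - lam1 + b2*X^T y + X^T lam2)."""
    d = X.shape[1]
    lhs = beta1 * np.eye(d) + beta2 * (X.T @ X)
    rhs = beta1 * z - lam1 + beta2 * (X.T @ y) + X.T @ lam2
    return np.linalg.solve(lhs, rhs)

# Sanity check: with X = I, y = z, and zero multipliers, the update returns
# z itself, since (b1 + b2)^{-1} * (b1*z + b2*z) = z.
X = np.eye(2)
z = np.array([1.0, -2.0])
a = admm_a_update(X, z, z, np.zeros(2), np.zeros(2), beta1=1.0, beta2=3.0)
```

Since the left-hand matrix is fixed across iterations, a practical implementation would factor it once (e.g. Cholesky) rather than re-solve each step.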
Step 5: update the templates. Each round yields a coefficient for every target template; the larger the coefficient, the larger the template's weight. The cosine similarity between the current-frame result and the template with the largest coefficient is examined; if the similarity is below a threshold, the template with the smallest coefficient is replaced by the current result. Template updating uses a real-time update strategy, as follows:
(1) Let y be the target found in the new frame and a_k the coefficients of the target templates under the k-th feature. The feature is selected according to the observation probability equation;
the feature that maximizes the equation's value is chosen as the currently active feature.
(2) Compute the similarity between the new target y and each template in T = (t1, t2, ..., tl), measured by the cosine similarity sim(y, t) = yᵀt / (‖y‖·‖t‖).
To avoid the jitter caused by replacing templates too frequently, the average similarity is computed,
and an empirical threshold η is set; when the average similarity is below η, the template with the lowest similarity is replaced.
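The update rule in step (2) can be sketched directly; the value η = 0.8 is purely illustrative, since the patent leaves η as an empirical threshold:

```python
import numpy as np

def cosine_similarity(u, v):
    """sim(u, v) = u.v / (||u|| * ||v||)."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def update_templates(templates, y, eta=0.8):
    """Real-time template update: when the average cosine similarity between
    the new target y and the templates drops below eta, the least-similar
    template is replaced by y."""
    sims = [cosine_similarity(t, y) for t in templates]
    if np.mean(sims) < eta:
        templates[int(np.argmin(sims))] = y.copy()
    return templates

templates = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
y = np.array([1.0, 0.1])
templates = update_templates(templates, y, eta=0.8)
```

Averaging before replacing is what damps the template jitter: a single low-similarity template is not enough to trigger an update.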
Step 6: if the video has not ended, resample to generate the n particles needed for tracking the next frame, and return to Step 3. The concrete method is as follows:
Resampling uses a sequential importance sampling algorithm. The transition probability density function p(x_k|x_{k−1}) of the state transition variable serves as the importance density function to give the weight of each particle.
The resampling method discards particles of low weight and repeatedly replicates particles of high weight. To overcome the weight degeneracy problem, in which only a few particles retain large weights after several iterations, a random quantity is added to each particle to disperse the particles around it.
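This resample-then-jitter step can be sketched as follows; the jitter scale is an illustrative choice, not a value from the patent:

```python
import numpy as np

rng = np.random.default_rng(7)

def resample(particles, weights, jitter=0.5):
    """Importance resampling: draw particle indices in proportion to their
    weights (low-weight particles disappear, high-weight ones are copied),
    then add a small Gaussian jitter to counter weight degeneracy."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx] + rng.normal(0.0, jitter, size=particles.shape)

particles = np.array([[0.0, 0.0], [100.0, 100.0]])
new = resample(particles, weights=[0.999, 0.001])
```

After resampling, all particles carry equal weight again, and the jitter respreads the duplicated copies so the next frame's search does not collapse onto one point.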
The method of the invention is tested below to illustrate its experimental effect.
Test environment: Visual Studio 2010, MATLAB 2013b.
Test sequences: the selected sequences and their corresponding ground-truth track positions come from OTB (Y. Wu, J. Lim, and M.-H. Yang. Online object tracking: A benchmark. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2013).
Test metrics:
Two evaluation metrics are used, CLE and VOR. CLE is the center location error: the pixel distance between the center of the tracked target position and the true labeled position. Since CLE ignores the influence of target size, the VOR criterion is considered as a complement: VOR is defined as the ratio of the intersection to the union of the tracked target region and the real region. The test results are shown in Fig. 1; the larger the area enclosed between a curve and the coordinate axes, the better the tracking effect.
As can be seen from the table above and Figs. 1 and 2, target tracking with the present invention has a certain superiority relative to other methods. In Fig. 1, the higher an algorithm's curve, the better its robustness; the figure intuitively shows each algorithm's overall capability. On these test sequences the proposed algorithm obtains better results than the well-known Struck, DLT, and SCM algorithms. In Fig. 2, the tested sequences contain fast-moving targets, such as the Deer sequence, and target tracking under complex backgrounds, such as the Football and Singer2 sequences; good results are also achieved in pedestrian-scene tracking, such as the Walking, Couple, Jogging, and Subway sequences. The method also performs well under partial occlusion, such as the Coke and David3 sequences, and can still accurately track the target when the image resolution is low and the target carries little information, such as the Skiing and Girl sequences.
It is emphasized that the embodiments of the invention are illustrative rather than restrictive; the invention therefore includes, and is not limited to, the embodiments described in the detailed description. All other embodiments obtained by those skilled in the art according to the technical scheme of the invention also belong to the scope of protection of the invention.

Claims (6)

1. A visual target tracking method based on multiple features and group sparsity, characterized by comprising the following steps:
Step 1: extract multiple features from the target in the current video frame; the extracted features include a gray feature, a color feature,
and an LBP feature;
Step 2: construct a learned dictionary for each feature from the multi-feature information; the target image is mapped by a two-dimensional affine transformation
into a rectangular region of fixed size;
Step 3: sample particles in the new video frame according to a Gaussian distribution, placing more particles near the target position of the previous frame
and fewer particles in the wider region;
Step 4: remove unqualified particles by bounded-particle resampling, then solve the sparse optimization problem
for the remaining particles;
Step 5: update the templates; examine the cosine similarity between the current-frame result and the template with the largest coefficient, and if the similarity is below
a threshold, replace the template with the smallest coefficient by the current result;
Step 6: if the video has not ended, resample to generate the n particles needed for tracking the next frame, and return to Step 3;
Step 3 is implemented as follows: compute the observation likelihood probability p(z_t|x_t) of the sample state x_t; for the sample state set S = (x1, x2, ..., xn) and the candidate target set O = (y1, y2, ..., yn), the sparse optimization reconstructs each candidate from the dictionary with a sparse coefficient vector,
where i denotes the i-th particle, a = (a_T, a_I) is the sparse coefficient, I is the trivial template, and T denotes the target template set;
the bounded-particle resampling of Step 4 is realized by a t-test method, comprising the following steps:
(1) compute the upper bound q_i of the observation probability of every particle,
and sort them: q1 ≥ q2 ≥ ... ≥ qn; set i = 1;
(2) if i ≤ n, use the observation likelihood probability of each sample state
to compute τ_{i+1}; if i > n, jump to step (4);
(3) if q_i ≥ τ_i, set i = i + 1 and return to step (2) to continue solving; otherwise the particles corresponding to q_i, q_{i+1}, ..., q_n are all rejected;
(4) resample the remaining particles.
2. The visual target tracking method based on multiple features and group sparsity according to claim 1, characterized in that: the learned dictionary for each feature in Step 2 includes target templates and trivial templates, and its construction comprises the following steps:
(1) an affine transformation converts the high-dimensional original target image features into low-dimensional image features; at the same time, when the target region deforms, target regions of different shapes are measured consistently; the affine transformation is realized by translation, scaling, flipping, rotation, and shearing, expressed as the matrix operation x' = a11·x + a12·y + a13, y' = a21·x + a22·y + a23,
where x, y are the coordinates of the original point, x', y' are the transformed coordinates, and a11, a12, a13, a21, a22, a23 are the six coefficients of the affine transformation;
(2) features are extracted from the target image region after the affine transformation; the learned dictionary of each feature is formed as D = (T I), where T = (t1, t2, ..., tl) is the target template set and I is the trivial template.
3. The visual target tracking method based on multiple features and group sparsity according to claim 1, characterized in that the method of solving the sparse optimization problem in Step 4 comprises the following steps:
(1) construct the grouped dictionary D = (D1, D2, ..., Dg); under multiple features, the dictionary after adding the trivial templates is expressed as X_k = (D_k, I), where k indexes the K features used;
(2) establish the group sparse optimization model,
in which the optimization over the grouped coefficients
assigns a weight q to each group and combines the groups in the objective.
4. The visual target tracking method based on multiple features and group sparsity according to claim 3, characterized in that the sparse model is solved with the alternating direction method of multipliers, implemented in the following steps:
(1) initialize the vectors z, λ1, λ2 and the positive factors β1, β2, γ1, γ2;
(2) while the iteration has not converged and the maximum number of iterations has not been reached, compute:
a ← (β1·I + β2·XᵀX)⁻¹ (β1·z − λ1 + β2·Xᵀy + Xᵀλ2)
λ1 ← λ1 − γ1·β1·(z − a)
λ2 ← λ2 − γ2·β2·(Xa − y).
5. The visual target tracking method based on multiple features and group sparsity according to claim 4, characterized in that template updating in Step 5 uses a real-time update strategy, as follows:
(1) let y be the target found in the new frame and a_k the coefficients of the target templates under the k-th feature; the feature is selected according to
the observation probability equation,
choosing the feature that maximizes the equation's value as the currently active feature;
(2) compute the similarity between the new target y and each template in T = (t1, t2, ..., tl), measured by the cosine similarity;
compute the average similarity,
and set an empirical threshold η; when the average similarity is below η, replace the template with the lowest similarity.
6. The visual target tracking method based on multiple features and group sparsity according to claim 1, characterized in that resampling in Step 6 uses a sequential importance sampling algorithm, as follows: the transition probability density function p(x_k|x_{k−1}) of the state transition variable is used as the importance density function to give the weight of each particle;
during resampling, particles of low weight are discarded and particles of high weight are repeatedly replicated.
CN201610515653.2A 2016-07-01 2016-07-01 Visual target tracking method based on multiple features and group sparsity Expired - Fee Related CN106204647B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610515653.2A CN106204647B (en) 2016-07-01 2016-07-01 Visual target tracking method based on multiple features and group sparsity


Publications (2)

Publication Number Publication Date
CN106204647A CN106204647A (en) 2016-12-07
CN106204647B true CN106204647B (en) 2019-05-10

Family

ID=57465940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610515653.2A Expired - Fee Related CN106204647B (en) 2016-07-01 2016-07-01 Visual target tracking method based on multiple features and group sparsity

Country Status (1)

Country Link
CN (1) CN106204647B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106940891B (en) * 2016-12-12 2020-01-03 首都师范大学 HSV-based target tracking method and system
WO2018195979A1 (en) 2017-04-28 2018-11-01 深圳市大疆创新科技有限公司 Tracking control method and apparatus, and flight vehicle
CN107220660A (en) * 2017-05-12 2017-09-29 深圳市美好幸福生活安全系统有限公司 A kind of target tracking algorism based on the local cosine similarity of weighting
CN108280808B (en) * 2017-12-15 2019-10-25 西安电子科技大学 Method for tracking target based on structuring output correlation filter
CN109523587A (en) * 2018-11-20 2019-03-26 广东技术师范学院 The method for tracking target and system learnt based on multiple features and self-adapting dictionary

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295242A (en) * 2013-06-18 2013-09-11 南京信息工程大学 Multi-feature united sparse represented target tracking method
CN104484890A (en) * 2014-12-18 2015-04-01 上海交通大学 Video target tracking method based on compound sparse model


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Tracking via Robust Multi-task Multi-view Joint Sparse Representation;Zhibin Hong 等;《Computer Vision (ICCV), 2013 IEEE International Conference on》;20140303;649-656
Research on target tracking algorithms based on multi-feature combination and selection (多特征联合及选择的目标跟踪算法研究); Xu Yuwei (徐玉伟); China Master's Theses Full-text Database, Information Science and Technology; 20160115 (No. 01); I138-668


Similar Documents

Publication Publication Date Title
CN106204647B (en) Visual target tracking method based on multiple features and group sparsity
Nibali et al. 3d human pose estimation with 2d marginal heatmaps
Suchi et al. EasyLabel: A semi-automatic pixel-wise object annotation tool for creating robotic RGB-D datasets
CN100407798C (en) Three-dimensional geometric mode building system and method
CN109410321A (en) Three-dimensional rebuilding method based on convolutional neural networks
CN104915978B (en) Realistic animation generation method based on body-sensing camera Kinect
CN109658445A (en) Network training method, increment build drawing method, localization method, device and equipment
Zhang et al. GPU-accelerated real-time tracking of full-body motion with multi-layer search
Zeng et al. Pc-nbv: A point cloud based deep network for efficient next best view planning
CN104408760B (en) A kind of high-precision virtual assembly system algorithm based on binocular vision
Hu et al. Hand-model-aware sign language recognition
CN101154289A (en) Method for tracing three-dimensional human body movement based on multi-camera
CN109859241A (en) Adaptive features select and time consistency robust correlation filtering visual tracking method
CN110310285A (en) A kind of burn surface area calculation method accurately rebuild based on 3 D human body
CN111199207A (en) Two-dimensional multi-human body posture estimation method based on depth residual error neural network
Kulkarni et al. Nifty: Neural object interaction fields for guided human motion synthesis
CN111914595B (en) Human hand three-dimensional attitude estimation method and device based on color image
CN114689038A (en) Fruit detection positioning and orchard map construction method based on machine vision
CN101241601A (en) Graphic processing joint center parameter estimation method
Wan et al. Learn to predict how humans manipulate large-sized objects from interactive motions
Chen et al. Meta-learning regrasping strategies for physical-agnostic objects
Pan et al. Online human action recognition based on improved dynamic time warping
Han et al. A double branch next-best-view network and novel robot system for active object reconstruction
CN107507218A (en) Part motility Forecasting Methodology based on static frames
Wei et al. Generalized anthropomorphic functional grasping with minimal demonstrations

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190510

Termination date: 20210701