CN104091350B - An object tracking method using motion blur information - Google Patents

An object tracking method using motion blur information

Publication number
CN104091350B
CN104091350B (Application CN201410280387.0A)
Authority
CN
China
Prior art keywords
target
dictionary
tracking
motion
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410280387.0A
Other languages
Chinese (zh)
Other versions
CN104091350A (en)
Inventor
徐向民 (Xu Xiangmin)
张南海 (Zhang Nanhai)
郭锴凌 (Guo Kailing)
钟岳宏 (Zhong Yuehong)
陈永彬 (Chen Yongbin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201410280387.0A priority Critical patent/CN104091350B/en
Publication of CN104091350A publication Critical patent/CN104091350A/en
Application granted granted Critical
Publication of CN104091350B publication Critical patent/CN104091350B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an object tracking method that exploits motion blur information. First, features of the tracking target are extracted and blurred target images are constructed with a blur kernel function. Next, a dictionary is built from image patches of both the clear target and the blurred targets. The tracking target is then represented over this dictionary by sparse representation, and motion information is extracted from the coefficients. A particle filter locates the target, with the particle sampling distribution tied to the extracted motion information and occlusion handled through the block coefficients. Finally, the dictionary is updated from the newly tracked target by combining sparse coding with incremental learning. The constructed dictionary copes with the feature degradation caused by motion blur, which is otherwise hard to handle, and the motion information improves both the accuracy and the running speed of the tracking algorithm.

Description

An object tracking method using motion blur information
Technical field
The present invention relates to the field of computer vision, and more particularly to an object tracking method using motion blur information.
Background technology
In computer vision, tracking is an active research topic. The performance of a tracking algorithm is affected by many factors, and current work mainly addresses the following: image noise, complex motion, non-rigid deformation of the object, partial or complete occlusion, background clutter, illumination variation, and real-time requirements. Many applications assume that the object does not move abruptly, that it moves at a constant speed, and that the video contains no motion blur. In practice, however, motion blur is unavoidable; it arises when the object moves too fast, when the exposure time is too short, or when the camera itself moves, and it is very common in object tracking problems.
Motion blur has a strong impact on object tracking: its direct consequence is image degradation. A tracking algorithm must solve two key problems, target modeling and target localization. Target modeling mainly consists of extracting features of the tracking target, and image quality directly affects feature extraction. To build a robust tracking system, handling motion blur is therefore essential. Existing treatments of motion blur, however, belong mainly to image processing, namely image deblurring methods; no special handling of motion blur has been done from the tracking perspective.
Summary of the invention
It is an object of the present invention to overcome the shortcomings and deficiencies of the prior art by providing an object tracking method that uses motion blur information.
The object of the present invention is achieved through the following technical scheme:
An object tracking method using motion blur information, comprising the following steps in order:
S1. Extract the gray-level features of the tracking target, and construct blurred target images of different blur scales and blur directions with a blur kernel function;
S2. Partition the clear target and the blurred targets into patches, dividing each image into several patches that may overlap; each patch serves as one entry of the dictionary, yielding a sparse coding dictionary;
S3. Represent the tracking target over the constructed sparse coding dictionary by sparse representation; sum the entry coefficients over each direction and scale, and extract the motion information, including motion direction and motion magnitude, from the resulting coefficients;
S4. Process the sparse representation coefficients again: sum them by block position, take the diagonal coefficients, and locate the tracking target with a particle filter algorithm;
S5. Update the dictionary from the newly tracked target by combining sparse coding with incremental learning;
S6. Repeat steps S3 to S5 until tracking ends.
Step S1 specifically comprises the following steps in order:
A. The initial frame is assumed to be a clear image, and the gray-level features of the tracking target are extracted from it;
B. A simulated blurred image is obtained by convolving the clear image with a point spread function (PSF); the convolution is
I(x) = Is(x) * h(x; v)
where Is(x) is the clear image, I(x) is the computer-simulated motion-blurred image, h(x; v) is the blur kernel, and v is a two-dimensional vector describing the motion direction and motion amplitude;
C. Taking v as a parameter, different values of v yield different blurred images. Eight directions are taken for v, namely π/4, π/2, ..., 2π, and the amplitude, whose value is related to the degree of blur, takes n levels; this finally yields 8n motion-blurred images of different directions and different motion amplitudes.
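As an illustration of step S1, the sketch below builds the 8-direction, n-level blurred-image bank in Python. The patent gives no code and names a Gaussian kernel as the PSF; here a simple discrete linear-motion blur (shift-and-average along the direction, with wrap-around borders) stands in for the kernel, and the names `motion_blur` and `blur_bank` are our own.

```python
import numpy as np

def motion_blur(img, angle, length):
    """Blur `img` by averaging shifted copies of it along `angle`,
    a simple discrete stand-in for convolution with a linear-motion PSF."""
    out = np.zeros_like(img, dtype=float)
    dx, dy = np.cos(angle), np.sin(angle)
    for t in range(length):
        sx, sy = int(round(t * dx)), int(round(t * dy))
        out += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return out / length

def blur_bank(img, n_levels):
    """8 directions (pi/4, pi/2, ..., 2*pi) x n_levels amplitudes
    -> 8 * n_levels blurred images, as described in step S1."""
    angles = [np.pi / 4 * i for i in range(1, 9)]
    return [motion_blur(img, a, l)
            for a in angles for l in range(2, 2 + n_levels)]
```

The amplitude-to-length mapping (lengths 2, 3, ...) is an assumption; the patent only says the amplitude takes n levels related to the degree of blur.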
In step S2, the sparse coding dictionary is constructed as follows:
A. Expand the clear-image templates: extract the tracking target features from the initial frame and apply translation and rotation transforms to them, obtaining several clear-image templates;
B. Partition the image of each template into several patches that may overlap. For an image of width 32, one way to split is to take columns 1–16 as the first patch, 8–24 as the second, and 16–32 as the third, dividing the width into 3 parts; the height is handled in the same way, so a 32 × 32 image yields 3 × 3 patches;
C. Extract the gray-level features of each patch as one dictionary entry, which completes the dictionary construction; the resulting dictionary can be written as
T = [t_{1,1,1}, ..., t_{1,1,k}, t_{1,2,1}, ..., t_{1,j,k}, t_{2,1,1}, ..., t_{i,j,k}]
where i indexes the motion direction, j the motion level, and k the patch into which each template is divided;
D. Normalize the dictionary.
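The patch scheme and dictionary normalization of step S2 can be sketched as follows, assuming 16-pixel patches with stride 8 (the 1–16 / 8–24 / 16–32 windows described above); `extract_patches` and `build_dictionary` are illustrative names, not from the patent.

```python
import numpy as np

def extract_patches(img, patch=16, stride=8):
    """Split a square image into overlapping patches; a 32x32 image with
    16-pixel patches and stride 8 yields 3 x 3 = 9 patches."""
    h, w = img.shape
    out = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            out.append(img[y:y + patch, x:x + patch].ravel())
    return out

def build_dictionary(templates, patch=16, stride=8):
    """Stack every patch of every (clear or blurred) template as one
    dictionary column, then L2-normalize the columns (step S2-D)."""
    cols = [p for t in templates for p in extract_patches(t, patch, stride)]
    T = np.stack(cols, axis=1).astype(float)
    return T / (np.linalg.norm(T, axis=0, keepdims=True) + 1e-12)
```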
In step S3, when the tracking target is motion-blurred the image degrades. To represent the degraded tracking target, sparse coding based on a subspace representation is used to represent the target and to extract the motion information, as follows:
A. The sparse representation model is
min_c ||y − Tc||_F^2 + λ||c||_1
where T is the sparse coding dictionary, c the sparse coding coefficients, and y the block-partitioned tracking target features; the model uses the Frobenius norm and is solved with the Lasso algorithm. With the dictionary designed above, the coefficient c is an (i × j × k) × k matrix;
B. Extract the motion information. Sparse coding generalizes independent component analysis: in the constructed sparse coding dictionary T, each entry of the blurred templates represents a different motion direction and motion amplitude, so the corresponding coefficients c give the weight along each entry. Summing over each direction gives the direction weights θ, and summing over each amplitude gives the motion amplitude weights l.
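A minimal sketch of step S3: the patent solves the model with Lasso; here a plain ISTA (iterative soft-thresholding) loop stands in for the solver, and the coefficient summation over directions and amplitudes follows the description above. The direction-major coefficient layout and all function names are our assumptions.

```python
import numpy as np

def ista(T, y, lam=0.1, n_iter=200):
    """Solve min_c 0.5*||y - T c||^2 + lam*||c||_1 by iterative
    soft-thresholding, a simple stand-in for the Lasso solver."""
    L = np.linalg.norm(T, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(T.shape[1])
    for _ in range(n_iter):
        g = c - T.T @ (T @ c - y) / L      # gradient step
        c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return c

def motion_weights(c, n_dirs, n_levels, n_blocks):
    """Sum |coefficients| over the (direction, level, block) grid to get
    the per-direction weights theta and per-amplitude weights l."""
    a = np.abs(c).reshape(n_dirs, n_levels, n_blocks)
    theta = a.sum(axis=(1, 2))
    l = a.sum(axis=(0, 2))
    return theta / theta.sum(), l / l.sum()
```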
In step S4, the particle filter comprises an observation model and a prediction model, as follows:
A. Particles are drawn from a proposal distribution q into which the motion information is integrated; the concrete model of the proposal distribution is
q(x_t | x_{1:t-1}, y_{1:t}) = p(x_t | x_{t-1}) + p(x_t | x_{t-1}, x_{t-2}) + Σ_i θ_i q_i(x_t | x_{t-1}, y_{t-1})
where p(x_t | x_{t-1}) is a first-order Markov transition, a Gaussian with mean x_{t-1}; p(x_t | x_{t-1}, x_{t-2}) is a second-order Markov transition, a Gaussian with mean x_{t-1} + u_{t-1}, u_{t-1} being the velocity difference of the two previous frames; and q_i(x_t | x_{t-1}, y_{t-1}) is the distribution along the i-th motion direction, also Gaussian, whose mean x_{t-1} + v_{t-1} depends on the extracted motion information, v_{t-1} being a motion vector that is a function of the parameter θ_i and the amplitude l;
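The mixed proposal above can be sketched as sampling from a mixture of Gaussians: one first-order component, one second-order component, and one component per motion direction weighted by θ_i. The equal mixture weights for the two Markov components and the shared σ are assumptions; the patent does not specify them.

```python
import numpy as np

def sample_particles(rng, n, x_prev, x_prev2, theta, v_dirs, sigma=2.0):
    """Draw n particle states from the mixed proposal: a first-order
    Gaussian around x_{t-1}, a second-order Gaussian around
    x_{t-1} + u_{t-1}, and one Gaussian per motion direction centred at
    x_{t-1} + v_i, chosen with weight theta_i."""
    means = [x_prev, x_prev + (x_prev - x_prev2)]   # 1st- and 2nd-order terms
    means += [x_prev + v for v in v_dirs]           # motion-driven terms
    w = np.concatenate([[1.0, 1.0], np.asarray(theta, float)])
    w /= w.sum()
    idx = rng.choice(len(means), size=n, p=w)       # pick a mixture component
    return np.array([rng.normal(means[i], sigma) for i in idx])
```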
B. From the state represented by each particle, extract a candidate tracking target, partition it into blocks, represent it sparsely, and measure the similarity between each particle's state and the tracking target. The similarity measure uses the block information; concretely,
p_{k1,k2} = C Σ_{i,j} c_{i,j,k1,k2}
where C is a normalizing constant and p_{k1,k2} is the weight with which block k1 represents block k2, so the resulting p is a k × k matrix. In theory the coefficient with which block k represents itself should be the largest, so the diagonal elements of p are taken: the larger the diagonal values, the closer the candidate is to the tracking target. Here c_{i,j,k1,k2} is the sparse representation coefficient with which the dictionary entry of patch k1, motion direction i, and motion level j reconstructs patch k2 of the target;
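The block-wise similarity score can be sketched directly from the coefficient tensor c[i, j, k1, k2]: collapse the direction and level axes, normalize, and keep the diagonal, as described above. Taking absolute values and scoring by the trace are our simplifications, not prescribed by the patent.

```python
import numpy as np

def block_similarity(c):
    """Collapse the (direction, level) axes of c[i, j, k1, k2] into a
    k x k block-weight matrix p, normalize it, and score the candidate
    by its diagonal: block k should mostly be rebuilt by dictionary
    entries of the same block k."""
    p = np.abs(c).sum(axis=(0, 1))   # k x k block-weight matrix
    p = p / p.sum()                  # normalizing constant C
    return np.trace(p)               # larger diagonal -> closer candidate
```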
C. Build the likelihood function model from the reconstruction error of the sparse representation;
D. Resample the particles using the likelihood function;
E. Repeat the above steps until tracking ends.
Step S5 specifically comprises the following steps in order:
A. Apply PCA to the clear templates and sort the resulting eigenvectors by eigenvalue; the eigenvectors with the largest eigenvalues form the basis U;
B. Using the basis U, represent the stored tracking results Y of the last few frames by sparse representation (the tracking results are not block-partitioned here); the sparse representation model is
min_{z,s} ||y − Uz − s||_2^2 + λ||s||_1
where z are the coefficients and s is the noise, which follows a Laplacian distribution;
The tracking result is then corrected with the sparse representation; the correction is
y_corrected = Uz + μ
where μ is the mean vector computed by the PCA analysis;
C. Using the corrected tracking results Ynew, retrain with an incremental PCA learning method to obtain a new basis U;
D. Represent the current tracking result y with the newly obtained basis U to correct it:
y_corrected = U U^T (y − μ) + μ
where μ is the mean vector computed by the PCA analysis; the corrected tracking result is used to update the clear templates;
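The subspace correction of step D can be sketched as a projection onto the PCA basis: y_hat = U U^T (y − μ) + μ, under the assumption that U has orthonormal columns, consistent with the description above.

```python
import numpy as np

def correct_with_subspace(y, U, mu):
    """Project the raw tracking result onto the clear-template PCA
    subspace, discarding the component outside the subspace (the
    sparse noise): y_hat = U U^T (y - mu) + mu."""
    return U @ (U.T @ (y - mu)) + mu
```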
E. Following the principle that old dictionary entries are updated slowly and new entries quickly, generate an accumulated probability sequence, draw a random number between 0 and 1, and let it determine which clear template to replace;
F. After the clear templates are updated, partition them into patches and replace the clear part of the sparse coding dictionary T; normalize the new dictionary to obtain the updated dictionary.
In step S1, the PSF is a Gaussian kernel function.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. Sparse representation solves the image degradation caused by motion blur, so the system can robustly track objects undergoing motion blur.
2. The block-based feature extraction lets the tracking system robustly handle occlusion during tracking.
3. The extracted motion information is effectively integrated into the particle filter algorithm, making the tracking algorithm more efficient.
4. The dictionary update method effectively copes with pose, illumination, and similar changes during tracking, and unifies the dictionary updates for occlusion and for motion blur in a single framework, reducing the tracking drift that dictionary updates can introduce.
Brief description of the drawings
Fig. 1 is a flow chart of an object tracking method using motion blur information according to the present invention.
Embodiment
The present invention is described in further detail below with reference to the embodiments and the accompanying drawing, but the embodiments of the present invention are not limited thereto.
As shown in Fig. 1, an object tracking method using motion blur information comprises the following steps in order:
S1. Extract the gray-level features of the tracking target, and construct blurred target images of different blur scales and blur directions with a blur kernel function; specifically:
A. The initial frame is assumed to be a clear image, and the gray-level features of the tracking target are extracted from it;
B. A simulated blurred image is obtained by convolving the clear image with a point spread function (PSF); here the PSF is a Gaussian kernel function, which simulates the motion-blurred image. The convolution is
I(x) = Is(x) * h(x; v)
where Is(x) is the clear image, I(x) is the computer-simulated motion-blurred image, h(x; v) is the blur kernel, and v is a two-dimensional vector describing the motion direction and motion amplitude;
C. Taking v as a parameter, different values of v yield different blurred images. Eight directions are taken for v, namely π/4, π/2, ..., 2π, and the amplitude, whose value is related to the degree of blur, takes n levels; this finally yields 8n motion-blurred images of different directions and different motion amplitudes;
S2. Partition the clear target and the blurred targets into patches, dividing each image into several patches that may overlap; each patch serves as one entry of the dictionary, yielding a sparse coding dictionary. The construction steps are:
A. Expand the clear-image templates: extract the tracking target features from the initial frame and apply translation and rotation transforms to them, obtaining several clear-image templates;
B. Partition the image of each template into several patches that may overlap. For an image of width 32, one way to split is to take columns 1–16 as the first patch, 8–24 as the second, and 16–32 as the third, dividing the width into 3 parts; the height is handled in the same way, so a 32 × 32 image yields 3 × 3 patches;
C. Extract the gray-level features of each patch as one dictionary entry, which completes the dictionary construction; the resulting dictionary can be written as
T = [t_{1,1,1}, ..., t_{1,1,k}, t_{1,2,1}, ..., t_{1,j,k}, t_{2,1,1}, ..., t_{i,j,k}]
where i indexes the motion direction, j the motion level, and k the patch into which each template is divided;
D. Normalize the dictionary;
S3. Represent the tracking target over the constructed sparse coding dictionary by sparse representation; sum the entry coefficients over each direction and scale, and extract the motion information, including motion direction and motion magnitude, from the resulting coefficients.
When the tracking target is motion-blurred the image degrades; to represent the degraded tracking target, sparse coding based on a subspace representation is used, as follows:
A. The sparse representation model is
min_c ||y − Tc||_F^2 + λ||c||_1
where T is the sparse coding dictionary and c the sparse coding coefficients; the model is solved with the Lasso (least absolute shrinkage and selection operator) algorithm. y is the block-partitioned tracking target features, and the model uses the Frobenius norm; with the dictionary designed above, the coefficient c is an (i × j × k) × k matrix;
B. Extract the motion information. Sparse coding generalizes independent component analysis: in the constructed sparse coding dictionary T, each entry of the blurred templates represents a different motion direction and motion amplitude, so the corresponding coefficients c give the weight along each entry. Summing over each direction gives the direction weights θ, and summing over each amplitude gives the motion amplitude weights l;
S4. Process the sparse representation coefficients again: sum them by block position, take the diagonal coefficients, and locate the tracking target with the particle filter algorithm.
The particle filter comprises an observation model and a prediction model, as follows:
A. Particles are drawn from a proposal distribution q into which the motion information is integrated; the concrete model of the proposal distribution is
q(x_t | x_{1:t-1}, y_{1:t}) = p(x_t | x_{t-1}) + p(x_t | x_{t-1}, x_{t-2}) + Σ_i θ_i q_i(x_t | x_{t-1}, y_{t-1})
where p(x_t | x_{t-1}) is a first-order Markov transition, a Gaussian with mean x_{t-1}; p(x_t | x_{t-1}, x_{t-2}) is a second-order Markov transition, a Gaussian with mean x_{t-1} + u_{t-1}, u_{t-1} being the velocity difference of the two previous frames; and q_i(x_t | x_{t-1}, y_{t-1}) is the distribution along the i-th motion direction, also Gaussian, whose mean x_{t-1} + v_{t-1} depends on the extracted motion information, v_{t-1} being a motion vector that is a function of the parameter θ_i and the amplitude l;
B. From the state represented by each particle, extract a candidate tracking target, partition it into blocks, represent it sparsely, and measure the similarity between each particle's state and the tracking target. The similarity measure uses the block information: p_{k1,k2} = C Σ_{i,j} c_{i,j,k1,k2}, where C is a normalizing constant and p_{k1,k2} is the weight with which block k1 represents block k2, so the resulting p is a k × k matrix. In theory the coefficient with which block k represents itself should be the largest, so the diagonal elements of p are taken: the larger the diagonal values, the closer the candidate is to the tracking target. c_{i,j,k1,k2} is the sparse representation coefficient with which the dictionary entry of patch k1, motion direction i, and motion level j reconstructs patch k2 of the target;
C. Build the likelihood function model from the reconstruction error of the sparse representation;
D. Resample the particles using the likelihood function;
E. Repeat the above steps until tracking ends;
S5. Update the dictionary from the newly tracked target by combining sparse coding with incremental learning, specifically comprising the following steps in order:
A. Apply PCA to the clear templates and sort the resulting eigenvectors by eigenvalue; the eigenvectors with the largest eigenvalues form the basis U;
B. Using the basis U, represent the stored tracking results Y of the last few frames by sparse representation (the tracking results are not block-partitioned here); the sparse representation model is
min_{z,s} ||y − Uz − s||_2^2 + λ||s||_1
where z are the coefficients and s is the noise, which follows a Laplacian distribution. The tracking result is then corrected with the sparse representation: y_corrected = Uz + μ, where μ is the mean vector computed by the PCA analysis;
C. Using the corrected tracking results Ynew, retrain with an incremental PCA learning method to obtain a new basis U;
D. Represent the current tracking result y with the newly obtained basis U to correct it: y_corrected = U U^T (y − μ) + μ, where μ is the mean vector computed by the PCA analysis; the corrected tracking result is used to update the clear templates;
E. Following the principle that old dictionary entries are updated slowly and new entries quickly, generate an accumulated probability sequence, draw a random number between 0 and 1, and let it determine which clear template to replace;
F. After the clear templates are updated, partition them into patches and replace the clear part of the sparse coding dictionary T; normalize the new dictionary to obtain the updated dictionary;
S6. Repeat steps S3 to S5 until tracking ends.
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited by it; any change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and is included within the protection scope of the present invention.

Claims (3)

1. An object tracking method using motion blur information, characterized by comprising the following steps in order:
S1. Extract the gray-level features of the tracking target, and construct blurred target images of different blur scales and blur directions with a blur kernel function;
S2. Partition the clear target and the blurred targets into patches, dividing each image into several patches that may overlap; each patch serves as one entry of the dictionary, yielding a sparse coding dictionary;
S3. Represent the tracking target over the constructed sparse coding dictionary by sparse representation; sum the entry coefficients over each direction and scale, and extract the motion information, including motion direction and motion magnitude, from the resulting coefficients;
S4. Process the sparse representation coefficients again: sum them by block position, take the diagonal coefficients, and locate the tracking target with a particle filter algorithm;
S5. Update the dictionary from the newly tracked target by combining sparse coding with incremental learning;
S6. Repeat steps S3 to S5 until tracking ends.
2. The object tracking method using motion blur information according to claim 1, characterized in that in step S2 the sparse coding dictionary is constructed as follows:
A. Expand the clear-image templates: extract the tracking target features from the initial frame and apply translation and rotation transforms to them, obtaining several clear-image templates;
B. Partition the image of each template into several patches that may overlap: for an image of width 32, take columns 1–16 as the first patch, 8–24 as the second, and 16–32 as the third, dividing the width into 3 parts; the height is handled in the same way, so a 32 × 32 image yields 3 × 3 patches;
C. Extract the gray-level features of each patch as one dictionary entry, which completes the dictionary construction; the resulting dictionary is written as
T = [t_{1,1,1}, ..., t_{1,1,k}, t_{1,2,1}, ..., t_{1,j,k}, t_{2,1,1}, ..., t_{i,j,k}]
where i indexes the motion direction, j the motion level, and k the patch into which each template is divided;
D. Normalize the dictionary.
3. The object tracking method using motion blur information according to claim 1, characterized in that in step S4 the particle filter comprises an observation model and a prediction model, and the design of the proposal distribution is a major factor in the performance of the tracking algorithm; the specific steps are:
A. Particles are drawn from a proposal distribution q into which the motion information is integrated; the concrete model of the proposal distribution is
q(x_t | x_{1:t-1}, y_{1:t}) = p(x_t | x_{t-1}) + p(x_t | x_{t-1}, x_{t-2}) + Σ_i θ_i q_i(x_t | x_{t-1}, y_{t-1})
where p(x_t | x_{t-1}) is a first-order Markov transition, a Gaussian with mean x_{t-1}; p(x_t | x_{t-1}, x_{t-2}) is a second-order Markov transition, a Gaussian with mean x_{t-1} + u_{t-1}, u_{t-1} being the velocity difference of the two previous frames; and q_i(x_t | x_{t-1}, y_{t-1}) is the distribution along the i-th motion direction, also Gaussian, whose mean x_{t-1} + v_{t-1} depends on the extracted motion information, v_{t-1} being a motion vector that is a function of the parameter θ_i and the amplitude l;
B. From the state represented by each particle, extract a candidate tracking target, partition it into blocks, represent it sparsely, and measure the similarity between each particle's state and the tracking target. The similarity measure uses the block information: p_{k1,k2} = C Σ_{i,j} c_{i,j,k1,k2}, where C is a normalizing constant and p_{k1,k2} is the weight with which block k1 represents block k2, so the resulting p is a k × k matrix; in theory the coefficient with which block k represents itself should be the largest, so the diagonal elements of p are taken, and the larger the diagonal values, the closer the candidate is to the tracking target. c_{i,j,k1,k2} is the sparse representation coefficient with which the dictionary entry of patch k1, motion direction i, and motion level j reconstructs patch k2 of the target;
C. Build the likelihood function model from the reconstruction error of the sparse representation;
D. Resample the particles using the likelihood function;
E. Repeat steps A to D until tracking ends.
CN201410280387.0A 2014-06-20 2014-06-20 An object tracking method using motion blur information Active CN104091350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410280387.0A CN104091350B (en) 2014-06-20 2014-06-20 An object tracking method using motion blur information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410280387.0A CN104091350B (en) 2014-06-20 2014-06-20 An object tracking method using motion blur information

Publications (2)

Publication Number Publication Date
CN104091350A CN104091350A (en) 2014-10-08
CN104091350B true CN104091350B (en) 2017-08-25

Family

ID=51639065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410280387.0A Active CN104091350B (en) 2014-06-20 2014-06-20 An object tracking method using motion blur information

Country Status (1)

Country Link
CN (1) CN104091350B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751484B * 2015-03-20 2017-08-25 西安理工大学 A moving object detection method and a detection system implementing the method
CN104751493A (en) * 2015-04-21 2015-07-01 南京信息工程大学 Sparse tracking method on basis of gradient texture features
CN107481269B (en) * 2017-08-08 2020-07-03 西安科技大学 Multi-camera moving object continuous tracking method for mine
CN108520497B (en) * 2018-03-15 2020-08-04 华中科技大学 Image restoration and matching integrated method based on distance weighted sparse expression prior
CN110033001A * 2019-04-17 2019-07-19 华夏天信(北京)智能低碳技术研究院有限公司 Coal-pile detection method for mine conveyor belts based on sparse dictionary learning
CN110991276A (en) * 2019-11-20 2020-04-10 湖南检信智能科技有限公司 Face motion blur judgment method based on convolutional neural network
CN111178409B (en) * 2019-12-19 2021-11-16 浙大网新系统工程有限公司 Image matching and recognition system based on big data matrix stability analysis
KR102337445B1 * 2021-05-14 2021-12-09 이정환 System for providing golf putting simulation service using virtual mesh form

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489201A (en) * 2013-09-11 2014-01-01 华南理工大学 Method for tracking target based on motion blur information
CN103729860A (en) * 2013-12-31 2014-04-16 华为软件技术有限公司 Image target tracking method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489201A (en) * 2013-09-11 2014-01-01 华南理工大学 Method for tracking target based on motion blur information
CN103729860A (en) * 2013-12-31 2014-04-16 华为软件技术有限公司 Image target tracking method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Blurred Target Tracking by Blur-driven Tracker"; Yi Wu et al.; 2011 International Conference on Computer Vision; 2011-11-06; pp. 1100-1107 *
"Robust Object Tracking via Sparsity-based Collaborative Model"; Wei Zhong et al.; Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on; 2012-06-16; pp. 1838-1845 *
"Object Tracking Method Based on Sparse Representation"; Zhong Wei; China Masters' Theses Full-text Database (Electronic Journal), Information Science and Technology; 2013-08-15 (No. 8); pp. I138-629 *

Also Published As

Publication number Publication date
CN104091350A (en) 2014-10-08

Similar Documents

Publication Publication Date Title
CN104091350B (en) An object tracking method using motion blur information
CN109636905B (en) Environment semantic mapping method based on deep convolutional neural network
CN110008915B (en) System and method for estimating dense human body posture based on mask-RCNN
CN110572696B (en) Variational self-encoder and video generation method combining generation countermeasure network
CN105488815B (en) A real-time object tracking method supporting target scale changes
CN106384094A (en) Chinese word stock automatic generation method based on writing style modeling
CN108416266B (en) Method for rapidly identifying video behaviors by extracting moving object through optical flow
CN107368845A (en) A Faster R-CNN object detection method based on optimized candidate regions
CN109035172B (en) Non-local mean ultrasonic image denoising method based on deep learning
CN108564025A (en) A kind of infrared image object identification method based on deformable convolutional neural networks
CN102509333B (en) Action-capture-data-driving-based two-dimensional cartoon expression animation production method
CN105550678A (en) Human motion feature extraction method based on globally salient edge regions
CN106022363B (en) A kind of Chinese text recognition methods suitable under natural scene
CN106780546B (en) Recognition method for motion-blurred coded points based on convolutional neural networks
CN110570481A (en) Automatic calligraphy font library repair method and system based on style transfer
CN106709964B (en) Sketch generation method and device based on gradient correction and multidirectional texture extraction
CN104008538A (en) Super-resolution method based on single image
CN112651316B (en) Two-dimensional and three-dimensional multi-person attitude estimation system and method
CN107967695A (en) A moving object detection method based on deep optical flow and morphological methods
CN107229920B (en) Behavior identification method based on integration depth typical time warping and related correction
CN107730536B (en) High-speed correlation filtering object tracking method based on depth features
CN106127688A (en) A super-resolution image reconstruction method and system
CN106204658A (en) Moving image tracking method and device
CN108182694B (en) Motion estimation and self-adaptive video reconstruction method based on interpolation
CN104751493A (en) Sparse tracking method on basis of gradient texture features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant