CN103530894A - Video target tracking method based on multi-scale block sparse representation and system thereof - Google Patents

Video target tracking method based on multi-scale block sparse representation and system thereof

Info

Publication number
CN103530894A
Authority
CN
China
Prior art keywords
target
image
target image
video
multi-scale
Prior art date
Legal status
Granted
Application number
CN201310513554.7A
Other languages
Chinese (zh)
Other versions
CN103530894B (en)
Inventor
檀结庆
谢成军
何蕾
阿里
霍星
刘奎
白天
姚焱刚
Current Assignee
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN201310513554.7A priority Critical patent/CN103530894B/en
Publication of CN103530894A publication Critical patent/CN103530894A/en
Application granted granted Critical
Publication of CN103530894B publication Critical patent/CN103530894B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a video target tracking method based on multi-scale block sparse representation and a system thereof. Compared with the prior art, the invention overcomes the shortcoming that conventional target tracking methods often fail to achieve good results. The method comprises the following steps: initializing a target feature template; tracking the target image; updating the target appearance feature template; and checking whether the video has been fully read. With this method, target tracking precision and efficiency in complex scenes are improved, and the applicability of target tracking technology to various scenes is extended.

Description

Video target tracking method based on multi-scale block sparse representation and system thereof
Technical field
The present invention relates to the field of intelligent video technology, and specifically to a video target tracking method based on multi-scale block sparse representation and a system thereof.
Background technology
Target tracking is an important research direction in the field of computer vision and plays a vital role in many practical applications, such as video surveillance, video scene understanding, and interactive video processing. Researchers have proposed a variety of tracking algorithms and achieved some success in different application scenarios. However, in conventional generative methods based on local sparse representation models, the choice of local image block size directly affects the tracking result. When the tracked target persists over a long video sequence, occlusion, changes in the target's appearance and shape, and variations in scene illumination all occur, so target tracking often fails to achieve good results, and the technical level of target tracking remains limited. How to develop a target tracking method and system whose appearance representation model is more effective and robust during video tracking has become an urgent technical problem.
Summary of the invention
The object of the present invention is to overcome the defect that target tracking methods in the prior art fail to achieve good results, by providing a video target tracking method based on multi-scale block sparse representation and a system thereof to solve the above problems.
To achieve this object, the technical scheme of the present invention is as follows:
A video target tracking method based on multi-scale block sparse representation comprises the following steps:
initializing a target feature template: initializing the target position in the video and constructing a target appearance feature template from the first frame of the video;
tracking the target image: reading the next frame of the video, constructing the appearance features of the next frame, and comparing their similarity with the first-frame appearance feature template, the candidate with the maximum similarity giving the position of the target in the next frame;
updating the target appearance feature template: the candidate appearance feature with the maximum similarity in the previous step is used to update the target appearance feature template for similarity comparison with the appearance features of the next frame;
checking whether the video has been fully read: if so, the video target tracking is complete; if not, the target image tracking operation continues.
The initialization of the target feature template comprises the following steps:
initializing the target image position in the first frame of the video, i.e. locating the target image in the first frame;
constructing a target image data dictionary from the first-frame target image;
performing multi-scale processing on the first-frame target image, computing the sparse coefficients of the target image at each scale with the constructed target image data dictionary, and representing these sparse coefficients as the target appearance feature template.
The tracking of the target image comprises the following steps:
reading the next frame of the video;
selecting candidate target images in the next frame by particle filtering and building a multi-scale block data dictionary;
performing multi-scale processing on each candidate target image in turn and computing the corresponding sparse coefficients;
computing the similarity between the sparse coefficients of each candidate target image and the target appearance feature template, the candidate target image with the maximum similarity giving the position of the target in the next frame.
The update of the target appearance feature template is defined as follows:
$T_{new}^s = \omega T_{first}^s + (1 - \omega) T_{temp}^s$
where ω is the update weight, set to 0.9 in this method; $T_{first}^s$ is the first-frame feature template, $T_{temp}^s$ is the feature template of the best candidate in the current frame, and $T_{new}^s$ denotes the updated feature template at scale s.
The construction of the target image data dictionary comprises the following steps:
given the first frame image I of the video and the corresponding target image;
extracting K local image blocks from the target image region to obtain the set D = {d_i | i = 1:K}, where d_i is the i-th local target image block; the target image data dictionary is D.
The construction of the multi-scale block data dictionary comprises the following steps:
given the next frame image I+1 of the video and the corresponding candidate target images;
extracting K local image blocks from the candidate target image region, with block sizes of 3x3, 5x5, 7x7, 9x9 and 11x11, i.e. five scales in total;
obtaining the set $D^s = \{d_i^s \mid i = 1:K\}$, where $d_i^s$ is the i-th local target image block at scale s, s indexes the five scale blocks, and the multi-scale block data dictionary of the target image is $D^s$.
The computation of the target appearance feature template comprises the following steps:
Let $p_i^s$ denote the local image blocks extracted from the first-frame target image at different scales. With the multi-scale block data dictionary, each local image block $p_i^s$ has a corresponding sparse coefficient, computed as follows:
$\hat{a}_i^s = \arg\min \|\alpha_i^s\|_1 \ \text{subject to} \ \|p_i^s - D^s \alpha_i^s\|_2 < \varepsilon$
where $\hat{a}_i^s$ is the sparse coefficient of the corresponding local target image block $p_i^s$.
The local sparse coefficients are collected and expressed as the target appearance feature template, defined as follows:
$T^s = [\hat{a}_1^s, \hat{a}_2^s, \ldots, \hat{a}_K^s]^T$.
The similarity between the sparse coefficients of the candidate target image and the target appearance feature template is computed as follows:
Let $sim(T_r, T_q)$ denote the similarity between the first-frame target image feature template and the next-frame candidate target image feature template; it is defined as follows:
$sim(T_r, T_q) = \sum_{s=1}^{m} \lambda_s \, \rho(T_r^s, T_q^s)$
where $\lambda_s$ is the similarity weight coefficient at scale s and ρ is the Bhattacharyya distance between the two target images, a smaller distance indicating higher similarity; it is defined as follows:
$\rho(T_r^s, T_q^s) = \sum_{j=1}^{K} T_r^s(j) \cdot T_q^s(j)$.
A system for the video target tracking method based on multi-scale block sparse representation comprises:
an initialization input module, for setting the initial position parameters of the target to be tracked in the video, starting the video tracking system, and tracking the video target in real time;
a target image multi-scale data dictionary construction module, for computing and constructing the multi-scale dictionary of the initialized target image;
a target appearance feature template computation module, for computing the sparse coefficients of the target image at multiple scales and taking the sparse coefficients as the target feature template;
a candidate target image similarity module, for computing the similarity between the target image feature template and the next-frame candidate target image feature templates;
a target feature template update module, for updating the first-frame feature template so as to adapt to changes in target appearance and the like in the video;
The initialization input module is connected to the target image multi-scale data dictionary construction module, the target image multi-scale data dictionary construction module is connected to the target appearance feature template computation module, the target appearance feature template computation module is connected to the target feature template update module through the candidate target image similarity module, and the target feature template update module is connected back to the target image multi-scale data dictionary construction module.
Beneficial effect
Compared with the prior art, the video target tracking method based on multi-scale block sparse representation and the system thereof of the present invention improve the precision and efficiency of target tracking in complex scenes and extend the applicability of target tracking technology to various scenes. Using the theory of multi-scale sparse representation of video images, the target in the video is tracked automatically through a series of steps including image multi-scale processing, multi-scale dictionary construction, multi-scale sparse representation, similarity computation and target localization, and the target appearance feature model is updated automatically according to the tracking result, so as to adapt to target tracking tasks in complex video scenes.
Brief description of the drawings
Fig. 1 is a flowchart of the video target tracking method of the present invention.
Fig. 2 is a connection diagram of the video target tracking system of the present invention.
Embodiments
To provide a better understanding of the structural features of the present invention and the effects achieved, preferred embodiments and accompanying drawings are used below for a detailed description, as follows:
As shown in Fig. 1, a video target tracking method based on multi-scale block sparse representation of the present invention comprises the following steps:
Step 1: initialize the target feature template, i.e. initialize the target position in the video and construct the target appearance feature template from the first frame of the video. Initializing the target feature template means finding the target position in the first frame of the video and performing the initialization operations. It comprises the following steps:
(11) Initialize the target image position in the first frame, i.e. locate the target image in the first frame of the video. In practical use, if a person's head is to be tracked, the person's head is the target image; determining the position of the head determines the target image position, and this target image is what must be tracked in the following frames.
(12) Construct the target image data dictionary from the first-frame target image. Since the first frame is given, the target image within it is also determined; however, the target image is still only an image region, so it must be partitioned with windows of various sizes, and this set of partitions is called the target image data dictionary.
The target image data dictionary is constructed by the following steps (a sketch follows the list):
(121) Given the first frame image I of the video and the corresponding target image;
(122) Extract K local image blocks from the target image region to obtain the set D = {d_i | i = 1:K}, where d_i is the i-th local target image block; the target image data dictionary is D.
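For concreteness, the following sketch (Python with NumPy, not part of the patent text) shows one plausible way to realize steps (121)-(122): K local blocks are sampled from the first-frame target region, vectorized, and stacked as the columns of the dictionary D. The block size, the random sampling of block positions, and the per-atom normalization are illustrative assumptions.

```python
import numpy as np

def build_target_dictionary(frame, target_box, K=50, block=7, rng=None):
    """Sample K local blocks from the first-frame target region and stack
    their vectorized, L2-normalized pixels as the columns of dictionary D.
    frame: grayscale image (2-D array); target_box: (x, y, w, h)."""
    rng = rng or np.random.default_rng(0)
    x, y, w, h = target_box
    region = frame[y:y + h, x:x + w].astype(np.float64)
    atoms = []
    for _ in range(K):
        px = rng.integers(0, w - block + 1)   # random top-left corner of a block
        py = rng.integers(0, h - block + 1)
        d_i = region[py:py + block, px:px + block].ravel()
        atoms.append(d_i / (np.linalg.norm(d_i) + 1e-12))
    return np.stack(atoms, axis=1)            # D has shape (block*block, K)
```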
(13) Perform multi-scale processing on the first-frame target image, compute the sparse coefficients of the target image at each scale with the constructed target image data dictionary, and represent these sparse coefficients as the target appearance feature template. Multi-scale processing refines the target image further and helps ensure the accuracy of target tracking; the sparse coefficients are then computed with the target image data dictionary, and these sparse coefficients serve as the target appearance feature template used for comparison, so that the tracked target image can be located accurately in the next frame. The appearance feature template of the target image is computed as follows:
(131) Let $p_i^s$ denote the local image blocks extracted from the first-frame target image at different scales. With the multi-scale block data dictionary, each local image block $p_i^s$ has a corresponding sparse coefficient, computed as follows:
$\hat{a}_i^s = \arg\min \|\alpha_i^s\|_1 \ \text{subject to} \ \|p_i^s - D^s \alpha_i^s\|_2 < \varepsilon$
where $\hat{a}_i^s$ is the sparse coefficient of the corresponding local target image block $p_i^s$;
(132) The local sparse coefficients are collected and expressed as the target appearance feature template, defined as follows:
$T^s = [\hat{a}_1^s, \hat{a}_2^s, \ldots, \hat{a}_K^s]^T$.
This completes the initialization of the target feature template: the target image has been located in the first frame and its template collected, providing the basis for the comparisons in subsequent frames. A sparse-coding sketch is given below.
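The constrained L1 minimization above can be approximated in practice with an L1-penalized least-squares (Lasso) solver. The sketch below is illustrative only: the penalty weight, the added non-negativity constraint, and the use of the Lasso in place of the exact ε-constrained formulation are assumptions, not the patent's prescribed solver.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_template(patches_by_scale, dicts_by_scale, lam=0.01):
    """Compute the multi-scale appearance template {T^s}.
    patches_by_scale[s]: (K, dim_s) array of vectorized local blocks p_i^s;
    dicts_by_scale[s]:   (dim_s, n_atoms) dictionary D^s.
    The constrained L1 problem is approximated here by the Lasso
    (min ||p - D^s a||^2 + lam*||a||_1); non-negative coefficients are an
    added assumption that keeps the later Bhattacharyya-style comparison
    well behaved."""
    templates = {}
    for s, patches in patches_by_scale.items():
        D_s = dicts_by_scale[s]
        coder = Lasso(alpha=lam, positive=True, fit_intercept=False, max_iter=2000)
        coeffs = []
        for p in patches:
            coder.fit(D_s, p)                 # sparse-code one block against D^s
            coeffs.append(coder.coef_.copy())
        templates[s] = np.stack(coeffs)       # T^s = [a_1^s, ..., a_K^s]^T
    return templates
```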
Step 2: track the target image. Read the next frame of the video, construct the appearance features of the next frame, and compare their similarity with the first-frame appearance feature template; the candidate with the maximum similarity gives the position of the target in the next frame. In this step the target image is determined in the next frame, i.e. the position of the tracked target is found in the next frame image. Tracking the target image comprises the following steps:
(21) Read the next frame of the video, in which the target image position is to be found so as to achieve target tracking.
(22) Select candidate target images in the next frame by particle filtering and build the multi-scale block data dictionary. The next frame likewise has to be partitioned with windows of various sizes; the multi-scale block data dictionary is built as follows (a sketch follows the list):
(221) Given the next frame image I+1 of the video and the corresponding candidate target images, determine the image to be partitioned.
(222) Extract K local image blocks from the candidate target image region, with block sizes of 3x3, 5x5, 7x7, 9x9 and 11x11, i.e. five scales in total; typically there are 300-500 local image blocks.
(223) Obtain the set $D^s = \{d_i^s \mid i = 1:K\}$, where $d_i^s$ is the i-th local target image block at scale s, s indexes the five scale blocks, and the multi-scale block data dictionary of the target image is $D^s$.
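A sketch of steps (221)-(223) follows (illustrative only; the number of blocks per scale and the random block positions are assumptions). Local blocks are collected at each of the five named sizes and stacked into one dictionary per scale; the same sampling is reused to extract the blocks $p_i^s$ that are later sparse-coded.

```python
import numpy as np

SCALES = (3, 5, 7, 9, 11)   # the five block sizes named in the patent

def build_multiscale_dictionary(frame, box, K=300, rng=None):
    """Build D^s for each of the five scales from the given image region.
    Returns {scale: (scale*scale, K) array}; assumes the region is at least
    11x11 pixels so that the largest blocks fit."""
    rng = rng or np.random.default_rng(0)
    x, y, w, h = box
    region = frame[y:y + h, x:x + w].astype(np.float64)
    dicts = {}
    for s in SCALES:
        atoms = []
        for _ in range(K):
            px = rng.integers(0, w - s + 1)
            py = rng.integers(0, h - s + 1)
            d = region[py:py + s, px:px + s].ravel()
            atoms.append(d / (np.linalg.norm(d) + 1e-12))
        dicts[s] = np.stack(atoms, axis=1)
    return dicts

def extract_patches_by_scale(frame, box, K=50, rng=None):
    """Extract K vectorized local blocks p_i^s per scale, as rows of a
    (K, scale*scale) array; for this sketch the blocks are sampled in the
    same way as the dictionary atoms."""
    d = build_multiscale_dictionary(frame, box, K=K, rng=rng)
    return {s: d[s].T for s in d}
```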
(23) Perform multi-scale processing on each candidate target image in turn and compute the corresponding sparse coefficients. After the multi-scale block data dictionary of the target image has been obtained, each candidate target image is processed at multiple scales, refining it further, and its sparse coefficients are then computed with the target image data dictionary. The sparse coefficients are computed in the same way as in step (13), giving the sparse coefficients of the multi-scale processing.
(24) Compute the similarity between the sparse coefficients of each candidate target image and the target appearance feature template; the candidate target image with the maximum similarity gives the position of the target in the next frame. The sparse coefficients of the candidate target images are compared with the target appearance feature template: there are several candidate target images, each with its sparse coefficients, and each is compared with the target appearance feature template in turn, so as to find the sparse coefficients with the maximum similarity and thereby determine the position of the target in the next frame. The similarity between the sparse coefficients of a candidate target image and the target appearance feature template is computed as follows:
Let $sim(T_r, T_q)$ denote the similarity between the first-frame target image feature template and the next-frame candidate target image feature template; it is defined as follows:
$sim(T_r, T_q) = \sum_{s=1}^{m} \lambda_s \, \rho(T_r^s, T_q^s)$
where $\lambda_s$ is the similarity weight coefficient at scale s and ρ is the Bhattacharyya distance between the two target images, a smaller distance indicating higher similarity; it is defined as follows:
$\rho(T_r^s, T_q^s) = \sum_{j=1}^{K} T_r^s(j) \cdot T_q^s(j)$.
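The weighted multi-scale similarity translates directly into code. The sketch below assumes equal per-scale weights λ_s when none are supplied and uses the element-wise product sum written above as the per-scale score ρ.

```python
import numpy as np

def template_similarity(T_r, T_q, weights=None):
    """sim(T_r, T_q) = sum_s lambda_s * rho(T_r^s, T_q^s), with
    rho(T_r^s, T_q^s) = sum_j T_r^s(j) * T_q^s(j) as written above.
    T_r, T_q: dicts mapping scale -> template arrays of identical shape."""
    scales = sorted(T_r)
    if weights is None:
        weights = {s: 1.0 / len(scales) for s in scales}   # assumed equal weights
    return sum(weights[s] * float(np.sum(T_r[s] * T_q[s])) for s in scales)
```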
Step 3: update the target appearance feature template. The candidate appearance feature with the maximum similarity in the previous step is used to update the target appearance feature template for similarity comparison with the appearance features of the next frame. Since the target image has been located in the previous step, the target image in the current frame serves as the target appearance feature template for the comparison in the next frame. The target appearance feature template is updated as follows:
$T_{new}^s = \omega T_{first}^s + (1 - \omega) T_{temp}^s$
where ω is the update weight, set to 0.9 in this method; $T_{first}^s$ is the first-frame feature template, $T_{temp}^s$ is the feature template of the best candidate in the current frame, and $T_{new}^s$ denotes the updated feature template at scale s.
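The template update is a single convex combination per scale; a minimal sketch follows, with ω = 0.9 as stated above.

```python
def update_template(T_first, T_temp, omega=0.9):
    """T_new^s = omega * T_first^s + (1 - omega) * T_temp^s for every scale s.
    T_first: first-frame template; T_temp: template of the best candidate
    found in the current frame."""
    return {s: omega * T_first[s] + (1.0 - omega) * T_temp[s] for s in T_first}
```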
Step 4: check whether the video has been fully read. If it has, the video target tracking is complete; if not, continue the target image tracking operation. That is, judge whether the tracking of the current video is finished: if the current video has ended, all tracking processes are complete; if the current video has not ended and further frames exist, return to Step 2 and continue the target image tracking operation until the video ends and the tracking process finishes.
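Tying the four steps together, the following sketch reads a video with OpenCV and tracks the target frame by frame, reusing the helper sketches above (build_multiscale_dictionary, extract_patches_by_scale, sparse_template, template_similarity, update_template). It is illustrative only: the Gaussian perturbation of the previous box is a crude stand-in for the particle filter of step (22), and the dictionaries are built once from the first frame for simplicity, whereas the method above also builds a multi-scale block dictionary per frame.

```python
import numpy as np
import cv2

def track_video(path, init_box, n_candidates=100, sigma=6.0):
    """Minimal tracking loop: build the template from the first frame, then for
    each subsequent frame sample candidate boxes, score them against the
    template, keep the best one, and update the template (Steps 1-4)."""
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    if not ok:
        raise IOError("cannot read the first frame")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    dicts = build_multiscale_dictionary(gray, init_box)
    T_first = sparse_template(extract_patches_by_scale(gray, init_box), dicts)
    T_cur, box = T_first, init_box
    rng = np.random.default_rng(0)

    while True:
        ok, frame = cap.read()
        if not ok:                       # Step 4: the video has been fully read
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        H, W = gray.shape
        x, y, w, h = box
        best = (-np.inf, box, T_cur)
        for _ in range(n_candidates):    # crude stand-in for particle filtering
            cx = int(np.clip(x + rng.normal(0, sigma), 0, W - w))
            cy = int(np.clip(y + rng.normal(0, sigma), 0, H - h))
            cand = (cx, cy, w, h)
            T_q = sparse_template(extract_patches_by_scale(gray, cand), dicts)
            s = template_similarity(T_cur, T_q)
            if s > best[0]:
                best = (s, cand, T_q)
        _, box, T_best = best
        T_cur = update_template(T_first, T_best)   # Step 3
        yield box                                  # tracked position in this frame
    cap.release()
```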
As shown in Fig. 2, the system of the video target tracking method based on multi-scale block sparse representation of the present invention comprises: an initialization input module, for setting the initial position parameters of the target to be tracked in the video, starting the video tracking system, and tracking the video target in real time; a target image multi-scale data dictionary construction module, for computing and constructing the multi-scale dictionary of the initialized target image; a target appearance feature template computation module, for computing the sparse coefficients of the target image at multiple scales and taking the sparse coefficients as the target feature template; a candidate target image similarity module, for computing the similarity between the target image feature template and the next-frame candidate target image feature templates; and a target feature template update module, for updating the first-frame feature template so as to adapt to changes in target appearance and the like in the video.
The initialization input module is connected to the target image multi-scale data dictionary construction module, passing the initialized data to it for the construction of the multi-scale data dictionary. The target image multi-scale data dictionary construction module is connected to the target appearance feature template computation module, which computes the target appearance feature template. The target appearance feature template computation module is connected to the target feature template update module through the candidate target image similarity module: the data of the target appearance feature template computation module are passed to the candidate target image similarity module for similarity comparison, and once the tracked target image has been determined they are passed on to the target feature template update module, which sets the tracked target image as the target feature template for comparison with subsequent frames. The target feature template update module is then connected back to the target image multi-scale data dictionary construction module, passing the data back so that a new round of multi-scale data dictionary construction and comparison is carried out.
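As a sketch of how the module wiring of Fig. 2 might look in code (class and method names are assumptions, reusing the helper sketches above):

```python
class MultiScaleSparseTracker:
    """Illustrative wiring of the five modules: initialization input ->
    multi-scale dictionary construction -> appearance template computation
    -> candidate similarity -> template update -> back to dictionary
    construction / comparison for the next frame."""

    def __init__(self, init_box):
        self.box = init_box              # initialization input module
        self.dicts = None                # multi-scale data dictionary construction module
        self.first_template = None       # appearance feature template computation module
        self.template = None

    def initialize(self, first_frame):
        self.dicts = build_multiscale_dictionary(first_frame, self.box)
        self.first_template = sparse_template(
            extract_patches_by_scale(first_frame, self.box), self.dicts)
        self.template = self.first_template

    def step(self, frame, candidate_boxes):
        # candidate target image similarity module
        scored = []
        for cand in candidate_boxes:
            T_q = sparse_template(extract_patches_by_scale(frame, cand), self.dicts)
            scored.append((template_similarity(self.template, T_q), cand, T_q))
        _, self.box, T_best = max(scored, key=lambda t: t[0])
        # target feature template update module; control then loops back to
        # dictionary construction for the next frame
        self.template = update_template(self.first_template, T_best)
        return self.box
```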
The basic principles, principal features and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited to the above embodiments; the above embodiments and description merely illustrate the principles of the present invention, and various changes and improvements can be made without departing from the spirit and scope of the present invention, all of which fall within the claimed scope. The scope of protection claimed by the present invention is defined by the appended claims and their equivalents.

Claims (9)

1. A video target tracking method based on multi-scale block sparse representation, characterized by comprising the following steps:
11) initializing a target feature template: initializing the target position in the video and constructing a target appearance feature template from the first frame of the video;
12) tracking the target image: reading the next frame of the video, constructing the appearance features of the next frame, and comparing their similarity with the first-frame appearance feature template, the candidate with the maximum similarity giving the position of the target in the next frame;
13) updating the target appearance feature template: the candidate appearance feature with the maximum similarity in the previous step is used to update the target appearance feature template for similarity comparison with the appearance features of the next frame;
14) checking whether the video has been fully read: if so, the video target tracking is complete; if not, continuing the target image tracking operation.
2. The video target tracking method based on multi-scale block sparse representation according to claim 1, characterized in that initializing the target feature template comprises the following steps:
21) initializing the target image position in the first frame of the video, i.e. locating the target image in the first frame;
22) constructing a target image data dictionary from the first-frame target image;
23) performing multi-scale processing on the first-frame target image, computing the sparse coefficients of the target image at each scale with the constructed target image data dictionary, and representing these sparse coefficients as the target appearance feature template.
3. The video target tracking method based on multi-scale block sparse representation according to claim 1, characterized in that tracking the target image comprises the following steps:
31) reading the next frame of the video;
32) selecting candidate target images in the next frame by particle filtering and building a multi-scale block data dictionary;
33) performing multi-scale processing on each candidate target image in turn and computing the corresponding sparse coefficients;
34) computing the similarity between the sparse coefficients of each candidate target image and the target appearance feature template, the candidate target image with the maximum similarity giving the position of the target in the next frame.
4. The video target tracking method based on multi-scale block sparse representation according to claim 1, characterized in that the update of the target appearance feature template is defined as follows:
$T_{new}^s = \omega T_{first}^s + (1 - \omega) T_{temp}^s$
where ω is the update weight, set to 0.9 in this method, and $T_{new}^s$ denotes the updated feature template at scale s.
5. The video target tracking method based on multi-scale block sparse representation according to claim 2, characterized in that constructing the target image data dictionary comprises the following steps:
51) given the first frame image I of the video and the corresponding target image;
52) extracting K local image blocks from the target image region to obtain the set D = {d_i | i = 1:K}, where d_i is the i-th local target image block, the target image data dictionary being D.
6. The video target tracking method based on multi-scale block sparse representation according to claim 3, characterized in that constructing the multi-scale block data dictionary comprises the following steps:
61) given the next frame image I+1 of the video and the corresponding candidate target images;
62) extracting K local image blocks from the candidate target image region, with block sizes of 3x3, 5x5, 7x7, 9x9 and 11x11 respectively, i.e. five scales in total;
63) obtaining the set $D^s = \{d_i^s \mid i = 1:K\}$, where $d_i^s$ is the i-th local target image block at scale s, s indexes the five scale blocks, and the multi-scale block data dictionary of the target image is $D^s$.
7. The video target tracking method based on multi-scale block sparse representation according to claim 2, characterized in that the computation of the target appearance feature template comprises the following steps:
71) letting $p_i^s$ denote the local image blocks extracted from the first-frame target image at different scales; with the multi-scale block data dictionary, each local image block $p_i^s$ has a corresponding sparse coefficient, computed as follows:
$\hat{a}_i^s = \arg\min \|\alpha_i^s\|_1 \ \text{subject to} \ \|p_i^s - D^s \alpha_i^s\|_2 < \varepsilon$
where $\hat{a}_i^s$ is the sparse coefficient of the corresponding local target image block $p_i^s$;
72) collecting the local sparse coefficients and expressing them as the target appearance feature template, defined as follows:
$T^s = [\hat{a}_1^s, \hat{a}_2^s, \ldots, \hat{a}_K^s]^T$.
8. The video target tracking method based on multi-scale block sparse representation according to claim 3, characterized in that the similarity between the sparse coefficients of the candidate target image and the target appearance feature template is computed as follows:
letting $sim(T_r, T_q)$ denote the similarity between the first-frame target image feature template and the next-frame candidate target image feature template, it is defined as follows:
$sim(T_r, T_q) = \sum_{s=1}^{m} \lambda_s \, \rho(T_r^s, T_q^s)$
where $\lambda_s$ is the similarity weight coefficient at scale s and ρ is the Bhattacharyya distance between the two target images, a smaller distance indicating higher similarity, defined as follows:
$\rho(T_r^s, T_q^s) = \sum_{j=1}^{K} T_r^s(j) \cdot T_q^s(j)$.
9. A system for the video target tracking method based on multi-scale block sparse representation, characterized by comprising:
an initialization input module, for setting the initial position parameters of the target to be tracked in the video, starting the video tracking system, and tracking the video target in real time;
a target image multi-scale data dictionary construction module, for computing and constructing the multi-scale dictionary of the initialized target image;
a target appearance feature template computation module, for computing the sparse coefficients of the target image at multiple scales and taking the sparse coefficients as the target feature template;
a candidate target image similarity module, for computing the similarity between the target image feature template and the next-frame candidate target image feature templates;
a target feature template update module, for updating the first-frame feature template so as to adapt to changes in target appearance and the like in the video;
wherein the initialization input module is connected to the target image multi-scale data dictionary construction module, the target image multi-scale data dictionary construction module is connected to the target appearance feature template computation module, the target appearance feature template computation module is connected to the target feature template update module through the candidate target image similarity module, and the target feature template update module is connected back to the target image multi-scale data dictionary construction module.
CN201310513554.7A 2013-10-25 2013-10-25 Video target tracking method based on multi-scale block sparse representation and system thereof Active CN103530894B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310513554.7A CN103530894B (en) 2013-10-25 2013-10-25 Video target tracking method based on multi-scale block sparse representation and system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310513554.7A CN103530894B (en) 2013-10-25 2013-10-25 Video target tracking method based on multi-scale block sparse representation and system thereof

Publications (2)

Publication Number Publication Date
CN103530894A true CN103530894A (en) 2014-01-22
CN103530894B CN103530894B (en) 2016-04-20

Family

ID=49932872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310513554.7A Active CN103530894B (en) Video target tracking method based on multi-scale block sparse representation and system thereof

Country Status (1)

Country Link
CN (1) CN103530894B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537694A (en) * 2015-01-09 2015-04-22 温州大学 Online learning offline video tracking method based on key frames
CN104599275A (en) * 2015-01-27 2015-05-06 浙江大学 Understanding method of non-parametric RGB-D scene based on probabilistic graphical model
CN104616324A (en) * 2015-03-06 2015-05-13 厦门大学 Target tracking method based on adaptive appearance model and point-set distance metric learning
CN104951482A (en) * 2014-03-31 2015-09-30 炬芯(珠海)科技有限公司 Method and device for operating Sparse-format mirror image document
CN105590328A (en) * 2015-12-07 2016-05-18 天津大学 Sparsely represented selective appearance model-based frame-adaptive target tracking algorithm
WO2017045116A1 (en) * 2015-09-15 2017-03-23 SZ DJI Technology Co., Ltd. System and method for supporting smooth target following
CN107527028A (en) * 2017-08-18 2017-12-29 深圳乐普智能医疗器械有限公司 Target cell recognition methods, device and terminal
CN110148117A (en) * 2019-04-22 2019-08-20 南方电网科学研究院有限责任公司 Power equipment defect identification method and device based on power image and storage medium
CN111274966A (en) * 2020-01-20 2020-06-12 临沂大学 Long-term visual tracking method and device based on structured model
US10860040B2 (en) 2015-10-30 2020-12-08 SZ DJI Technology Co., Ltd. Systems and methods for UAV path planning and control
CN112116634A (en) * 2020-07-30 2020-12-22 西安交通大学 Multi-target tracking method of semi-online machine
WO2021139484A1 (en) * 2020-01-06 2021-07-15 上海商汤临港智能科技有限公司 Target tracking method and apparatus, electronic device, and storage medium
CN116385497A (en) * 2023-05-29 2023-07-04 成都与睿创新科技有限公司 Custom target tracking method and system for body cavity

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324030A (en) * 2011-09-09 2012-01-18 广州灵视信息科技有限公司 Target tracking method and system based on image block characteristics
CN103295242A (en) * 2013-06-18 2013-09-11 南京信息工程大学 Multi-feature united sparse represented target tracking method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324030A (en) * 2011-09-09 2012-01-18 广州灵视信息科技有限公司 Target tracking method and system based on image block characteristics
CN103295242A (en) * 2013-06-18 2013-09-11 南京信息工程大学 Multi-feature united sparse represented target tracking method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHENGJUN XIE ET AL: "Adaptive bandwidth object tracking using sparse approximation", 《COMPUTER SCIENCE AND AUTOMATION ENGINEERING》 *
TIANXIANG BAI ET AL: "Robust visual tracking with structured sparse representation appearance model", 《PATTERN RECOGNITION》 *
XU JIA ET AL: "Visual tracking via adaptive structural local sparse appearance model", 《COMPUTER VISION AND PATTERN RECOGNITION》 *
XUE MEI ET AL: "Robust visual tracking and vehicle classification via sparse representation", 《PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104951482A (en) * 2014-03-31 2015-09-30 炬芯(珠海)科技有限公司 Method and device for operating Sparse-format mirror image document
CN104951482B (en) * 2014-03-31 2018-09-25 炬芯(珠海)科技有限公司 A kind of method and device of the image file of operation Sparse formats
CN104537694A (en) * 2015-01-09 2015-04-22 温州大学 Online learning offline video tracking method based on key frames
CN104537694B (en) * 2015-01-09 2017-05-10 温州大学 Online learning offline video tracking method based on key frames
CN104599275B (en) * 2015-01-27 2018-06-12 浙江大学 The RGB-D scene understanding methods of imparametrization based on probability graph model
CN104599275A (en) * 2015-01-27 2015-05-06 浙江大学 Understanding method of non-parametric RGB-D scene based on probabilistic graphical model
CN104616324A (en) * 2015-03-06 2015-05-13 厦门大学 Target tracking method based on adaptive appearance model and point-set distance metric learning
CN104616324B (en) * 2015-03-06 2017-07-28 厦门大学 Method for tracking target based on adaptive apparent model and point set learning distance metric
US10129478B2 (en) 2015-09-15 2018-11-13 SZ DJI Technology Co., Ltd. System and method for supporting smooth target following
US10928838B2 (en) 2015-09-15 2021-02-23 SZ DJI Technology Co., Ltd. Method and device of determining position of target, tracking device and tracking system
WO2017045116A1 (en) * 2015-09-15 2017-03-23 SZ DJI Technology Co., Ltd. System and method for supporting smooth target following
US11635775B2 (en) 2015-09-15 2023-04-25 SZ DJI Technology Co., Ltd. Systems and methods for UAV interactive instructions and control
US10976753B2 (en) 2015-09-15 2021-04-13 SZ DJI Technology Co., Ltd. System and method for supporting smooth target following
US10860040B2 (en) 2015-10-30 2020-12-08 SZ DJI Technology Co., Ltd. Systems and methods for UAV path planning and control
CN105590328A (en) * 2015-12-07 2016-05-18 天津大学 Sparsely represented selective appearance model-based frame-adaptive target tracking algorithm
CN105590328B (en) * 2015-12-07 2018-04-03 天津大学 Frame adaptive target tracking algorism based on rarefaction representation selectivity display model
CN107527028A (en) * 2017-08-18 2017-12-29 深圳乐普智能医疗器械有限公司 Target cell recognition methods, device and terminal
CN107527028B (en) * 2017-08-18 2020-03-24 深圳乐普智能医疗器械有限公司 Target cell identification method and device and terminal
CN110148117B (en) * 2019-04-22 2021-07-20 南方电网科学研究院有限责任公司 Power equipment defect identification method and device based on power image and storage medium
CN110148117A (en) * 2019-04-22 2019-08-20 南方电网科学研究院有限责任公司 Power equipment defect identification method and device based on power image and storage medium
WO2021139484A1 (en) * 2020-01-06 2021-07-15 上海商汤临港智能科技有限公司 Target tracking method and apparatus, electronic device, and storage medium
CN111274966A (en) * 2020-01-20 2020-06-12 临沂大学 Long-term visual tracking method and device based on structured model
CN111274966B (en) * 2020-01-20 2022-06-03 临沂大学 Long-term visual tracking method and device based on structured model
CN112116634A (en) * 2020-07-30 2020-12-22 西安交通大学 Multi-target tracking method of semi-online machine
CN112116634B (en) * 2020-07-30 2024-05-07 西安交通大学 Multi-target tracking method of semi-online machine
CN116385497A (en) * 2023-05-29 2023-07-04 成都与睿创新科技有限公司 Custom target tracking method and system for body cavity
CN116385497B (en) * 2023-05-29 2023-08-22 成都与睿创新科技有限公司 Custom target tracking method and system for body cavity

Also Published As

Publication number Publication date
CN103530894B (en) 2016-04-20

Similar Documents

Publication Publication Date Title
CN103530894B (en) Video target tracking method based on multi-scale block sparse representation and system thereof
CN103544483A (en) United target tracking method based on local sparse representation and system thereof
US11410435B2 (en) Ground mark extraction method, model training METHOD, device and storage medium
CN106529394B (en) A kind of indoor scene object identifies simultaneously and modeling method
AU2020103716A4 (en) Training method and device of automatic identification device of pointer instrument with numbers in natural scene
CN103400109B (en) A kind of cartographical sketching identified off-line and shaping methods
CN108280852B (en) Door and window point cloud shape detection method and system based on laser point cloud data
CN105956560A (en) Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN110838145B (en) Visual positioning and mapping method for indoor dynamic scene
CN102169581A (en) Feature vector-based fast and high-precision robustness matching method
CN106340010A (en) Corner detection method based on second-order contour difference
CN109919955A (en) The tunnel axis of ground formula laser radar point cloud extracts and dividing method
CN102833492A (en) Color similarity-based video scene segmenting method
CN111275821A (en) Power line fitting method, system and terminal
CN114092906B (en) Lane line segmentation fitting method, system, electronic equipment and storage medium
CN112529018A (en) Training method and device for local features of image and storage medium
CN104021372A (en) Face recognition method and device thereof
CN114549394B (en) Tumor focus region semantic segmentation method and system based on deep learning
CN105654479A (en) Multispectral image registering method and multispectral image registering device
CN103049766A (en) Ultrasonic image renal artery blood flow spectrum signal curve classification method
CN103208003A (en) Geometric graphic feature point-based method for establishing shape descriptor
CN115588178A (en) Method for automatically extracting high-precision map elements
CN104199742A (en) Method for accurately dividing blade cross section character point cloud
CN112991451B (en) Image recognition method, related device and computer program product
CN108154521A (en) A kind of moving target detecting method based on object block fusion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant