CN105654505A - Collaborative tracking algorithm based on super-pixel and system thereof - Google Patents


Publication number
CN105654505A
Authority
CN
China
Prior art keywords
pixel
generation model
super
area
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510971312.1A
Other languages
Chinese (zh)
Other versions
CN105654505B (en
Inventor
纪庆革
袁大龙
韩非凡
杜景洪
印鉴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUANGZHOU INFINITE WISDOM ASPECT INFORMATION TECHNOLOGY Co Ltd
Sun Yat Sen University
Guangzhou Zhongda Nansha Technology Innovation Industrial Park Co Ltd
Original Assignee
GUANGZHOU INFINITE WISDOM ASPECT INFORMATION TECHNOLOGY Co Ltd
Sun Yat Sen University
Guangzhou Zhongda Nansha Technology Innovation Industrial Park Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GUANGZHOU INFINITE WISDOM ASPECT INFORMATION TECHNOLOGY Co Ltd, Sun Yat Sen University, Guangzhou Zhongda Nansha Technology Innovation Industrial Park Co Ltd filed Critical GUANGZHOU INFINITE WISDOM ASPECT INFORMATION TECHNOLOGY Co Ltd
Priority to CN201510971312.1A priority Critical patent/CN105654505B/en
Publication of CN105654505A publication Critical patent/CN105654505A/en
Application granted granted Critical
Publication of CN105654505B publication Critical patent/CN105654505B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a superpixel-based collaborative tracking algorithm and a system thereof. The method decides whether a candidate image contains the target region by combining a global judgment with local judgments, which addresses the problem of tracking under occlusion of the target region. An update strategy is also introduced so that the method adapts to the various appearance changes of the target region during tracking, greatly improving accuracy and applicability.

Description

A superpixel-based collaborative tracking algorithm and system
Technical field
The present invention relates to the field of target tracking in computer vision, and more specifically to a superpixel-based collaborative tracking algorithm and system.
Background technology
With the development and spread of computers, people increasingly expect machines to have perception and recognition capabilities like those of humans, and one direction of this effort is a human-like visual perception system. In computer vision, a computer processes input image information to simulate the human eye's perception and recognition of visual information, completing tasks such as target recognition and tracking. With growing computing power and the spread of cameras, massive amounts of video image data are captured every day and the volume keeps increasing, so the demand for automatic processing of visual information grows day by day.
Target tracking detects a pre-selected target of interest in a sequence of images and follows it frame by frame. By the number of tracked targets, tracking algorithms can be divided into single-target and multi-target algorithms; by the number of cameras used, into single-camera and multi-camera tracking. The present invention addresses the single-camera single-target tracking problem. Target tracking is itself an applied technique in computer vision and at the same time the basis of other higher-level applications. Typical uses of target tracking include human-computer interaction, security surveillance, vehicle detection, and intelligent robot navigation. Target tracking is, however, a complex process, and the field still faces many challenges, such as partial occlusion during tracking, appearance change, illumination change, fast motion, reappearance of the target after it leaves the field of view, and background clutter.
Summary of the invention
To overcome the above defects of the prior art, the present invention provides a superpixel-based collaborative tracking algorithm that handles common difficulties in target tracking, such as occlusion and appearance change, with good stability and robustness.
To achieve the above object of the invention, the adopted technical scheme is as follows:
A collaborative tracking algorithm based on superpixel segmentation, for solving the single-camera single-target tracking problem, comprising the following steps:
One, the training stage
S1. Build a global discriminative model that extracts the Haar-like features of the target region, constructs the global classifier GC from the extracted Haar-like features, and determines the parameters of GC;
S2. Partition the target region into N sub-regions using a sharding method based on overlapping sliding windows, then build N local discriminative models; each local model extracts Haar-like features from its sub-region, constructs a local classifier from the extracted features, and determines the parameters of that classifier;
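The overlapping sliding-window sharding of step S2 can be sketched as follows. This is a minimal illustration, not the patent's implementation; the window size and stride are assumed parameters:

```python
def shard_region(width, height, win_w, win_h, stride):
    """Split a width x height target region into overlapping windows.

    Returns a list of (x, y, win_w, win_h) sub-regions; consecutive
    windows overlap whenever stride < win_w (or win_h).
    """
    shards = []
    for y in range(0, height - win_h + 1, stride):
        for x in range(0, width - win_w + 1, stride):
            shards.append((x, y, win_w, win_h))
    return shards

# A 4x4 region with 2x2 windows and stride 1 yields N = 9 overlapping sub-regions.
subregions = shard_region(4, 4, 2, 2, 1)
```

Each returned tuple would then be handed to its own local discriminative model for feature extraction.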
S3. Build an adaptive generative model and determine its model parameters, as follows:
Perform superpixel segmentation on the target region, extract the feature vector of each superpixel, then cluster all superpixels of the target region with the K-means algorithm, thereby determining the parameters of the adaptive generative model;
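The clustering in step S3 can be illustrated with a tiny K-means loop. A minimal pure-Python sketch, assuming 2-D feature vectors and Euclidean distance (the text does not fix the feature dimension or distance metric):

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster feature vectors with Lloyd's algorithm.

    Returns (centres, labels); labels[i] is the cluster index of points[i].
    """
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centre for each point.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centres[c]))
        # Update step: each centre moves to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centres[c] = tuple(sum(v) / len(members) for v in zip(*members))
    return centres, labels

# Two well-separated groups of "superpixel features" split into k = 2 clusters.
feats = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9)]
centres, labels = kmeans(feats, 2)
```

In the patent's setting, each point would be the feature vector of one superpixel of the target region, and the resulting clusters define the parameters of the adaptive generative model.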
Two, tracking phase
S4. Input the candidate patch p_i to the global discriminative model, which extracts the Haar-like features of p_i; the global classifier GC then classifies those features, with GC(p_i) denoting the classification result for p_i;
S5. Divide the candidate patch p_i into N sub-regions by the method of step S2; the N local discriminative models extract Haar-like features from the N sub-regions respectively, and the N local classifiers classify those features, with LC_j(p_i) denoting the result of the j-th local classifier on its sub-region;
S6. Combine the classification results of the global classification model and the local classification models to judge whether the candidate patch contains the target region:
thr_GC and thr_LC denote the two thresholds for global classification and local classification respectively; y(p_i) = 1 indicates that the candidate patch p_i contains the target region;
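The joint decision of step S6 can be sketched as below. The exact combination rule is not spelled out in this text, so the sketch assumes an OR-style rule consistent with the occlusion discussion (a local classifier on an unoccluded sub-region can rescue a failed global judgment); the threshold values are placeholders:

```python
def judge_candidate(gc_score, lc_scores, thr_gc, thr_lc):
    """Return y = 1 if the candidate patch is judged to contain the target.

    gc_score  : output of the global classifier GC for the patch.
    lc_scores : outputs LC_j of the N local classifiers.
    Assumed rule: the global score clears thr_gc, or at least one local
    classifier (an unoccluded sub-region) clears thr_lc.
    """
    if gc_score >= thr_gc:
        return 1
    if any(score >= thr_lc for score in lc_scores):
        return 1
    return 0

# Partially occluded target: global score fails, one local classifier still fires.
y = judge_candidate(gc_score=0.4, lc_scores=[0.1, 0.9, 0.2], thr_gc=0.6, thr_lc=0.7)
```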
S7. Apply the operations of steps S4–S6 to all candidate patches to judge whether each contains the target region, then input all patches judged to contain the target region to the adaptive generative model;
S8. For each candidate patch, the adaptive generative model performs superpixel segmentation, extracts the feature vector of each superpixel, clusters the feature vectors of all superpixels with the K-means algorithm, and computes the clustering confidence of the candidate patch; the patch with the highest confidence is then output as the tracking result, together with the confidence conf_T of the current tracking result and the matched area area_T of the target region, where A_i is the area of each superpixel and N is the number of superpixels contained in the candidate patch,
The formula above indicates that when a superpixel is close to its cluster centre in feature space, close to the cluster's template superpixels in relative position within the target region, and belongs to a cluster with high target/background confidence, this patent considers such a superpixel to describe the appearance of the current target more fully and to have strong discriminative power. Here g'_i denotes a superpixel contained in the candidate patch, k'_i the cluster to which the superpixel belongs, S'_i the distance of the superpixel to its cluster, conf_cls the target/background confidence of each cluster, R'_j the cluster radius, conf'_i the confidence of g'_i, L_i the minimum, over the template superpixels of its cluster, of the spatial distance in the target region between g'_i and a template superpixel, and λ_s ∈ (0,1) the weighting factor controlling the spatial-distance weighting;
conf_cls = A_target / (A_target + A_background),
where A_target denotes the total number of pixels of all members of each cluster that belong to the target region, and A_background denotes the total number of pixels that belong to the background region;
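The cluster target/background confidence conf_cls is a simple pixel-count ratio; a direct transcription (the zero-total guard is an added safety assumption):

```python
def cluster_confidence(target_pixels, background_pixels):
    """conf_cls = A_target / (A_target + A_background).

    target_pixels     : pixels of the cluster's members inside the target region.
    background_pixels : pixels of the cluster's members in the background.
    """
    total = target_pixels + background_pixels
    if total == 0:
        return 0.0  # guard added for the degenerate empty-cluster case
    return target_pixels / total

# A cluster whose members cover 300 target pixels and 100 background pixels.
conf_cls = cluster_confidence(300, 100)
```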
area_T = (1/A) Σ_{j=1}^{M} F(g_j), where F(g_j) = A'_j if A'_j ≤ A_target^j, and F(g_j) = A_target^j if A'_j > A_target^j;
A'_j denotes the pixel count of each superpixel in the current tracking result, A_target^j denotes the number of target-region pixels contained in the corresponding superpixel's cluster, and M denotes the total number of superpixels;
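The matched-area score area_T clips each superpixel's pixel count A'_j at the target-pixel count A_target^j of its cluster and normalizes by the total area A; a sketch:

```python
def matched_area(pixel_counts, target_counts, total_area):
    """area_T = (1/A) * sum_j F(g_j), with F clipping A'_j at A_target^j."""
    s = 0
    for a_j, a_target_j in zip(pixel_counts, target_counts):
        s += min(a_j, a_target_j)  # F(g_j): overshoot is clipped
    return s / total_area

# Three superpixels; the second overshoots its cluster's target pixels and is clipped.
area_T = matched_area([50, 80, 30], [50, 60, 40], total_area=160)
```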
Three, detecting stage
S9. Build a template-library generative model, let it detect the target region in the current frame and return the confidence conf_D of the detection result, then estimate the current position of the target region from the outputs of the adaptive generative model and the template-library generative model:
1) When area_T ≥ thr_PL and conf_T ≥ thr_TH:
thr_TH and thr_PL denote the confidence threshold and the matched-area threshold respectively. The tracking result of the adaptive generative model now has both high confidence and a high matched area; the model is working normally and has adapted to the appearance of the target region, so its output is taken as the target position. The parameters of the global classifier GC, the local classifiers and the adaptive generative model are then updated according to the update strategy, based on area_T and conf_T;
2) When area_T < thr_PL and conf_T ≥ thr_TH:
The matched area of the adaptive generative model's tracking result is now low, but its confidence is still above the threshold, so the model's output is still taken as the target position; the parameters of GC, the local classifiers and the adaptive generative model are then updated according to the update strategy, based on area_T and conf_T;
3) When area_T ≥ thr_PL and conf_T < thr_TH:
The adaptive generative model's tracking result now has low confidence but a high matched area, so its output is still taken as the target position; the parameters of GC, the local classifiers and the adaptive generative model are then updated according to the update strategy, based on area_T and conf_T;
4) When area_T < thr_PL, conf_T < thr_TH and conf_D ≥ thr_DH:
thr_DH denotes the threshold on the detection-result confidence. Both the confidence and the matched area of the adaptive generative model's tracking result now fall below their preset thresholds, while the template-library generative model detects a target position with high confidence; the detection result of the template-library model is therefore output as the target position, and the global classifier GC, the local classifiers and the adaptive generative model are re-initialized.
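The four cases of step S9 collapse into a single decision function. A sketch with placeholder threshold values, representing the update/re-initialize actions as returned flags:

```python
def estimate_position(area_t, conf_t, conf_d, thr_pl, thr_th, thr_dh):
    """Choose the position source and follow-up action per step S9.

    Returns (source, action):
      source : "tracker" (adaptive generative model), "detector"
               (template-library generative model), or None.
      action : "update" the classifiers/model, "reinit" them, or None.
    """
    if conf_t >= thr_th or area_t >= thr_pl:
        # Cases 1)-3): at least one score clears its threshold, so the
        # tracker's output is trusted and the models are updated.
        return "tracker", "update"
    if conf_d >= thr_dh:
        # Case 4): tracker failed on both scores; fall back to the detector
        # and re-initialize GC, the local classifiers and the generative model.
        return "detector", "reinit"
    return None, None

assumed = dict(thr_pl=0.5, thr_th=0.6, thr_dh=0.7)  # placeholder thresholds
out = estimate_position(area_t=0.2, conf_t=0.3, conf_d=0.9, **assumed)
```

The text does not say what happens when all three scores fall below their thresholds; the `(None, None)` branch marks that unspecified case.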
Model updating is the key to letting the tracking algorithm adapt to appearance changes of the target. The discriminative models adopt an incremental update method similar to that in the Real-Time Compressive Tracking literature (not repeated here, as it is unrelated to this patent), while the generative model adopts an update method based on a sliding window. During tracking, every U frames one frame is added to the model and undergoes superpixel segmentation, feature extraction and clustering. To guarantee real-time performance, a window of fixed size is used: at each update, if the number of frames in the window exceeds the predetermined size, the frame with the least influence on the generative model is discarded according to a certain strategy.
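The sliding-window update of the generative model can be sketched as follows. The "least influence" criterion is not specified in the text, so this sketch takes an arbitrary per-frame influence score as a placeholder assumption:

```python
def update_model_window(window, new_frame, max_size, influence):
    """Add a frame to the generative model's fixed-size window.

    window    : list of frames currently kept by the model.
    influence : callable scoring each frame's influence on the model
                (placeholder; the patent's criterion is not specified here).
    If the window would exceed max_size, the least-influential frame is dropped.
    """
    window = window + [new_frame]
    if len(window) > max_size:
        window.remove(min(window, key=influence))
    return window

# Frames tagged with a toy influence score; frame "b" (lowest score) gets dropped.
frames = ["a", "b", "c"]
scores = {"a": 3, "b": 1, "c": 2, "d": 5}
frames = update_model_window(frames, "d", max_size=3, influence=scores.get)
```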
The invention also provides a system applying the described collaborative tracking algorithm, as follows: it comprises a tracking module, a detection module and a position-estimation module, wherein the tracking module comprises the global discriminative model, the local discriminative models and the adaptive generative model, the detection module comprises the template-library generative model, and the position-estimation module estimates the current position of the target region from the outputs of the adaptive generative model and the template-library generative model.
Compared with the prior art, the beneficial effects of the invention are as follows:
The superpixel-based collaborative tracking algorithm provided by the invention handles common difficulties in target tracking, such as occlusion and appearance change, and has good stability and robustness.
Accompanying drawing explanation
Fig. 1 is the framework diagram of the method.
Fig. 2 is the training schematic of the discriminative models.
Fig. 3 is the training schematic of the adaptive generative model.
Embodiment
The accompanying drawings are for illustration only and shall not be construed as limiting this patent.
The invention is further elaborated below with reference to the drawings and embodiments.
Embodiment 1
A collaborative tracking algorithm based on superpixel segmentation, for solving the single-camera single-target tracking problem, comprising the following steps:
One, the training stage
S1. Build a global discriminative model that extracts the Haar-like features of the target region, constructs the global classifier GC from the extracted global compressive Haar-like features, and determines the parameters of GC, as shown in Fig. 2;
S2. Partition the target region into N sub-regions using a sharding method based on overlapping sliding windows, then build N local discriminative models; each local model extracts Haar-like features from its sub-region, constructs a local classifier from the extracted local compressive Haar-like features, and determines the parameters of that classifier, as shown in Fig. 2;
S3. Build an adaptive generative model and determine its model parameters, as follows:
Perform superpixel segmentation on the target region, extract the feature vector of each superpixel, then cluster all superpixels of the target region with the K-means algorithm, thereby determining the parameters of the adaptive generative model, as shown in Fig. 3;
Two, tracking phase
S4. Input the candidate patch p_i to the global discriminative model, which extracts the Haar-like features of p_i; the global classifier GC then classifies the global compressive Haar-like features of p_i, with GC(p_i) denoting the classification result;
S5. Divide the candidate patch p_i into N sub-regions by the method of step S2; the N local discriminative models extract Haar-like features from the N sub-regions respectively, and the N local classifiers classify the local compressive Haar-like features of the N sub-regions, with LC_j(p_i) denoting the result of the j-th local classifier on its sub-region. When the target is occluded, the global discriminative model may fail to recognize the target region correctly, but among the N local discriminative models there are usually one or more local classifiers whose corresponding sub-regions are not occluded and which can still recognize the target region correctly.
S6. Combine the classification results of the global classification model and the local classification models to judge whether the candidate patch contains the target region:
thr_GC and thr_LC denote the two thresholds for global classification and local classification respectively; y(p_i) = 1 indicates that the candidate patch p_i contains the target region;
In this scheme, the global discriminative model cannot work normally when the target region is occluded; to avoid this defect, the method provided by the invention combines the global judgment with the local judgments to decide whether a candidate patch contains the target region, which greatly improves accuracy and applicability.
S7. Apply the operations of steps S4–S6 to all candidate patches to judge whether each contains the target region, then input all patches judged to contain the target region to the adaptive generative model;
S8. For each candidate patch, the adaptive generative model performs superpixel segmentation, extracts the feature vector of each superpixel, clusters the feature vectors of all superpixels with the K-means algorithm, and computes the clustering confidence of the candidate patch; the patch with the highest confidence is then output as the tracking result, together with the confidence conf_T of the current tracking result and the matched area area_T of the target region, where A_i is the area of each superpixel and N is the number of superpixels contained in the candidate patch,
The formula above indicates that when a superpixel is close to its cluster centre in feature space, close to the cluster's template superpixels in relative position within the target region, and belongs to a cluster with high target/background confidence, this patent considers such a superpixel to describe the appearance of the current target more fully and to have strong discriminative power. Here g'_i denotes a superpixel contained in the candidate patch, k'_i the cluster to which the superpixel belongs, S'_i the distance of the superpixel to its cluster, conf_cls the target/background confidence of each cluster, R'_j the cluster radius, conf'_i the confidence of g'_i, L_i the minimum, over the template superpixels of its cluster, of the spatial distance in the target region between g'_i and a template superpixel, and λ_s ∈ (0,1) the weighting factor controlling the spatial-distance weighting;
conf_cls = A_target / (A_target + A_background),
where A_target denotes the total number of pixels of all members of each cluster that belong to the target region, and A_background denotes the total number of pixels that belong to the background region;
area_T = (1/A) Σ_{j=1}^{M} F(g_j), where F(g_j) = A'_j if A'_j ≤ A_target^j, and F(g_j) = A_target^j if A'_j > A_target^j;
A'_j denotes the pixel count of each superpixel in the current tracking result, A_target^j denotes the number of target-region pixels contained in the corresponding superpixel's cluster, and M denotes the total number of superpixels;
Three, detecting stage
S9. Build a template-library generative model, let it detect the target region in the current frame and return the confidence conf_D of the detection result, then estimate the current position of the target region from the outputs of the adaptive generative model and the template-library generative model:
1) When area_T ≥ thr_PL and conf_T ≥ thr_TH:
thr_TH and thr_PL denote the confidence threshold and the matched-area threshold respectively. The tracking result of the adaptive generative model now has both high confidence and a high matched area; the model is working normally and has adapted to the appearance of the target region, so its output is taken as the target position. The parameters of the global classifier GC, the local classifiers and the adaptive generative model are then updated according to the update strategy, based on area_T and conf_T;
2) When area_T < thr_PL and conf_T ≥ thr_TH:
The matched area of the adaptive generative model's tracking result is now low, but its confidence is still above the threshold, so the model's output is still taken as the target position; the parameters of GC, the local classifiers and the adaptive generative model are then updated according to the update strategy, based on area_T and conf_T;
3) When area_T ≥ thr_PL and conf_T < thr_TH:
The adaptive generative model's tracking result now has low confidence but a high matched area, so its output is still taken as the target position; the parameters of GC, the local classifiers and the adaptive generative model are then updated according to the update strategy, based on area_T and conf_T;
4) When area_T < thr_PL, conf_T < thr_TH and conf_D ≥ thr_DH:
thr_DH denotes the threshold on the detection-result confidence. Both the confidence and the matched area of the adaptive generative model's tracking result now fall below their preset thresholds, while the template-library generative model detects a target position with high confidence; the detection result of the template-library model is therefore output as the target position, and the global classifier GC, the local classifiers and the adaptive generative model are re-initialized.
In this scheme, the template-library generative model determines the working state of each model and the target-position output according to a certain strategy, and at the same time feeds the result back to the global classifier GC, the local classifiers and the adaptive generative model and updates them, so that the method can adapt to the various appearance changes of the target region during tracking.
Embodiment 2
The invention also provides a system applying the described collaborative tracking algorithm, as shown in Fig. 3, as follows:
It comprises a tracking module, a detection module and a position-estimation module, wherein the tracking module comprises the global discriminative model, the local discriminative models and the adaptive generative model, the detection module comprises the template-library generative model, and the position-estimation module estimates the current position of the target region from the outputs of the adaptive generative model and the template-library generative model.
Obviously, the above embodiments are merely examples given for clear illustration of the invention and are not a limitation on its embodiments. Those of ordinary skill in the art may make other changes in different forms on the basis of the above description; it is neither necessary nor possible to list all embodiments exhaustively here. Any modification, equivalent replacement and improvement made within the spirit and principles of the invention shall fall within the protection scope of the claims of the invention.

Claims (2)

1. A collaborative tracking algorithm based on superpixel segmentation, for solving the single-camera single-target tracking problem, characterized by comprising the following steps:
One, the training stage
S1. Build a global discriminative model that extracts the Haar-like features of the target region, constructs the global classifier GC from the extracted Haar-like features, and determines the parameters of GC;
S2. Partition the target region into N sub-regions using a sharding method based on overlapping sliding windows, then build N local discriminative models; each local model extracts Haar-like features from its sub-region, constructs a local classifier from the extracted features, and determines the parameters of that classifier;
S3. Build an adaptive generative model and determine its model parameters, as follows:
Perform superpixel segmentation on the target region, extract the feature vector of each superpixel, then cluster all superpixels of the target region with the K-means algorithm, thereby determining the parameters of the adaptive generative model;
Two, tracking phase
S4. Input the candidate patch p_i to the global discriminative model, which extracts the Haar-like features of p_i; the global classifier GC then classifies those features, with GC(p_i) denoting the classification result for p_i;
S5. Divide the candidate patch p_i into N sub-regions by the method of step S2; the N local discriminative models extract Haar-like features from the N sub-regions respectively, and the N local classifiers classify those features, with LC_j(p_i) denoting the result of the j-th local classifier on its sub-region;
S6. Combine the classification results of the global classification model and the local classification models to judge whether the candidate patch contains the target region:
thr_GC and thr_LC denote the two thresholds for global classification and local classification respectively; y(p_i) = 1 indicates that the candidate patch p_i contains the target region;
S7. Apply the operations of steps S4–S6 to all candidate patches to judge whether each contains the target region, then input all patches judged to contain the target region to the adaptive generative model;
S8. For each candidate patch, the adaptive generative model performs superpixel segmentation, extracts the feature vector of each superpixel, clusters the feature vectors of all superpixels with the K-means algorithm, and computes the clustering confidence of the candidate patch; the patch with the highest confidence is then output as the tracking result, together with the confidence conf_T of the current tracking result and the matched area area_T of the target region, where A_i is the area of each superpixel and N is the number of superpixels contained in the candidate patch,
where g'_i denotes a superpixel contained in the candidate patch, k'_i the cluster to which the superpixel belongs, S'_i the distance of the superpixel to its cluster, conf_cls the target/background confidence of each cluster, R'_j the cluster radius, conf'_i the confidence of g'_i, L_i the minimum, over the template superpixels of its cluster, of the spatial distance in the target region between g'_i and a template superpixel, and λ_s ∈ (0,1) the weighting factor controlling the spatial-distance weighting;
conf_cls = A_target / (A_target + A_background),
where A_target denotes the total number of pixels of all members of each cluster that belong to the target region, and A_background denotes the total number of pixels that belong to the background region;
area_T = (1/A) Σ_{j=1}^{M} F(g_j), where F(g_j) = A'_j if A'_j ≤ A_target^j, and F(g_j) = A_target^j if A'_j > A_target^j;
A'_j denotes the pixel count of each superpixel in the current tracking result, A_target^j denotes the number of target-region pixels contained in the corresponding superpixel's cluster, and M denotes the total number of superpixels;
Three, detecting stage
S9. Build a template-library generative model, let it detect the target region in the current frame and return the confidence conf_D of the detection result, then estimate the current position of the target region from the outputs of the adaptive generative model and the template-library generative model:
1) When area_T ≥ thr_PL and conf_T ≥ thr_TH:
thr_TH and thr_PL denote the confidence threshold and the matched-area threshold respectively. The tracking result of the adaptive generative model now has both high confidence and a high matched area; the model is working normally and has adapted to the appearance of the target region, so its output is taken as the target position. The parameters of the global classifier GC, the local classifiers and the adaptive generative model are then updated according to the update strategy, based on area_T and conf_T;
2) When area_T < thr_PL and conf_T ≥ thr_TH:
The matched area of the adaptive generative model's tracking result is now low, but its confidence is still above the threshold, so the model's output is still taken as the target position; the parameters of GC, the local classifiers and the adaptive generative model are then updated according to the update strategy, based on area_T and conf_T;
3) When area_T ≥ thr_PL and conf_T < thr_TH:
The adaptive generative model's tracking result now has low confidence but a high matched area, so its output is still taken as the target position; the parameters of GC, the local classifiers and the adaptive generative model are then updated according to the update strategy, based on area_T and conf_T;
4) When area_T < thr_PL, conf_T < thr_TH and conf_D ≥ thr_DH:
thr_DH denotes the threshold on the detection-result confidence. Both the confidence and the matched area of the adaptive generative model's tracking result now fall below their preset thresholds, while the template-library generative model detects a target position with high confidence; the detection result of the template-library model is therefore output as the target position, and the global classifier GC, the local classifiers and the adaptive generative model are re-initialized.
2. The system of the collaborative tracking algorithm based on superpixel segmentation according to claim 1, characterized in that it comprises a tracking module, a detection module and a position estimation module, wherein the tracking module comprises a global discriminative model, a local discriminative model and an adaptive generative model; the detection module comprises a template-library generative model; and the position estimation module is used to estimate the current position of the target region from the outputs of the adaptive generative model and the template-library generative model.
CN201510971312.1A 2015-12-18 2015-12-18 Collaborative tracking algorithm and system based on superpixels Expired - Fee Related CN105654505B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510971312.1A CN105654505B (en) 2015-12-18 2015-12-18 Collaborative tracking algorithm and system based on superpixels

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510971312.1A CN105654505B (en) 2015-12-18 2015-12-18 Collaborative tracking algorithm and system based on superpixels

Publications (2)

Publication Number Publication Date
CN105654505A true CN105654505A (en) 2016-06-08
CN105654505B CN105654505B (en) 2018-06-26

Family

ID=56477692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510971312.1A Expired - Fee Related CN105654505B (en) 2015-12-18 2015-12-18 Collaborative tracking algorithm and system based on superpixels

Country Status (1)

Country Link
CN (1) CN105654505B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413120A (en) * 2013-07-25 2013-11-27 华南农业大学 Tracking method based on holistic and local recognition of an object
CN103886619A (en) * 2014-03-18 2014-06-25 电子科技大学 Multi-scale superpixel-fused target tracking method
CN104298968A (en) * 2014-09-25 2015-01-21 电子科技大学 Target tracking method under complex scene based on superpixel

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YU LIU ET AL.: "《Tracking Based on SURF and Superpixel》", 《2011 SIXTH INTERNATIONAL CONFERENCE ON IMAGE AND GRAPHICS》 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633500A (en) * 2016-07-14 2018-01-26 南京视察者图像识别科技有限公司 Novel image object detection process
CN106504269A (en) * 2016-10-20 2017-03-15 北京信息科技大学 Target tracking method based on image classification with multi-algorithm cooperation
CN106504269B (en) * 2016-10-20 2019-02-19 北京信息科技大学 Target tracking method based on image classification with multi-algorithm cooperation
CN107273905A (en) * 2017-06-14 2017-10-20 电子科技大学 Target active contour tracking method combined with motion information
CN107273905B (en) * 2017-06-14 2020-05-08 电子科技大学 Target active contour tracking method combined with motion information
CN109325387A (en) * 2017-07-31 2019-02-12 株式会社理光 Image processing method and device, and electronic equipment
CN109325387B (en) * 2017-07-31 2021-09-28 株式会社理光 Image processing method and device, and electronic equipment
CN112444205A (en) * 2019-08-30 2021-03-05 富士通株式会社 Detection apparatus and detection method
CN112489085A (en) * 2020-12-11 2021-03-12 北京澎思科技有限公司 Target tracking method, target tracking device, electronic device, and storage medium

Also Published As

Publication number Publication date
CN105654505B (en) 2018-06-26

Similar Documents

Publication Publication Date Title
CN105654505A (en) Collaborative tracking algorithm based on super-pixel and system thereof
CN103198493B (en) 2016-06-15 Target tracking method based on multi-feature adaptive fusion and online learning
US20180129919A1 (en) Apparatuses and methods for semantic image labeling
CN102598057B (en) Method and system for automatic object detection and subsequent object tracking in accordance with the object shape
CN103679168B (en) Detection method and detection device for character region
CN101901334B (en) Static object detection method
CN102147861A (en) Moving target detection method for carrying out Bayes judgment based on color-texture dual characteristic vectors
CN103208008A (en) Fast adaptation method for traffic video monitoring target detection based on machine vision
CN202563526U (en) Transportation vehicle detection and recognition system based on video
CN109145766A (en) Model training method, device, recognition methods, electronic equipment and storage medium
CN104063885A (en) 2014-09-24 Improved moving target detection and tracking method
CN107909081A (en) 2018-04-13 Rapid acquisition and calibration method for image data sets in deep learning
CN103123726B (en) 2015-12-16 Target tracking algorithm based on motion behavior analysis
CN110298297A (en) Flame identification method and device
CN103605969A (en) Method and device for face inputting
CN106325485A (en) Gesture detection and identification method and system
CN105335701A (en) Pedestrian detection method based on HOG and D-S evidence theory multi-information fusion
CN102542244A (en) Face detection method and system and computer program product
Ling et al. A background modeling and foreground segmentation approach based on the feedback of moving objects in traffic surveillance systems
CN102799862A (en) System and method for pedestrian rapid positioning and event detection based on high definition video monitor image
CN106204586A (en) 2016-12-07 Moving target detection method in complex scenes based on tracking
CN101551852A (en) Training system, training method and detection method
CN109712171B (en) Target tracking system and target tracking method based on correlation filter
Escalera et al. Fast greyscale road sign model matching and recognition
CN103455996A (en) Edge extraction method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180626