CN105654505B - Superpixel-based collaborative tracking algorithm and system - Google Patents

Superpixel-based collaborative tracking algorithm and system


Publication number
CN105654505B
CN105654505B (application CN201510971312.1A)
Authority
CN
China
Prior art keywords
model
superpixel
generation model
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510971312.1A
Other languages
Chinese (zh)
Other versions
CN105654505A (en)
Inventor
纪庆革
袁大龙
韩非凡
杜景洪
印鉴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUANGZHOU INFINITE WISDOM ASPECT INFORMATION TECHNOLOGY Co Ltd
Sun Yat Sen University
Guangzhou Zhongda Nansha Technology Innovation Industrial Park Co Ltd
Original Assignee
GUANGZHOU INFINITE WISDOM ASPECT INFORMATION TECHNOLOGY Co Ltd
Sun Yat Sen University
Guangzhou Zhongda Nansha Technology Innovation Industrial Park Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GUANGZHOU INFINITE WISDOM ASPECT INFORMATION TECHNOLOGY Co Ltd, Sun Yat Sen University, and Guangzhou Zhongda Nansha Technology Innovation Industrial Park Co Ltd
Priority to CN201510971312.1A
Publication of CN105654505A
Application granted
Publication of CN105654505B
Expired - Fee Related
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention relates to a superpixel-based collaborative tracking algorithm and system. The method combines a global decision with local decisions to determine whether a candidate image contains the target region, and can therefore handle tracking when the target region is occluded. In addition, an update strategy is introduced so that the method adapts to the various appearance changes of the target region during tracking, greatly improving accuracy and applicability.

Description

Superpixel-based collaborative tracking algorithm and system
Technical field
The present invention relates to the field of target tracking in computer vision, and in particular to a superpixel-based collaborative tracking algorithm and system.
Background technology
With the development and popularization of computers, people increasingly expect computers to have perception and recognition abilities like those of human beings, and one direction of this effort is a human-like visual perception system. Computer vision processes input image information with a computer, simulating the human eye's perception and recognition of visual information to accomplish tasks such as target recognition and tracking. With improvements in computer performance and the popularization of cameras, massive amounts of video image information can be acquired every day, and people's demand for the automatic processing of visual information keeps growing.
Target tracking detects a pre-selected target of interest in an image sequence and tracks it frame by frame. By the number of tracked targets, tracking algorithms can be broadly divided into single-target and multi-target algorithms; by the number of cameras used during tracking, into single-camera and multi-camera tracking. The present invention mainly addresses the single-camera, single-target tracking problem. Target tracking is both an application technology of computer vision in its own right and the basis of other advanced applications. Typical applications of target tracking include human-computer interaction, security surveillance, vehicle detection, and intelligent robot navigation. However, target tracking is a complex process with many challenges, such as partial occlusion, appearance changes, illumination changes, rapid motion, reappearance of the target after leaving the field of view, and background clutter.
Summary of the invention
To overcome the above defects of the prior art, the present invention provides a superpixel-based collaborative tracking algorithm that can handle common problems in target tracking such as occlusion and appearance change, with good stability and robustness.
To achieve the above objective, the following technical solution is adopted:
A collaborative tracking algorithm based on superpixel segmentation, for solving the single-camera, single-target tracking problem, comprising the following steps:
I. Training stage
S1. Build a global discriminative model. The global discriminative model extracts the Haar-like features of the target region, constructs a global classifier GC from the extracted Haar-like features, and determines the parameters of GC;
S2. Fragment the target region with an overlapping-sliding-window scheme to obtain N subregions, then construct N local discriminative models, which extract Haar-like features from the N subregions respectively; local classifiers are then built from the extracted Haar-like features, and their parameters are determined;
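The patent does not fix the window size or stride of the fragmentation; the sketch below shows one plausible overlapping-sliding-window split of a target region into subregions (function name and parameters are illustrative assumptions):

```python
def overlapping_windows(width, height, win_w, win_h, stride):
    """Fragment a target region of size (width, height) into overlapping
    sub-windows of size (win_w, win_h), slid with the given stride.
    Returns a list of (x, y, win_w, win_h) subregion boxes."""
    boxes = []
    for y in range(0, height - win_h + 1, stride):
        for x in range(0, width - win_w + 1, stride):
            boxes.append((x, y, win_w, win_h))
    return boxes
```

With a stride smaller than the window size, adjacent subregions overlap, so an occluder rarely covers every subregion at once.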
S3. Build the adaptive generative model and determine its parameters, as follows:
Perform superpixel segmentation on the target region and extract a feature vector for each superpixel, then cluster all superpixels of the target region with the K-means algorithm, thereby determining the parameters of the adaptive generative model;
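The clustering step above can be sketched as a minimal K-means over superpixel feature vectors; initializing from the first k rows is an illustrative simplification, since the patent specifies only that K-means is used:

```python
import numpy as np

def kmeans_superpixels(features, k, iters=20):
    """Minimal K-means over superpixel feature vectors (one row per
    superpixel). Returns (centers, labels); the resulting cluster centers
    act as the parameters of the adaptive generative model."""
    feats = np.asarray(features, dtype=float)
    centers = feats[:k].copy()          # simple deterministic initialization
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(iters):
        # assign each superpixel to its nearest cluster center
        dists = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each center as the mean of its assigned features
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return centers, labels
```

The feature design (e.g. mean color of each superpixel) is likewise left open by the patent.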
II. Tracking stage
S4. Input candidate image p_i into the global discriminative model, which extracts the Haar-like features of p_i; the global classifier GC then classifies these features, with GC(p_i) denoting the classification result for p_i;
S5. Divide candidate image p_i into N subregions by the method of step S2, have the N local discriminative models extract Haar-like features from the N subregions respectively, and classify the Haar-like features of the N subregions with the N local classifiers; LC_j(p_i) denotes the classification result of the j-th local classifier on its subregion;
S6. Combine the classification results of the global and local classification models to judge whether the candidate image contains the target region:
thr_GC and thr_LC denote the thresholds of the global and local classifications respectively; y(p_i) = 1 indicates that candidate image p_i contains the target region;
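The decision formula for y(p_i) appears only as an image in the original patent; the sketch below shows one plausible fusion rule consistent with the surrounding text (global decision OR a majority of local decisions), the exact combination being an assumption:

```python
def combined_decision(gc_score, lc_scores, thr_gc, thr_lc):
    """Hypothetical fusion of global and local classifier outputs into
    y(p_i) in {0, 1}. Accepts the candidate when the global classifier
    passes its threshold, or, e.g. under partial occlusion, when at
    least half of the local classifiers pass theirs."""
    local_votes = sum(1 for s in lc_scores if s >= thr_lc)
    if gc_score >= thr_gc or local_votes >= len(lc_scores) / 2:
        return 1
    return 0
```

The OR-style combination matches the stated motivation: when occlusion defeats the global model, unoccluded subregions can still carry the decision.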
S7. Apply the operations of steps S4–S6 to all candidate images to judge whether each contains the target region, then input all candidate images judged to contain the target region into the adaptive generative model;
S8. For each candidate image, the adaptive generative model performs superpixel segmentation on it, extracts the feature vector of each superpixel, clusters the feature vectors of all superpixels with the K-means algorithm, and computes the clustering confidence of the candidate image; the candidate image with the highest confidence is then output as the tracking result. The output data comprise the confidence conf_T of the current tracking result and the matching area area_T of the target region, where A_i is the area of each superpixel and N denotes the number of superpixels in the candidate image patch.
The above formula indicates that when a superpixel is close to its cluster center in feature space, close in relative position within the target region to the template superpixels of its cluster, and the target/background confidence of its cluster is high, such a superpixel fully describes the appearance information of the current target and has strong discriminative power. Here g'_i denotes a superpixel of the candidate image patch, k'_i the cluster to which it belongs, S'_i its distance to that cluster, conf^t_{k'_i} the target/background confidence of k'_i, R'_j the cluster radius, conf'_i the confidence of g'_i, L_i the minimum spatial distance between each superpixel of the candidate patch and the template superpixels of its cluster, and a_s ∈ (0,1) a weight factor controlling the spatial-distance weight; a power operation with base a_s, whose exponent is the spatial distance within the target region between g'_i and the template superpixels of its cluster, provides this weighting;
where A'_j denotes the pixel count of each superpixel in the current tracking result and M the total number of superpixels; the number of target-region pixels contained in each superpixel cluster is also counted;
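The defining formulas for conf'_i and conf_T appear as images in the original patent; the sketch below encodes only the qualitative relationship described in the text, with every functional form an assumption:

```python
def superpixel_confidence(s_dist, radius, cluster_conf, l_dist, a_s=0.9):
    """Sketch of conf'_i for one superpixel g'_i: high when the superpixel
    is near its cluster in feature space (S'_i small relative to the
    cluster radius R'_j), near the cluster's template superpixels in the
    target region (L_i small, damped by a_s ** L_i with a_s in (0, 1)),
    and when the cluster's target/background confidence is high."""
    feature_term = max(0.0, 1.0 - s_dist / radius)  # closeness in feature space
    spatial_term = a_s ** l_dist                    # decays with spatial distance
    return cluster_conf * feature_term * spatial_term

def patch_confidence(superpixel_params, a_s=0.9):
    """conf_T of a candidate patch as the mean superpixel confidence
    (the aggregation rule is likewise an assumption)."""
    confs = [superpixel_confidence(s, r, c, l, a_s)
             for (s, r, c, l) in superpixel_params]
    return sum(confs) / len(confs)
```

Whatever the exact form, the monotonicity is what matters: confidence falls as either the feature-space or the spatial distance grows.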
III. Detection stage
S9. Build the template-library generative model and have it detect the target region in the current frame, returning the confidence conf_D of the detection result; then estimate the current position of the target region from the outputs of the adaptive generative model and the template-library generative model:
1) When area_T ≥ thr_PL and conf_T ≥ thr_TH,
where thr_TH and thr_PL denote the confidence threshold and the matching-area threshold respectively: the tracking result of the adaptive generative model has both high confidence and high matching area, so the model is working normally and has adapted to the appearance of the target region; its output is taken as the target position. The parameters of the global classifier GC, the local classifiers, and the adaptive generative model are then updated according to the update strategy using area_T and conf_T;
2) When area_T < thr_PL and conf_T ≥ thr_TH,
the matching area of the adaptive generative model's tracking result is low, but the confidence of the tracking result is still above threshold, so its output is still taken as the target position; the parameters of the global classifier GC, the local classifiers, and the adaptive generative model are then updated according to the update strategy using area_T and conf_T;
3) When area_T ≥ thr_PL and conf_T < thr_TH,
the tracking result of the adaptive generative model has low confidence but a high matching area, so its output is still taken as the target position; the parameters of the global classifier GC, the local classifiers, and the adaptive generative model are then updated according to the update strategy using area_T and conf_T;
4) When area_T < thr_PL, conf_T < thr_TH, and conf_D ≥ thr_DH,
where thr_DH denotes the threshold of the detection-result confidence: both the confidence and the matching area of the adaptive generative model's tracking result are below their preset thresholds, while the template-library generative model has detected a target location with high confidence; the detection result of the template-library generative model is then output as the target position, and the global classifier GC, the local classifiers, and the adaptive generative model are re-initialized.
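The four cases of step S9 reduce to simple threshold logic; the sketch below summarizes them (the function signature and the 'update'/'reinit' action labels are illustrative):

```python
def estimate_position(area_t, conf_t, conf_d, thr_pl, thr_th, thr_dh,
                      track_pos, detect_pos):
    """Cases 1)-3): the adaptive generative model's output is kept as the
    target position and the models are updated with area_T and conf_T.
    Case 4): conf_T and area_T are both below threshold but the detector
    is confident, so its result is adopted and the models re-initialized."""
    if conf_t >= thr_th or area_t >= thr_pl:
        return track_pos, "update"       # cases 1), 2), 3)
    if conf_d >= thr_dh:
        return detect_pos, "reinit"      # case 4)
    return track_pos, "update"           # fallback: not specified in the patent
```

The fallback branch (tracker and detector both unconfident) is not covered by the four listed cases; keeping the tracker output there is an assumption.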
Model updating is the key to letting the tracking algorithm adapt to changes in target appearance. The discriminative models adopt an incremental update method similar to that in the Real-Time Compressive Tracking literature (unrelated to this patent, so not repeated here), while the generative model adopts a sliding-window-based update method. During tracking, every U frames one frame image is added to the model, and superpixel segmentation, feature extraction, and clustering are performed on it. To guarantee the real-time performance of the algorithm, a window of fixed size is used; at each update, if the number of frames in the window exceeds the predefined size, the image with the least influence on the generative model is discarded by a certain strategy.
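The every-U-frames, fixed-size-window update can be sketched as below; the drop policy is described only as discarding the least influential frame, so dropping the oldest frame here is a stand-in assumption:

```python
from collections import deque

class GenerativeModelWindow:
    """Fixed-size sliding window of model frames, updated every U frames."""
    def __init__(self, max_frames, update_interval_u):
        self.frames = deque()
        self.max_frames = max_frames
        self.update_interval_u = update_interval_u

    def maybe_add(self, frame_index, frame):
        """Add `frame` every U frames; on overflow drop one frame
        (oldest here, 'least influential' in the patent)."""
        if frame_index % self.update_interval_u != 0:
            return False
        # in the full algorithm, superpixel segmentation, feature
        # extraction and clustering would run on the added frame
        self.frames.append(frame)
        if len(self.frames) > self.max_frames:
            self.frames.popleft()
        return True
```

The fixed window bounds the per-update clustering cost, which is what preserves real-time behavior.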
Meanwhile the present invention also provides a kind of system using the collaboration track algorithm, concrete scheme is as follows:Including Tracking module, detection module and position estimation module, wherein the tracking module includes global discrimination model, local discriminant model Model is generated with adaptability, the detection model includes template library and generates model, and position estimation module is used to generate according to adaptation The current location of the output result estimation target area of model and template library generation model.
Compared with the prior art, the beneficial effects of the invention are:
The superpixel-based collaborative tracking algorithm provided by the invention can handle common problems in target tracking such as occlusion and appearance change, and has good stability and robustness.
Description of the drawings
Fig. 1 is the framework diagram of the method.
Fig. 2 is the training schematic of the discriminative models.
Fig. 3 is the training schematic of the adaptive generative model.
Specific embodiment
The attached figures are for illustrative purposes only and shall not be construed as limiting this patent;
the present invention is further described below with reference to the drawings and embodiments.
Embodiment 1
A collaborative tracking algorithm based on superpixel segmentation, for solving the single-camera, single-target tracking problem, comprising the following steps:
I. Training stage
S1. Build a global discriminative model. The global discriminative model extracts the Haar-like features of the target region, constructs a global classifier GC from the extracted global compressed Haar-like features, and determines the parameters of GC, specifically as shown in Fig. 2;
S2. Fragment the target region with an overlapping-sliding-window scheme to obtain N subregions, then construct N local discriminative models, which extract Haar-like features from the N subregions respectively; local classifiers are then built from the extracted local compressed Haar-like features, and their parameters are determined, specifically as shown in Fig. 2;
S3. Build the adaptive generative model and determine its parameters, as follows:
Perform superpixel segmentation on the target region and extract a feature vector for each superpixel, then cluster all superpixels of the target region with the K-means algorithm, thereby determining the parameters of the adaptive generative model, specifically as shown in Fig. 3;
II. Tracking stage
S4. Input candidate image p_i into the global discriminative model, which extracts the Haar-like features of p_i; the global classifier GC then classifies the global compressed Haar-like features of p_i, with GC(p_i) denoting the classification result for p_i;
S5. Divide candidate image p_i into N subregions by the method of step S2, have the N local discriminative models extract Haar-like features from the N subregions respectively, and classify the local compressed Haar-like features of the N subregions with the N local classifiers; LC_j(p_i) denotes the classification result of the j-th local classifier on its subregion. When the target is occluded, the global discriminative model may fail to make a correct decision, but among the N local discriminative models there are usually one or more whose corresponding subregions are not occluded and whose local classifiers can still correctly judge the target region.
S6. Combine the classification results of the global and local classification models to judge whether the candidate image contains the target region:
thr_GC and thr_LC denote the thresholds of the global and local classifications respectively; y(p_i) = 1 indicates that candidate image p_i contains the target region;
In the above scheme, when the target region is occluded the global discriminative model cannot work normally; to avoid this defect, the method provided by the invention combines the global decision with the local decisions to determine whether a candidate image contains the target region, greatly improving accuracy and applicability.
S7. Apply the operations of steps S4–S6 to all candidate images to judge whether each contains the target region, then input all candidate images judged to contain the target region into the adaptive generative model;
S8. For each candidate image, the adaptive generative model performs superpixel segmentation on it, extracts the feature vector of each superpixel, clusters the feature vectors of all superpixels with the K-means algorithm, and computes the clustering confidence of the candidate image; the candidate image with the highest confidence is then output as the tracking result. The output data comprise the confidence conf_T of the current tracking result and the matching area area_T of the target region, where A_i is the area of each superpixel and N denotes the number of superpixels in the candidate image patch.
The above formula indicates that when a superpixel is close to its cluster center in feature space, close in relative position within the target region to the template superpixels of its cluster, and the target/background confidence of its cluster is high, such a superpixel fully describes the appearance information of the current target and has strong discriminative power. Here g'_i denotes a superpixel of the candidate image patch, k'_i the cluster to which it belongs, S'_i its distance to that cluster, conf^t_{k'_i} the target/background confidence of each cluster, R'_j the cluster radius, conf'_i the confidence of g'_i, L_i the minimum spatial distance between each superpixel of the candidate patch and the template superpixels of its cluster, and a_s ∈ (0,1) a weight factor controlling the spatial-distance weight; a power operation with base a_s, whose exponent is the spatial distance within the target region between g'_i and the template superpixels of its cluster, provides this weighting;
where A_target denotes, for each cluster, the total pixel count of all cluster members belonging to the target region, and A_background the total pixel count belonging to the background region;
where A'_j denotes the pixel count of each superpixel in the current tracking result and M the total number of superpixels; the number of target-region pixels contained in each superpixel cluster is also counted;
III. Detection stage
S9. Build the template-library generative model and have it detect the target region in the current frame, returning the confidence conf_D of the detection result; then estimate the current position of the target region from the outputs of the adaptive generative model and the template-library generative model:
1) When area_T ≥ thr_PL and conf_T ≥ thr_TH,
where thr_TH and thr_PL denote the confidence threshold and the matching-area threshold respectively: the tracking result of the adaptive generative model has both high confidence and high matching area, so the model is working normally and has adapted to the appearance of the target region; its output is taken as the target position. The parameters of the global classifier GC, the local classifiers, and the adaptive generative model are then updated according to the update strategy using area_T and conf_T;
2) When area_T < thr_PL and conf_T ≥ thr_TH,
the matching area of the adaptive generative model's tracking result is low, but the confidence of the tracking result is still above threshold, so its output is still taken as the target position; the parameters of the global classifier GC, the local classifiers, and the adaptive generative model are then updated according to the update strategy using area_T and conf_T;
3) When area_T ≥ thr_PL and conf_T < thr_TH,
the tracking result of the adaptive generative model has low confidence but a high matching area, so its output is still taken as the target position; the parameters of the global classifier GC, the local classifiers, and the adaptive generative model are then updated according to the update strategy using area_T and conf_T;
4) When area_T < thr_PL, conf_T < thr_TH, and conf_D ≥ thr_DH,
where thr_DH denotes the threshold of the detection-result confidence: both the confidence and the matching area of the adaptive generative model's tracking result are below their preset thresholds, while the template-library generative model has detected a target location with high confidence; the detection result of the template-library generative model is then output as the target position, and the global classifier GC, the local classifiers, and the adaptive generative model are re-initialized.
In the above scheme, the template-library generative model determines the working state of each current model and the target position according to a certain strategy and outputs it, while feeding back to the global classifier GC, the local classifiers, and the adaptive generative model and updating them, so that the method adapts to the various appearance changes of the target region during tracking.
Embodiment 2
The present invention also provides a system applying the above collaborative tracking algorithm, as shown in Fig. 3; its concrete scheme is as follows:
It comprises a tracking module, a detection module, and a position-estimation module, where the tracking module includes the global discriminative model, the local discriminative models, and the adaptive generative model; the detection module includes the template-library generative model; and the position-estimation module estimates the current position of the target region from the outputs of the adaptive generative model and the template-library generative model.
Obviously, the above embodiments are merely examples given to clearly illustrate the present invention and are not intended to limit its embodiments. For those of ordinary skill in the art, other variations or changes in different forms can be made on the basis of the above description. It is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (2)

1. A collaborative tracking algorithm based on superpixel segmentation, for solving the single-camera, single-target tracking problem, characterized by comprising the following steps:
I. Training stage
S1. Build a global discriminative model. The global discriminative model extracts the Haar-like features of the target region, constructs a global classifier GC from the extracted Haar-like features, and determines the parameters of GC;
S2. Fragment the target region with an overlapping-sliding-window scheme to obtain N subregions, then construct N local discriminative models, which extract Haar-like features from the N subregions respectively; local classifiers are then built from the extracted Haar-like features, and their parameters are determined;
S3. Build the adaptive generative model and determine its parameters, as follows:
Perform superpixel segmentation on the target region and extract a feature vector for each superpixel, then cluster all superpixels of the target region with the K-means algorithm, thereby determining the parameters of the adaptive generative model;
II. Tracking stage
S4. Input candidate image p_i into the global discriminative model, which extracts the Haar-like features of p_i; the global classifier GC then classifies these features, with GC(p_i) denoting the classification result for p_i;
S5. Divide candidate image p_i into N subregions by the method of step S2, have the N local discriminative models extract Haar-like features from the N subregions respectively, and classify the Haar-like features of the N subregions with the N local classifiers; LC_j(p_i) denotes the classification result of the j-th local classifier on its subregion;
S6. Combine the classification results of the global and local classification models to judge whether the candidate image contains the target region:
thr_GC and thr_LC denote the thresholds of the global and local classifications respectively; y(p_i) = 1 indicates that candidate image p_i contains the target region;
S7. Apply the operations of steps S4–S6 to all candidate images to judge whether each contains the target region, then input all candidate images judged to contain the target region into the adaptive generative model;
S8. For each candidate image, the adaptive generative model performs superpixel segmentation on it, extracts the feature vector of each superpixel, clusters the feature vectors of all superpixels with the K-means algorithm, and computes the clustering confidence of the candidate image; the candidate image with the highest confidence is then output as the tracking result. The output data comprise the confidence conf_T of the current tracking result and the matching area area_T of the target region, where A_i is the area of each superpixel and N denotes the number of superpixels in the candidate image patch,
where g'_i denotes a superpixel of the candidate image patch, k'_i the cluster to which it belongs, S'_i its distance to that cluster, conf^t_{k'_i} the target/background confidence of k'_i, R'_j the cluster radius, conf'_i the confidence of g'_i, L_i the minimum spatial distance between each superpixel of the candidate patch and the template superpixels of its cluster, and a_s a weight factor controlling the spatial-distance weight, a_s ∈ (0,1); a power operation with base a_s, whose exponent is the spatial distance within the target region between g'_i and the template superpixels of its cluster, provides this weighting;
where A'_j denotes the pixel count of each superpixel in the current tracking result and M the total number of superpixels; the number of target-region pixels contained in each superpixel cluster is also counted;
III. Detection stage
S9. Build the template-library generative model and have it detect the target region in the current frame, returning the confidence conf_D of the detection result; then estimate the current position of the target region from the outputs of the adaptive generative model and the template-library generative model:
1) When area_T ≥ thr_PL and conf_T ≥ thr_TH,
where thr_TH and thr_PL denote the confidence threshold and the matching-area threshold respectively: the tracking result of the adaptive generative model has both high confidence and high matching area, so the model is working normally and has adapted to the appearance of the target region; its output is taken as the target position. The parameters of the global classifier GC, the local classifiers, and the adaptive generative model are then updated according to the update strategy using area_T and conf_T;
2) When area_T < thr_PL and conf_T ≥ thr_TH,
the matching area of the adaptive generative model's tracking result is low, but the confidence of the tracking result is still above threshold, so its output is still taken as the target position; the parameters of the global classifier GC, the local classifiers, and the adaptive generative model are then updated according to the update strategy using area_T and conf_T;
3) When area_T ≥ thr_PL and conf_T < thr_TH,
the tracking result of the adaptive generative model has low confidence but a high matching area, so its output is still taken as the target position; the parameters of the global classifier GC, the local classifiers, and the adaptive generative model are then updated according to the update strategy using area_T and conf_T;
4) When area_T < thr_PL, conf_T < thr_TH, and conf_D ≥ thr_DH,
where thr_DH denotes the threshold of the detection-result confidence: both the confidence and the matching area of the adaptive generative model's tracking result are below their preset thresholds, while the template-library generative model has detected a target location with high confidence; the detection result of the template-library generative model is then output as the target position, and the global classifier GC, the local classifiers, and the adaptive generative model are re-initialized.
2. A system using the collaborative tracking algorithm based on superpixel segmentation according to claim 1, characterized by comprising a tracking module, a detection module, and a position-estimation module, wherein the tracking module includes the global discriminative model, the local discriminative models, and the adaptive generative model; the detection module includes the template-library generative model; and the position-estimation module estimates the current position of the target region from the outputs of the adaptive generative model and the template-library generative model.
CN201510971312.1A 2015-12-18 2015-12-18 A kind of collaboration track algorithm and system based on super-pixel Expired - Fee Related CN105654505B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510971312.1A CN105654505B (en) 2015-12-18 2015-12-18 A kind of collaboration track algorithm and system based on super-pixel


Publications (2)

Publication Number Publication Date
CN105654505A CN105654505A (en) 2016-06-08
CN105654505B true CN105654505B (en) 2018-06-26

Family

ID=56477692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510971312.1A Expired - Fee Related CN105654505B (en) 2015-12-18 2015-12-18 A kind of collaboration track algorithm and system based on super-pixel

Country Status (1)

Country Link
CN (1) CN105654505B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633500A (en) * 2016-07-14 2018-01-26 南京视察者图像识别科技有限公司 A kind of new image object testing process
CN106504269B (en) * 2016-10-20 2019-02-19 北京信息科技大学 A kind of method for tracking target of more algorithms cooperation based on image classification
CN107273905B (en) * 2017-06-14 2020-05-08 电子科技大学 Target active contour tracking method combined with motion information
CN109325387B (en) * 2017-07-31 2021-09-28 株式会社理光 Image processing method and device and electronic equipment
JP7263983B2 (en) * 2019-08-30 2023-04-25 富士通株式会社 Photography omission detection device and photography omission detection method
CN112489085A (en) * 2020-12-11 2021-03-12 北京澎思科技有限公司 Target tracking method, target tracking device, electronic device, and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN103413120A (en) * 2013-07-25 2013-11-27 华南农业大学 Tracking method based on integral and partial recognition of object
CN103886619A (en) * 2014-03-18 2014-06-25 电子科技大学 Multi-scale superpixel-fused target tracking method
CN104298968A (en) * 2014-09-25 2015-01-21 电子科技大学 Target tracking method under complex scene based on superpixel


Non-Patent Citations (1)

Title
Yu Liu et al., "Tracking Based on SURF and Superpixel," 2011 Sixth International Conference on Image and Graphics, 2011, pp. 714-719. *

Also Published As

Publication number Publication date
CN105654505A (en) 2016-06-08

Similar Documents

Publication Publication Date Title
CN105654505B (en) A kind of collaboration track algorithm and system based on super-pixel
CN109492581B (en) Human body action recognition method based on TP-STG frame
CN108960080B (en) Face recognition method based on active defense image anti-attack
CN106325485B (en) A kind of gestures detection recognition methods and system
Endres et al. Category-independent object proposals with diverse ranking
Bregonzio et al. Fusing appearance and distribution information of interest points for action recognition
CN110929593B (en) Real-time significance pedestrian detection method based on detail discrimination
CN109711262B (en) Intelligent excavator pedestrian detection method based on deep convolutional neural network
CN106529499A (en) Fourier descriptor and gait energy image fusion feature-based gait identification method
CN111666843A (en) Pedestrian re-identification method based on global feature and local feature splicing
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN104063719A (en) Method and device for pedestrian detection based on depth convolutional network
CN103310466A (en) Single target tracking method and achievement device thereof
CN103996046A (en) Personnel recognition method based on multi-visual-feature fusion
Kashika et al. Deep learning technique for object detection from panoramic video frames
KR101762010B1 (en) Method of modeling a video-based interactive activity using the skeleton posture datset
CN107025442A (en) A kind of multi-modal fusion gesture identification method based on color and depth information
CN115527269B (en) Intelligent human body posture image recognition method and system
KR101074953B1 (en) Method for hybrid face recognition using pca and gabor wavelet and system thereof
CN107886110A (en) Method for detecting human face, device and electronic equipment
Alksasbeh et al. Smart hand gestures recognition using K-NN based algorithm for video annotation purposes
de Oliveira Silva et al. Human action recognition based on a two-stream convolutional network classifier
Hu et al. RGB-D image multi-target detection method based on 3D DSF R-CNN
CN102609715B (en) Object type identification method combining plurality of interest point testers
CN102289685A (en) Behavior identification method for rank-1 tensor projection based on canonical return

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180626