CN106327527A - Online Boosting-based target fine contour tracking method - Google Patents


Info

Publication number
CN106327527A
CN106327527A (application CN201610657342.XA)
Authority
CN
China
Prior art keywords
super
feature
pixel
l2ecm
classifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610657342.XA
Other languages
Chinese (zh)
Other versions
CN106327527B (en)
Inventor
Xie Mei (解梅)
Wang Jianguo (王建国)
Zhu Qian (朱倩)
Zhou Yang (周扬)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Houpu Clean Energy Group Co ltd
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201610657342.XA priority Critical patent/CN106327527B/en
Publication of CN106327527A publication Critical patent/CN106327527A/en
Application granted granted Critical
Publication of CN106327527B publication Critical patent/CN106327527B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30232 - Surveillance

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides an Online Boosting-based target fine contour tracking method. In fine target tracking, the image containing the tracked target is partitioned into superpixels, and each superpixel is treated as a single point, which reduces computational complexity; an online learning method is then used to segment the target from the background. In the traditional Online Boosting algorithm, all training samples carry the same weight, and that weight does not change over time. In fine target tracking, however, the moving target changes continuously, so from the online classifier's point of view the weight of an image frame should be smaller the longer the time interval between that frame and the current frame. To achieve this gradual decay, an Online Boosting classifier is designed whose sample weights decrease with the age of the sample; as the number of processed video frames grows, the classifier's performance keeps improving, so an accurate fine contour of the tracked target can be obtained.

Description

An Online Boosting-based target fine contour tracking method
Technical field
The invention belongs to the field of computer vision and specifically relates to intelligent video surveillance.
Background technology
Video-based fine contour tracking must not only locate the tracked target but also accurately describe its shape. It is one of the most fundamental technologies in computer vision and produces a tracking result in the form of the target's contour. Higher-level algorithms then analyze this contour result further to understand the scene, recognize the target's actions, recognize human behavior, and so on. The broad application prospects and high research value of this technology have aroused great interest among researchers at home and abroad.
The key to video-based fine contour tracking lies in the representation of temporal consistency and spatial consistency. Temporal consistency describes the similarity of the target across consecutive frames; spatial consistency describes how well target and background can be distinguished within a single frame. Fine tracking of a target in video can be viewed as a binary classification problem, and many algorithms have been proposed for it. Level-set-based methods, for example, split the task into two separate stages, motion estimation and target segmentation, feeding the motion estimate into the segmentation as input. When the motion estimate is inaccurate, segmentation accuracy suffers, and in the many videos where the camera itself moves, a good motion estimate is hard to obtain. To handle camera motion, graph-cut-based methods have been proposed that fuse multiple cue functions; the target's motion information is usually one of the important cues, but the motion field of the background often disturbs it, making the tracked contour inaccurate. There are also semi-automatic segmentation methods, which require some target and background regions to be labeled manually, greatly limiting their applicability.
Summary of the invention
The technical problem addressed by the present invention is to provide a fast and accurate target fine contour tracking algorithm.
The technical scheme adopted by the present invention to solve the above problem is an Online Boosting-based target fine contour tracking method comprising the following steps:
1) initialization steps:
1-1) divide the 1st frame of the video into superpixels;
1-2) extract the local log-Euclidean covariance matrix (L2ECM) feature X from the superpixel-partitioned image, where each column x of X is the L2ECM feature of one superpixel; distinguish target features from background features in the L2ECM features of the 1st frame to obtain the classification label y ∈ {-1, +1} of each superpixel, where +1 denotes target and -1 denotes background, finally obtaining the classification result Y of the image;
1-3) train the Online Boosting classifier h using the L2ECM features X and the classification result Y;
2) tracking steps:
2-1) divide the t-th frame of the video (t = 2, 3, ...) into superpixels and extract its L2ECM feature matrix X; classify every column of X with the Online Boosting classifier h to obtain the classification result Yp;
2-2) connect broken target regions by morphological dilation to obtain the updated classification result Y′;
2-3) update the Online Boosting classifier h with the L2ECM features X and the classification result Y′; set t = t + 1 and return to step 2-1) to process the next frame of the video;
Wherein the Online Boosting classifier h consists of M weak classifiers h_m, with weak-classifier index m ∈ {1, 2, ..., M}. The Online Boosting classifier h is trained as follows:
Initialization step: initialize each weak classifier h_m's cumulative correct-classification weight λ_sc^m, cumulative misclassification weight λ_sw^m, and the penalty coefficient λ;
Training step:
The classifier h_m receives the L2ECM feature x of an input superpixel together with its classification label y, and the classification result of the current classifier h_m on x is judged: if h_m classifies the L2ECM feature x of the superpixel correctly, h_m(x) = y, then update λ_sc^m ← λ_sc^m + λ + 1, ε_m = λ_sw^m / (λ_sc^m + λ_sw^m), λ ← λ · 1/(2(1 - ε_m)), where ε_m denotes the error rate of classifier h_m after adding the penalty coefficient λ; if h_m classifies the L2ECM feature x of the superpixel incorrectly, h_m(x) ≠ y, then update λ_sw^m ← λ_sw^m + λ + 1, ε_m = λ_sw^m / (λ_sc^m + λ_sw^m), λ ← λ · 1/(2ε_m).
The updated classifier is h(x) = sign( Σ_{m=1}^{M} log((1 - ε_m)/ε_m) · h_m(x) ), with the indicator function I(h_m(x) = y) = 0 if h_m(x) = y and 1 if h_m(x) ≠ y. Judge whether the termination condition for updating is reached; if not, return to the training step to process the L2ECM feature x of the next superpixel and its classification label y; if so, end the training step.
The present invention uses the Online Boosting online learning method to learn, from the preceding frames of the video, a classifier separating target from background, and applies this classifier to classify target and background in the next frame, which greatly speeds up processing.
The innovation of the present invention is as follows. In the fine target tracking problem, the image containing the tracked target is partitioned into superpixels, and each superpixel is treated as a single point, which reduces computational complexity; online learning is used to segment the target from the background. In the traditional Online Boosting algorithm the weights of the training samples are identical and do not change over time. In fine target tracking, however, the moving target changes at every moment, so for the online classifier the weight of an image frame should be smaller the further it lies from the current frame. To achieve this gradual decay of weights, the present invention designs an Online Boosting classifier whose sample weights decrease with the age of the sample; as the number of video frames grows, the classifier's performance becomes better and better, thereby achieving accurate fine contour tracking of the target.
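A toy calculation illustrates the decay idea: if every update adds the sample weight λ plus a constant 1 to a cumulative counter (the "+1" term described in the detailed description), the relative share of any single old sample shrinks as frames accumulate. This is a sketch under that assumed update rule, not the patented implementation:

```python
# Toy illustration of the time-decay idea: each update contributes the
# sample weight lam plus a constant 1 to a cumulative counter, so the
# relative influence of any single past sample shrinks over time.
# (Sketch under our reading of the update rule; the exact placement of
# the +1 is an assumption.)

def relative_influence(n_updates, lam=1.0):
    """Relative share of the first sample's contribution after n updates."""
    total = 0.0
    first = None
    for i in range(n_updates):
        contribution = lam + 1.0  # lam from the sample, +1 decay term
        total += contribution
        if i == 0:
            first = contribution
    return first / total

early = relative_influence(1)    # only one sample seen so far
late = relative_influence(100)   # many frames later
print(early, late)               # the first sample's share has decayed
```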
The advantage of the present invention is that the fast classification ability of the Online Boosting classifier with sample weights that decay with age enables real-time fine contour tracking of the target.
Accompanying drawing explanation
Fig. 1 is a schematic diagram of superpixels;
Fig. 2 is the system flow chart.
Detailed description of the invention
The present invention partitions the candidate region into superpixels. The target and background of the first video frame are used to initialize the Online Boosting classifier; in each subsequent frame this classifier classifies the target and background regions of the image, while the classification result is simultaneously used to update the classifier itself. Finally, morphological dilation is used to connect broken target regions, yielding the segmented target and background.
For convenience of description, some terms are first explained.
1: Superpixels. Superpixel segmentation and feature extraction are existing, mature algorithms. A superpixel is a small region of the image composed of adjacent pixels with similar color, brightness, and texture; these small regions mostly retain the information useful for further image segmentation and generally do not destroy the boundaries of objects in the image. In our algorithm they are used to partition the image, so that a group of adjacent pixels with similar features can be represented by a single superpixel. Superpixel segmentation turns an originally pixel-level image into a region-level map, a form of abstraction of the basic image information. The superpixel method used in this algorithm, the SLIC algorithm, is described in detail in "SLIC Superpixels Compared to State-of-the-art Superpixel Methods"; its segmentation result is shown in Fig. 1, where each region enclosed by a red contour represents one superpixel.
2: L2ECM feature (local log-Euclidean covariance matrix). Extraction of this feature is an existing, mature algorithm. For an image I, the raw per-pixel feature is constructed as f(x, y) = [ I(x, y), |I_x(x, y)|, |I_y(x, y)|, |I_xx(x, y)|, |I_yy(x, y)| ]^T, where I(x, y) is the pixel value at position (x, y), |·| denotes absolute value, I_x(x, y) and I_y(x, y) are the first-order partial derivatives in the x and y directions, and I_xx(x, y) and I_yy(x, y) are the corresponding second-order partial derivatives. For a superpixel s, let G_s = [ f(x_i, y_i) ], (x_i, y_i) ∈ s, where d denotes the length of the raw feature f ∈ R^d (R^d denoting d-dimensional space) and N_s denotes the number of pixels contained in superpixel s; G_s is then a d × N_s matrix whose every column is one raw feature f(x_i, y_i). Compute the covariance matrix C_s of G_s; C_s is a d × d matrix, and its size is independent of N_s. To avoid computing geodesic distances between covariance matrices in Riemannian space, we map C_s to log(C_s) in Euclidean space; since the matrix log(C_s) is symmetric, its upper triangular half is stacked into a vector, which constitutes the L2ECM feature, so the L2ECM feature of a superpixel has length d(d + 1)/2.
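As an illustration, the L2ECM construction above can be sketched in Python with NumPy. This is a minimal grayscale sketch with d = 5 raw features (intensity and the absolute first- and second-order partials); the small regularization added to the covariance is our addition for numerical safety, not part of the original description:

```python
import numpy as np

def l2ecm_feature(gray, mask):
    """L2ECM feature for one superpixel (grayscale sketch, d = 5).

    gray : 2-D float array of image intensities
    mask : boolean array, True for the pixels of superpixel s
    """
    Iy, Ix = np.gradient(gray)            # first-order partial derivatives
    Iyy, _ = np.gradient(Iy)              # second-order partials
    _, Ixx = np.gradient(Ix)
    # Raw per-pixel feature f = [I, |Ix|, |Iy|, |Ixx|, |Iyy|]^T, d x Ns
    G = np.stack([gray[mask], np.abs(Ix[mask]), np.abs(Iy[mask]),
                  np.abs(Ixx[mask]), np.abs(Iyy[mask])])
    # d x d covariance; tiny ridge keeps the matrix log well defined
    C = np.cov(G) + 1e-6 * np.eye(G.shape[0])
    # Log-Euclidean mapping: matrix logarithm via eigendecomposition
    w, V = np.linalg.eigh(C)
    logC = (V * np.log(w)) @ V.T
    # Upper triangle (incl. diagonal) -> vector of length d(d+1)/2
    iu = np.triu_indices(logC.shape[0])
    return logC[iu]

rng = np.random.default_rng(0)
img = rng.random((16, 16))
m = np.zeros((16, 16), bool)
m[4:10, 4:10] = True                      # one toy "superpixel"
v = l2ecm_feature(img, m)
print(v.shape)                            # (15,) = d(d+1)/2 with d = 5
```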
3: Online Boosting classifier. An Online Boosting classifier h consists of M weak classifiers h_m, m ∈ {1, 2, ..., M}. Its input is <x, y>, where x is the 120-dimensional L2ECM feature and y ∈ {-1, +1}.
An Online Boosting classifier h consists of M weak classifiers h_m with index m ∈ {1, 2, ..., M}; the classifier h is trained as follows:
For weak classifiers 1 to M, initialize λ_sc^m and λ_sw^m, which track the cumulative weight of samples classified correctly and incorrectly, respectively, by weak classifier h_m (its running accuracy and error statistics);
Initialize the penalty coefficient λ = 1; λ serves on the one hand to penalize the correctness of h_m's classification, and on the other to decay the weights of samples that lie further back in time;
For each classifier h_m, draw a number of loop iterations k from the Poisson distribution P(λ = 1), k ~ Poisson(λ). The loop may end when k iterations are reached, or under any other loop-termination condition customary in the art; those skilled in the art may also obtain the iteration count k by other means;
Loop k times:
Find the optimal decision boundary of the m-th weak classifier h_m: L_0(h_m, (x, y)); L_0(h_m, (x, y)) denotes the training procedure of one weak classifier. An existing decision stump is used here as the weak classifier, and its training procedure is the same as in a traditional Boosting classifier; other existing weak classifiers may also be used for training;
If h_m(x) classifies correctly, i.e. y = h_m(x), then update λ_sc^m ← λ_sc^m + λ + 1, ε_m = λ_sw^m / (λ_sc^m + λ_sw^m), λ ← λ · 1/(2(1 - ε_m)), where ε_m denotes the error rate of classifier h_m after adding the penalty term λ;
If h_m(x) misclassifies, i.e. y ≠ h_m(x), then update λ_sw^m ← λ_sw^m + λ + 1, ε_m = λ_sw^m / (λ_sc^m + λ_sw^m), λ ← λ · 1/(2ε_m).
The new classifier is h(x) = sign( Σ_{m=1}^{M} log((1 - ε_m)/ε_m) · h_m(x) ); for a new input x, it can classify x by this weighted vote, with the indicator function I(h_m(x) = y) = 0 if h_m(x) = y and 1 if h_m(x) ≠ y.
In the update rules for λ_sc^m and λ_sw^m, the added λ penalizes the correctness of h_m's classification, while the added +1 decays the sample weights over time.
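The training procedure above can be sketched as follows. This is an illustrative Python sketch, not the patented implementation: the simple online decision stumps, the clamping of ε_m and λ, and the exact placement of the "+1" decay term are our assumptions:

```python
import numpy as np

class OnlineBoosting:
    """Online Boosting with time-decayed sample weights (illustrative sketch).

    Weak learners are online decision stumps; lam_sc / lam_sw accumulate the
    weights of correctly / wrongly classified samples, and the extra '+1'
    added at every update is our reading of the time-decay term.
    """

    def __init__(self, n_weak=10, n_features=15, seed=0):
        self.rng = np.random.default_rng(seed)
        self.M = n_weak
        self.feat = self.rng.integers(0, n_features, n_weak)  # stump feature
        self.thr = np.zeros(n_weak)                           # stump threshold
        self.pol = np.ones(n_weak)                            # stump polarity
        self.lam_sc = np.full(n_weak, 1e-3)  # cumulative correct weight
        self.lam_sw = np.full(n_weak, 1e-3)  # cumulative wrong weight
        # running weighted means of the stump feature, per class
        self.s_pos = np.zeros(n_weak)
        self.w_pos = np.full(n_weak, 1e-9)
        self.s_neg = np.zeros(n_weak)
        self.w_neg = np.full(n_weak, 1e-9)

    def _train_stump(self, m, x, y, w):
        # crude online stump: threshold midway between class means
        v = x[self.feat[m]]
        if y > 0:
            self.s_pos[m] += w * v
            self.w_pos[m] += w
        else:
            self.s_neg[m] += w * v
            self.w_neg[m] += w
        mu_p = self.s_pos[m] / self.w_pos[m]
        mu_n = self.s_neg[m] / self.w_neg[m]
        self.thr[m] = 0.5 * (mu_p + mu_n)
        self.pol[m] = 1.0 if mu_p >= mu_n else -1.0

    def _stump(self, m, x):
        return 1.0 if self.pol[m] * (x[self.feat[m]] - self.thr[m]) >= 0 else -1.0

    def _eps(self, m):
        e = self.lam_sw[m] / (self.lam_sc[m] + self.lam_sw[m])
        return min(max(e, 1e-3), 1.0 - 1e-3)  # clamp for numerical safety

    def update(self, x, y):
        lam = 1.0                        # penalty coefficient, reset per sample
        for m in range(self.M):
            for _ in range(max(self.rng.poisson(lam), 1)):  # k ~ Poisson(lam)
                self._train_stump(m, x, y, lam)
            if self._stump(m, x) == y:   # correct: lam_sc += lam + 1
                self.lam_sc[m] += lam + 1.0
                lam *= 1.0 / (2.0 * (1.0 - self._eps(m)))
            else:                        # wrong: lam_sw += lam + 1
                self.lam_sw[m] += lam + 1.0
                lam *= 1.0 / (2.0 * self._eps(m))
            lam = min(lam, 10.0)         # keep the sketch numerically tame

    def predict(self, x):
        score = sum(np.log((1 - self._eps(m)) / self._eps(m)) * self._stump(m, x)
                    for m in range(self.M))
        return 1 if score >= 0 else -1

# Two well-separated synthetic classes standing in for target / background
rng = np.random.default_rng(1)
Xp = rng.normal(+2.0, 0.5, (200, 15))
Xn = rng.normal(-2.0, 0.5, (200, 15))
clf = OnlineBoosting(n_weak=10, n_features=15)
for xp, xn in zip(Xp, Xn):
    clf.update(xp, +1)
    clf.update(xn, -1)
acc = np.mean([clf.predict(x) == +1 for x in Xp] +
              [clf.predict(x) == -1 for x in Xn])
print(acc)
```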
The concrete operation steps are shown in Fig. 2:
Initialization step:
Step 1: for the first frame of the video, use the SLIC algorithm to partition the image into superpixels, with the maximum number of superpixels set to 200.
Step 2: extract the L2ECM features from the superpixel-partitioned image. A color image has the three RGB channels, so the L2ECM feature of each superpixel is a 120-dimensional column vector. If the whole image is divided into N superpixels, the feature matrix X of the image is a 120 × N matrix. From the annotation of the first frame, the classification label y ∈ {-1, +1} of each superpixel is obtained, so the classification result Y of the whole image is an N × 1 matrix.
Step 3: train the Online Boosting classifier h with the X and Y obtained in step 2, where the feature matrix X is composed of the individual superpixel features x and the classification result Y is composed of the labels y corresponding to the training superpixel features x.
Tracking step:
Step 4: starting from the second frame of the video, partition each frame into superpixels with the SLIC algorithm and extract its L2ECM features to obtain the feature matrix X. Classify each column of X (i.e. each superpixel) with the classifier h to obtain the classification result Yp ∈ {-1, +1}.
Step 5: connect broken target regions by morphological dilation, obtaining the new target/background classification result Y′.
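Step 5 can be sketched with a plain 3 × 3 binary dilation. This is a NumPy-only illustration; a library routine such as scipy.ndimage.binary_dilation would serve the same purpose if available:

```python
import numpy as np

def binary_dilate(mask, iterations=1):
    """3x3 binary dilation: each pass grows True regions by one pixel."""
    out = mask.copy()
    for _ in range(iterations):
        padded = np.pad(out, 1, constant_values=False)
        nxt = np.zeros_like(out)
        for dy in (-1, 0, 1):            # OR over the 3x3 neighbourhood
            for dx in (-1, 0, 1):
                nxt |= padded[1 + dy: 1 + dy + out.shape[0],
                              1 + dx: 1 + dx + out.shape[1]]
        out = nxt
    return out

# Two target fragments separated by a one-pixel gap
mask = np.zeros((7, 7), bool)
mask[3, 1:3] = True    # left fragment
mask[3, 4:6] = True    # right fragment
joined = binary_dilate(mask)
print(joined[3, 3])    # the gap between the fragments is bridged
```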
Step 6: update the classifier h with X and Y′ to obtain the new classifier h, then go to step 4 to process the next frame.
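Steps 4 to 6 form a per-frame loop, which can be sketched as follows. All helper functions here are hypothetical stand-ins for the SLIC, L2ECM, and Online Boosting components described above, kept trivial so the control flow is visible:

```python
import numpy as np

# Per-frame tracking loop skeleton (steps 4-6). Every helper below is a
# hypothetical stand-in: real code would plug in SLIC segmentation, the
# 120-D L2ECM extractor, and the Online Boosting classifier.

def segment_superpixels(frame):           # stand-in for SLIC
    return [frame[i::2] for i in range(2)]

def extract_l2ecm(superpixels):           # stand-in for the L2ECM features
    return np.array([[np.mean(s), np.std(s)] for s in superpixels]).T

class DummyClassifier:                    # stand-in for OnlineBoosting
    def classify(self, X):
        return np.where(X[0] > 0.5, 1, -1)
    def update(self, X, Y):
        pass                              # a real classifier learns here

def connect_regions(labels):              # stand-in for the dilation step
    return labels

def track(frames, clf):
    results = []
    for frame in frames:
        X = extract_l2ecm(segment_superpixels(frame))  # step 4: features
        Yp = clf.classify(X)              # step 4: classify each column
        Y = connect_regions(Yp)           # step 5: bridge broken regions
        clf.update(X, Y)                  # step 6: refresh the classifier
        results.append(Y)
    return results

frames = [np.full((4, 4), v) for v in (0.2, 0.8)]
out = track(frames, DummyClassifier())
print(out)
```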

Claims (1)

1. An Online Boosting-based target fine contour tracking method, characterized in that it comprises the following steps:
1) initialization steps:
1-1) divide the 1st frame of the video into superpixels;
1-2) extract the local log-Euclidean covariance matrix (L2ECM) feature X from the superpixel-partitioned image, where each column x of X is the L2ECM feature of one superpixel; distinguish target features from background features in the L2ECM features of the 1st frame to obtain the classification label y ∈ {-1, +1} of each superpixel, where +1 denotes target and -1 denotes background, finally obtaining the classification result Y of the image;
1-3) train the Online Boosting classifier h using the L2ECM features X and the classification result Y;
2) tracking steps:
2-1) divide the t-th frame of the video (t = 2, 3, ...) into superpixels and extract its L2ECM feature matrix X; classify every column of X with the Online Boosting classifier h to obtain the classification result Yp;
2-2) connect broken target regions by morphological dilation to obtain the updated classification result Y′;
2-3) update the Online Boosting classifier h with the L2ECM features X and the classification result Y′; set t = t + 1 and return to step 2-1) to process the next frame of the video;
wherein the Online Boosting classifier h consists of M weak classifiers h_m, with weak-classifier index m ∈ {1, 2, ..., M}, and the Online Boosting classifier h is trained as follows:
Initialization step: initialize each weak classifier h_m's cumulative correct-classification weight λ_sc^m, cumulative misclassification weight λ_sw^m, and the penalty coefficient λ;
Training step:
The classifier h_m receives the L2ECM feature x of an input superpixel together with its classification label y, and the classification result of the current classifier h_m on x is judged: if h_m classifies the L2ECM feature x of the superpixel correctly, h_m(x) = y, then update λ_sc^m ← λ_sc^m + λ + 1, ε_m = λ_sw^m / (λ_sc^m + λ_sw^m), λ ← λ · 1/(2(1 - ε_m)), where ε_m denotes the error rate of classifier h_m after adding the penalty coefficient λ; if h_m classifies the L2ECM feature x of the superpixel incorrectly, h_m(x) ≠ y, then update λ_sw^m ← λ_sw^m + λ + 1, ε_m = λ_sw^m / (λ_sc^m + λ_sw^m), λ ← λ · 1/(2ε_m).
The updated classifier is h(x) = sign( Σ_{m=1}^{M} log((1 - ε_m)/ε_m) · h_m(x) ), with the indicator function I(h_m(x) = y) = 0 if h_m(x) = y and 1 if h_m(x) ≠ y. Judge whether the termination condition for updating is reached; if not, return to the training step to process the L2ECM feature x of the next superpixel and its classification label y; if so, end the training step.
CN201610657342.XA 2016-08-11 2016-08-11 Target profile tracing method based on Online Boosting Active CN106327527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610657342.XA CN106327527B (en) 2016-08-11 2016-08-11 Target profile tracing method based on Online Boosting


Publications (2)

Publication Number Publication Date
CN106327527A true CN106327527A (en) 2017-01-11
CN106327527B CN106327527B (en) 2019-05-14

Family

ID=57740810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610657342.XA Active CN106327527B (en) 2016-08-11 2016-08-11 Target profile tracing method based on Online Boosting

Country Status (1)

Country Link
CN (1) CN106327527B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050249401A1 (en) * 2004-05-10 2005-11-10 Claus Bahlmann Method for combining boosted classifiers for efficient multi-class object detection
CN101256629A (en) * 2007-02-28 2008-09-03 三菱电机株式会社 Method for adapting a boosted classifier to new samples
CN101814149A (en) * 2010-05-10 2010-08-25 华中科技大学 Self-adaptive cascade classifier training method based on online learning
CN103871081A (en) * 2014-03-29 2014-06-18 湘潭大学 Method for tracking self-adaptive robust on-line target
CN103886619A (en) * 2014-03-18 2014-06-25 电子科技大学 Multi-scale superpixel-fused target tracking method
CN104123555A (en) * 2014-02-24 2014-10-29 西安电子科技大学 Super-pixel polarimetric SAR land feature classification method based on sparse representation
CN105719292A (en) * 2016-01-20 2016-06-29 华东师范大学 Method of realizing video target tracking by adopting two-layer cascading Boosting classification algorithm


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HELMUT GRABNER et al.: "Real-Time Tracking via On-line Boosting", BMVC 2006 *
LI XU et al.: "Online Real Boosting for Object Tracking under Severe Appearance Changes and Occlusion", ICASSP 2007 *
SUN Laibing et al.: "Improved object tracking method based on online Boosting" (改进的基于在线Boosting的目标跟踪方法), Journal of Computer Applications (计算机应用) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106952287A (en) * 2017-03-27 2017-07-14 成都航空职业技术学院 A kind of video multi-target dividing method expressed based on low-rank sparse
CN112348826A (en) * 2020-10-26 2021-02-09 陕西科技大学 Interactive liver segmentation method based on geodesic distance and V-net
CN112348826B (en) * 2020-10-26 2023-04-07 陕西科技大学 Interactive liver segmentation method based on geodesic distance and V-net

Also Published As

Publication number Publication date
CN106327527B (en) 2019-05-14

Similar Documents

Publication Publication Date Title
Yang et al. Real-time face detection based on YOLO
CN104850865B (en) A kind of Real Time Compression tracking of multiple features transfer learning
CN109685067A (en) A kind of image, semantic dividing method based on region and depth residual error network
CN109919122A (en) A kind of timing behavioral value method based on 3D human body key point
CN106204638A (en) A kind of based on dimension self-adaption with the method for tracking target of taking photo by plane blocking process
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN108154159B (en) A kind of method for tracking target with automatic recovery ability based on Multistage Detector
Huang et al. Spatial-temproal based lane detection using deep learning
CN110111338A (en) A kind of visual tracking method based on the segmentation of super-pixel time and space significance
Tan et al. Vehicle detection in high resolution satellite remote sensing images based on deep learning
CN108021889A (en) A kind of binary channels infrared behavior recognition methods based on posture shape and movable information
CN103605984B (en) Indoor scene sorting technique based on hypergraph study
CN107045722B (en) Merge the video signal process method of static information and multidate information
CN105760831A (en) Pedestrian tracking method based on low-altitude aerial photographing infrared video
CN103984955B (en) Multi-camera object identification method based on salience features and migration incremental learning
CN103237197B (en) For the method for the self adaptation multiple features fusion of robust tracking
CN104794737A (en) Depth-information-aided particle filter tracking method
CN112507845B (en) Pedestrian multi-target tracking method based on CenterNet and depth correlation matrix
CN101908214B (en) Moving object detection method with background reconstruction based on neighborhood correlation
CN109919246A Pedestrian re-identification method based on adaptive feature clustering and multi-loss fusion
CN103913740A (en) Bird flock target tracking method based on spatial distribution characteristics
CN111797785A (en) Multi-aircraft tracking method based on airport scene prior and deep learning
CN106327527A (en) Online Boosting-based target fine contour tracking method
Yuan et al. Multi-objects change detection based on Res-UNet
Wei Small object detection based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210512

Address after: No.3, 11th floor, building 6, no.599, shijicheng South Road, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610041

Patentee after: Houpu clean energy Co.,Ltd.

Address before: 611731, No. 2006, West Avenue, Chengdu hi tech Zone (West District, Sichuan)

Patentee before: University of Electronic Science and Technology of China

CP01 Change in the name or title of a patent holder

Address after: No.3, 11th floor, building 6, no.599, shijicheng South Road, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610041

Patentee after: Houpu clean energy (Group) Co.,Ltd.

Address before: No.3, 11th floor, building 6, no.599, shijicheng South Road, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610041

Patentee before: Houpu clean energy Co.,Ltd.