CN106327527B - Target profile tracing method based on Online Boosting - Google Patents

Target profile tracing method based on Online Boosting

Info

Publication number
CN106327527B
CN106327527B CN201610657342.XA
Authority
CN
China
Prior art keywords
feature
classifier
pixel
l2ecm
super
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610657342.XA
Other languages
Chinese (zh)
Other versions
CN106327527A (en)
Inventor
解梅
王建国
朱倩
周扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Houpu Clean Energy Group Co ltd
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201610657342.XA
Publication of CN106327527A
Application granted
Publication of CN106327527B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a target contour tracking method based on Online Boosting. For the problem of fine target tracking, superpixels are used to partition the image containing the tracked target, and each superpixel is treated as a single point, which reduces computational complexity; online learning is used to segment the target from the background. In the traditional Online Boosting algorithm, the weights of the training samples are identical and do not change over time. In fine target tracking, however, the moving target changes constantly, so for an online classifier the weight of an image frame should be smaller the further it lies from the current frame. To achieve this gradual weight decay, the invention designs an Online Boosting classifier in which sample weights decrease with age. As the number of video frames increases, the performance of the classifier improves, enabling accurate tracking of the target's fine contour.

Description

Target profile tracing method based on Online Boosting
Technical field
The invention belongs to the field of computer vision, and in particular to the field of intelligent video surveillance.
Background technique
Video-based fine target tracking must not only locate the tracked target but also accurately describe its shape; it is one of the most fundamental technologies in computer vision and yields object contour tracking results. Upper-layer algorithms further analyze and process the contour tracking results to understand the scene, recognize target actions, identify human behavior, and so on. The broad application prospects and high research value of this technology have aroused great interest among researchers at home and abroad.
The key to video-based fine target tracking is the representation of temporal consistency and spatial consistency. Temporal consistency describes the similarity of the target across successive frames; spatial consistency describes how well the target can be distinguished from the background within a single frame. Fine tracking of a target in video can be regarded as a binary classification problem, and many related algorithms exist at home and abroad. Level-set methods, for example, divide motion estimation and target segmentation into two separate stages, using the motion estimation result as the input to segmentation; when the motion estimation is inaccurate, the segmentation precision suffers, and in the many videos where the camera itself moves, motion estimation can hardly yield a good tracking result. To handle camera motion, graph-cut methods have been proposed that fuse multiple cue functions; the motion information of the target is usually one of the important cues, but the motion field of the background often interferes with it, making the tracked contour inaccurate. There are also semi-automatic segmentation methods, which require some target and background regions to be labeled manually; this greatly limits their field of application.
Summary of the invention
To solve the above technical problem, the present invention provides a fast and accurate fine target contour tracking algorithm.
The technical scheme adopted by the present invention to solve the above problem is a target contour tracking method based on Online Boosting, comprising the following steps:
1) initialization step:
1-1) segment the 1st frame of the video into superpixels;
1-2) extract local log-Euclidean covariance matrix (L2ECM) features X from the superpixel-segmented image, where each column of the L2ECM feature matrix is the L2ECM feature x of one superpixel; separate the L2ECM features of the 1st frame into target features and background features, giving each superpixel a classification label y ∈ {-1, +1}, where +1 denotes target and -1 denotes background, and finally obtain the classification result Y of the image;
1-3) train an Online Boosting classifier h using the L2ECM features X and the classification result Y;
2) tracking steps:
2-1) segment the t-th frame of the video (t = 2, 3, ...) into superpixels and extract L2ECM features X; use the Online Boosting classifier h to classify each column of the feature matrix X, obtaining the classification result Yp;
2-2) connect the disconnected regions within the target by dilation, obtaining an updated classification result;
2-3) update the Online Boosting classifier h using the L2ECM features X and the updated classification result; set t = t + 1 and return to step 2-1) to process the next frame of the video;
Wherein the Online Boosting classifier h consists of M weak classifiers hm, with weak-classifier index m ∈ {1, 2, ..., M}; the training of the Online Boosting classifier h proceeds as follows:
Initialization step: initialize each weak classifier hm's correct-classification weight λm^c, misclassification weight λm^w, and the penalty coefficient λ.
Training step:
Classifier hm receives the L2ECM feature x of an input superpixel and its classification label y, and the classification result of hm on x is examined: if hm classifies x correctly, i.e. hm(x) = y, then update λm^c ← λm^c + λ + 1 and εm = λm^w / (λm^c + λm^w), where εm denotes the error rate of classifier hm after adding the penalty coefficient λ, and update λ ← λ / (2(1 - εm)); if hm misclassifies x, i.e. hm(x) ≠ y, then update λm^w ← λm^w + λ + 1, εm = λm^w / (λm^c + λm^w), and λ ← λ / (2εm).
The updated strong classifier is h(x) = argmax over y ∈ {-1, +1} of Σm αm I(hm(x) = y), where αm = ln((1 - εm)/εm) and the indicator function I(·) is 1 when its argument holds and 0 otherwise. Check whether the condition for ending the update is reached; if not, return to the training step to process the L2ECM feature x and corresponding label y of the next superpixel; if so, end the training step.
The present invention uses the method of Online Boosting online learning to learn a target/background classifier from the previous frames of the video, and applies that classifier to classify target and background in the next frame, which greatly speeds up processing.
The innovation of the invention is as follows: in the fine target tracking problem, superpixels are used to partition the image containing the tracked target, and each superpixel is treated as a single point, which reduces computational complexity; online learning is used to segment the target from the background. In the traditional Online Boosting algorithm, the weights of the training samples are identical and do not change over time. In fine target tracking, however, the moving target changes constantly, so for an online classifier the weight of an image frame should be smaller the further it lies from the current frame. To achieve this gradual decay, the invention designs an Online Boosting classifier whose sample weights decrease with age; as the number of processed video frames increases, the classifier's performance improves, enabling accurate tracking of the target's fine contour.
The advantage of the present invention is that the fast classification ability of the age-decayed Online Boosting classifier allows fine target contours to be tracked in real time.
Detailed description of the invention
Fig. 1 is a superpixel schematic diagram;
Fig. 2 is the system flow chart.
Specific embodiment
The present invention partitions the candidate region using superpixels. The target and background of the first frame of the video are used to initialize an Online Boosting classifier; this classifier is then used to classify the target and background regions of each subsequent frame, while the classifier itself is updated with the classification result. Finally, dilation is used to connect disconnected regions within the target, yielding the segmented target and background.
To facilitate the description of the present invention, some terms are explained first.
1: Superpixels. Superpixel segmentation and feature extraction are existing, mature algorithms. A superpixel is a small region of the image composed of adjacent pixels with similar color, brightness, and texture; these small regions mostly retain the information needed for further image segmentation and generally do not destroy the boundaries of objects in the image. In our algorithm, superpixels are used to partition the image, so that a set of adjacent pixels with similar features is represented by a single superpixel. Superpixels turn an image from a pixel-level map into a district-level map, an abstraction of the essential information. The SLIC superpixel segmentation algorithm used here is described in detail in "SLIC Superpixels Compared to State-of-the-art Superpixel Methods"; a segmentation result is shown in Fig. 1, where each region enclosed by red contours is a superpixel.
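In practice, SLIC is available in libraries such as scikit-image (`skimage.segmentation.slic`). Purely as an illustration of the underlying idea, the following minimal NumPy sketch clusters pixels by k-means over joint (color, scaled position) features; `simple_superpixels` is an illustrative name, not the patent's code, and it omits the local-window search that makes real SLIC fast:

```python
import numpy as np

def simple_superpixels(image, n_segments=200, compactness=10.0, n_iter=5):
    """Toy SLIC-style superpixels: k-means over (color, scaled position).

    `image` is an H x W x C float array. Returns an H x W label map.
    This is only a sketch of the idea; real SLIC restricts the search
    to a local window around each cluster centre for speed.
    """
    h, w, c = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Spatial coordinates are scaled so that `compactness` trades off
    # colour similarity against spatial proximity, as in SLIC.
    step = np.sqrt(h * w / n_segments)
    feats = np.concatenate(
        [image.reshape(-1, c),
         (compactness / step) * np.stack([ys.ravel(), xs.ravel()], axis=1)],
        axis=1)
    # Initialise cluster centres spread over the image.
    k = n_segments
    centres = feats[np.linspace(0, h * w - 1, k).astype(int)]
    for _ in range(n_iter):
        # Assign each pixel to its nearest centre, then recompute centres.
        d = ((feats[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            members = feats[labels == j]
            if len(members):
                centres[j] = members.mean(0)
    return labels.reshape(h, w)
```

Each resulting label region plays the role of one "point" in the subsequent classification, exactly as described above.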
2: L2ECM features (local log-Euclidean covariance matrix, Local Log-Euclidean Covariance Matrix). This feature extraction is an existing, mature algorithm. For an image I, the raw per-pixel feature is formed as in formula 1: f(x, y) = [I(x, y), |Ix(x, y)|, |Iy(x, y)|, |Ixx(x, y)|, |Iyy(x, y)|]^T, where I(x, y) is the pixel value at position (x, y), |·| denotes absolute value, Ix(x, y) and Iy(x, y) are the first-order partial derivatives in the x and y directions, and Ixx(x, y) and Iyy(x, y) are the second-order partial derivatives. For a superpixel s, let Gs = [f(x1, y1), ..., f(xNs, yNs)] with (xi, yi) ∈ s, where d denotes the length of the raw feature f and Ns denotes the number of pixels contained in superpixel s; Gs is then a d x Ns matrix whose columns are raw features f. Compute the covariance matrix Cs of Gs; Cs is a d x d matrix, and its dimension is independent of Ns. To avoid computing geodesic distances between covariance matrices in Riemannian space, Cs is mapped to log(Cs) in Euclidean space; since the matrix log(Cs) is symmetric, half of it (the upper triangle) is arranged into a vector to form the L2ECM feature, so the L2ECM feature of one superpixel has length d(d + 1)/2.
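This construction can be sketched in NumPy as follows. `l2ecm` is an illustrative name; the per-pixel feature layout ([I, |Ix|, |Iy|, |Ixx|, |Iyy|] per channel, giving d = 15 and a 120-dimensional descriptor for an RGB image) follows the description above:

```python
import numpy as np

def l2ecm(image, mask, eps=1e-6):
    """L2ECM descriptor of the pixels selected by the boolean `mask`.

    `image` is H x W x C. The raw per-pixel feature is
    [I, |I_x|, |I_y|, |I_xx|, |I_yy|] for each channel, so d = 5 * C
    (d = 15 for RGB) and the descriptor has length d * (d + 1) / 2
    (120 for RGB).
    """
    chans = []
    for ch in np.moveaxis(image.astype(float), -1, 0):
        iy, ix = np.gradient(ch)                 # first-order derivatives
        iyy, _ = np.gradient(iy)                 # second-order derivatives
        _, ixx = np.gradient(ix)
        chans += [ch, np.abs(ix), np.abs(iy), np.abs(ixx), np.abs(iyy)]
    feats = np.stack([f[mask] for f in chans])   # d x N_s raw feature matrix G_s
    cov = np.cov(feats) + eps * np.eye(feats.shape[0])  # C_s, regularised
    # Matrix logarithm of the SPD covariance via eigendecomposition,
    # mapping it from the Riemannian manifold into Euclidean space.
    vals, vecs = np.linalg.eigh(cov)
    log_cov = (vecs * np.log(vals)) @ vecs.T
    iu = np.triu_indices(log_cov.shape[0])       # upper triangle only
    return log_cov[iu]                           # length d * (d + 1) / 2
```

The small `eps` ridge is an assumption added to keep the covariance strictly positive definite when a superpixel has few pixels, so that the matrix logarithm is well defined.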
3: Online Boosting classifier. An Online Boosting classifier h consists of M weak classifiers hm, m ∈ {1, 2, ..., M}. The input is a pair <x, y>, where x is a 120-dimensional L2ECM feature and y ∈ {-1, +1}. The training of the Online Boosting classifier h proceeds as follows:
For weak classifiers m = 1 to M, initialize λm^c and λm^w, which respectively track the correct-classification weight and misclassification weight (i.e. the accuracy and error) of weak classifier hm;
Initialize the penalty coefficient λ = 1; on the one hand, λ penalizes the correctness of hm's classification, and on the other hand it decays the weight of samples as they age;
For each classifier hm, draw k ~ Poisson(λ) according to the Poisson distribution P(λ = 1) to obtain a loop count k; the condition for ending the loop may be reaching the count k, or any other loop termination condition customary in the art, and k may also be obtained by other means known to those skilled in the art;
Loop k times:
Find the optimal decision boundary of the m-th weak classifier hm: L0(hm, (x, y)). Here L0(hm, (x, y)) denotes the training process of a weak classifier; an existing decision stump is used as the weak classifier, and this training process is the same as in traditional Boosting. Other existing weak classifiers may also be used for training;
If hm(x) classifies correctly, i.e. y = hm(x),
then update λm^c ← λm^c + λ + 1 and εm = λm^w / (λm^c + λm^w), where εm denotes the error rate of classifier hm after adding the penalty term λ, and update λ ← λ / (2(1 - εm));
If hm(x) misclassifies, i.e. y ≠ hm(x),
then update λm^w ← λm^w + λ + 1, εm = λm^w / (λm^c + λm^w), and λ ← λ / (2εm).
The new classifier is h(x) = argmax over y ∈ {-1, +1} of Σm αm I(hm(x) = y), with voting weight αm = ln((1 - εm)/εm); for any new input x, this classifier can then be used to classify it:
In the update formulas for λm^c, λm^w, and λ, the terms λm^c and λm^w penalize the correctness of hm's classification, while the "+1" term decays the weight of samples over time.
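A minimal sketch of this update scheme, assuming decision stumps as weak learners and standard Oza-Russell-style Poisson/λ bookkeeping with the "+1" decay term described above; the class name, the running-mean stump, the small initial values of λm^c/λm^w, and the clipping of λ are all illustrative assumptions, not the patent's exact implementation:

```python
import numpy as np

class OnlineBoost:
    """Sketch of the sample-weight-decaying Online Boosting update.

    Weak learner m is a decision stump on one feature dimension
    (assigned round-robin): it keeps running per-class means and
    thresholds at their midpoint. lam_c / lam_w accumulate the
    penalty-weighted mass of correctly / wrongly classified samples;
    the extra "+1" steadily dilutes old contributions, so frames far
    in the past weigh less, as described in the text.
    """

    def __init__(self, n_weak, n_features, seed=0):
        self.rng = np.random.default_rng(seed)
        self.dims = np.arange(n_weak) % n_features
        self.mu = np.zeros((n_weak, 2))      # running means for y=-1 / y=+1
        self.cnt = np.zeros((n_weak, 2))
        self.lam_c = np.full(n_weak, 1e-3)   # small init avoids division by zero
        self.lam_w = np.full(n_weak, 1e-3)

    def _stump(self, m, v):
        # Predict +1 on the side of the positive-class mean.
        sign = 1.0 if self.mu[m, 1] >= self.mu[m, 0] else -1.0
        return sign if v > self.mu[m].mean() else -sign

    def update(self, x, y):
        lam = 1.0                             # per-sample penalty coefficient
        for m in range(len(self.dims)):
            v = x[self.dims[m]]
            # k ~ Poisson(lam): how many times this weak learner sees x.
            for _ in range(max(self.rng.poisson(lam), 1)):
                c = 1 if y > 0 else 0
                self.cnt[m, c] += 1
                self.mu[m, c] += (v - self.mu[m, c]) / self.cnt[m, c]
            if self._stump(m, v) == y:        # correct: reward, shrink lam
                self.lam_c[m] += lam + 1      # "+1" decays old sample weight
                eps = self.lam_w[m] / (self.lam_c[m] + self.lam_w[m])
                lam /= 2.0 * (1.0 - eps)
            else:                             # wrong: penalise, grow lam
                self.lam_w[m] += lam + 1
                eps = self.lam_w[m] / (self.lam_c[m] + self.lam_w[m])
                lam /= 2.0 * eps
            lam = min(lam, 100.0)             # keep the sketch numerically stable

    def predict(self, x):
        # Weighted vote: alpha_m = ln((1 - eps_m) / eps_m).
        score = 0.0
        for m in range(len(self.dims)):
            eps = self.lam_w[m] / (self.lam_c[m] + self.lam_w[m])
            eps = min(max(eps, 1e-6), 1.0 - 1e-6)
            score += np.log((1 - eps) / eps) * self._stump(m, x[self.dims[m]])
        return 1 if score >= 0 else -1
```

Feeding each frame's superpixel features through `update` makes recent frames dominate the accumulated λ mass, which is the decay behaviour the invention relies on.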
The concrete operation steps are shown in Fig. 2:
Initialization step:
Step 1: for the first frame of the video, segment the image into superpixels using the SLIC algorithm, with the maximum number of superpixels set to 200.
Step 2: extract L2ECM features from the superpixel-segmented image. A color image has three RGB channels, so the L2ECM feature of each superpixel is a 120-dimensional column vector. If the whole image is divided into N superpixels, the feature matrix X of the image is a 120 x N matrix. From the annotation of the first frame, the classification label y ∈ {-1, +1} of each superpixel is obtained, so the classification result Y of the whole image is an N x 1 matrix.
Step 3: train the Online Boosting classifier h using X and Y obtained in step 2, where the feature matrix X consists of the individual superpixel features x and the classification result Y consists of their corresponding labels y.
Tracking step:
Step 4: starting from the second frame of the video, segment each frame into superpixels using the SLIC algorithm and extract L2ECM features, obtaining the corresponding feature matrix X. Classify each column of X (i.e. each superpixel) with the classifier h, obtaining the classification result Yp with entries in {-1, +1}.
Step 5: connect the disconnected regions within the target using dilation, thereby obtaining the new target/background classification result.
Step 6: update the classifier h using X and the new classification result to obtain the updated classifier h, then go to step 4 to process the next frame of the image.
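Step 5's dilation can be sketched in pure NumPy (in practice `scipy.ndimage.binary_dilation` would do the same job); `dilate` is an illustrative name:

```python
import numpy as np

def dilate(mask, iters=1):
    """Binary dilation with a 3x3 structuring element, pure NumPy.

    Used here to bridge small gaps so a target split into several
    superpixel blobs becomes one connected region (step 5 above).
    """
    m = mask.astype(bool)
    for _ in range(iters):
        p = np.pad(m, 1)  # pad with False so the border is handled cleanly
        # OR together the 3x3 neighbourhood of every pixel.
        m = (p[:-2, :-2] | p[:-2, 1:-1] | p[:-2, 2:] |
             p[1:-1, :-2] | p[1:-1, 1:-1] | p[1:-1, 2:] |
             p[2:, :-2] | p[2:, 1:-1] | p[2:, 2:])
    return m
```

Increasing `iters` bridges larger gaps at the cost of thickening the contour, so in practice the iteration count would be kept small.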

Claims (1)

1. A target contour tracking method based on Online Boosting, characterized by comprising the following steps:
1) initialization step:
1-1) segment the 1st frame of the video into superpixels;
1-2) extract local log-Euclidean covariance matrix (L2ECM) features X from the superpixel-segmented image, where each column of the L2ECM feature matrix is the L2ECM feature x of one superpixel; separate the L2ECM features of the 1st frame into target features and background features, giving each superpixel a classification label y ∈ {-1, +1}, where +1 denotes target and -1 denotes background, and finally obtain the classification result Y of the image;
1-3) train an Online Boosting classifier h using the L2ECM features X and the classification result Y;
2) tracking steps:
2-1) segment the t-th frame of the video (t = 2, 3, ...) into superpixels and extract L2ECM features X; use the Online Boosting classifier h to classify each column of the feature matrix X, obtaining the classification result Yp;
2-2) connect the disconnected regions within the target by dilation, obtaining an updated classification result;
2-3) update the Online Boosting classifier h using the L2ECM features X and the updated classification result; set t = t + 1 and return to step 2-1) to process the next frame of the video;
wherein the Online Boosting classifier h consists of M weak classifiers hm, with weak-classifier index m ∈ {1, 2, ..., M}; the training of the Online Boosting classifier h proceeds as follows:
initialization step: initialize each weak classifier hm's correct-classification weight λm^c, misclassification weight λm^w, and the penalty coefficient λ;
training step:
classifier hm receives the L2ECM feature x of an input superpixel and its classification label y, and the classification result of hm on x is examined: if hm classifies x correctly, i.e. hm(x) = y, then update λm^c ← λm^c + λ + 1 and εm = λm^w / (λm^c + λm^w), where εm denotes the error rate of classifier hm after adding the penalty coefficient λ, and update λ ← λ / (2(1 - εm)); if hm misclassifies x, i.e. hm(x) ≠ y, then update λm^w ← λm^w + λ + 1, εm = λm^w / (λm^c + λm^w), and λ ← λ / (2εm);
the updated strong classifier is h(x) = argmax over y ∈ {-1, +1} of Σm αm I(hm(x) = y), where αm = ln((1 - εm)/εm) and the indicator function I(·) is 1 when its argument holds and 0 otherwise; check whether the condition for ending the update is reached; if not, return to the training step to process the L2ECM feature x and corresponding label y of the next superpixel; if so, end the training step.
CN201610657342.XA 2016-08-11 2016-08-11 Target profile tracing method based on Online Boosting Active CN106327527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610657342.XA CN106327527B (en) 2016-08-11 2016-08-11 Target profile tracing method based on Online Boosting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610657342.XA CN106327527B (en) 2016-08-11 2016-08-11 Target profile tracing method based on Online Boosting

Publications (2)

Publication Number Publication Date
CN106327527A CN106327527A (en) 2017-01-11
CN106327527B true CN106327527B (en) 2019-05-14

Family

ID=57740810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610657342.XA Active CN106327527B (en) 2016-08-11 2016-08-11 Target profile tracing method based on Online Boosting

Country Status (1)

Country Link
CN (1) CN106327527B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106952287A (en) * 2017-03-27 2017-07-14 成都航空职业技术学院 A kind of video multi-target dividing method expressed based on low-rank sparse
CN112348826B (en) * 2020-10-26 2023-04-07 陕西科技大学 Interactive liver segmentation method based on geodesic distance and V-net

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101256629A (en) * 2007-02-28 2008-09-03 三菱电机株式会社 Method for adapting a boosted classifier to new samples
CN101814149A (en) * 2010-05-10 2010-08-25 华中科技大学 Self-adaptive cascade classifier training method based on online learning
CN103871081A (en) * 2014-03-29 2014-06-18 湘潭大学 Method for tracking self-adaptive robust on-line target
CN103886619A (en) * 2014-03-18 2014-06-25 电子科技大学 Multi-scale superpixel-fused target tracking method
CN104123555A (en) * 2014-02-24 2014-10-29 西安电子科技大学 Super-pixel polarimetric SAR land feature classification method based on sparse representation
CN105719292A (en) * 2016-01-20 2016-06-29 华东师范大学 Method of realizing video target tracking by adopting two-layer cascading Boosting classification algorithm

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7769228B2 (en) * 2004-05-10 2010-08-03 Siemens Corporation Method for combining boosted classifiers for efficient multi-class object detection

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101256629A (en) * 2007-02-28 2008-09-03 三菱电机株式会社 Method for adapting a boosted classifier to new samples
CN101814149A (en) * 2010-05-10 2010-08-25 华中科技大学 Self-adaptive cascade classifier training method based on online learning
CN104123555A (en) * 2014-02-24 2014-10-29 西安电子科技大学 Super-pixel polarimetric SAR land feature classification method based on sparse representation
CN103886619A (en) * 2014-03-18 2014-06-25 电子科技大学 Multi-scale superpixel-fused target tracking method
CN103871081A (en) * 2014-03-29 2014-06-18 湘潭大学 Method for tracking self-adaptive robust on-line target
CN105719292A (en) * 2016-01-20 2016-06-29 华东师范大学 Method of realizing video target tracking by adopting two-layer cascading Boosting classification algorithm

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Online Real Boosting for Object Tracking under Severe Appearance Changes and Occlusion; Li Xu et al.; ICASSP 2007; 2007-06-04; 925-928 *
Real-Time Tracking via On-line Boosting; Helmut Grabner et al.; BMVC 2006; 2006-01-31; 1-10 *
Improved object tracking method based on online Boosting; Sun Laibing et al.; Journal of Computer Applications; 2013-02-01; Vol. 33, No. 2; 495-498 *

Also Published As

Publication number Publication date
CN106327527A (en) 2017-01-11

Similar Documents

Publication Publication Date Title
Fu et al. Fast and accurate detection of kiwifruit in orchard using improved YOLOv3-tiny model
Naseer et al. Semantics-aware visual localization under challenging perceptual conditions
CN108734723B (en) Relevant filtering target tracking method based on adaptive weight joint learning
CN105930868B (en) A low-resolution airport object detection method based on hierarchical reinforcement learning
CN103886619B (en) A kind of method for tracking target merging multiple dimensioned super-pixel
CN108257154B (en) Polarimetric SAR image change detection method based on regional information and CNN
CN107092870A (en) A kind of high resolution image semantics information extracting method and system
CN105118049A (en) Image segmentation method based on super pixel clustering
CN111739053B (en) An online multi-pedestrian detection and tracking method in complex scenes
Sun et al. Detection of tomato organs based on convolutional neural network under the overlap and occlusion backgrounds
Shuai et al. An improved YOLOv5-based method for multi-species tea shoot detection and picking point location in complex backgrounds
CN108109162A (en) A kind of multiscale target tracking merged using self-adaptive features
CN110647906A (en) Clothing target detection method based on fast R-CNN method
CN104537355A (en) Remarkable object detecting method utilizing image boundary information and area connectivity
CN108256462A (en) A kind of demographic method in market monitor video
CN112329559A (en) Method for detecting homestead target based on deep convolutional neural network
CN110276363A (en) A small bird target detection method based on density map estimation
CN110009060A (en) A Robust Long-Term Tracking Method Based on Correlation Filtering and Object Detection
CN109146925A (en) Conspicuousness object detection method under a kind of dynamic scene
CN106780564A (en) A kind of anti-interference contour tracing method based on Model Prior
CN106127144B (en) Using when compose the point source risk source extraction method of empty integral feature model
Xu et al. Multiscale edge-guided network for accurate cultivated land parcel boundary extraction from remote sensing images
CN110363100A (en) A video object detection method based on YOLOv3
CN106327527B (en) Target profile tracing method based on Online Boosting
Li et al. Development and challenges of object detection: A survey

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210512

Address after: No.3, 11th floor, building 6, no.599, shijicheng South Road, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610041

Patentee after: Houpu clean energy Co.,Ltd.

Address before: 611731, No. 2006, West Avenue, Chengdu hi tech Zone (West District, Sichuan)

Patentee before: University of Electronic Science and Technology of China

TR01 Transfer of patent right
CP01 Change in the name or title of a patent holder

Address after: No.3, 11th floor, building 6, no.599, shijicheng South Road, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610041

Patentee after: Houpu clean energy (Group) Co.,Ltd.

Address before: No.3, 11th floor, building 6, no.599, shijicheng South Road, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610041

Patentee before: Houpu clean energy Co.,Ltd.

CP01 Change in the name or title of a patent holder