CN106327527B - Target profile tracing method based on Online Boosting - Google Patents

Target profile tracing method based on Online Boosting

Info

Publication number
CN106327527B
CN106327527B CN201610657342.XA CN201610657342A
Authority
CN
China
Prior art keywords
pixel
super
feature
classifier
l2ecm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610657342.XA
Other languages
Chinese (zh)
Other versions
CN106327527A (en)
Inventor
解梅
王建国
朱倩
周扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Houpu Clean Energy Group Co ltd
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201610657342.XA
Publication of CN106327527A
Application granted
Publication of CN106327527B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides a target contour tracking method based on Online Boosting. For the problem of fine-grained target tracking, the image containing the tracked target is partitioned into superpixels, each superpixel being treated as a single point, which reduces the computational complexity, and online learning is used to separate target from background. In the traditional Online Boosting algorithm all training samples carry the same weight, and that weight does not change over time. In fine-grained target tracking, however, the moving target changes from moment to moment, so the further an image frame lies from the current frame, the smaller its weight for the online classifier should be. To realize this gradual decay of weights, the present invention designs an Online Boosting classifier whose sample weights decrease with temporal distance; as the number of video frames grows, the performance of the classifier keeps improving, achieving accurate fine-grained delineation of the tracked target.

Description

Target profile tracing method based on Online Boosting
Technical field
The invention belongs to the field of computer vision, and in particular to the field of intelligent monitoring.
Background Art
Video-based fine-grained target tracking must not only locate the tracked target but also accurately describe its shape. It is one of the most fundamental technologies in computer vision and yields a trace of the target contour. Higher-level algorithms can further analyze and process this contour-tracking result to achieve scene understanding, recognition of target actions, identification of human behavior, and other applications. The broad application prospects and high research value of this technology have aroused great interest among researchers at home and abroad.
The key to video-based fine-grained target tracking is the representation of temporal consistency and spatial consistency. Temporal consistency describes the similarity of the target across consecutive frames; spatial consistency describes how well the target can be distinguished from the background within a single frame. Fine-grained tracking of a target in video can be regarded as a binary classification problem, and many related algorithms exist at home and abroad. For example, level-set-based methods split the task into two separate stages, motion estimation and target segmentation, feeding the motion estimate into the segmentation stage. When the motion estimate is inaccurate it degrades the segmentation precision, and in the many videos where the camera itself moves, reliable motion estimation is hard to obtain, so tracking suffers. To handle camera motion, graph-cut-based methods have been proposed that fuse multiple cue functions; the target's motion information is usually one of the important cues, but the motion field of the background often interferes with it, making the tracked contour inaccurate. There are also semi-automatic segmentation methods, which require some target and background regions to be labeled manually, greatly limiting their field of application.
Summary of the invention
The technical problem addressed by the present invention is to provide a fast and accurate fine-grained target tracking algorithm.
The technical scheme adopted by the present invention to solve the above problem is a target contour tracking method based on Online Boosting, comprising the following steps:
1) initialization step:
1-1) Segment the 1st frame of the video into superpixels;
1-2) Extract the local log-Euclidean covariance matrix (L2ECM) feature X from the superpixel-segmented image; each column of the L2ECM feature corresponds to the L2ECM feature x of one superpixel. Separate the L2ECM features of the 1st frame into target features and background features, obtaining for each superpixel a classification label y ∈ {-1, +1}, where +1 denotes target and -1 denotes background, and finally obtaining the classification result Y of the image;
1-3) Train the Online Boosting classifier h using the L2ECM features X and the classification result Y;
2) Tracking step:
2-1) Segment the t-th frame of the video (t = 2, 3, ...) into superpixels and extract the L2ECM feature X; use the Online Boosting classifier h to classify each column of the feature matrix X, obtaining the classification result Yp;
2-2) Use dilation to connect the disconnected regions within the target, obtaining the updated classification result Ŷ;
2-3) Update the Online Boosting classifier h using the L2ECM feature X and the classification result Ŷ; set t = t + 1 and return to step 2-1) to process the next frame of the video;
Wherein the Online Boosting classifier h consists of M weak classifiers hm, with weak-classifier index m ∈ {1, 2, ..., M}. The Online Boosting classifier h is trained as follows:
Initialization step: initialize each weak classifier hm's correct-classification weight λm^c, wrong-classification weight λm^w, and the penalty coefficient λ.
Training step:
Classifier hm receives the L2ECM feature x of an input superpixel and its classification label y, and the prediction of the current classifier hm on x is compared with y. If hm classifies x correctly, hm(x) = y, then update λm^c ← λm^c + λ and λ ← λ/(2(1 − εm)) + 1, where εm = λm^w/(λm^c + λm^w) denotes the error rate of classifier hm after the penalty coefficient λ has been added. If hm misclassifies x, hm(x) ≠ y, then update λm^w ← λm^w + λ and λ ← λ/(2εm) + 1.
The updated classifier is h(x) = sign(Σm I(hm(x) = +1)·log((1 − εm)/εm) − Σm I(hm(x) = −1)·log((1 − εm)/εm)), where the indicator function I(·) is 1 when its argument is true and 0 otherwise. Check whether the stopping condition for updating is reached; if not, return to the training step to process the L2ECM feature x and label y of the next superpixel; if so, end the training step.
Using the Online Boosting method of online learning, the present invention learns a classifier for target and background from the previous frames of the video and uses this classifier to classify target and background in the next frame, which greatly speeds up processing.
The innovation of the invention is as follows: for the problem of fine-grained target tracking, the image containing the tracked target is partitioned into superpixels, each superpixel being treated as a single point, which reduces the computational complexity, and online learning is used to separate target from background. In the traditional Online Boosting algorithm all training samples carry the same weight, which does not change over time. In fine-grained target tracking, however, the moving target changes from moment to moment, so the further an image frame lies from the current frame, the smaller its weight for the online classifier should be. To realize this gradual decay of weights, the invention designs an Online Boosting classifier whose sample weights decrease with temporal distance; as the number of video frames grows, the performance of the classifier keeps improving, achieving accurate fine-grained tracking of the target's contour.
The invention has the advantage that the fast classification ability of the Online Boosting classifier with temporally decaying sample weights allows fine-grained target tracking to run in real time.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of superpixels;
Fig. 2 is the system flow chart.
Detailed Description of Embodiments
The present invention segments the candidate region using superpixels. The target and background of the first video frame are used to initialize the Online Boosting classifier; that classifier then classifies the target and background regions of each subsequent frame, and is itself updated with the classification results. Finally, dilation is used to connect disconnected regions within the target, yielding the segmented target and background.
To facilitate the description of the present invention, some terms are explained first.
1: Superpixels; superpixel segmentation and feature extraction are existing, mature algorithms. A superpixel is a small region of the image composed of a series of adjacent pixels with similar color, brightness, and texture. These small regions mostly retain the information useful for further image segmentation and generally do not destroy the boundary information of objects in the image. In our algorithm superpixels are used to partition the image, so that a cluster of adjacent pixels with similar features is represented by a single superpixel. Superpixels convert an image from a pixel-level representation into a district-level one, an abstraction of the essential information. The superpixel segmentation used in this algorithm is the SLIC algorithm, described in detail in "SLIC Superpixels Compared to State-of-the-art Superpixel Methods"; a segmentation result is shown in Fig. 1, where each region enclosed by a red contour is one superpixel.
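As a concrete illustration, the superpixel partitioning can be reproduced with scikit-image's SLIC implementation. This is a sketch under the assumption that scikit-image's `slic` matches the cited algorithm; the synthetic frame and parameter values are illustrative, not the patent's.

```python
import numpy as np
from skimage.segmentation import slic

# A synthetic 64x64 RGB image standing in for a video frame.
rng = np.random.default_rng(0)
frame = rng.random((64, 64, 3))

# n_segments bounds the number of superpixels; the embodiment below
# caps the superpixel count at 200.
labels = slic(frame, n_segments=200, compactness=10, start_label=0)

print(labels.shape)           # same spatial size as the frame: (64, 64)
print(int(labels.max()) + 1)  # number of superpixels actually produced
```

Each entry of `labels` assigns its pixel to one superpixel, so per-superpixel features can be gathered with a boolean mask per label.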
2: The L2ECM feature, the local log-Euclidean covariance matrix (Local Log-Euclidean Covariance Matrix); its extraction is an existing, mature algorithm. For an image I, the raw feature is assembled in the form of formula 1 from I(x, y), the pixel value at position (x, y) in image I; |Ix(x, y)| and |Iy(x, y)|, the absolute first-order partial derivatives in the x and y directions; and |Ixx(x, y)| and |Iyy(x, y)|, the absolute second-order partial derivatives in the x and y directions, where |·| denotes absolute value. For a superpixel s, let Gs = [f(x1, y1), ..., f(xNs, yNs)], where (xi, yi) ∈ s, d denotes the length of the raw feature f ∈ R^d, R^d denotes d-dimensional space, and Ns denotes the number of pixels contained in superpixel s; Gs is then a d × Ns matrix each of whose columns is one raw feature vector. Compute the covariance matrix Cs of Gs; Cs is a d × d matrix whose dimension is independent of Ns. To avoid computing geodesic distances between covariance matrices in the Riemannian space, Cs is converted to log(Cs) in Euclidean space. Because the matrix log(Cs) is symmetric, taking half of it (the upper triangle) and arranging it into a vector constitutes the L2ECM feature; the L2ECM feature of one superpixel therefore has length d(d + 1)/2.
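To make the construction concrete, the following sketch computes the L2ECM feature of one superpixel, assuming a 5-component raw feature per channel (intensity plus absolute first- and second-order derivatives in x and y); three RGB channels then give d = 15 and a feature length of d(d + 1)/2 = 120, matching the 120-dimensional columns used below. The gradient operator and the regularization term are implementation choices, not the patent's.

```python
import numpy as np
from scipy.linalg import logm

def raw_features(channel):
    """Stack [I, |Ix|, |Iy|, |Ixx|, |Iyy|] for one channel, shape (5, H, W)."""
    Ix = np.gradient(channel, axis=1)
    Iy = np.gradient(channel, axis=0)
    Ixx = np.gradient(Ix, axis=1)
    Iyy = np.gradient(Iy, axis=0)
    return np.stack([channel, np.abs(Ix), np.abs(Iy), np.abs(Ixx), np.abs(Iyy)])

def l2ecm(image, mask):
    """L2ECM feature of the superpixel selected by the boolean `mask`."""
    feats = np.concatenate([raw_features(image[..., c]) for c in range(3)])
    G = feats[:, mask]                   # d x Ns matrix of raw features
    C = np.cov(G)                        # d x d covariance, independent of Ns
    C += 1e-6 * np.eye(C.shape[0])       # regularize so logm is well-defined
    L = logm(C).real                     # map to Euclidean (log) space
    iu = np.triu_indices(L.shape[0])     # upper triangle: d(d+1)/2 entries
    return L[iu]

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))
mask = np.zeros((32, 32), dtype=bool)
mask[4:16, 4:16] = True                  # a toy "superpixel" region
x = l2ecm(img, mask)
print(x.shape)                           # (120,)
```

Note that the feature length depends only on d, not on the superpixel's pixel count, which is exactly why the covariance representation is used.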
3: The Online Boosting classifier. An Online Boosting classifier h consists of M weak classifiers hm, m ∈ {1, 2, ..., M}. Its input is a pair <x, y>, where x is the 120-dimensional L2ECM feature and y ∈ {-1, +1}.
An Online Boosting classifier h consists of M weak classifiers hm, m ∈ {1, 2, ..., M}; training proceeds as follows:
For weak classifiers 1 through M, initialize the weights λm^c and λm^w, which respectively track the (weighted) correct and wrong classifications of weak classifier hm;
Initialize the penalty coefficient λ = 1; λ serves on the one hand to penalize the correctness of hm's classification and on the other hand to decay the weight of samples that are distant in time;
For each classifier hm, draw a loop count k from the Poisson distribution P(λ = 1). Reaching the loop count k is one possible loop-termination condition; other loop-termination conditions customary in the art may also be used, and those skilled in the art may obtain the loop count k by other means;
Loop k times:
Find the optimal decision surface of the m-th weak classifier: L0(hm, (x, y)). L0(hm, (x, y)) denotes the training process of one weak classifier. An existing decision stump (a one-level decision tree) is used here as the weak classifier, and its training is identical to that in a traditional Boosting classifier; other existing weak classifiers may be used instead.
If hm(x) classifies correctly, i.e. y = hm(x), then λm^c ← λm^c + λ and λ ← λ/(2(1 − εm)) + 1, where εm = λm^w/(λm^c + λm^w) denotes the error rate of classifier hm after the penalty term λ has been added;
If hm(x) misclassifies, i.e. y ≠ hm(x), then λm^w ← λm^w + λ and λ ← λ/(2εm) + 1.
The new classifier is h(x) = sign(Σm I(hm(x) = +1)·log((1 − εm)/εm) − Σm I(hm(x) = −1)·log((1 − εm)/εm)); a new input x can then be classified directly.
In the two λ update formulas, the λ/(2(1 − εm)) and λ/(2εm) terms penalize the correctness of hm's classification, and the +1 term decays the weight of samples over time.
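The update rules above can be sketched as a small, runnable classifier. The single-sample decision-stump refit, the unit initial values of the counters, and the exact placement of the "+1" decay term are assumptions where the patent's original formulas are not reproduced here.

```python
import numpy as np

class Stump:
    """Weak classifier h_m: a decision stump on one feature dimension."""
    def __init__(self, rng, dim):
        self.f = int(rng.integers(dim))   # feature index this stump looks at
        self.thr = 0.0
        self.sign = 1
    def fit(self, x, y):                  # stand-in for L0(h_m, (x, y))
        self.thr = x[self.f]
        self.sign = y
    def predict(self, x):
        return self.sign if x[self.f] >= self.thr else -self.sign

class OnlineBoosting:
    def __init__(self, M=10, dim=120, seed=0):
        self.rng = np.random.default_rng(seed)
        self.h = [Stump(self.rng, dim) for _ in range(M)]
        self.lc = np.ones(M)              # lambda_m^c: correct-classification weight
        self.lw = np.ones(M)              # lambda_m^w: wrong-classification weight

    def update(self, x, y):
        lam = 1.0                         # penalty coefficient lambda
        for m, hm in enumerate(self.h):
            for _ in range(self.rng.poisson(lam)):   # k ~ Poisson(lambda)
                hm.fit(x, y)
            if hm.predict(x) == y:
                self.lc[m] += lam
                eps = self.lw[m] / (self.lc[m] + self.lw[m])
                lam = lam / (2 * (1 - eps)) + 1      # "+1" decays older samples
            else:
                self.lw[m] += lam
                eps = self.lw[m] / (self.lc[m] + self.lw[m])
                lam = lam / (2 * eps) + 1

    def predict(self, x):
        eps = self.lw / (self.lc + self.lw)
        votes = np.array([hm.predict(x) for hm in self.h])
        return 1 if np.sum(np.log((1 - eps) / eps) * votes) >= 0 else -1

clf = OnlineBoosting(M=5, dim=4, seed=1)
rng = np.random.default_rng(2)
for _ in range(50):
    x = rng.normal(size=4)
    clf.update(x, 1 if x[0] > 0 else -1)
print(clf.predict(np.ones(4)) in (1, -1))   # True: output is a valid label
```

Because λ is reset to 1 for each incoming sample and then grows through the per-weak-classifier updates, later samples effectively outweigh earlier ones, which is the decaying-weight behavior the text describes.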
The concrete operation steps are shown in Fig. 2:
Initialization step:
Step 1. For the first frame of the video, segment the image into superpixels using the SLIC algorithm, with the maximum number of superpixels set to 200.
Step 2. Extract the L2ECM feature from the superpixel-segmented image. A color image has three RGB channels, so the L2ECM feature of each superpixel is a 120-dimensional column vector. Assuming the whole image is divided into N superpixels, the feature X of the image is a 120 × N matrix. From the annotation of the first frame, the classification label y ∈ {-1, +1} of each superpixel is obtained, so the classification result Y of the whole image is an N × 1 matrix.
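The bookkeeping in this step is simple; a sketch with made-up per-superpixel features and labels (the values are illustrative only):

```python
import numpy as np

N = 6                                            # number of superpixels in the frame
rng = np.random.default_rng(0)
features = [rng.random(120) for _ in range(N)]   # one 120-dim L2ECM vector each
labels = [1, 1, -1, -1, -1, 1]                   # +1 target, -1 background (toy annotation)

X = np.column_stack(features)                    # 120 x N feature matrix
Y = np.array(labels).reshape(N, 1)               # N x 1 classification result
print(X.shape, Y.shape)                          # (120, 6) (6, 1)
```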
Step 3. Train the Online Boosting classifier h using X and Y obtained in step 2; the feature X is composed of the features x of the individual superpixels, and the classification result Y is composed of their corresponding labels y.
Tracking step:
Step 4. Starting from the second frame of the video, segment each frame into superpixels using the SLIC algorithm and extract L2ECM features, obtaining the corresponding feature matrix X. Classify each column of X (i.e., each superpixel) with classifier h, obtaining the classification result Yp ∈ {-1, +1}.
Step 5. Connect the disconnected regions within the target using dilation, thereby obtaining the new target and background classification result Ŷ.
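Step 5 can be illustrated with scipy's morphology routines; using a closing (dilation followed by erosion), so that the mask does not simply grow outward, is an assumption here, since the text names only dilation.

```python
import numpy as np
from scipy import ndimage

# Toy binary target mask split in two by a one-pixel gap (row 5).
mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 2:8] = True
mask[6:8, 2:8] = True

_, n_before = ndimage.label(mask)
closed = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
_, n_after = ndimage.label(closed)
print(n_before, n_after)                  # 2 1  (the gap has been bridged)
```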
Step 6. Update classifier h using X and Ŷ to obtain the new classifier h, then go to step 4 to process the next frame.

Claims (1)

1. A target contour tracking method based on Online Boosting, characterized by comprising the following steps:
1) an initialization step:
1-1) segmenting the 1st frame of the video into superpixels;
1-2) extracting the local log-Euclidean covariance matrix (L2ECM) feature X from the superpixel-segmented image, each column of the L2ECM feature corresponding to the L2ECM feature x of one superpixel; separating the L2ECM features of the 1st frame into target features and background features to obtain for each superpixel a classification label y ∈ {-1, +1}, where +1 denotes target and -1 denotes background, finally obtaining the classification result Y of the image;
1-3) training the Online Boosting classifier h using the L2ECM features X and the classification result Y;
2) a tracking step:
2-1) segmenting the t-th frame of the video, t = 2, 3, ..., into superpixels and extracting the L2ECM feature X; classifying each column of the feature matrix X with the Online Boosting classifier h to obtain the classification result Yp;
2-2) connecting the disconnected regions within the target by dilation to obtain the updated classification result Ŷ;
2-3) updating the Online Boosting classifier h using the L2ECM feature X and the classification result Ŷ, setting t = t + 1, and returning to step 2-1) to process the next frame of the video;
wherein the Online Boosting classifier h consists of M weak classifiers hm, with weak-classifier index m ∈ {1, 2, ..., M}, and is trained as follows:
an initialization step: initializing each weak classifier hm's correct-classification weight λm^c, wrong-classification weight λm^w, and the penalty coefficient λ;
a training step:
classifier hm receives the L2ECM feature x of an input superpixel and the corresponding classification label y, and the prediction of the current classifier hm on x is compared with y: if hm classifies x correctly, hm(x) = y, then updating λm^c ← λm^c + λ and λ ← λ/(2(1 − εm)) + 1, where εm = λm^w/(λm^c + λm^w) denotes the error rate of classifier hm after the penalty coefficient λ has been added; if hm misclassifies x, hm(x) ≠ y, then updating λm^w ← λm^w + λ and λ ← λ/(2εm) + 1;
the updated classifier being h(x) = sign(Σm I(hm(x) = +1)·log((1 − εm)/εm) − Σm I(hm(x) = −1)·log((1 − εm)/εm)), where the indicator function I(·) is 1 when its argument is true and 0 otherwise; checking whether the stopping condition for updating is reached; if not, returning to the training step to process the L2ECM feature x and the corresponding classification label y of the next superpixel; if so, ending the training step.
CN201610657342.XA 2016-08-11 2016-08-11 Target profile tracing method based on Online Boosting Active CN106327527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610657342.XA CN106327527B (en) 2016-08-11 2016-08-11 Target profile tracing method based on Online Boosting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610657342.XA CN106327527B (en) 2016-08-11 2016-08-11 Target profile tracing method based on Online Boosting

Publications (2)

Publication Number Publication Date
CN106327527A CN106327527A (en) 2017-01-11
CN106327527B true CN106327527B (en) 2019-05-14

Family

ID=57740810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610657342.XA Active CN106327527B (en) 2016-08-11 2016-08-11 Target profile tracing method based on Online Boosting

Country Status (1)

Country Link
CN (1) CN106327527B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106952287A (en) * 2017-03-27 2017-07-14 成都航空职业技术学院 A video multi-target segmentation method based on low-rank sparse representation
CN112348826B (en) * 2020-10-26 2023-04-07 陕西科技大学 Interactive liver segmentation method based on geodesic distance and V-net

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101256629A (en) * 2007-02-28 2008-09-03 三菱电机株式会社 Method for adapting a boosted classifier to new samples
CN101814149A (en) * 2010-05-10 2010-08-25 华中科技大学 Self-adaptive cascade classifier training method based on online learning
CN103871081A (en) * 2014-03-29 2014-06-18 湘潭大学 Method for tracking self-adaptive robust on-line target
CN103886619A (en) * 2014-03-18 2014-06-25 电子科技大学 Multi-scale superpixel-fused target tracking method
CN104123555A (en) * 2014-02-24 2014-10-29 西安电子科技大学 Super-pixel polarimetric SAR land feature classification method based on sparse representation
CN105719292A (en) * 2016-01-20 2016-06-29 华东师范大学 Method of realizing video target tracking by adopting two-layer cascading Boosting classification algorithm

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7769228B2 (en) * 2004-05-10 2010-08-03 Siemens Corporation Method for combining boosted classifiers for efficient multi-class object detection

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101256629A (en) * 2007-02-28 2008-09-03 三菱电机株式会社 Method for adapting a boosted classifier to new samples
CN101814149A (en) * 2010-05-10 2010-08-25 华中科技大学 Self-adaptive cascade classifier training method based on online learning
CN104123555A (en) * 2014-02-24 2014-10-29 西安电子科技大学 Super-pixel polarimetric SAR land feature classification method based on sparse representation
CN103886619A (en) * 2014-03-18 2014-06-25 电子科技大学 Multi-scale superpixel-fused target tracking method
CN103871081A (en) * 2014-03-29 2014-06-18 湘潭大学 Method for tracking self-adaptive robust on-line target
CN105719292A (en) * 2016-01-20 2016-06-29 华东师范大学 Method of realizing video target tracking by adopting two-layer cascading Boosting classification algorithm

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Online real boosting for object tracking under severe appearance changes and occlusion; Li Xu et al.; ICASSP '07; 2007-06-04; 925-928 *
Real-time tracking via on-line boosting; Helmut Grabner et al.; BMVC 2006; 2006-01-31; 1-10 *
Improved object tracking method based on online Boosting; Sun Laibing et al.; Journal of Computer Applications; 2013-02-01; vol. 33, no. 2; 495-498 *

Also Published As

Publication number Publication date
CN106327527A (en) 2017-01-11

Similar Documents

Publication Publication Date Title
Zhang et al. Real-time strawberry detection using deep neural networks on embedded system (rtsd-net): An edge AI application
Fu et al. Fast and accurate detection of kiwifruit in orchard using improved YOLOv3-tiny model
Sun et al. SLIC_SVM based leaf diseases saliency map extraction of tea plant
Maheswari et al. Intelligent fruit yield estimation for orchards using deep learning based semantic segmentation techniques—a review
Naseer et al. Semantics-aware visual localization under challenging perceptual conditions
CN106997597B (en) It is a kind of based on have supervision conspicuousness detection method for tracking target
CN105528794B (en) Moving target detecting method based on mixed Gauss model and super-pixel segmentation
Cao et al. Large scale crowd analysis based on convolutional neural network
Tan et al. Vehicle detection in high resolution satellite remote sensing images based on deep learning
CN103679154A (en) Three-dimensional gesture action recognition method based on depth images
CN106127791A (en) A kind of contour of building line drawing method of aviation remote sensing image
CN107045722B (en) Merge the video signal process method of static information and multidate information
CN107146219B (en) Image significance detection method based on manifold regularization support vector machine
CN103984955A (en) Multi-camera object identification method based on salience features and migration incremental learning
Shuai et al. An improved YOLOv5-based method for multi-species tea shoot detection and picking point location in complex backgrounds
CN103309982A (en) Remote sensing image retrieval method based on vision saliency point characteristics
Tang et al. Unsupervised joint adversarial domain adaptation for cross-scene hyperspectral image classification
CN106327527B (en) Target profile tracing method based on Online Boosting
CN104050674B (en) Salient region detection method and device
CN114689038A (en) Fruit detection positioning and orchard map construction method based on machine vision
Wang et al. MRUNet: A two-stage segmentation model for small insect targets in complex environments
Wei Small object detection based on deep learning
Pillai et al. Fine-Tuned EfficientNetB4 Transfer Learning Model for Weather Classification
Yue et al. SCFNet: Semantic correction and focus network for remote sensing image object detection
CN103927517B (en) Motion detection method based on human body global feature histogram entropies

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210512

Address after: No.3, 11th floor, building 6, no.599, shijicheng South Road, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610041

Patentee after: Houpu clean energy Co.,Ltd.

Address before: 611731, No. 2006, West Avenue, Chengdu hi tech Zone (West District, Sichuan)

Patentee before: University of Electronic Science and Technology of China

CP01 Change in the name or title of a patent holder

Address after: No.3, 11th floor, building 6, no.599, shijicheng South Road, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610041

Patentee after: Houpu clean energy (Group) Co.,Ltd.

Address before: No.3, 11th floor, building 6, no.599, shijicheng South Road, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610041

Patentee before: Houpu clean energy Co.,Ltd.