CN106570887A - Adaptive Mean Shift target tracking method based on LBP features - Google Patents
- Publication number: CN106570887A (application CN201610965929.7A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications

- G06F18/22 — Pattern recognition; Analysing; Matching criteria, e.g. proximity measures
- G06T2207/10016 — Indexing scheme for image analysis or image enhancement; Image acquisition modality; Video; image sequence
- G06T2207/30221 — Indexing scheme for image analysis or image enhancement; Subject of image; Sports video; sports image
Abstract
The invention relates to the technical fields of computer vision and target tracking, and aims to guarantee robust target tracking under different kinds of background interference, to improve the robustness and adaptability of the algorithm, and to cope effectively with the continual changes of target scale and orientation during tracking. The technical scheme adopted by the invention, an adaptive Mean Shift target tracking method based on LBP features, comprises the following steps: (1) target model generation; (2) similarity measurement: the similarity between the target model and the target candidate model is measured with the Bhattacharyya coefficient; and (3) target scale and orientation estimation: Mean Shift iteration is performed on the target region until it converges to the spatial location of the candidate target, matrix decomposition is applied to the weight map of the target candidate region built from joint texture and color features, and the scale and orientation of the target candidate region are computed by matrix analysis. The method is mainly used in target tracking applications.
Description
Technical field
The present invention relates to the technical field of computer vision and target tracking, and more particularly to a scale-adaptive Mean Shift target tracking method that combines two-dimensional local binary features with a color histogram.
Background art
Computer vision is a discipline that has developed rapidly in recent years; its research covers fields such as intelligent surveillance, robot visual guidance, human-computer interaction, three-dimensional object reconstruction, and autonomous driving. Among the many research directions of computer vision, moving target tracking based on image sequences has received wide attention from academia and industry at home and abroad. It has important application value in intelligent surveillance, robot navigation, intelligent transportation, video content analysis and understanding, and related fields, and is an indispensable key technology.
A large number of excellent tracking algorithms have emerged from research on moving target tracking in image sequences. Among them, the moving target tracking algorithm based on Mean Shift has attracted much attention for its low computational cost and its insensitivity to target rotation and deformation, and has become a research hotspot in the target tracking field. In 2003, Comaniciu et al. introduced the Mean Shift algorithm into the target tracking domain and proposed a moving target tracking algorithm of milestone significance based on Mean Shift (see references [1, 2]). Since then, researchers at home and abroad have proposed many excellent improved algorithms addressing the defects of the original method (see references [3]-[8]).
Although the original Mean Shift target tracking algorithm has the advantages of low computational cost and insensitivity to target deformation, rotation, and partial occlusion, its limitations are also obvious:
(1) It uses a color histogram as the appearance feature of the target, which cannot capture all the information of the target; moreover, color features are sensitive to illumination changes, so the tracked target is easily lost when the illumination varies, and the tracked target cannot be effectively distinguished when its color is close to that of the background (see references [3] and [4]).
(2) It cannot effectively estimate the target scale and orientation (see reference [5]).
(3) It lacks an effective target model update policy (see references [6] and [7]).
(4) It cannot overcome background clutter interference: when much clutter appears in the background, the tracked target is easily lost (see reference [8]).
References:
[1] Comaniciu D, Ramesh V, Meer P. Real-time tracking of non-rigid objects using Mean Shift[C]. IEEE Conference on Computer Vision and Pattern Recognition, 2000: 142-149.
[2] Comaniciu D, Ramesh V, Meer P. Kernel-based object tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(5): 564-575.
[3] Leichter I, Lindenbaum M, Rivlin E. Mean Shift tracking with multiple reference color histograms[J]. Computer Vision and Image Understanding, 2010, 114(3): 400-408.
[4] Tan Xiao-yang, Triggs B. Enhanced local texture feature sets for face recognition under difficult lighting conditions[J]. IEEE Transactions on Image Processing, 2010, 19(6): 1635-1650.
[5] Tomas Vojir, Jana Noskova, Jiri Matas. Robust scale-adaptive mean-shift for tracking[J]. Pattern Recognition Letters, 2014, 49(1): 250-258.
[6] Huiyu Zhou, Yuan Yuan, Chunmei Shi. Object tracking using SIFT features and mean shift[J]. Computer Vision and Image Understanding, 2009, 113(3): 345-352.
[7] Nan Luo, Quansen Sun, Qiang Chen. A novel tracking algorithm via feature points matching[J]. PLoS ONE, 2015.
[8] Fouad Bousetouane, Lynda Dib, Hichem Snoussi. Improved mean shift integrating texture and color features for robust real time object tracking[J]. 2013, 29: 155-170.
Summary of the invention
To overcome the deficiencies of the prior art, the present invention aims to guarantee robust target tracking under different kinds of background interference, to improve the robustness and adaptability of the algorithm, and to cope effectively with the continual changes of target scale and orientation during tracking. The technical solution adopted by the present invention is an adaptive Mean Shift target tracking method combining LBP features, with the following steps:
(1) Target model generation:
The target model is described by a joint histogram composed of the local binary features and the color features of the image; that is, the color and texture features inside the mask formed by the local binary patterns are used to build a joint texture-color target model.
(2) Similarity measurement:
The similarity between the target model and the target candidate model is measured with the Bhattacharyya coefficient, which represents the cosine of the angle between two vectors: the larger its value, the more similar the target model and the target candidate model. The Bhattacharyya coefficient of the target model and the target candidate model is computed first, and a measurement criterion is specified under which the similarity is maximized.
(3) Target scale and orientation estimation:
During tracking, Mean Shift iteration is first performed on the target region until it converges to the spatial location of the candidate target; then matrix decomposition is applied to the weight map of the target candidate region built from the joint texture-color features generated in (1), and the scale and orientation of the target candidate region are computed by matrix analysis.
The local binary pattern operator LBP with gray-scale invariance and rotation invariance is obtained from the following model:

$$LBP_{P,R}^{riu2}=\begin{cases}\sum_{p=0}^{P-1}s(g_p-g_c), & U(LBP_{P,R})\le 2\\ P+1, & \text{otherwise}\end{cases}$$

where $g_c$ is the pixel value of the window center point $(x_c,y_c)$, $P$ is the number of pixels in the window around the center point, $g_p$ is a pixel value in the neighborhood of the center, $R$ is the radius of the neighborhood, and riu2 denotes the rotation-invariant uniform pattern. $U(LBP_{P,R})$ defines the uniformity of the rotation-invariant pattern LBP operator; a pattern is uniform when $U\le 2$. Starting from the point $p=0$, it accumulates the differences of the sign function $s(x)$ of the pixel values with respect to the center, between adjacent points, traversing the $P$ pixels in turn:

$$U(LBP_{P,R})=|s(g_{P-1}-g_c)-s(g_0-g_c)|+\sum_{p=1}^{P-1}|s(g_p-g_c)-s(g_{p-1}-g_c)|$$

where $g_{P-1}$ is the pixel value of pixel $P-1$ in the neighborhood and $g_0$ is the pixel value of the starting point. In addition, $s(g_p-g_c)$ is replaced by $s(g_p-g_c+a)$, where $a$ is a small threshold set to compensate for pixel fluctuations in flat regions; the larger $|a|$ is, the larger the tolerated pixel fluctuation.
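By way of illustration (not part of the claimed method), the rotation-invariant uniform LBP with the fluctuation threshold $a$ can be sketched in Python for a single 3 x 3 patch; the function name, the default threshold a = 3, and the neighbour ordering are assumptions of this sketch:

```python
import numpy as np

def lbp_riu2(patch, a=3):
    """Rotation-invariant uniform LBP label for the centre of a 3x3 patch.

    Uses s(g_p - g_c + a): the small threshold a absorbs pixel fluctuations
    in flat regions (P = 8 neighbours, R = 1). `patch` should be a signed
    int or float array so the subtraction does not wrap.
    """
    gc = patch[1, 1]
    # 8 neighbours taken in circular order around the centre
    coords = [(1, 2), (0, 2), (0, 1), (0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
    s = [1 if patch[r, c] - gc + a >= 0 else 0 for r, c in coords]
    # uniformity U: number of 0/1 transitions along the closed circle
    U = sum(abs(s[p] - s[p - 1]) for p in range(8))
    return sum(s) if U <= 2 else 9   # non-uniform patterns share label P + 1
```

A flat patch yields label 8 (all ones) rather than an unstable mixture of codes, which is exactly what the threshold $a$ is meant to achieve.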
The concrete steps for effectively combining the local binary features LBP of the target with its color features are as follows:
$LBP^{riu2}_{8,1}$ has 9 uniform texture patterns (labels 0-8), and each LBP texture pattern can be regarded as a micro-texture primitive. In the local image blocks detected by the operator, the features include spots, flat regions, edges, and the start and end points of line segments. In the target representation, the micro-texture primitives corresponding to corners, edges, and line segments are called dominant target patterns and represent most of the target's features, while spots and flat regions are called secondary target patterns, corresponding to the secondary textures of the target. The dominant target patterns of the target are extracted by the following formula:

$$T(x,y)=\begin{cases}1, & 2\le LBP^{riu2}_{8,1}(x,y)\le 6\\ 0, & \text{otherwise}\end{cases}\qquad(7)$$

In the operator, labels 0, 1, 7, and 8 correspond to secondary target patterns, and label 9 does not correspond to a target pattern; the dominant target patterns therefore carry labels 2-6. The dominant LBP patterns of the target are extracted by formula (7), and the color features of the image are then combined to describe the target, building a four-dimensional 8 x 8 x 8 x 5 joint texture-color histogram.
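The construction of the 8 x 8 x 8 x 5 joint histogram can be sketched as follows (an illustrative sketch, not the claimed implementation; the function name and the uniform 32-level color quantization are assumptions):

```python
import numpy as np

def joint_histogram(rgb, lbp_labels):
    """4-D joint texture-color histogram of shape (8, 8, 8, 5).

    rgb:        (H, W, 3) uint8 image region
    lbp_labels: (H, W) riu2 labels in 0..9 from the LBP step
    Only pixels whose label lies in the dominant patterns {2..6} contribute
    (the mask T of formula-style selection); each RGB channel is quantized
    to 8 bins and the label is shifted to a texture bin in 0..4.
    """
    hist = np.zeros((8, 8, 8, 5))
    main = (lbp_labels >= 2) & (lbp_labels <= 6)
    r, g, b = (rgb[..., k][main] // 32 for k in range(3))
    t = lbp_labels[main] - 2                      # texture bin 0..4
    np.add.at(hist, (r, g, b, t), 1.0)
    total = hist.sum()
    return hist / total if total > 0 else hist    # normalized histogram
```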
The similarity between the target model and the candidate target model is measured with the Bhattacharyya coefficient $\rho(y)$:

$$\rho(y)=\rho[p(y),q]=\sum_{u=1}^{m}\sqrt{p_u(y)q_u}\qquad(8)$$

where $y$ is the center of the candidate target region, $u=1,2,\dots,m$ indexes the feature bins, and $q=\{q_u\}_{u=1,\dots,m}$ and $p(y)=\{p_u(y)\}_{u=1,\dots,m}$ denote the target model and the candidate target model respectively; $q_u$ and $p_u(y)$ are the probabilities of feature $u$ in the probability distribution histograms of the target region and the candidate target region. The Bhattacharyya coefficient $\rho(y)$ defines the similarity between the tracked target and the candidate target; in addition, a metric function $d(y)$ is defined to represent the distance between the tracked target and the candidate target model:

$$d(y)=\sqrt{1-\rho[p(y),q]}\qquad(9)$$
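A minimal numeric sketch of the two measures (the function names are ours, not the patent's):

```python
import numpy as np

def bhattacharyya(p, q):
    """rho = sum_u sqrt(p_u * q_u) for two normalized histograms."""
    return float(np.sum(np.sqrt(p * q)))

def distance(p, q):
    """d = sqrt(1 - rho): 0 for identical models, 1 for disjoint ones."""
    return float(np.sqrt(1.0 - bhattacharyya(p, q)))
```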
The larger the Bhattacharyya coefficient, the more similar the target model and the candidate target. Expanding $\rho[p(y),q]$ in a Taylor series around $p_u(y_0)$, discarding the higher-order terms and keeping only the first-order term, the linear approximation of $\rho[p(y),q]$ is:

$$\rho[p(y),q]\approx\frac{1}{2}\sum_{u=1}^{m}\sqrt{p_u(y_0)q_u}+\frac{1}{2}\sum_{u=1}^{m}p_u(y)\sqrt{\frac{q_u}{p_u(y_0)}}\qquad(10)$$

where $y_0$ is the center of the target candidate model in the previous frame and $p_u(y_0)$ is the target candidate model of the previous frame. Substituting the target model and the candidate target model gives:

$$\rho[p(y),q]\approx\frac{1}{2}\sum_{u=1}^{m}\sqrt{p_u(y_0)q_u}+\frac{C_h}{2}\sum_{i=1}^{n_h}w(x_i)k\left(\left\|\frac{y-x_i}{h}\right\|^2\right)\qquad(11)$$

where

$$w(x_i)=\sum_{u=1}^{m}\sqrt{\frac{q_u}{p_u(y_0)}}\,\delta[b(x_i)-u]\qquad(12)$$

In formula (11) the first term does not depend on $y$, while the second term is the kernel density estimate at the candidate-region center $y$, in which each pixel $x_i$ carries the weight $w(x_i)$. Finding the best candidate target therefore becomes the problem of finding the local extremum of the probability density function, which is solved by Mean Shift iteration: maximizing (11) by setting its gradient to zero yields the Mean Shift iteration form

$$y_1=\frac{\sum_{i=1}^{n_h}x_i\,w(x_i)\,g\!\left(\left\|\frac{y_0-x_i}{h}\right\|^2\right)}{\sum_{i=1}^{n_h}w(x_i)\,g\!\left(\left\|\frac{y_0-x_i}{h}\right\|^2\right)}\qquad(13)$$

where $g(x)=-k'(x)$. Taking the previous-frame target location $y_0$ as the initial value, $y$ in the formula above is computed iteratively until convergence or until the preset maximum number of iterations is reached.
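The iteration can be sketched as follows (an illustrative sketch: in the full tracker the weights $w(x_i)$ are recomputed inside the kernel window at every step, and with the Epanechnikov profile $g$ is constant, so each update reduces to a weighted centroid; here the weights are held fixed for brevity, and the function name and stopping tolerance are assumptions):

```python
import numpy as np

def mean_shift(pixels, weights, y0, tol=0.5, max_iter=20):
    """Weighted mean-shift update toward the candidate location.

    pixels:  (n, 2) pixel coordinates in the candidate region
    weights: (n,) per-pixel weights w(x_i) from histogram back-projection
    """
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        # Epanechnikov profile: g = -k' is constant, so the update is a
        # weight-averaged centroid of the pixels
        y_new = (pixels * weights[:, None]).sum(axis=0) / weights.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y
```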
The concrete steps of target scale and orientation estimation are as follows. The moment information of the candidate-target-region weight map is used to estimate the scale change of the target during tracking and thereby adapt to target deformation. In the previous frame, the optimal target region found by the Mean Shift iteration serves as the current tracking result; the zeroth-order moment $M_{00}$ of that region is then computed as

$$M_{00}=\sum_{i=1}^{n}w(x_i)\qquad(14)$$

where $w(x_i)$ is the weight at each pixel $x_i$ and $n$ is the number of pixels in the target region. The Bhattacharyya coefficient is used to correct the error introduced by the zeroth-order moment, defining

$$A=c(\rho)M_{00}\qquad(15)$$

where $c(\rho)$ is a monotonically increasing function of the Bhattacharyya coefficient $\rho$ with values between 0 and 1:

$$c(\rho)=\exp\left(\frac{\rho-1}{\sigma}\right)\qquad(16)$$

where $\sigma$ is an adjustable parameter. When $\rho\in(0,1)$ decreases, $c(\rho)\in(0,1)$ also decreases; that is, as the similarity between the target model and the candidate model decreases, $M_{00}$ becomes larger than the area of the real target region, i.e. the error grows.
The center, scale, and orientation of the target candidate region in the next frame are obtained by matrix analysis of the first- and second-order moments of the weight map. The first-order moments $M_{10}$, $M_{01}$ and the second-order moments $M_{20}$, $M_{02}$, $M_{11}$ of the weight map are:

$$M_{10}=\sum_{i=1}^{n}x_{i,1}w(x_i),\quad M_{01}=\sum_{i=1}^{n}x_{i,2}w(x_i)\qquad(17)$$

$$M_{20}=\sum_{i=1}^{n}x_{i,1}^2w(x_i),\quad M_{02}=\sum_{i=1}^{n}x_{i,2}^2w(x_i),\quad M_{11}=\sum_{i=1}^{n}x_{i,1}x_{i,2}w(x_i)\qquad(18)$$

where $(x_{i,1},x_{i,2})$ are the coordinates of pixel $i$. The center $y$ of the candidate target region in the next frame is obtained as the ratio of the first-order moments to the zeroth-order moment:

$$\bar{y}_1=\frac{M_{10}}{M_{00}},\qquad \bar{y}_2=\frac{M_{01}}{M_{00}}\qquad(19)$$

where $(\bar{y}_1,\bar{y}_2)$ are the coordinates of the center $y$. Likewise, the second-order moments describe the shape and orientation of the target region; the central second-order moments $\mu_{20}$, $\mu_{11}$, $\mu_{02}$ are computed from the ratios of the second-order moments to the zeroth-order moment and the center coordinates:

$$\mu_{20}=\frac{M_{20}}{M_{00}}-\bar{y}_1^2,\quad \mu_{02}=\frac{M_{02}}{M_{00}}-\bar{y}_2^2,\quad \mu_{11}=\frac{M_{11}}{M_{00}}-\bar{y}_1\bar{y}_2\qquad(20)$$
To analyze the scale and orientation of the target region, $\mu_{20}$, $\mu_{02}$, $\mu_{11}$ are written into a covariance matrix Cov, and singular value decomposition is applied to the covariance matrix:

$$Cov=\begin{bmatrix}\mu_{20}&\mu_{11}\\ \mu_{11}&\mu_{02}\end{bmatrix}=USU^{T}=\begin{bmatrix}u_{11}&u_{12}\\ u_{21}&u_{22}\end{bmatrix}\begin{bmatrix}\lambda_1&0\\0&\lambda_2\end{bmatrix}\begin{bmatrix}u_{11}&u_{12}\\ u_{21}&u_{22}\end{bmatrix}^{T}\qquad(21)$$

where $U$ and $S$ are the two matrices obtained from the singular value decomposition of the covariance matrix. The columns $(u_{11},u_{21})^{T}$ and $(u_{12},u_{22})^{T}$ represent the directions of the two axes of the target in the target candidate region, so the orientation of the target is given by the angle between the major axis and the horizontal axis. Furthermore, $\lambda_1$ and $\lambda_2$ are the eigenvalues of the covariance matrix Cov, and their ratio equals the ratio of the major and minor axes of the target region, i.e. $\lambda_1/\lambda_2=a/b$. Introducing a scale factor $k$ such that $a=k\lambda_1$ and $b=k\lambda_2$, the area of the target region is $\pi ab=\pi(k\lambda_1)(k\lambda_2)=A$, so the major and minor semi-axes $(a,b)$ of the target region are:

$$a=\sqrt{\frac{\lambda_1 A}{\pi\lambda_2}},\qquad b=\sqrt{\frac{\lambda_2 A}{\pi\lambda_1}}\qquad(22)$$

In this way, the scale and orientation of the target during tracking are estimated.
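The whole scale-and-orientation step can be sketched as follows (an illustrative sketch under stated assumptions: the correction $c(\rho)=\exp((\rho-1)/\sigma)$ is one monotonically increasing choice consistent with the properties required above, the function names are ours, and a non-degenerate region with both eigenvalues positive is assumed):

```python
import numpy as np

def corrected_area(weights, rho, sigma=1.0):
    """A = c(rho) * M00 with c(rho) = exp((rho - 1) / sigma) in (0, 1]."""
    return float(np.exp((rho - 1.0) / sigma)) * float(np.sum(weights))

def scale_and_orientation(coords, weights, area):
    """Semi-axes (a, b) and major-axis angle from weight-map moments.

    coords: (n, 2) pixel coordinates; weights: (n,) weight map w(x_i);
    area: corrected area A. Centre from first moments, then central
    second moments mu_20, mu_02, mu_11, then SVD of the 2x2 covariance.
    """
    m00 = weights.sum()
    centre = (coords * weights[:, None]).sum(axis=0) / m00
    d = coords - centre
    mu20 = (weights * d[:, 0] ** 2).sum() / m00
    mu02 = (weights * d[:, 1] ** 2).sum() / m00
    mu11 = (weights * d[:, 0] * d[:, 1]).sum() / m00
    cov = np.array([[mu20, mu11], [mu11, mu02]])
    U, S, _ = np.linalg.svd(cov)
    lam1, lam2 = S                                    # lam1 >= lam2
    angle = np.degrees(np.arctan2(U[1, 0], U[0, 0]))  # major-axis direction
    k = np.sqrt(area / (np.pi * lam1 * lam2))         # from pi*a*b = A
    return centre, k * lam1, k * lam2, angle
```

For an axis-aligned weight distribution this recovers a zero (or 180-degree, by sign ambiguity of the SVD) orientation and semi-axes in the ratio of the two eigenvalues.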
Features and beneficial effects of the invention:
In the image sequence target tracking method provided by the present invention, the introduction of local binary features (an improved LBP) means that the target model description is no longer limited to a single color feature. On this basis, matrix decomposition is performed on the weight map of the target region, and the moment information is used to estimate the target scale and orientation effectively. The image sequence target tracking method provided by the present invention therefore shows high reliability and robustness for image sequence target tracking under varying illumination conditions and target deformation.
Brief description of the drawings:
Fig. 1 is a schematic diagram of the tracking framework of the tracking algorithm provided by the present invention.
Fig. 2(a) shows partial tracking results of the tracking algorithm provided by the present invention: from left to right and top to bottom, the tracking results of frames 5, 10, 13, 21, 24, and 31.
Fig. 2(b) shows the corresponding partial tracking results of the SOAMST tracking algorithm for frames 5, 10, 13, 21, 24, and 31.
Fig. 3(a) shows partial tracking results of the tracking algorithm provided by the present invention: from left to right and top to bottom, the tracking results of frames 4, 22, 31, 45, 49, and 51.
Fig. 3(b) shows the corresponding partial tracking results of the SOAMST tracking algorithm for frames 4, 22, 31, 45, 49, and 51.
Fig. 4(a) shows partial tracking results of the tracking algorithm provided by the present invention: from left to right and top to bottom, the tracking results of frames 5, 10, 13, 21, 24, and 31.
Fig. 4(b) shows the corresponding partial tracking results of the SOAMST tracking algorithm for frames 5, 10, 13, 21, 24, and 31.
Specific embodiments
The present invention provides a scale-adaptive Mean Shift target tracking algorithm that combines local binary features with a color histogram. Local binary features (LBP) are introduced and combined with color features, so that on the basis of the target's color features, texture information such as edges and corners is further exploited. This guarantees robust target tracking under different kinds of background interference and improves the robustness and adaptability of the algorithm. In addition, during tracking, matrix decomposition is performed on the target region, and the moment information of the target region is used to estimate the target's scale and orientation, so that the target tracking method provided by the present invention effectively copes with the continual changes of target scale and orientation during tracking. Details are given below:
1) Target model and candidate target description: the target model and the candidate target model are described by the joint histogram composed of the texture features (the improved LBP operator) and the color features of the selected target region. Compared with the target model description in the classical Mean Shift algorithm, this overcomes the shortcoming of a single-feature target model, namely the tracking failure caused by changes in the color distribution of the target region when the scene illumination changes. The target model described by the joint histogram in the present invention is more robust to illumination changes and has stronger discriminative power when the background is similar in color to the target; in addition, the running speed of the algorithm is further improved.
2) Similarity measurement: the Bhattacharyya coefficient represents the cosine of the angle between two vectors; the larger its value, the more similar the target model and the target candidate model. It is simple to compute and is the most widely used measure in target tracking based on the Mean Shift algorithm.
3) Target scale and orientation estimation: on the basis of the joint-histogram target model, during tracking the moment information of the joint-feature weight map of the selected region is used to estimate effectively the changes in scale and orientation of the tracked target, overcoming the tracking failure caused by target deformation. Therefore, the scale-adaptive Mean Shift target tracking algorithm provided by the present invention, combining local binary features with a color histogram, can better handle difficult tracking situations such as illumination changes and background interference, while improving on the limitations of the traditional algorithm with respect to target deformation and raising the reliability and robustness of Mean Shift based tracking.
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below.
Embodiment 1
A scale-adaptive Mean Shift target tracking method combining local binary features with a color histogram, whose tracking framework is shown schematically in Fig. 1, can be divided into three steps: target model generation, target scale and orientation estimation, and similarity measurement.
(1) Target model generation:
In the tracking method provided by the present invention, the target model is described by the joint histogram composed of the local binary features and the color features of the image; that is, the color and texture features inside the mask formed by the local binary patterns are used to describe the target, building a joint texture-color target model.
(2) Similarity measurement:
The similarity between the target model and the target candidate model is measured with the Bhattacharyya coefficient, which represents the cosine of the angle between two vectors: the larger its value, the more similar the target model and the target candidate model. It is simple to compute and is the most widely used measure in target tracking based on the Mean Shift algorithm. The Bhattacharyya coefficient of the target model and the target candidate model is computed first, and a measurement criterion is specified under which the similarity is maximized.
(3) Target scale and orientation estimation:
During tracking, Mean Shift iteration is first performed on the target region until it converges to the spatial location of the candidate target; then matrix decomposition is applied to the weight map of the target candidate region built from the joint texture-color features generated in (1), and the scale and orientation of the target candidate region are computed by matrix analysis.
In summary, the introduction of local binary features (an improved LBP) means that the target model description is no longer limited to a single color feature; on this basis, matrix decomposition is performed on the weight map of the target region, and the moment information is used to estimate the target scale and orientation effectively. The image sequence target tracking method provided by the present invention therefore shows high reliability and robustness for image sequence target tracking under varying illumination and target deformation.
Embodiment 2
The scheme of Embodiment 1 and its design principle are described in detail below with reference to Fig. 1.
A scale-adaptive Mean Shift target tracking method combining local binary features with a color histogram, whose tracking framework is shown schematically in Fig. 1, consists of target model generation, target scale and orientation estimation, and similarity measurement. The specific implementation of these three parts is described in detail below.
(1) Target model generation:
In the tracking method provided by the present invention, the target model is described by the joint histogram composed of local binary features (LBP) and color features. How to extract the local binary features of the target region is described in detail below.
The local binary pattern (Local Binary Pattern, LBP) is an operator for describing the local texture features of an image. The original LBP operator is defined in a 3 x 3 window: with the window center pixel as the threshold, the gray values of the 8 adjacent pixels are compared with it; if a surrounding pixel value is greater than or equal to the center pixel value, the position of that pixel is marked as 1, otherwise as 0. The 8 points in the 3 x 3 neighborhood thus produce an 8-bit unsigned number, and its corresponding decimal value is the LBP value of the window, which reflects the texture information of the region.
Its basic mathematical expression is:

$$LBP(x_c,y_c)=\sum_{p=0}^{P-1}s(g_p-g_c)\,2^{p}$$

where $(x_c,y_c)$ are the coordinates of the window center point, $g_c$ is the pixel value of the window center point, $P$ is the number of pixels in the window around the center point $(x_c,y_c)$, $g_p$ is a pixel value in the neighborhood around the center, and $R$ is the radius of the neighborhood. The function $s(x)$ is defined as:

$$s(x)=\begin{cases}1, & x\ge 0\\ 0, & x<0\end{cases}$$
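The original 3 x 3 operator can be sketched as follows (an illustrative sketch; the function name and the bit ordering, which only permutes the resulting codes, are assumptions):

```python
import numpy as np

def lbp_basic(patch):
    """Original 3x3 LBP: threshold the 8 neighbours against the centre
    and read the resulting bits as an unsigned decimal code in 0..255."""
    gc = patch[1, 1]
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for p, (r, c) in enumerate(coords):
        if patch[r, c] >= gc:        # s(g_p - g_c) = 1 when g_p >= g_c
            code |= 1 << p
    return code
```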
The LBP operator above yields one LBP code at each pixel; applying it to an image therefore converts the image into binary features, and the resulting original LBP feature map is itself an image. From this analysis it can be seen that the original LBP features are closely tied to positional information: extracting LBP features directly from two images and performing discriminant analysis produces large errors due to positional differences. The original LBP features therefore have only gray-scale invariance, not rotation invariance, and cannot be used directly for discriminant analysis. An LBP operator with both gray-scale invariance and rotation invariance can be obtained from the following model:

$$LBP_{P,R}^{riu2}=\begin{cases}\sum_{p=0}^{P-1}s(g_p-g_c), & U(LBP_{P,R})\le 2\\ P+1, & \text{otherwise}\end{cases}$$

where riu2 denotes the rotation-invariant uniform pattern, and $U(LBP_{P,R})$ defines the uniformity of the invariant pattern LBP operator; a pattern is uniform when $U\le 2$. Starting from the point $p=0$, it accumulates the differences of the sign function $s(x)$ of the pixel values with respect to the center, between adjacent points, traversing the $P$ pixels in turn:

$$U(LBP_{P,R})=|s(g_{P-1}-g_c)-s(g_0-g_c)|+\sum_{p=1}^{P-1}|s(g_p-g_c)-s(g_{p-1}-g_c)|$$

where $g_{p-1}$ is the pixel value of pixel $p-1$ in the neighborhood and $g_0$ is the pixel value of the starting point. In the flat regions of an image the pixel values fluctuate very little, so a limitation of the LBP operator is that it cannot describe flat image regions effectively. To overcome this shortcoming, the threshold in the LBP operator is improved: $s(g_p-g_c)$ is replaced by $s(g_p-g_c+a)$, where $a$ is a small threshold set to compensate for pixel fluctuations in flat regions; the larger $|a|$ is, the larger the tolerated pixel fluctuation. The present invention adopts this improved thresholding and uses the $LBP^{riu2}_{8,1}$ operator to extract the local texture features of the target.
After extracting the binary features of the image, how to effectively combine the local binary features (LBP) of the target with its color features is described in detail below.
The target model $q_u$ and the target candidate model $p_u$ based on a single color feature are:

$$q_u=C\sum_{i=1}^{n}k\left(\|x_i\|^{2}\right)\delta[b(x_i)-u]$$

$$p_u(y)=C_h\sum_{i=1}^{n_h}k\left(\left\|\frac{y-x_i}{h}\right\|^{2}\right)\delta[b(x_i)-u]$$

where $C$ and $C_h$ are normalization constants, $y$ is the center of the candidate target region, and $q_u$ and $p_u(y)$ are the probabilities of feature $u$ in the probability distribution histograms of the target region and the candidate target region, satisfying $\sum_{u=1}^{m}q_u=1$ and $\sum_{u=1}^{m}p_u=1$. Here $u=1,2,\dots,m$ indexes the color bins, $b(x_i)$ is the color bin index of the pixel at $x_i$, $\{x_i\}_{i=1,2,\dots,n}$ are the coordinates of the pixels in the target region, $n$ and $n_h$ are the numbers of pixels in the target region and the candidate region respectively, $k(x)$ is the Epanechnikov kernel profile, and $h$ is the kernel bandwidth. The kernel assigns each pixel a weight according to its distance from the center point: the closer a pixel is to the center, the larger its weight.
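The kernel-weighted color model can be sketched as follows (an illustrative sketch: the function name, the rectangular-to-unit-circle coordinate normalization, and the concrete Epanechnikov profile $k(x)=1-x$ for $x\le 1$ are assumptions of this sketch):

```python
import numpy as np

def target_model(rgb, bins=8):
    """Kernel-weighted color histogram q_u over a rectangular region.

    Pixels near the region centre get larger weights via the Epanechnikov
    profile on the normalized squared distance; the histogram is
    normalized so that sum_u q_u = 1.
    """
    h, w, _ = rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = ((ys - cy) / max(cy, 1)) ** 2 + ((xs - cx) / max(cx, 1)) ** 2
    k = np.clip(1.0 - r2, 0.0, None)              # Epanechnikov profile
    idx = tuple(rgb[..., c] // (256 // bins) for c in range(3))
    q = np.zeros((bins, bins, bins))
    np.add.at(q, idx, k)                          # weighted bin counts
    return q / q.sum()
```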
The joint histogram obtained by directly combining the extracted LBP features with the color features of the image cannot effectively strengthen the tracking performance of the Mean Shift algorithm; in particular, when the target and the background are similar in color, a joint texture-color histogram built this way cannot effectively distinguish the target from the background. A more effective combination method is therefore needed.
The improved $LBP^{riu2}_{8,1}$ operator has 9 uniform texture patterns, and each LBP texture pattern can be regarded as a micro-texture primitive. In the local image blocks detected by the operator, the features include spots, flat regions, edges, and the start and end points of line segments. In the target representation, micro-texture primitives such as corners, edges, and line segments are called dominant target patterns and represent most of the target's features, while spots and flat regions are called secondary target patterns, corresponding to the secondary textures of the target. The dominant target patterns of the target are extracted by the following formula:

$$T(x,y)=\begin{cases}1, & 2\le LBP^{riu2}_{8,1}(x,y)\le 6\\ 0, & \text{otherwise}\end{cases}\qquad(7)$$

In the operator, labels 0, 1, 7, and 8 correspond to secondary target patterns, and label 9 does not correspond to a target pattern; the dominant target patterns therefore carry labels 2-6. In general, compared with the secondary LBP patterns, the dominant LBP features of the target are more important and describe the target more accurately. The dominant LBP patterns of the target are therefore extracted by formula (7), and the color features of the image are then combined to describe the target, building a four-dimensional 8 x 8 x 8 x 5 joint texture-color histogram. The target model described by this joint histogram is more robust to illumination changes and has stronger discriminative power when the background is similar in color to the target.
(2) Similarity measurement:
The similarity between the target model and the candidate target model is measured with the Bhattacharyya coefficient $\rho(y)$:

$$\rho(y)=\rho[p(y),q]=\sum_{u=1}^{m}\sqrt{p_u(y)q_u}\qquad(8)$$

The Bhattacharyya coefficient $\rho(y)$ defines the similarity between the tracked target and the candidate target; in addition, a metric function $d(y)$ is defined to represent the distance between the tracked target and the candidate target model:

$$d(y)=\sqrt{1-\rho[p(y),q]}\qquad(9)$$

The larger the Bhattacharyya coefficient, the more similar the target model and the candidate target, so finding the optimal candidate target location amounts to maximizing the Bhattacharyya coefficient, i.e. minimizing $d(y)$. Expanding $\rho[p(y),q]$ in a Taylor series around $p_u(y_0)$, discarding the higher-order terms and keeping only the first-order term, the linear approximation of $\rho[p(y),q]$ is:

$$\rho[p(y),q]\approx\frac{1}{2}\sum_{u=1}^{m}\sqrt{p_u(y_0)q_u}+\frac{1}{2}\sum_{u=1}^{m}p_u(y)\sqrt{\frac{q_u}{p_u(y_0)}}\qquad(10)$$

where $y_0$ is the center of the target candidate model in the previous frame and $p_u(y_0)$ is the target candidate model of the previous frame. Substituting the target model and the candidate target model gives:

$$\rho[p(y),q]\approx\frac{1}{2}\sum_{u=1}^{m}\sqrt{p_u(y_0)q_u}+\frac{C_h}{2}\sum_{i=1}^{n_h}w(x_i)k\left(\left\|\frac{y-x_i}{h}\right\|^2\right)\qquad(11)$$

where

$$w(x_i)=\sum_{u=1}^{m}\sqrt{\frac{q_u}{p_u(y_0)}}\,\delta[b(x_i)-u]\qquad(12)$$

In formula (11) the first term does not depend on $y$, while the second term is the kernel density estimate at the coordinate $y$, in which each pixel $x_i$ carries the weight $w(x_i)$. Finding the best candidate target therefore becomes the problem of finding the local extremum of the probability density function, which can be solved by Mean Shift iteration. Maximizing (11) by setting its gradient to zero yields the Mean Shift iteration form

$$y_1=\frac{\sum_{i=1}^{n_h}x_i\,w(x_i)\,g\!\left(\left\|\frac{y_0-x_i}{h}\right\|^2\right)}{\sum_{i=1}^{n_h}w(x_i)\,g\!\left(\left\|\frac{y_0-x_i}{h}\right\|^2\right)}\qquad(13)$$

where $g(x)=-k'(x)$. Taking the previous-frame target location $y_0$ as the initial value, $y$ is computed iteratively until convergence or until the preset maximum number of iterations is reached.
(3) Target scale and orientation estimation part:
In the classical Mean Shift target tracking algorithm, the width and height of the tracking window are fixed and remain constant throughout tracking. When the scale and orientation of the tracked target change, the fixed tracking window cannot adapt well to the target's deformation, so the tracking performance of the algorithm gradually degrades and tracking may even fail.
Since the scale and orientation of the target region are closely related to the weight map of the target region, the present invention uses the moment information of the candidate region's weight map to estimate the change of the target's scale during tracking and thereby adapt to target deformation. In the previous frame, the optimal target region found by the Mean Shift iterative algorithm is taken as the current tracking result, and the zeroth-order moment M00 of that region is then computed by the following formula:
In general, w(xi) denotes the weight at pixel xi and n is the number of pixels in the target region. We regard the zeroth-order moment as the size of the target region; however, when the target weights decrease, the error of the zeroth-order moment grows accordingly. Since the Bhattacharyya coefficient reflects the similarity between the target model and the candidate model, with a value between 0 and 1, we use it to correct the error caused by the zeroth-order moment, defining the following formula:
A = c(ρ)M00 (15)
c(ρ) is a monotonically increasing function of the Bhattacharyya coefficient ρ, with values between 0 and 1:
where σ is an adjustable parameter. As ρ (between 0 and 1) decreases, c(ρ) (between 0 and 1) decreases as well; that is, as the similarity between the target model and the candidate model decreases, M00 becomes larger than the true area of the target region, i.e., the error grows; a smaller c(ρ) therefore corrects the error introduced by M00 well.
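As an illustrative sketch of the area correction A = c(ρ)M00: the closed form of c(ρ) is not reproduced in this excerpt (the formula image is missing), so exp((ρ - 1)/σ) is used here purely as one monotonically increasing function of ρ with values in (0, 1] that satisfies the stated properties; it is an assumption, not the patented formula.

```python
import numpy as np

def corrected_area(weights, rho, sigma=1.0):
    """Estimate the target area as A = c(rho) * M00.

    M00 is the zeroth-order moment of the weight map; c(rho) shrinks the
    estimate as the Bhattacharyya coefficient rho drops, compensating for
    the overestimate described in the text.  exp((rho - 1) / sigma) is an
    assumed form of c; it is monotone increasing with values in (0, 1].
    """
    m00 = weights.sum()                  # zeroth-order moment of the weight map
    c = np.exp((rho - 1.0) / sigma)      # assumed correction function c(rho)
    return c * m00
```

When ρ = 1 (perfect match) the correction is the identity, and lower similarity shrinks the area estimate, as required.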
The center, scale, and orientation of the candidate region in the next frame can be obtained by matrix analysis of the first- and second-order moments of the weight map. The first-order moments M10, M01 and the second-order moments M20, M02, M11 of the weight map are computed as follows:
where (xi,1, xi,2) are the coordinates of pixel xi, and the center y of the next-frame candidate region can be obtained as the ratio of the above first-order moments to the zeroth-order moment:
where y is the center of the next-frame candidate region, whose coordinates are denoted as above. Similarly, the moment information of the second-order moments can be used to describe the shape and orientation of the target region; the central moments μ20, μ11, μ02 are computed from the ratios of the second-order moments to the zeroth-order moment and the squared differences with the center coordinates:
To analyze the scale and orientation of the target region, we write μ20, μ02, μ11 as a covariance matrix Cov and apply singular value decomposition to it:
Wherein,
U and S are the two matrices obtained from the singular value decomposition of the covariance matrix, in which (u11,u21)T and (u12,u22)T respectively represent the directions of the two axes of the target in the candidate region; the orientation of the target can be obtained from the angle between the major axis and the horizontal axis. In addition, λ1 and λ2 are the eigenvalues of the covariance matrix Cov, and their ratio equals the ratio of the major and minor axes of the target region, i.e., λ1/λ2 = a/b. Introducing a scale factor k such that a = kλ1 and b = kλ2, the area of the target region is πab = π(kλ1)(kλ2) = A, so the major and minor axes (a, b) of the target region are respectively:
In this way, the scale and orientation of the target are estimated during tracking, enabling the tracking of the present invention to adaptively adjust the size and orientation of the tracking window.
In summary, the introduction of the improved local binary pattern (LBP) feature means that the description of the target model is no longer limited to a single color feature, giving higher robustness when tracking under illumination changes and when the target color is similar to the background. On this basis, the zeroth-, first-, and second-order moments of the target region's weight map are computed, and the target scale and orientation are effectively estimated by matrix analysis. The image-sequence target tracking method provided by the present invention therefore has high reliability and robustness for target tracking under varying illumination conditions and target deformation.
Embodiment 3
The feasibility of the schemes in Embodiments 1 and 2 is verified below with reference to the specific accompanying drawings:
The tracking method provided by the present invention was applied to three tracking videos with illumination changes and with backgrounds similar in color to the target, and compared with the tracking results of the SOAMST algorithm under the same conditions; the partial tracking results obtained are shown in Figures 2, 3, and 4, respectively.
In the video sequence shown in Figure 2, the region of a wild goose and its surroundings is chosen as the tracking target; the target is very hard to distinguish in the video, and the scale of the goose changes continuously during tracking. Figures 2(a) and 2(b) respectively show the partial tracking results obtained by the tracking algorithm provided by the present invention and by the SOAMST algorithm, with identical target regions selected in the initial frame for both algorithms. As the tracking results in Figure 2 show, the tracking algorithm provided by the present invention locates the target well, with high tracking accuracy and a good tracking effect, whereas the SOAMST algorithm begins to drift at frame 31, and after frame 33 a large offset appears between the target center and the center of the tracking box, causing subsequent tracking to fail.
In the video sequence shown in Figure 3, the region of a car and its surroundings is chosen as the tracking target; in this video sequence the background color is relatively complex, and the target color is similar to the background color. Figures 3(a) and 3(b) respectively show the partial tracking results obtained by the tracking algorithm provided by the present invention and by the SOAMST algorithm, with identical target regions selected in the initial frame for both algorithms. Comparing the tracking results in Figure 3 shows that the tracking algorithm provided by the present invention locks onto the tracked target well, with high tracking precision, and no tracking failure occurs during the whole tracking process.
In the video sequence shown in Figure 4, the region of a car and its surroundings is chosen as the tracking target; in this video sequence the background color is relatively complex and there are obvious illumination changes. Figures 4(a) and 4(b) respectively show the partial tracking results obtained by the tracking algorithm provided by the present invention and by the SOAMST algorithm, with identical target regions selected in the initial frame for both algorithms. Comparing the tracking results in Figure 4 shows that, under tracking conditions with a complex background and strong illumination changes, the tracking algorithm provided by the present invention tracks the target accurately: the tracking box essentially coincides with the target and contains no superfluous background.
Thus, the scale-adaptive Mean Shift target tracking method combining the local binary pattern feature with the color histogram provided by the present invention has high robustness for image-sequence target tracking in scenes with complex backgrounds, illumination changes, targets similar in color to the background, and scale changes.
It will be appreciated by those skilled in the art that the accompanying drawings are schematic diagrams of a preferred embodiment, and that the serial numbers of the embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (5)
1. An adaptive Mean Shift target tracking method combining LBP features, characterized in that the steps are as follows:
(1) Target model generation:
The target model is described by a joint histogram composed of the local binary pattern feature and the color feature of the image; a target model of joint texture-color features describing the target is built from the color and texture features within the mask formed by the local binary pattern;
(2) Similarity measurement:
The similarity between the target model and the target candidate model is measured using the Bhattacharyya coefficient, which represents the cosine of the angle between two vectors; the larger its value, the more similar the target model and the candidate model. The Bhattacharyya coefficient of the above target model and candidate model is first computed, and a measurement criterion is specified so that the similarity is highest under that criterion;
(3) Target scale and orientation estimation:
During tracking, Mean Shift iteration is first applied to the target region so that it converges to the spatial location of the candidate target; matrix decomposition is applied to the joint texture-color weight map of the candidate region generated in (1), and the scale and orientation of the candidate region are computed by matrix analysis.
2. The adaptive Mean Shift target tracking method combining LBP features according to claim 1, characterized in that the local binary pattern (LBP) operator with grey-scale invariance and rotation invariance is obtained from the following model:
where gc represents the pixel value of the center point of the corresponding window, P represents the number of pixels in the window around the center point (xc,yc), gp is a pixel value in the neighborhood of the center point, R represents the radius of the neighborhood, riu2 denotes the rotation-invariant uniform pattern, and U(LBPP,R) is defined for the rotation-invariant uniform LBP operator, with value ≤ 2; starting from the point 0, it counts the changes of the sign function s(x) between adjacent points and the center pixel value, traversing the P pixels in turn:
where gP-1 represents the pixel value of pixel P-1 in the neighborhood and g0 represents the pixel value of the starting point in the neighborhood. s(gp-gc) is replaced by s(gp-gc+a), where a is a small threshold set to compensate for pixel fluctuation in flat regions; the larger |a| is, the larger the permissible pixel fluctuation.
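The thresholded rotation-invariant uniform LBP code described in this claim can be sketched as follows for a single pixel; this is an illustrative sketch under the stated definitions (riu2 code = number of ones for uniform patterns with U ≤ 2, P+1 for non-uniform patterns), with the small threshold a applied as s(gp - gc + a).

```python
import numpy as np

def lbp_riu2(neighbors, center, a=3):
    """Rotation-invariant uniform LBP code for one pixel.

    neighbors : sequence of P neighbour grey values g_0 .. g_{P-1}
                sampled on a circle of radius R around the centre
    center    : grey value g_c of the centre pixel
    a         : small threshold absorbing fluctuation in flat regions
    Returns a code in 0..P for uniform patterns, P+1 otherwise.
    """
    g = np.asarray(neighbors, dtype=float)
    s = (g - center + a >= 0).astype(int)       # thresholded signs s(gp - gc + a)
    # U = number of 0/1 transitions around the (circular) neighbourhood
    u = np.abs(np.diff(np.r_[s, s[0]])).sum()
    if u <= 2:                                  # uniform pattern
        return int(s.sum())                     # riu2 code = number of ones
    return len(s) + 1                           # non-uniform -> label P + 1
```

For P = 8 this yields the 9 uniform labels 0..8 plus the non-uniform label 9 referred to in claim 3.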
3. The adaptive Mean Shift target tracking method combining LBP features according to claim 1, characterized in that the specific steps for effectively combining the local binary pattern (LBP) feature and the color feature of the target are as follows:
There are 9 uniform texture patterns, and each LBP texture pattern can be regarded as a micro-texture primitive. In a local image block, the features detected by the operator include points, flat regions, edges, and the start and end points of line segments. In target representation, the micro-texture primitives comprising corners, edges, and line segments are called the main target patterns and represent most of the target's features, while points and flat regions are called the secondary target patterns, the secondary textures of the target. The main target patterns of the target are extracted by the following formula:
In the operator, labels 0, 1, 7, 8 correspond to the secondary target patterns, and label 9 corresponds to no target pattern. The main target patterns are therefore labeled 2 to 6. The main LBP patterns of the target are extracted by formula (7), and the target is then described jointly with the color feature of the image, building an 8 × 8 × 8 × 5 four-dimensional joint texture-color histogram.
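The 8 × 8 × 8 × 5 joint histogram can be sketched as below; this is an illustrative reading of the claim, assuming 8-bin quantization of each RGB channel (256/32) and one texture bin per main pattern label 2–6, with pixels carrying secondary or no-pattern labels excluded.

```python
import numpy as np

def joint_histogram(rgb, lbp_codes):
    """Build a normalised 8x8x8x5 joint texture-colour histogram.

    rgb       : (n, 3) integer pixel colours (0..255) inside the target window
    lbp_codes : (n,)   riu2 LBP codes; only the main patterns 2..6 are
                kept (5 texture bins), other pixels are skipped
    """
    hist = np.zeros((8, 8, 8, 5))
    main = (lbp_codes >= 2) & (lbp_codes <= 6)   # keep main target patterns only
    r, g, b = (rgb[main] // 32).T                # 256 grey levels -> 8 colour bins
    t = lbp_codes[main] - 2                      # texture bin index 0..4
    np.add.at(hist, (r, g, b, t), 1.0)           # accumulate joint counts
    total = hist.sum()
    return hist / total if total else hist       # normalised model q_u
```

The kernel weighting applied to the target model in the full method is omitted here for brevity; each pixel simply contributes one count.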
4. The adaptive Mean Shift target tracking method combining LBP features according to claim 1, characterized in that the similarity between the target model and the candidate target model is measured by the Bhattacharyya coefficient ρ(y), i.e.,
where y represents the center of the candidate target region, u = 1, 2, ..., m represents any color index, and the target model and the candidate target model are represented respectively by q and p(y); qu and pu(y) respectively represent the probability of feature u in the probability-distribution histograms of the target region and the candidate target region. The Bhattacharyya coefficient ρ(y) defines the similarity between the tracked target and the candidate target; in addition, a metric function d(y) is defined, representing the distance between the tracked target and the candidate target model:
The larger the Bhattacharyya coefficient, the more similar the target model and the candidate target. A Taylor series expansion of ρ[p(y), q] is performed at pu(y0); the higher-order terms are discarded and only the first-order terms are retained, giving the following linear approximation of ρ[p(y), q]:
y0 represents the center of the target candidate model in the previous frame, and pu(y0) represents the candidate model of the previous frame; substituting the target model and the candidate model yields:
Wherein,
In formula (11), the first term is independent of y, and the second term represents the kernel density estimate at the candidate-region center y, in which each pixel xi is weighted by w(xi). The problem of finding the best candidate target is thus converted into the problem of finding the local extremum position of a probability density function; formula (11) is maximized by the Mean Shift iterative method, i.e., setting its gradient to zero yields the Mean Shift iteration form:
where g(x) = -k'(x); taking the previous-frame target location y0 as the initial value, y in the above formula is computed iteratively until convergence or until the preset maximum number of iterations is reached.
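The similarity measure of this claim can be sketched directly: the Bhattacharyya coefficient of two normalised histograms q and p(y) is ρ(y) = Σu √(pu(y)·qu), and the associated distance is d(y) = √(1 − ρ). The sketch below is illustrative only.

```python
import numpy as np

def bhattacharyya(q, p):
    """Bhattacharyya coefficient rho = sum_u sqrt(p_u * q_u) and the
    distance d = sqrt(1 - rho) between two normalised histograms
    (target model q and candidate model p)."""
    rho = np.sqrt(np.asarray(p, dtype=float) * np.asarray(q, dtype=float)).sum()
    return rho, np.sqrt(max(0.0, 1.0 - rho))
```

Identical histograms give ρ = 1 and d = 0; disjoint histograms give ρ = 0 and d = 1, the two extremes of the similarity scale used throughout the method.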
5. The adaptive Mean Shift target tracking method combining LBP features according to claim 1, characterized in that the specific steps of target scale estimation are as follows: the moment information of the candidate region's weight map is used to estimate the change of the target's scale and orientation during tracking and thereby adapt to target deformation. In the previous frame, the optimal target region found by the Mean Shift iterative algorithm is taken as the current tracking result, and the zeroth-order moment M00 of that region is then computed by the following formula:
w(xi) represents the weight at pixel xi and n is the number of pixels in the target region; the Bhattacharyya coefficient is used to correct the error caused by the zeroth-order moment, defining the following formula:
A = c(ρ)M00 (15)
c(ρ) is a monotonically increasing function of the Bhattacharyya coefficient ρ, with values between 0 and 1:
where σ is an adjustable parameter. As ρ (between 0 and 1) decreases, c(ρ) (between 0 and 1) decreases as well; that is, as the similarity between the target model and the candidate model decreases, M00 becomes larger than the true area of the target region, i.e., the error grows;
The center, scale, and orientation of the candidate region in the next frame are obtained by matrix analysis of the first- and second-order moments of the weight map; the first-order moments M10, M01 and the second-order moments M20, M02, M11 of the weight map are computed as follows:
where (xi,1, xi,2) are the coordinates of pixel i, and the center of the next-frame candidate region is obtained as the ratio of the above first-order moments to the zeroth-order moment:
where y is the center of the next-frame candidate region, whose coordinates are denoted as above; similarly, the moment information of the second-order moments can be used to describe the shape and orientation of the target region; the central moments μ20, μ11, μ02 are computed from the ratios of the second-order moments to the zeroth-order moment and the squared differences with the center coordinates:
To analyze the scale and orientation of the target region, μ20, μ02, μ11 are written as a covariance matrix Cov, and singular value decomposition is applied to the covariance matrix:
Wherein,
U and S are the two matrices obtained from the singular value decomposition of the covariance matrix, in which (u11,u21)T and (u12,u22)T respectively represent the directions of the two axes of the target in the candidate region; the orientation of the target can be obtained from the angle between the major axis and the horizontal axis. In addition, λ1 and λ2 are the eigenvalues of the covariance matrix Cov, and their ratio equals the ratio of the major and minor axes of the target region, i.e., λ1/λ2 = a/b. Introducing a scale factor k such that a = kλ1 and b = kλ2, the area of the target region is πab = π(kλ1)(kλ2) = A, so the major and minor axes (a, b) of the target region are respectively:
In this way, the scale and orientation of the target are estimated during tracking.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610965929.7A CN106570887A (en) | 2016-11-04 | 2016-11-04 | Adaptive Mean Shift target tracking method based on LBP features |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106570887A true CN106570887A (en) | 2017-04-19 |
Family
ID=58536299
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610965929.7A Pending CN106570887A (en) | 2016-11-04 | 2016-11-04 | Adaptive Mean Shift target tracking method based on LBP features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106570887A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102737385A (en) * | 2012-04-24 | 2012-10-17 | 中山大学 | Video target tracking method based on CAMSHIFT and Kalman filtering |
CN103886324A (en) * | 2014-02-18 | 2014-06-25 | 浙江大学 | Scale adaptive target tracking method based on log likelihood image |
Non-Patent Citations (3)
Title |
---|
Li Ju et al.: "Mean shift tracking algorithm based on color and LBP multi-features", Journal of Hefei University of Technology (Natural Science Edition) * |
Wang Baoyun et al.: "Adaptive Meanshift tracking algorithm based on joint color-texture feature histograms", Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition) * |
Xie Jie: "Scale-adaptive extended Mean shift tracking algorithm", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107391433A (en) * | 2017-06-30 | 2017-11-24 | 天津大学 | A kind of feature selection approach based on composite character KDE conditional entropies |
CN107403175A (en) * | 2017-09-21 | 2017-11-28 | 昆明理工大学 | Visual tracking method and Visual Tracking System under a kind of movement background |
CN110008795B (en) * | 2018-01-04 | 2021-09-14 | 纬创资通股份有限公司 | Image target tracking method and system and computer readable recording medium |
CN110008795A (en) * | 2018-01-04 | 2019-07-12 | 纬创资通股份有限公司 | Image object method for tracing and its system and computer-readable storage medium |
CN109785366A (en) * | 2019-01-21 | 2019-05-21 | 中国科学技术大学 | It is a kind of for the correlation filtering method for tracking target blocked |
CN110033006A (en) * | 2019-04-04 | 2019-07-19 | 中设设计集团股份有限公司 | Vehicle detecting and tracking method based on color characteristic Nonlinear Dimension Reduction |
CN110648368A (en) * | 2019-08-30 | 2020-01-03 | 广东奥普特科技股份有限公司 | Calibration board corner point discrimination method based on edge features |
CN110648368B (en) * | 2019-08-30 | 2022-05-17 | 广东奥普特科技股份有限公司 | Calibration board corner point discrimination method based on edge features |
CN110619658A (en) * | 2019-09-16 | 2019-12-27 | 北京地平线机器人技术研发有限公司 | Object tracking method, object tracking device and electronic equipment |
CN110619658B (en) * | 2019-09-16 | 2022-04-19 | 北京地平线机器人技术研发有限公司 | Object tracking method, object tracking device and electronic equipment |
CN111209842A (en) * | 2020-01-02 | 2020-05-29 | 珠海格力电器股份有限公司 | Visual positioning processing method and device and robot |
CN111209842B (en) * | 2020-01-02 | 2023-06-30 | 珠海格力电器股份有限公司 | Visual positioning processing method and device and robot |
CN112132862A (en) * | 2020-09-11 | 2020-12-25 | 桂林电子科技大学 | Adaptive scale estimation target tracking algorithm based on unmanned aerial vehicle |
CN112132862B (en) * | 2020-09-11 | 2023-08-15 | 桂林电子科技大学 | Adaptive scale estimation target tracking algorithm based on unmanned aerial vehicle |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106570887A (en) | Adaptive Mean Shift target tracking method based on LBP features | |
CN105869178B (en) | A kind of complex target dynamic scene non-formaldehyde finishing method based on the convex optimization of Multiscale combination feature | |
Wen et al. | A novel automatic change detection method for urban high-resolution remotely sensed imagery based on multiindex scene representation | |
Jia et al. | Visual tracking via adaptive structural local sparse appearance model | |
CN105139412A (en) | Hyperspectral image corner detection method and system | |
CN103473551A (en) | Station logo recognition method and system based on SIFT operators | |
CN104715251B (en) | A kind of well-marked target detection method based on histogram linear fit | |
CN104123554B (en) | SIFT image characteristic extracting methods based on MMTD | |
CN103927511A (en) | Image identification method based on difference feature description | |
CN104077605A (en) | Pedestrian search and recognition method based on color topological structure | |
Shen et al. | Adaptive pedestrian tracking via patch-based features and spatial–temporal similarity measurement | |
CN110222661B (en) | Feature extraction method for moving target identification and tracking | |
CN108182705A (en) | A kind of three-dimensional coordinate localization method based on machine vision | |
CN115496928A (en) | Multi-modal image feature matching method based on multi-feature matching | |
CN103854290A (en) | Extended target tracking method based on combination of skeleton characteristic points and distribution field descriptors | |
Hui | RETRACTED ARTICLE: Motion video tracking technology in sports training based on Mean-Shift algorithm | |
CN110909778B (en) | Image semantic feature matching method based on geometric consistency | |
Guo et al. | Image classification based on SURF and KNN | |
Yang et al. | Collaborative strategy for visual object tracking | |
CN107146215A (en) | A kind of conspicuousness detection method based on color histogram and convex closure | |
Mateus et al. | Articulated shape matching using locally linear embedding and orthogonal alignment | |
CN107330436B (en) | Scale criterion-based panoramic image SIFT optimization method | |
Zheng | Pattern‐driven color pattern recognition for printed fabric motif design | |
CN110751189A (en) | Ellipse detection method based on perception contrast and feature selection | |
Wu et al. | Superpixel tensor pooling for visual tracking using multiple midlevel visual cues fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170419 |