CN103456029B - Mean Shift tracking method resistant to similar-color and illumination-variation interference - Google Patents

Mean Shift tracking method resistant to similar-color and illumination-variation interference

Info

Publication number
CN103456029B
CN103456029B (application number CN201310395734.XA / CN201310395734A)
Authority
CN
China
Prior art keywords
target
tracking
pixel
local
tracked
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310395734.XA
Other languages
Chinese (zh)
Other versions
CN103456029A (en)
Inventor
张红颖
胡正
孙毅刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Civil Aviation University of China
Original Assignee
Civil Aviation University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Civil Aviation University of China filed Critical Civil Aviation University of China
Priority to CN201310395734.XA priority Critical patent/CN103456029B/en
Publication of CN103456029A publication Critical patent/CN103456029A/en
Application granted granted Critical
Publication of CN103456029B publication Critical patent/CN103456029B/en

Landscapes

  • Image Analysis (AREA)

Abstract

A Mean Shift tracking method resistant to similar-color and illumination-variation interference. Starting from the target representation, the method makes full use of the magnitude relations between a target pixel and its eight-neighborhood gray values, extends the local saliency operator LSN, and proposes a new local saliency texture operator, the Local Ternary Number (LTN), to improve the discriminative power of the target representation against similar-color backgrounds. To further improve the discriminative power of the texture feature, key pixels on edges, lines and corners are extracted to generate a target mask; the LTN features of the target pixels inside the mask are then combined with hue information, which is less affected by illumination variation, to obtain a new target model with improved resistance to similar-color and illumination-variation interference. Finally, the proposed target model is embedded into the Mean Shift tracking framework, so that the target can still be tracked continuously and stably when similar background colors and illumination-intensity changes are present in the scene.

Description

Mean Shift tracking method resistant to similar-color and illumination-variation interference
Technical field
The invention belongs to the technical field of computer vision, and in particular relates to a Mean Shift tracking method resistant to similar-color and illumination-variation interference.
Background art
Target tracking refers to locating a region of interest in the next frame of a video sequence from its parameters, such as position, in the current frame. It is an important step in computer vision and is widely applied in military and civil fields such as human-computer interaction, intelligent surveillance, and visual navigation. In real tracking scenes, background colors similar to the tracked target can impair tracking accuracy, and global illumination changes can disturb tracking stability; accurate and sustained target tracking has therefore remained a research hotspot and difficulty of computer vision in recent years.
At present, the kernel-based tracking method proposed by Comaniciu et al. is the most representative approach in the target tracking field; see:
[1] Comaniciu D, Ramesh V, Meer P. Kernel-based object tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(5): 564-577.
Kernel-based tracking uses a kernel-weighted color histogram as the target description model and takes the candidate region maximizing the Bhattacharyya coefficient, found by Mean Shift iterative optimization, as the tracking result. It achieves good results in general scenes and is efficient and practical. However, describing the target with color information alone characterizes it insufficiently, so interference from similar-color backgrounds and illumination changes in the scene easily makes tracking inaccurate or even causes failure.
Since the target representation is crucial to the tracking result, many scholars have started from the representation and proposed various improvements over the traditional RGB color model to raise the performance of kernel-based tracking algorithms.
Color and texture are two complementary low-level visual features: human visual cognition normally first finds a target by its color and then discriminates it by its texture, so fusing color and texture to characterize targets jointly has become a common understanding among scholars. Ning et al. proposed a joint color-texture histogram target model for kernel-based tracking:
[2] Ning J, Zhang L, Zhang D, et al. Robust object tracking using joint color-texture histogram[J]. International Journal of Pattern Recognition and Artificial Intelligence, 2009, 23(07): 1245-1263.
This model combines the RGB color model with the 5 key uniform LBP texture patterns. It enhances the characterization of the target to some extent and runs in good real time, but it discards the remaining LBP texture patterns and thus limits the discriminative power of the texture feature.
Tavakoli et al. analyzed and simplified the LBP texture operator and proposed a new visual descriptor, the Local Similarity Number (LSN). This operator counts, for each pixel, the number of its 8 neighbors with a similar gray value as a measure of that pixel's local saliency; LSN is then combined with the RGB color model into a color-saliency target model, which is embedded in the Mean Shift framework to achieve target tracking:
[3] Tavakoli H R, Moin M S, Heikkila J. Local Similarity Number and its application to object tracking[J]. International Journal of Advanced Robotic Systems, 2013, 10.
Compared with the color-texture model of Ning et al., this model makes full use of every local structure of the 8 neighborhood pixels to extract more useful information, and Tavakoli et al. showed experimentally that it obtains better tracking results than the method of Ning et al. Its shortcoming is that each saliency level of the LSN operator may cover several different local texture structures that the operator cannot distinguish, which is unfavorable for separating the target from a background of similar color; moreover, the RGB color model is strongly affected by illumination-intensity changes, which impairs the tracking stability of the algorithm under illumination variation, for example outdoors.
Summary of the invention
To solve the above problems, the object of the present invention is to provide a Mean Shift tracking method resistant to similar-color and illumination-variation interference.
To achieve this object, the Mean Shift tracking method provided by the invention comprises the following steps carried out in order:
1) Read in the current frame of the video and select the target of interest to be tracked interactively; initialize parameters such as the center coordinate and scale of the target. Extract the local saliency texture operator — the Local Ternary Number (LTN) — for each target pixel, and generate a target mask by taking the key pixels on edges, lines and corners, so as to further improve the discriminative power of the LTN texture feature. Finally, combine the LTN features of the pixels inside the target mask with hue information, which is little affected by illumination-intensity changes, to obtain the target representation; represent all pixels of the target to be tracked in this way to establish the target reference model.
2) Read in the next frame of the video as the current frame. Starting from the tracked position of the target in the previous frame, build the target candidate model in the candidate region of the current frame with the representation of step 1). Using the Bhattacharyya coefficient as the similarity measure between the target reference model and the target candidate model, find by Mean Shift iterative optimization the candidate region in the neighborhood that maximizes the coefficient as the tracking result of the target of interest, with a preset iteration convergence condition; then update the converged position of the target and track it in subsequent frames in the same way until the video ends.
The target representation in step 1) is obtained in the HSV color space by quantizing the H component of the target pixels to 16 levels and then combining the quantized hue of the pixels inside the LTN mask of the tracked target with their local saliency texture operator LTN.
The Mean Shift tracking method provided by the invention starts from the target representation. It makes full use of the magnitude relations between a target pixel and its eight-neighborhood gray values, extends the local saliency operator LSN, and newly proposes a local saliency texture operator, the Local Ternary Number, to improve the discriminative power of the representation under similar-color backgrounds. To improve it further, key pixels on edges, lines and corners are extracted to generate a target mask, and the LTN features of the target pixels inside the mask are combined with hue information, which is less affected by illumination variation, to obtain a new target model more resistant to similar-color and illumination-variation interference. Finally the proposed model is embedded in the Mean Shift tracking framework, so the target can still be tracked continuously and stably when similar background colors and illumination-intensity changes are present in the scene.
Brief description of the drawings
Fig. 1 is the flow chart of the Mean Shift tracking method resistant to similar-color and illumination-variation interference provided by the invention.
Fig. 2(a)-(i) are schematic diagrams of the local saliency levels of a center pixel.
Fig. 3(a)-(f) are schematic diagrams of the generation process of two groups of LSN and LTN operators.
Fig. 4 shows a target to be tracked and the extraction result of its LTN mask.
Fig. 5 shows part of the results of tracking the cart sequence with the Mean Shift tracking method provided by the invention.
Fig. 6 shows the per-frame tracking error and iteration-count curves over the first 100 frames of the above cart sequence for the kernel-based tracking method with the conventional color model and for the method of the invention.
Fig. 7 shows part of the results of tracking the woman sequence with the Mean Shift tracking method provided by the invention.
Fig. 8(a) and (b) give the Mean Shift iteration-count distribution histograms of the kernel-based tracking method with the conventional color model and of the method of the invention, respectively.
Fig. 9(a) and (b) give the tracking-error distribution histograms of the two methods on the woman sequence, respectively.
Detailed description of the embodiments
The Mean Shift tracking method resistant to similar-color and illumination-variation interference provided by the invention is described in detail below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the method comprises the following steps carried out in order:
1) Read in the current frame of the video, select the target of interest to be tracked interactively, and initialize parameters such as its center coordinate and scale. Extract the Local Ternary Number (LTN), a local saliency texture operator, for each target pixel. This operator is obtained by extending the LSN operator, motivated by the observation that each saliency level of LSN may cover several different local texture structures and is therefore unfavorable for distinguishing a target from a background of similar color.
The LSN operator proposed by Tavakoli et al. measures the local saliency of a center pixel by counting the 8 neighbors whose gray values are similar to it, and is defined as follows:
\mathrm{LSN}_{P,R}^{d} = \sum_{i=0}^{P-1} f(g_i - g_c, d) \qquad (1)
where g_i (i = 0, ..., P-1) are the gray values of the P neighborhood pixels at radius R, g_c is the gray value of the center pixel, d is the similarity tolerance, and
f(x, d) = \begin{cases} 1, & |x| \le d \\ 0, & |x| > d \end{cases} \qquad (2)
\mathrm{LSN}_{8,1}^{d} expresses the similarity of a pixel to its 8 neighbors at radius 1; its value ranges over [0, 8], corresponding respectively to the 9 saliency levels of the center pixel shown in Fig. 2(a)-(i).
In Fig. 2, a neighbor similar to the center pixel is drawn as a white circle, otherwise as a black circle. Fig. 2(a) shows all 8 neighbors similar to the center pixel: only one texture pattern corresponds to it, and its center pixel is the least salient. Figs. 2(b)-(h) show 1 to 7 dissimilar neighbors respectively; each of these saliency levels corresponds to several texture patterns, and the saliency of the center pixel increases in turn. Fig. 2(i) shows all 8 neighbors dissimilar to the center pixel: this center pixel is the most salient, and again one and only one texture pattern corresponds to it. Hence each local saliency level represented by the LSN operator may cover several different local texture structures, which the operator cannot distinguish. Therefore, when the target appears in a scene of similar color, target pixels may have local saliency similar to background pixels, and the local similarity number alone cannot separate target from background well.
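As a concrete illustration (a minimal sketch, not part of the patent; the tolerance d = 5 and the sample patches are chosen here for demonstration), the LSN value of eqs. (1)-(2) for one pixel can be computed as:

```python
import numpy as np

def lsn(patch, d=5):
    """Local Similarity Number of the center pixel of a 3x3 gray patch:
    the count of the 8 neighbors whose gray value differs from the
    center by at most d (eqs. (1)-(2) with P = 8, R = 1)."""
    p = patch.astype(int)
    center = p[1, 1]
    neighbors = np.delete(p.flatten(), 4)  # drop the center pixel
    return int(np.sum(np.abs(neighbors - center) <= d))

# A flat region (Fig. 2(a)): all 8 neighbors similar -> least salient.
flat = np.full((3, 3), 120, dtype=np.uint8)
# An isolated spot (Fig. 2(i)): no neighbor similar -> most salient.
spot = np.array([[10, 10, 10],
                 [10, 120, 10],
                 [10, 10, 10]], dtype=np.uint8)
```

Here `lsn(flat)` is 8 and `lsn(spot)` is 0, the two extreme saliency levels of Fig. 2.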
The present invention extends the LSN operator and proposes a new local saliency texture operator, the Local Ternary Number (LTN). Using two variables, this operator separately counts the neighbors whose gray values are smaller and larger than that of the center pixel; the local ternary number LTN is defined as the two-dimensional vector
\mathrm{LTN}_{P,R}^{d} = \left( \sum_{i=0}^{P-1} f_s(g_i - g_c, d), \ \sum_{i=0}^{P-1} f_l(g_i - g_c, d) \right) \qquad (3)
where the first component counts the neighbors with gray value smaller than the center pixel:
f_s(x, d) = \begin{cases} 1, & x < -d \\ 0, & x \ge -d \end{cases} \qquad (4)
and the second component counts the neighbors with gray value larger than the center pixel:
f_l(x, d) = \begin{cases} 1, & x > d \\ 0, & x \le d \end{cases} \qquad (5)
When defining the local ternary number LTN, the invention does not adopt a three-dimensional vector that also includes the number of neighbors similar to the center gray value. This is because formula (3) already determines the local saliency level of a pixel uniquely: the number of similar neighbors is uniquely determined as the complement of the two counts, so formula (3) produces no redundant information. \mathrm{LTN}_{8,1}^{d} represents the local saliency texture feature over the 8 neighbors at radius 1, and the 9 saliency levels of the center pixel are as shown in Fig. 2.
The LTN operator computes the gray-difference relations between the center pixel and its neighborhood, so it adapts well to global illumination changes of the scene and also has good invariance to scale and rotation changes. Built on the LSN operator, it makes full use of the magnitude relations between neighbor and center gray values: it can express every local texture structure and distinguish the different texture patterns covered by one saliency level, which favors separating target from background when their colors are similar.
Fig. 3 is a schematic diagram of the generation process of two groups of LSN and LTN operators. Figs. 3(a), (d) show the gray distributions of two small target patches, with the gray-filled cell marking the current pixel whose features are to be extracted. Figs. 3(b), (e) are the LSN results: the two center pixels have the same local saliency, 3, despite different local texture structures. Figs. 3(c), (f) are the LTN results, which distinguish the target pixels of saliency 3 in (a) and (d) (8 minus the sum of the two components of formula (3)) according to their different local texture structures.
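The disambiguation that Fig. 3 illustrates can be sketched as follows (an illustrative implementation of eq. (3); the tolerance d = 5 and the two sample patches are assumptions chosen for demonstration):

```python
import numpy as np

def ltn(patch, d=5):
    """Local Ternary Number (eq. (3)) of the center pixel of a 3x3 patch:
    a pair (n_smaller, n_larger) counting neighbors more than d below,
    resp. above, the center gray value. The LSN similarity count is
    recoverable as 8 - n_smaller - n_larger, so no third component is needed."""
    p = patch.astype(int)
    c = p[1, 1]
    nb = np.delete(p.flatten(), 4)  # the 8 neighbors
    return int(np.sum(nb - c < -d)), int(np.sum(nb - c > d))

# Two patches with the same LSN saliency (3 similar neighbors) but
# different local structures, which LTN tells apart:
a = np.array([[100, 100, 100],
              [50, 100, 50],
              [50, 50, 50]], dtype=np.uint8)    # 5 darker neighbors
b = np.array([[100, 100, 100],
              [150, 100, 150],
              [150, 150, 150]], dtype=np.uint8)  # 5 brighter neighbors
```

Both patches give LSN = 3, yet `ltn(a)` is (5, 0) while `ltn(b)` is (0, 5): the two-dimensional descriptor separates texture patterns that the one-dimensional saliency level merges.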
The 9 local saliency levels shown in Fig. 2 cover texture structures such as flat regions, spots, lines, corners and edges. Tavakoli et al. showed experimentally that the LSN mask extracts the key pixels on the target's edges, lines and corners while still obtaining a fairly complete target of interest, retaining more useful information.
To improve the texture discriminative power of the above LTN feature, the invention draws on the LSN masking method and defines the target LTN mask as follows:
\mathrm{mLTN}_{8,1}^{d} = \begin{cases} 1 + \left( 8 - \sum_{i=0}^{7} f_s(g_i - g_c, d) - \sum_{i=0}^{7} f_l(g_i - g_c, d) \right), & \sum_{i=0}^{7} f_s(g_i - g_c, d) + \sum_{i=0}^{7} f_l(g_i - g_c, d) \in \{4, 5, 6, 7, 8\} \\ 0, & \text{otherwise} \end{cases} \qquad (6)
To verify the validity of this LTN mask, Fig. 4 gives a tracked target, a rower's head, and its LTN mask extraction result.
As seen from Fig. 4(a), the rower's skin color has some similarity to the background, so quantizing all pixels of the target block directly into the feature space could impair the accuracy of the tracking result. Fig. 4(b) shows that the LTN mask extracts the important pixels on the target's edges, lines and corners while preserving the integrity of the target well, providing more useful information, and suppresses flat regions whose color is similar to the background and whose texture features are inconspicuous.
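Eq. (6) can be read as the following rule (a direct transcription, taking the two LTN counts of eq. (3) as inputs):

```python
def ltn_mask_value(n_smaller, n_larger):
    """Mask value of eq. (6): a pixel is kept (nonzero) only if at least
    4 of its 8 neighbors are dissimilar, i.e. it lies on an edge, line
    or corner; flat, texture-poor pixels get 0 and are excluded from
    the target model."""
    dissimilar = n_smaller + n_larger
    if dissimilar in {4, 5, 6, 7, 8}:
        return 1 + (8 - dissimilar)  # 1 + the number of similar neighbors
    return 0
```

For example, a pixel with LTN counts (5, 0) gets mask value 4, while a nearly flat pixel with counts (1, 2) is masked out with value 0.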
The hue of a target has the advantage of being little affected by illumination-intensity changes, and can serve as the target color feature in tracking scenes with illumination variation such as outdoors. Based on this, the invention combines the hue of the pixels inside the target LTN mask with their LTN features to obtain a new target representation. Specifically, in the HSV color space the H component of the target pixels is quantized to 16 levels; the quantized hue of the pixels inside the target LTN mask is then combined with their local saliency texture feature LTN, where each of the two components of the LTN operator has 5 possible values. The target pixels are therefore quantized by the invention into a feature space of 16 × 5 × 5 = 400 dimensions.
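A sketch of this quantization follows. Two details are assumptions, not stated in the patent: the H range of [0, 180) follows the OpenCV convention, and each LTN component (9 raw values) is mapped onto its 5 levels by proportional integer scaling, since the patent states 5 levels per component but not the exact mapping:

```python
def feature_bin(h, n_smaller, n_larger):
    """Map a masked target pixel to one of 16 * 5 * 5 = 400 bins:
    16 hue levels x 5 levels for each LTN component."""
    h_bin = h * 16 // 180        # H in [0, 180) -> {0..15} (OpenCV convention)
    s_bin = n_smaller * 5 // 9   # {0..8} -> {0..4} (assumed mapping)
    l_bin = n_larger * 5 // 9    # {0..8} -> {0..4} (assumed mapping)
    return h_bin * 25 + s_bin * 5 + l_bin
```

With this layout the bin index always falls in [0, 400), e.g. `feature_bin(0, 0, 0)` is 0 and `feature_bin(179, 8, 8)` is 399.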
All pixels inside the target LTN mask are quantized into the feature space in the same way, and the target reference model {q_u}_{u=1,...,m} is established as follows:
q_u = C \sum_{i=1}^{n} k\left( \left\| \frac{x_i^* - x_0}{h} \right\|^2 \right) \delta\left[ b(x_i^*) - u \right] \qquad (7)
In formula (7), x_i^* (i = 1, ..., n) are the coordinates of the n target pixels and x_0 is the center coordinate; the function b(x) quantizes the target pixel at coordinate x into the feature space; m is the number of quantization levels of the feature space; k(x) is an isotropic kernel profile that assigns larger weights to pixels nearer the target center and smaller weights to those farther away; h is the kernel bandwidth; δ(x) is the one-dimensional Kronecker delta function; and C is the normalization coefficient:
C = 1 \Big/ \sum_{i=1}^{n} k\left( \left\| \frac{x_i^* - x_0}{h} \right\|^2 \right) \qquad (8)
The physical meaning of this target reference model is the probability density distribution over the feature levels when the target pixels are quantized into the feature space proposed by the invention.
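Eqs. (7)-(8) can be sketched as below. The Epanechnikov profile k(x) = 1 - x for x ≤ 1 used here is a common choice in kernel-based tracking, though the patent only requires an isotropic kernel:

```python
import numpy as np

def reference_model(bins, coords, center, h, m=400):
    """Kernel-weighted feature histogram q_u of eqs. (7)-(8):
    bins[i] is the feature bin b(x_i*) of target pixel i and coords[i]
    its coordinate; pixels near the center receive larger weights."""
    q = np.zeros(m)
    coords = np.asarray(coords, dtype=float)
    r2 = np.sum(((coords - np.asarray(center, dtype=float)) / h) ** 2, axis=1)
    k = np.maximum(1.0 - r2, 0.0)  # Epanechnikov profile, zero outside the window
    for b, w in zip(bins, k):
        q[b] += w
    total = q.sum()                # the normalization C of eq. (8)
    return q / total if total > 0 else q
```

The returned histogram sums to 1, matching the probability-density interpretation of the reference model.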
2) Read in the next frame of the video as the current frame, take the tracked position of the target in the previous frame as the starting point, and build the target candidate model in the candidate region of the current frame using the target representation of step 1).
Let y be the center coordinate of the target tracking result in the previous frame, and let x_i (i = 1, ..., n_h) be the coordinates of the pixels of the candidate region centered at y. Following the procedure used for the reference model in step 1), the target candidate model {p_u(y)}_{u=1,...,m} at the current location is established as follows:
p_u(y) = C_h \sum_{i=1}^{n_h} k\left( \left\| \frac{x_i - y}{h} \right\|^2 \right) \delta\left[ b(x_i) - u \right] \qquad (9)
where each parameter has the same meaning as in step 1), and the normalization coefficient is
C_h = 1 \Big/ \sum_{i=1}^{n_h} k\left( \left\| \frac{x_i - y}{h} \right\|^2 \right) \qquad (10)
With the reference and candidate models of the target established, tracking amounts to locating the most similar target region in the current frame, using the Bhattacharyya coefficient as the similarity measure between the target reference model and the candidate model:
\rho(p(y), q) = \sum_{u=1}^{m} \sqrt{p_u(y)\, q_u} \qquad (11)
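Eq. (11) is a one-liner over the two normalized histograms (a minimal sketch):

```python
import numpy as np

def bhattacharyya(p, q):
    """Eq. (11): similarity of two normalized feature histograms;
    equals 1 for identical distributions and 0 for disjoint support."""
    return float(np.sum(np.sqrt(np.asarray(p) * np.asarray(q))))
```

For identical histograms the coefficient is 1, and it decreases as the candidate distribution drifts away from the reference.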
To find the candidate region maximizing the Bhattacharyya coefficient, a Taylor expansion of formula (11) around the tracking starting point y_0 (the tracking result of the target in the previous frame) gives:
\rho(p(y), q) \approx \frac{1}{2} \sum_{u=1}^{m} \sqrt{p_u(y_0)\, q_u} + \frac{C_h}{2} \sum_{i=1}^{n_h} w_i\, k\left( \left\| \frac{x_i - y}{h} \right\|^2 \right) \qquad (12)
where the weights are
w_i = \sum_{u=1}^{m} \sqrt{\frac{q_u}{p_u(y_0)}}\, \delta\left[ b(x_i) - u \right] \qquad (13)
The first term on the right-hand side of formula (12) does not depend on y, so maximizing the Bhattacharyya coefficient is equivalent to finding the region with the largest kernel-weighted sum of the weights w_i. The physical meaning of w_i is the confidence that each pixel of the current candidate region comes from the target; the region where the whole weight distribution attains its maximum, found by Mean Shift iterative optimization, is the tracking result of the target in the current frame. Each iteration moves the center coordinate of the candidate region from the current location y_0 to the new location y_1:
y_1 = \frac{\sum_{i=1}^{n_h} x_i\, w_i\, g\left( \left\| \frac{x_i - y_0}{h} \right\|^2 \right)}{\sum_{i=1}^{n_h} w_i\, g\left( \left\| \frac{x_i - y_0}{h} \right\|^2 \right)} \qquad (14)
Wherein, g (x)=-k'(x).
The preset maximum iteration count N and minimum displacement ε serve as the convergence conditions of the Mean Shift iteration, and the final convergence position is recorded as the tracking result of the target of interest in the current frame.
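The iteration of eqs. (13)-(14) can be sketched as follows. With the Epanechnikov kernel assumed above, the derivative profile g(x) = -k'(x) is constant inside the window, so g reduces to a uniform indicator of the window; the histograms `q` and `p0` are assumed precomputed (e.g. by the `reference_model` sketch):

```python
import numpy as np

def mean_shift_step(coords, bins, q, p0, y0, h):
    """One update of eqs. (13)-(14): weight each candidate pixel by
    w_i = sqrt(q_u / p_u(y0)) for its bin u, then move the window
    center to the weighted mean of the pixel coordinates."""
    coords = np.asarray(coords, dtype=float)
    y0 = np.asarray(y0, dtype=float)
    w = np.array([np.sqrt(q[b] / p0[b]) if p0[b] > 0 else 0.0 for b in bins])
    inside = np.sum(((coords - y0) / h) ** 2, axis=1) <= 1.0  # g > 0 only here
    w = w * inside
    if w.sum() == 0:
        return y0
    return (coords * w[:, None]).sum(axis=0) / w.sum()

def track_in_frame(coords, bins, q, p0, y0, h, N=20, eps=0.5):
    """Iterate eq. (14) until the shift falls below eps or N iterations,
    the convergence conditions described above."""
    y = np.asarray(y0, dtype=float)
    for _ in range(N):
        y1 = mean_shift_step(coords, bins, q, p0, y, h)
        if np.linalg.norm(y1 - y) < eps:
            return y1
        y = y1
    return y
```

In a toy example where only the pixels in bin 0 belong to the target model, the window center converges onto those pixels and ignores the rest.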
Finally, the tracking result of the target in the current frame is taken as the starting position of its candidate in the next frame, and step 2) is repeated to continue tracking the target in the next frame of the video until the video ends.
As formula (14) shows, accurate weights w_i are an important prerequisite for the Mean Shift iteration to find the true position of the target, and by formula (13) the weights are determined directly by the representations of the target and the candidate region. In theory, therefore, when the scene contains backgrounds of similar color or illumination-intensity changes, the tracking method provided by the invention should obtain better tracking performance than kernel-based tracking with the conventional color model; the experimental results below confirm the validity of the invention.
To verify the effect of the method of the invention, its tracking performance under interference from similar background colors and illumination-intensity changes was tested on a PC with a Pentium(R) Dual-Core 2.70 GHz CPU and 2 GB RAM, using the Visual Studio 2010 integrated development environment, OpenCV 2.4.3 and two groups of standard test sequences; the tracking results are shown in Figs. 5-9.
The cart sequence shown in Fig. 5 tests the method's resistance to similar-background-color interference: the tracked cart has some color similarity to the scene background. The red boxes are the tracking results of kernel-based tracking with the conventional color model, and the green boxes those of the present method. Qualitatively, the conventional-color-model tracker shows large tracking errors under the similar-color background interference and loses the target completely after frame 93, while the present method keeps tracking the target accurately, mainly because the LTN feature has good texture discriminative power and separates the similarly colored target and background by texture. Table 1 quantitatively analyzes the tracking performance of the two methods on the first 100 frames of the cart sequence (after frame 93 the conventional-color-model tracker has lost the target completely) in terms of average tracking error, standard deviation, iteration count and tracking speed.
Table 1. Tracking performance of the two methods on the cart sequence
Table 1 shows that the tracking error of the present method is significantly smaller than that of kernel-based tracking with the conventional color model. The method spends a small amount of extra time computing the LTN operator, but its target representation is more discriminative and halves the number of iterations; since iteration is the main time cost in the Mean Shift tracking framework, the average tracking speed of the method is slightly faster than the conventional tracker. The per-frame tracking error and iteration-count curves of the first 100 frames of the sequence are shown in Fig. 6(a) and (b), respectively.
The woman sequence shown in Fig. 7 tests the method's adaptability to illumination-intensity changes in the scene; again the red boxes are the tracking results of kernel-based tracking with the conventional color model and the green boxes those of the present method. The results show that the conventional tracker is affected by the outdoor illumination-intensity change, making its tracking results inaccurate. The present method is little affected by the illumination variation and obtains good tracking results, mainly because hue information has the advantage of being insensitive to illumination variation, and the LTN operator counts the gray-value magnitude relations between the center pixel and its neighbors, adapting well to global changes of scene illumination intensity. Table 2 quantitatively analyzes the two methods on the woman sequence in terms of average tracking error, standard deviation, iteration count and tracking speed.
Table 2. Tracking performance of the two methods on the woman sequence
Table 2 shows that the tracking performance of the present method is clearly better than that of kernel-based tracking with the conventional color model. Fig. 8(a) and (b) give the Mean Shift iteration-count distribution histograms of the conventional-color-model tracker and of the present method, respectively. As seen from Fig. 8, the iteration counts of the present method are mostly distributed between 1 and 3; it converges faster than the conventional tracker, which compensates to some extent for the time spent on the LTN operator, so the average tracking speed of the present method is only slightly lower than that of the conventional tracker.
Fig. 9(a) and (b) give the tracking-error distribution histograms of the two methods on the woman sequence, respectively. Comparing the error distributions of Fig. 9(a) and 9(b), the conventional-color-model tracker is more affected by illumination: its tracking errors on the woman sequence are fairly evenly distributed between 0 and 20, while the errors of the present method lie mostly between 0 and 15 and concentrate between 0 and 10. The present method thus has higher tracking accuracy, confirming its robustness to outdoor illumination-intensity changes.

Claims (2)

1. A Mean Shift tracking method resistant to similar-color and illumination-variation interference, characterized in that the tracking method comprises the following steps carried out in order:
1) reading in the current frame of a video, selecting the target of interest to be tracked interactively, and initializing the center coordinate and scale parameters of the target; extracting the local saliency texture operator, the local ternary number, for each target pixel; generating a target mask by taking the key pixels on edges, lines and corners, so as to further improve the discriminative power of the local-ternary-number texture feature; finally combining the local-ternary-number features of the pixels inside the target mask with hue information, which is little affected by illumination-intensity changes, to obtain the target representation, and representing all pixels of the target to be tracked in this way to establish a target reference model;
2) reading in the next frame of the video as the current frame; taking the tracked position of the target in the previous frame as the starting point, building a target candidate model in the candidate region of the current frame with the target representation of step 1); then, using the Bhattacharyya coefficient as the similarity measure between the target reference model and the target candidate model, finding by Mean Shift iterative optimization the candidate region in the neighborhood that maximizes the Bhattacharyya coefficient as the tracking result of the target of interest, wherein the iteration convergence condition can be preset; and then updating the converged position of the target and tracking it in subsequent frames in the same way until the video ends.
2. The tracking method according to claim 1, characterized in that the target representation in step 1) is obtained in the HSV color space by quantizing the H component of the target pixels into 16 levels, and then combining the quantized H component (the hue information) of the pixels inside the local ternary number mask of the target to be tracked with the local ternary number of the local saliency texture operator of those pixels.
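The matching machinery in the claims can be illustrated with a small sketch. The exact LTN encoding is not given in this excerpt, so `local_ternary_number` below is an assumption (counting neighbours brighter or darker than the centre pixel by more than a threshold, in the spirit of extending the LSN operator with the magnitude relationship of the 8-neighbourhood); the Bhattacharyya coefficient, however, follows the standard definition used for Mean Shift histogram matching:

```python
import numpy as np

def local_ternary_number(gray, x, y, t=5):
    # Hypothetical LTN encoding: compare the centre pixel with its eight
    # neighbours and count how many are brighter (hi) or darker (lo)
    # than the centre by more than threshold t, then pack the two counts
    # (each in 0..8) into a single code in 0..80.
    c = int(gray[y, x])
    hi = lo = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            n = int(gray[y + dy, x + dx])
            if n - c > t:
                hi += 1
            elif c - n > t:
                lo += 1
    return hi * 9 + lo

def bhattacharyya(p, q):
    # Standard Bhattacharyya coefficient between two normalised
    # histograms; 1.0 means the distributions are identical.
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(np.sqrt(p * q)))
```

With a reference model p and candidate models q built from such codes (optionally combined with a 16-level quantized hue channel, as in claim 2), the Mean Shift iteration of step 2) moves the candidate window toward the position that maximizes `bhattacharyya(p, q)`.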
CN201310395734.XA 2013-09-03 2013-09-03 Mean Shift tracking method resistant to similar color and illumination variation interference Expired - Fee-Related CN103456029B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310395734.XA CN103456029B (en) 2013-09-03 2013-09-03 Mean Shift tracking method resistant to similar color and illumination variation interference

Publications (2)

Publication Number Publication Date
CN103456029A CN103456029A (en) 2013-12-18
CN103456029B true CN103456029B (en) 2016-03-30

Family

ID=49738356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310395734.XA Expired - Fee Related CN103456029B (en) 2013-09-03 2013-09-03 The Mean Shift tracking of a kind of anti-Similar color and illumination variation interference

Country Status (1)

Country Link
CN (1) CN103456029B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104598914A (en) * 2013-10-31 2015-05-06 展讯通信(天津)有限公司 Skin color detecting method and device
CN105096347B (en) * 2014-04-24 2017-09-08 富士通株式会社 Image processing apparatus and method
CN105957106B (en) * 2016-04-26 2019-02-22 湖南拓视觉信息技术有限公司 The method and apparatus of objective tracking
CN106815860B (en) * 2017-01-17 2019-11-29 湖南优象科技有限公司 A kind of method for tracking target based on orderly comparison feature
CN109740613B (en) * 2018-11-08 2023-05-23 深圳市华成工业控制股份有限公司 Visual servo control method based on Feature-Shift and prediction
CN116309687B (en) * 2023-05-26 2023-08-04 深圳世国科技股份有限公司 Real-time tracking and positioning method for camera based on artificial intelligence

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6590999B1 (en) * 2000-02-14 2003-07-08 Siemens Corporate Research, Inc. Real-time tracking of non-rigid objects using mean shift
CN101916446A (en) * 2010-07-23 2010-12-15 北京航空航天大学 Gray level target tracking algorithm based on marginal information and mean shift

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Local Similarity Number and Its Application to Object Tracking; Hamed Rezazadegan Tavakoli et al.; International Journal of Advanced Robotic Systems; 2013-03-29; Vol. 10, No. 184, pp. 1-7 *
Face tracking algorithm based on multi-feature Mean Shift (基于多特征Mean Shift的人脸跟踪算法); Zhang Tao et al.; Journal of Electronics & Information Technology (电子与信息学报); 2009-08-31; Vol. 31, No. 8, pp. 1816-1820 *

Also Published As

Publication number Publication date
CN103456029A (en) 2013-12-18

Similar Documents

Publication Publication Date Title
CN103456029B (en) Mean Shift tracking method resistant to similar color and illumination variation interference
CN107292339B (en) Unmanned aerial vehicle low-altitude remote sensing image high-resolution landform classification method based on feature fusion
CN105335966B (en) Multiscale morphology image division method based on local homogeney index
CN110175576A (en) A kind of driving vehicle visible detection method of combination laser point cloud data
CN103218831B (en) A kind of video frequency motion target classifying identification method based on profile constraint
CN102289948B (en) Multi-characteristic fusion multi-vehicle video tracking method under highway scene
CN103632363B (en) Object level high-resolution remote sensing image change detecting method based on Multiscale Fusion
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN103035013B (en) A kind of precise motion shadow detection method based on multi-feature fusion
CN104751478A (en) Object-oriented building change detection method based on multi-feature fusion
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
CN106023257A (en) Target tracking method based on rotor UAV platform
CN102509104B (en) Confidence map-based method for distinguishing and detecting virtual object of augmented reality scene
CN104966085A (en) Remote sensing image region-of-interest detection method based on multi-significant-feature fusion
CN104361589A (en) High-resolution remote sensing image segmentation method based on inter-scale mapping
CN105139015A (en) Method for extracting water body from remote sensing image
CN103927511A (en) Image identification method based on difference feature description
CN102903109B (en) A kind of optical image and SAR image integration segmentation method for registering
CN104573685A (en) Natural scene text detecting method based on extraction of linear structures
CN103440510A (en) Method for positioning characteristic points in facial image
CN101980317A (en) Method for predicting traffic flow extracted by improved C-V model-based remote sensing image road network
CN104143077B (en) Pedestrian target search method and system based on image
CN105279769A (en) Hierarchical particle filtering tracking method combined with multiple features
CN110263635A (en) Marker detection and recognition methods based on structure forest and PCANet
CN106340005A (en) High-resolution remote sensing image unsupervised segmentation method based on scale parameter automatic optimization

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 2016-03-30

Termination date: 2017-09-03