CN103198493A - Target tracking method based on multi-feature adaptive fusion and online learning - Google Patents
- Publication number
- CN103198493A
- Authority
- CN
- China
- Prior art keywords
- target
- feature
- target region
- new candidate
- candidate target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a target tracking method based on multi-feature adaptive fusion and online learning. Target features are extracted and used as template features; color, edge and texture features are extracted from each new candidate target region; the features are adaptively fused according to their distinctiveness and correlation; the Bhattacharyya distance between the fused features and the template features is computed and, after normalization, used as the weight of the new candidate target region. The maximum-weight new candidate target region is tested for overlap with the target region. If the overlap ratio is less than the overlap threshold, an enlarged region around the maximum-weight candidate is input to a detector; if the recognizer then outputs "yes", tracking has succeeded and the recognizer, the template features and the target region are updated, while a "no" output indicates that a new target has been found. If the overlap ratio is greater than or equal to the overlap threshold, the recognizer, the template features and the target region are updated directly. The method strengthens the adaptability of target tracking to different scenes and moderate deformation, and avoids the tracking drift that tends to occur after occlusion.
Description
Technical field
The present invention relates to the field of target tracking, and in particular to a target tracking method based on multi-feature adaptive fusion and online learning.
Background art
Video target tracking refers to the process of detecting and identifying a target in a video sequence and extracting its trajectory. It has practical applications in fields such as video surveillance, event analysis and human-computer interaction.
At present, even the most advanced surveillance systems cannot ideally handle dynamic tracking tasks in complex scenes, for example tracking under deformation, occlusion, illumination change, shadow or crowded environments. Tracking remains challenging in particular when targets partially occlude one another or undergo deformation.
Tracking methods based on a single feature usually initialize a target region, extract one target feature (for example a color feature), and then search and match in the next frame. Such methods have difficulty with tracking tasks against complex backgrounds, so neither the tracking nor the subsequent trajectory evaluation is robust.
For this reason, target tracking methods based on multiple features have been proposed in the prior art. These methods usually extract features such as color and edges and can complete tracking tasks reasonably well in some complex situations.
In the course of realizing the present invention, the inventors found that the prior art has at least the following shortcomings and defects:
Existing multi-feature tracking methods cannot adapt to changes in target shape, and cannot properly solve the problem of tracking drift after partial occlusion between targets.
Summary of the invention
The invention provides a target tracking method based on multi-feature adaptive fusion and online learning. The method avoids the problem of tracking drift and adapts well to changes in target shape, as described in detail below:
A target tracking method based on multi-feature adaptive fusion and online learning, the method comprising the following steps:
(1) selecting a target region from a frame image, extracting target features and using them as template features;
(2) initializing a recognizer, inputting the next frame image and initializing candidate target regions; obtaining new candidate target regions according to a transfer formula;
(3) extracting color, edge and texture features from each new candidate target region, and adaptively fusing the features according to their distinctiveness and correlation;
(4) computing the Bhattacharyya distance between the fused features and the template features, normalizing it and using the result as the weight of the new candidate target region;
(5) sorting the N new candidate target regions by weight; if the resampling decision value is greater than the resampling decision threshold, resampling and executing step (6); otherwise executing step (6) directly;
(6) judging the overlap between the maximum-weight new candidate target region and the target region; if the overlap ratio is less than the overlap threshold, executing step (7); otherwise executing step (8);
(7) inputting an enlarged region around the maximum-weight new candidate target region into a detector; if the detector outputs 0, tracking has failed; otherwise inputting the detector output into the recognizer; if the recognizer outputs "yes", tracking has succeeded, and the recognizer, the template features and the target region are updated; if it outputs "no", a new target has been found and the flow ends;
(8) if the overlap ratio is greater than or equal to the overlap threshold, deeming the tracking successful, updating the recognizer, the template features and the target region, and ending the flow.
The step of extracting target features and using them as template features specifically comprises:
1) extracting color feature information;
2) extracting edge feature information;
3) extracting texture feature information;
4) fusing the color, edge and texture feature information to obtain a target feature histogram, which is used as the template features.
The step of extracting color feature information specifically comprises:
1) dividing the color space into a chromatic region and an achromatic region and partitioning them in HSV space, obtaining Q_H × Q_S chromatic sub-intervals and Q_V achromatic sub-intervals, which together serve as Q_H × Q_S + Q_V color intervals u;
2) assigning each pixel a weight according to its distance from the center point of the target region, and voting for the color interval u corresponding to the HSV value of the pixel;
3) accumulating the votes in each color interval to obtain the color feature histogram.
The step of extracting edge feature information specifically comprises:
1) interpolating the target region to obtain an interpolated region of twice the width and height, and then dividing the target region and the interpolated region into sub-blocks;
2) computing the edge strength and direction of each sub-block, dividing edge directions in the range 0°-360° into several direction bins, and voting for the direction bins according to edge strength to obtain the edge feature histogram of each sub-block;
3) concatenating the edge feature histograms of all sub-blocks to obtain the complete edge feature histogram.
The step of extracting texture feature information specifically comprises:
1) computing a local binary pattern feature histogram for each sub-block;
2) concatenating the local binary pattern feature histograms of all sub-blocks to obtain the complete texture feature histogram.
The distinctiveness is defined as the degree of similarity between the new candidate target region and its adjacent background with respect to a certain feature, and is represented by the Bhattacharyya coefficient of the two histograms.
The correlation is defined as the degree of similarity between the new candidate target region and the template features with respect to a certain feature, and is represented by the Bhattacharyya coefficient of the two histograms.
Updating the recognizer specifically means: the positive samples consist of new candidate target regions verified by the detector, and the negative samples are regions of the same size as the new candidate target region picked at random from the background;
Updating the template features specifically means: using the features of the maximum-weight new candidate target region as the updated template features;
Updating the target region specifically means: using the maximum-weight new candidate target region as the updated target region.
The beneficial effect of the technical scheme provided by the invention is: the method overcomes the limitations of a single feature, strengthens the adaptability of target tracking to different scenes and moderate deformation, avoids the tracking drift that tends to occur after occlusion, and thereby greatly improves the accuracy and robustness of target tracking.
Description of drawings
Fig. 1 is a flow chart of a target tracking method based on multi-feature adaptive fusion and online learning;
Fig. 2 is a schematic diagram of an initialized target region;
Fig. 3 is a schematic diagram of target 1 being occluded;
Fig. 4 is a schematic diagram of the occluded, lost target being successfully recovered by online learning;
Fig. 5 is a schematic diagram of another initialized target region;
Fig. 6 is a schematic diagram of target 1 and target 2 crossing and occluding each other;
Fig. 7 is a schematic diagram of accurate tracking without tracking drift.
Embodiment
To make the purpose, technical solution and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
To avoid tracking drift and adapt well to changes in target shape, the embodiment of the invention provides a target tracking method based on multi-feature adaptive fusion and online learning. By learning automatically in real time, online learning overcomes the problems caused by target deformation and tracking drift and achieves the expected tracking effect. Referring to Fig. 1, the method is described in detail below:
101: taking a frame of an arbitrary video sequence as input, selecting a target region from the frame image, extracting target features and using them as template features;
The operation of selecting the target region is well known to those skilled in the art; the target region is a rectangular region. For example, the rectangular region containing the tracked object may be selected manually according to actual requirements, or detected automatically with an object detection model (for example, a human detection model [1]), after which the template features are computed.
The step of extracting target features and using them as template features is specifically:
1) extracting color feature information;
The method models the target with a kernel-weighted color feature histogram based on the HSV (hue, saturation, value) color space model. The basic idea is:
(1) the color space is divided into a chromatic region and an achromatic region, which are partitioned in HSV space to obtain Q_H × Q_S chromatic sub-intervals and Q_V achromatic sub-intervals; together they serve as Q_H × Q_S + Q_V color intervals u.
For example, all pixels with value below 20% or saturation below 10% are assigned to the achromatic region and divided by value into Q_V achromatic sub-intervals; the region outside the achromatic region is the chromatic region and is divided by hue and saturation into Q_H × Q_S chromatic sub-intervals.
(2) each pixel is assigned a weight according to its distance from the center point of the target region (pixels far from the target center receive smaller weights, which weakens the interference of the target boundary and the background), and votes for the color interval u corresponding to its HSV value;
The target region is defined as a rectangular region of width W and height H. Pixels on the boundary of the target region may belong to the background or may be partially occluded, so to increase the reliability of the color distribution the weights are assigned with a kernel function of the form
k(x_i) = 1 - ||y - x_i||^2 / h^2
where k(x_i) is the weight given to pixel x_i, y is the center point of the target region, x_i is the i-th pixel of the target region, and h expresses the size of the target region (for example h = √(W^2 + H^2) / 2).
The color interval that each pixel contributes to is determined by the discrete Dirac (Kronecker) delta function δ: δ(b(x_i) - u) expresses the contribution of pixel x_i to color interval u of the histogram, where b(x_i) is the index of the color interval corresponding to the HSV value of pixel x_i and φ is a constant. The delta function δ is defined by:
δ(x - φ) = 1, x = φ
δ(x - φ) = 0, x ≠ φ
For example, suppose there are three pixels x_1, x_2 and x_3; pixel x_1 corresponds to color interval u1, pixel x_2 also corresponds to u1, and pixel x_3 corresponds to color interval u2. Then the voting result for interval u1 is the sum of the weights of pixels x_1 and x_2, the voting result for u2 is the weight of pixel x_3, and the voting result for u3 is 0.
(3) the color feature histogram is obtained from the accumulated votes of the color intervals; the dimension of the color feature histogram is the total number of chromatic and achromatic intervals.
The color feature histogram p_c can be expressed as
p_c(u) = C_h · Σ_{i=1..N} k(x_i) · δ(b(x_i) - u), u = 1, ..., U
where U is the number of color intervals, N is the number of pixels in the target region, and the normalization constant C_h is
C_h = 1 / Σ_{i=1..N} k(x_i).
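As an illustration, the kernel-weighted histogram above can be sketched in a few lines of Python. This is a minimal sketch, not the patent's implementation: the bin mapping b(x_i) is taken as precomputed bin indices, and the concrete kernel profile (an Epanechnikov-style falloff) is an assumption consistent with the description, in which pixels far from the center receive smaller weights.

```python
import numpy as np

def kernel_weighted_histogram(bin_indices, coords, center, h, num_bins):
    """Kernel-weighted histogram: each pixel votes for its bin b(x_i)
    with weight k(x_i) = max(0, 1 - ||y - x_i||^2 / h^2)."""
    hist = np.zeros(num_bins)
    for u, (px, py) in zip(bin_indices, coords):
        r2 = ((center[0] - px) ** 2 + (center[1] - py) ** 2) / h ** 2
        k = max(0.0, 1.0 - r2)   # pixels far from the center get less weight
        hist[u] += k             # the delta(b(x_i) - u) vote
    total = hist.sum()
    return hist / total if total > 0 else hist   # normalization constant C_h

# Toy example: 3 pixels, bins u1=0, u2=1, u3=2 (as in the voting example above)
bin_indices = [0, 0, 1]                  # x1 -> u1, x2 -> u1, x3 -> u2
coords = [(1, 1), (1, 2), (3, 3)]
hist = kernel_weighted_histogram(bin_indices, coords, center=(2, 2), h=4.0, num_bins=3)
```

Here bin u1 collects two weighted votes, u2 one vote, and u3 remains empty, matching the voting example in the text.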
2) extracting edge feature information;
The method uses an edge-strength-weighted edge orientation feature based on multi-scale blocks. This step is specifically:
(1) the target region is interpolated to obtain an interpolated region of twice the width and height; the target region and the interpolated region are then each divided into sub-blocks;
Interpolation of the target region is well known to those skilled in the art. The division may use overlapping or non-overlapping blocks, dividing the target region and the interpolated region into several sub-blocks; the embodiment of the invention places no limit on this.
(2) the edge strength and direction of each sub-block are computed; edge directions in the range 0°-360° are divided into several direction bins, and the bins are voted for according to edge strength to obtain the edge feature histogram of each sub-block;
The embodiment of the invention uses the Sobel operator [2] to compute edge strength and direction; other methods may also be used, and the embodiment places no limit on this. The number of direction bins is set according to practical needs; for example, with 20° intervals, edge directions in the range 0°-360° are divided into 18 direction bins. Voting for the direction bins according to edge strength is well known to those skilled in the art and is not described further here.
(3) the edge feature histograms of all sub-blocks are concatenated to obtain the complete edge feature histogram.
The dimension of the complete edge feature histogram is the product of the number of direction bins and the total number of sub-blocks.
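A minimal sketch of the strength-weighted orientation histogram for a single sub-block, using the Sobel operator and 18 bins of 20° as in the example above. The per-pixel loop and the nearest-bin vote (rather than an interpolated vote) are simplifying assumptions, not the patent's exact procedure.

```python
import numpy as np

def edge_orientation_histogram(block, num_bins=18):
    """Edge-strength-weighted orientation histogram of one sub-block
    (Sobel gradients, 0-360 degrees split into num_bins direction bins)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    gx = np.zeros_like(block, dtype=float)
    gy = np.zeros_like(block, dtype=float)
    H, W = block.shape
    for i in range(1, H - 1):            # Sobel responses on interior pixels
        for j in range(1, W - 1):
            win = block[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    strength = np.hypot(gx, gy)                        # edge strength
    angle = np.degrees(np.arctan2(gy, gx)) % 360.0     # direction in [0, 360)
    bins = (angle / (360.0 / num_bins)).astype(int) % num_bins
    hist = np.zeros(num_bins)
    for b, s in zip(bins.ravel(), strength.ravel()):
        hist[b] += s                                   # vote weighted by strength
    return hist

# A vertical step edge: all gradient energy falls into one direction bin
block = np.zeros((8, 8))
block[:, 4:] = 1.0
hist = edge_orientation_histogram(block)
```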
3) extracting texture feature information;
The method computes texture features with a block-based multi-scale local binary pattern operator [3]. Compared with the original local binary pattern feature, the block-based multi-scale local binary pattern feature is less sensitive to image noise, extracts richer local and global information, has stronger expressive and discriminative power for the target image, and is more robust.
(1) a local binary pattern feature histogram is computed for each sub-block;
The sub-blocks are those obtained when dividing the region for the edge feature; the method uses the uniform patterns [4], comprising 59 patterns, to compute the local binary pattern feature.
(2) the local binary pattern feature histograms of all sub-blocks are concatenated to obtain the complete texture feature histogram.
The dimension of the complete texture feature histogram is 59 times the total number of sub-blocks.
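The 59-bin uniform-pattern LBP histogram mentioned above can be sketched as follows for one sub-block. This is a basic 8-neighbour LBP at radius 1; the multi-scale, block-based aspect is omitted for brevity, so treat it as an illustration rather than the patent's operator.

```python
import numpy as np

def lbp_code(img, i, j):
    """8-neighbour local binary pattern code of pixel (i, j)."""
    c = img[i, j]
    nbrs = [img[i-1, j-1], img[i-1, j], img[i-1, j+1], img[i, j+1],
            img[i+1, j+1], img[i+1, j], img[i+1, j-1], img[i, j-1]]
    return sum((1 << k) for k, n in enumerate(nbrs) if n >= c)

def uniform_lbp_histogram(img):
    """59-bin uniform-pattern LBP histogram of one sub-block:
    the 58 uniform codes (at most 2 circular bit transitions) each get a
    bin of their own, and all non-uniform codes share bin 58."""
    def transitions(code):
        bits = [(code >> k) & 1 for k in range(8)]
        return sum(bits[k] != bits[(k + 1) % 8] for k in range(8))
    uniform = sorted(c for c in range(256) if transitions(c) <= 2)
    assert len(uniform) == 58
    bin_of = {c: idx for idx, c in enumerate(uniform)}
    hist = np.zeros(59)
    H, W = img.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            hist[bin_of.get(lbp_code(img, i, j), 58)] += 1
    return hist

img = np.random.default_rng(0).integers(0, 256, size=(12, 12))
hist = uniform_lbp_histogram(img)   # one vote per interior pixel
```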
4) fusing the color feature information, the edge feature information and the texture feature information to obtain the target feature histogram, which is used as the template features.
That is, the color, edge and texture feature information is normalized, and the three normalized feature vectors are concatenated according to preset weights to obtain the target feature histogram (the weights are set according to practical needs; for example, a weight ratio of 1:1:1 for the color, edge and texture feature information).
102: initializing the recognizer;
The recognizer is used to recover a lost tracking target. It is trained with a boosting algorithm [5]; the positive samples in training are the target region, and the negative samples are rectangular regions of the same size as the positive samples picked at random from the background. The number of negative samples is chosen according to actual requirements; the reference value in this experiment is 3.
103: inputting the next frame image and initializing candidate target regions;
N regions are picked at random around the target region according to a Gaussian distribution and used as candidate target regions. The value of N is chosen according to actual needs; the reference value in this experiment is 20.
104: obtaining new candidate target regions according to the transfer formula
R_n = A · (R_c - R_0) + B · rng + R_0
where R_n, R_c and R_0 are the new candidate target region, the candidate target region and the target region respectively; A and B are transfer coefficients; and rng is a random number generator. For example, the new candidate target region, the candidate target region and the target region are each represented by the width, height and center coordinates (x, y) of a rectangular region, so width, height and center determine a new candidate target region. When computing the width of a new candidate target region, the widths of the candidate target region and the target region are substituted into the transfer formula to obtain the width of the new candidate target region; the height and the center coordinates are computed in the same way, finally yielding the new candidate target region.
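The transfer formula above can be sketched directly; here a region is a 4-vector (width, height, center x, center y), and the concrete values of the coefficients A and B and the choice of a Gaussian random generator are illustrative assumptions, since the patent only names a coefficient pair and a random number generator.

```python
import numpy as np

def transfer(r_c, r_0, A=0.5, B=2.0, rng=None):
    """New candidate region: R_n = A*(R_c - R_0) + B*rng + R_0,
    applied component-wise to (width, height, center_x, center_y)."""
    if rng is None:
        rng = np.random.default_rng()
    r_c = np.asarray(r_c, dtype=float)
    r_0 = np.asarray(r_0, dtype=float)
    noise = rng.standard_normal(r_0.shape)   # the "rng" term of the formula
    return A * (r_c - r_0) + B * noise + r_0

target = np.array([40.0, 80.0, 100.0, 60.0])     # W, H, x, y
candidate = np.array([42.0, 78.0, 104.0, 58.0])
new_candidate = transfer(candidate, target, rng=np.random.default_rng(0))
```

With B = 0 the formula degenerates to a deterministic pull of the candidate toward the target region, which makes the role of the noise term easy to see.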
105: extracting the three visual features, color, edge and texture, from each new candidate target region;
The concrete operation of this step is the same as in step 101 and is not described further here.
106: adaptively fusing the features according to the distinctiveness and correlation of the color, edge and texture features;
The distinctiveness RD_f is defined as the degree of similarity between the new candidate target region and its adjacent background with respect to feature f, and is represented by the Bhattacharyya coefficient of the two histograms:
RD_f = Σ_{m=1..M} √(B_f(m) · T_f(m)), f ∈ {HSV, EO, LBP}
where HSV, EO and LBP denote the color, edge and texture features respectively; B_f(m) and T_f(m) are the m-th bins of the feature histograms of the adjacent background and of the target region; and M is the dimension of the color feature histogram, of the complete edge feature histogram or of the complete texture feature histogram, as appropriate. The smaller RD_f is, the more distinctive the feature, and the higher the weight assigned to it; conversely, a lower weight is assigned.
The correlation CD_f is defined as the degree of similarity between the new candidate target region and the template features with respect to feature f, and is likewise represented by the Bhattacharyya coefficient of the two histograms:
CD_f = Σ_{m=1..M} √(P_f(m) · T_f(m)), f ∈ {HSV, EO, LBP}
where P_f(m) and T_f(m) are the m-th bins of the feature histograms of the new candidate target region and of the target region, and M is as above. The larger CD_f is, the more correlated the feature, and the higher the weight assigned to it; conversely, a lower weight is assigned.
The weights α, β and γ of the color, edge and texture features are finally determined jointly by RD_f and CD_f, for example with weights of the form
α = λ · CD_HSV / RD_HSV, β = λ · CD_EO / RD_EO, γ = λ · CD_LBP / RD_LBP
where λ is a normalization constant chosen so that α + β + γ = 1, and HSV, EO and LBP denote the color, edge and texture features respectively.
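The weight rule can be sketched as follows. Note that the exact weight formula in the patent is an image that did not survive text extraction, so the CD_f/RD_f ratio used here is an assumption that merely satisfies the two stated monotonicities: a larger correlation and a smaller distinctiveness coefficient both raise a feature's weight.

```python
import numpy as np

def bhattacharyya_coeff(p, q):
    """Bhattacharyya coefficient of two normalized histograms."""
    return float(np.sum(np.sqrt(np.asarray(p, float) * np.asarray(q, float))))

def fusion_weights(cand, templ, background):
    """Per-feature weights alpha, beta, gamma for f in {HSV, EO, LBP}:
    weight_f proportional to CD_f / RD_f, normalized to sum to 1
    (CD_f: candidate vs. template; RD_f: candidate vs. adjacent background)."""
    raw = {}
    for f in ('HSV', 'EO', 'LBP'):
        cd = bhattacharyya_coeff(cand[f], templ[f])       # larger -> more correlated
        rd = bhattacharyya_coeff(cand[f], background[f])  # smaller -> more distinctive
        raw[f] = cd / max(rd, 1e-12)
    total = sum(raw.values())
    return {f: v / total for f, v in raw.items()}

templ = {'HSV': [0.6, 0.4], 'EO': [0.5, 0.5], 'LBP': [0.5, 0.5]}
cand = {'HSV': [0.6, 0.4], 'EO': [0.5, 0.5], 'LBP': [0.5, 0.5]}
background = {'HSV': [0.1, 0.9], 'EO': [0.5, 0.5], 'LBP': [0.5, 0.5]}
w = fusion_weights(cand, templ, background)   # color is most distinctive here
```

In the toy data the color histogram separates target from background while edge and texture do not, so the color feature receives the largest weight.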
107: computing the Bhattacharyya distance d between the fused features and the template features, normalizing d, and using the normalized result as the weight of the new candidate target region;
d = √(1 - ρ), with ρ = Σ_{m=1..M} √(q_m · p_m)
where ρ denotes the Bhattacharyya coefficient; M is the dimension of the color feature histogram, of the complete edge feature histogram or of the complete texture feature histogram, as appropriate; and q_m and p_m are the m-th bins of the fused features and of the template features respectively.
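The distance computation of step 107 is straightforward; a sketch:

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """d = sqrt(1 - rho), with rho = sum_m sqrt(q_m * p_m)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    rho = np.sum(np.sqrt(p * q))            # Bhattacharyya coefficient
    return float(np.sqrt(max(0.0, 1.0 - rho)))

identical = [0.25, 0.25, 0.25, 0.25]
disjoint_a = [1.0, 0.0]
disjoint_b = [0.0, 1.0]
d_same = bhattacharyya_distance(identical, identical)   # identical histograms
d_far = bhattacharyya_distance(disjoint_a, disjoint_b)  # non-overlapping histograms
```

For normalized histograms d lies in [0, 1]: 0 for identical distributions and 1 for distributions with no overlap.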
108: sorting the N new candidate target regions by weight and computing the resampling decision value N_j; if N_j is greater than the resampling decision threshold, resampling and then executing step 109; otherwise executing step 109 directly;
The resampling decision threshold is obtained empirically; the reference value is 50. Resampling means deleting the new candidate target regions with small weights and then copying the maximum-weight new candidate target region to replace the deleted ones.
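The resampling mechanics (delete the low-weight candidates, clone the best one) can be sketched as below. The fraction of candidates kept is an illustrative assumption, and the exact decision value N_j is left out because the text defines it only through an empirical threshold.

```python
import numpy as np

def resample(regions, weights, keep_frac=0.5):
    """Keep the highest-weight candidates and replace the rest with
    copies of the maximum-weight candidate (the mechanics of step 108)."""
    order = np.argsort(weights)[::-1]            # sort by weight, descending
    n_keep = max(1, int(len(regions) * keep_frac))
    best = regions[order[0]]
    survivors = [regions[i] for i in order[:n_keep]]
    copies = [best] * (len(regions) - n_keep)    # clones of the best candidate
    return survivors + copies

regions = ['r0', 'r1', 'r2', 'r3']
weights = [0.1, 0.4, 0.2, 0.3]
new_regions = resample(regions, weights)
```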
109: judging the overlap between the maximum-weight new candidate target region and the target region; if the overlap ratio is less than the overlap threshold, executing step 110; otherwise executing step 111;
The overlap threshold is set according to the practical situation, for example 0.3; in concrete implementations the embodiment of the invention places no limit on this.
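The text does not give a formula for the overlap ratio; a standard intersection-over-union over the (center, width, height) rectangle representation of step 104 is one reasonable sketch, offered here as an assumption rather than the patent's definition.

```python
def overlap_ratio(a, b):
    """Intersection-over-union of two rectangles given as
    (center_x, center_y, width, height)."""
    def to_corners(r):
        cx, cy, w, h = r
        return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2
    ax1, ay1, ax2, ay2 = to_corners(a)
    bx1, by1, bx2, by2 = to_corners(b)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # width of the intersection
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # height of the intersection
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

target = (50, 50, 20, 40)
candidate = (55, 50, 20, 40)          # shifted 5 pixels to the right
iou = overlap_ratio(target, candidate)
track_ok = iou >= 0.3                 # example overlap threshold from the text
```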
110: inputting into the detector an enlarged region (width LW, height LH, center (x, y), where L is the magnification factor) around the maximum-weight new candidate target region (width W, height H, center (x, y)); if the detector outputs 0, tracking has failed; otherwise the detector output is input to the recognizer; if the recognizer outputs "yes", tracking has succeeded and the recognizer, the template features and the target region are updated; if it outputs "no", a new target has been found and the flow ends;
The detector is a generic classifier that detects objects of interest and is used to distinguish the tracked target from other objects. It is trained offline and is not updated during tracking. Its input may be a frame image or a region of a frame image, and its output is the region of the target object in that image.
The recognizer is used to acquire prior information about the specific target being tracked, and it is updated only under particular conditions. When updating the recognizer, the positive samples consist of the new candidate target regions verified by the detector, and the negative samples are regions of the same size as the new candidate target region picked at random from the background. By identifying the verified new candidate target regions, the recognizer prevents tracking drift. Its input is a target region and its output is a logical value indicating whether that target is the one the recognizer represents. The features of the maximum-weight new candidate target region are used as the updated template features, and the maximum-weight new candidate target region is used as the updated target region.
111: if the overlap ratio is greater than or equal to the overlap threshold, tracking is deemed successful; the recognizer, the template features and the target region are updated, and the flow ends.
The update of the recognizer, the template features and the target region in this step is the same as the update in step 110 and is not described further here.
Steps 104 to 111 are repeated for the remaining frames of the video sequence until the whole video has been traversed.
The feasibility of the target tracking method based on multi-feature adaptive fusion and online learning provided by the embodiment of the invention is verified below with a concrete experiment in which a person is tracked as the target; the results are shown in Fig. 2 to Fig. 7. The detector is a human-body detector trained on histogram-of-oriented-gradients edge features with a support vector machine model.
Fig. 2, Fig. 3 and Fig. 4 show frames 16, 19 and 23 of a video sequence. The target region is initialized in frame 16; in frame 19 target 1 is occluded and the track is lost; in frame 23 the occluded, lost target is successfully recovered by online learning, demonstrating that the method effectively avoids losing the track after occlusion.
Fig. 5, Fig. 6 and Fig. 7 show frames 291, 294 and 299 of a video sequence. The target region is initialized in frame 291; in frame 294 target 1 and target 2 cross and occlude each other; in frame 299 the method tracks accurately and efficiently, avoiding the tracking drift that tends to occur after targets cross and occlude each other.
List of references
[1] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. Proc. CVPR, 2005.
[2] N. Kanopoulos, N. Vasanthavada and R. L. Baker. Design of an image edge detection filter using the Sobel operator. IEEE Journal of Solid-State Circuits, vol. 23, no. 2, 1988.
[3] X. Wang, T. X. Han and S. Yan. An HOG-LBP human detector with partial occlusion handling. Proc. IEEE Int'l Conf. Computer Vision, 32-39, 2009.
[4] T. Ahonen, A. Hadid and M. Pietikäinen. Face recognition with local binary patterns. Proc. European Conf. Computer Vision, 469-481, 2004.
[5] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. Proc. CVPR, vol. I, 511-518, 2001.
Those skilled in the art will appreciate that the accompanying drawings are schematic diagrams of a preferred embodiment, and that the serial numbers of the above embodiments of the invention are for description only and do not indicate the relative merits of the embodiments.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (8)
1. A target tracking method based on multi-feature adaptive fusion and online learning, characterized in that the method comprises the following steps:
(1) selecting a target region from a frame image, extracting target features and using them as template features;
(2) initializing a recognizer, inputting the next frame image and initializing candidate target regions; obtaining new candidate target regions according to a transfer formula;
(3) extracting color, edge and texture features from each new candidate target region, and adaptively fusing the features according to their distinctiveness and correlation;
(4) computing the Bhattacharyya distance between the fused features and the template features, normalizing it and using the result as the weight of the new candidate target region;
(5) sorting the N new candidate target regions by weight; if the resampling decision value is greater than the resampling decision threshold, resampling and executing step (6); otherwise executing step (6) directly;
(6) judging the overlap between the maximum-weight new candidate target region and the target region; if the overlap ratio is less than the overlap threshold, executing step (7); otherwise executing step (8);
(7) inputting an enlarged region around the maximum-weight new candidate target region into a detector; if the detector outputs 0, tracking has failed; otherwise inputting the detector output into the recognizer; if the recognizer outputs "yes", tracking has succeeded, and the recognizer, the template features and the target region are updated; if it outputs "no", a new target has been found and the flow ends;
(8) if the overlap ratio is greater than or equal to the overlap threshold, deeming the tracking successful, updating the recognizer, the template features and the target region, and ending the flow.
2. The target tracking method based on multi-feature adaptive fusion and online learning according to claim 1, characterized in that the step of extracting target features and using them as template features specifically comprises:
1) extracting color feature information;
2) extracting edge feature information;
3) extracting texture feature information;
4) fusing the color feature information, the edge feature information and the texture feature information to obtain a target feature histogram, which is used as the template features.
3. The target tracking method based on multi-feature adaptive fusion and on-line learning according to claim 2, wherein the step of extracting color feature information specifically comprises:
1) dividing the color space into a chromatic region and an achromatic region, and partitioning both in HSV space to obtain Q_H × Q_S chromatic sub-intervals and Q_V achromatic sub-intervals, the Q_H × Q_S chromatic sub-intervals and the Q_V achromatic sub-intervals together forming Q_H × Q_S + Q_V color intervals;
2) assigning each pixel a weight according to the distance between the pixel and the center point of the target region, and casting a vote for the corresponding color interval according to the HSV value of the pixel;
3) accumulating the vote values of each color interval to obtain the color feature histogram.
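Steps 1)-3) can be sketched as follows, under several assumptions not fixed by the claim: Q_H = 8, Q_S = 4, Q_V = 4 bins, a saturation/value threshold to separate chromatic from achromatic pixels, and an Epanechnikov-style kernel for the center-distance weighting:

```python
import math

def hsv_color_histogram(pixels_hsv, center, q_h=8, q_s=4, q_v=4,
                        s_min=0.1, v_min=0.2):
    # pixels_hsv: list of ((x, y), (h, s, v)) with h in [0, 360), s, v in [0, 1].
    # Chromatic bins: q_h * q_s (H-S); achromatic bins: q_v (V only).
    hist = [0.0] * (q_h * q_s + q_v)
    # Kernel radius: farthest pixel from the region centre (1.0 if degenerate).
    radius = max(math.dist(p, center) for p, _ in pixels_hsv) or 1.0
    for (x, y), (h, s, v) in pixels_hsv:
        d = math.dist((x, y), center) / radius
        w = max(0.0, 1.0 - d * d)              # Epanechnikov-style weight
        if s < s_min or v < v_min:             # achromatic pixel -> V bin
            b = q_h * q_s + min(int(v * q_v), q_v - 1)
        else:                                  # chromatic pixel -> H-S bin
            hb = min(int(h / 360.0 * q_h), q_h - 1)
            sb = min(int(s * q_s), q_s - 1)
            b = hb * q_s + sb
        hist[b] += w                           # vote for the colour interval
    total = sum(hist)
    return [x / total for x in hist] if total else hist
```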
4. The target tracking method based on multi-feature adaptive fusion and on-line learning according to claim 2, wherein the step of extracting edge feature information specifically comprises:
1) interpolating the target region to obtain an interpolated region of twice the width and height, and then dividing the target region and the interpolated region into sub-blocks respectively;
2) calculating the edge strength and edge direction of each sub-block, dividing the 0°-360° range of edge directions into several direction zones, and voting for the direction zones according to edge strength to obtain the edge feature histogram of each sub-block;
3) concatenating the edge feature histograms calculated for the sub-blocks to obtain the complete edge feature histogram.
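A minimal sketch of steps 2)-3), assuming central-difference gradients for edge strength and direction, a 2×2 sub-block division, and 8 direction zones over 0°-360° (the claim leaves all three choices open):

```python
import math

def edge_histograms(gray, blocks=(2, 2), n_dirs=8):
    # gray: 2-D list of intensities. Split the region into blocks[0] x blocks[1]
    # sub-blocks and build, per sub-block, an edge-strength-weighted
    # orientation histogram over n_dirs direction zones in 0-360 degrees.
    h, w = len(gray), len(gray[0])
    hists = [[0.0] * n_dirs for _ in range(blocks[0] * blocks[1])]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]     # central differences
            gy = gray[y + 1][x] - gray[y - 1][x]
            mag = math.hypot(gx, gy)                 # edge strength
            ang = math.degrees(math.atan2(gy, gx)) % 360.0
            b = (y * blocks[0] // h) * blocks[1] + (x * blocks[1] // w)
            hists[b][min(int(ang / 360.0 * n_dirs), n_dirs - 1)] += mag
    # Concatenate the per-sub-block histograms into one feature vector.
    return [v for hist in hists for v in hist]
```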
5. The target tracking method based on multi-feature adaptive fusion and on-line learning according to claim 2, wherein the step of extracting texture feature information specifically comprises:
1) calculating a local binary pattern feature histogram for each sub-block;
2) concatenating the local binary pattern feature histograms calculated for the sub-blocks to obtain the complete texture feature histogram.
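The per-sub-block histogram of step 1) can be sketched with the classic 8-neighbour local binary pattern (the claim does not fix an LBP variant):

```python
def lbp_histogram(gray):
    # 8-neighbour LBP: threshold each neighbour against the centre pixel
    # and pack the results into an 8-bit code; return the 256-bin histogram.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(gray), len(gray[0])
    hist = [0] * 256
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if gray[y + dy][x + dx] >= gray[y][x]:
                    code |= 1 << bit
            hist[code] += 1
    return hist
```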
6. The target tracking method based on multi-feature adaptive fusion and on-line learning according to claim 1, wherein the distinctiveness is defined as the degree of similarity between the new candidate target region and the adjacent background with respect to a given feature, and is represented by the Bhattacharyya coefficient of the two histograms.
7. The target tracking method based on multi-feature adaptive fusion and on-line learning according to claim 1, wherein the correlation is defined as the degree of similarity between the new candidate target region and the template feature with respect to a given feature, and is represented by the Bhattacharyya coefficient of the two histograms.
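Both the distinctiveness of claim 6 and the correlation of claim 7 reduce to the Bhattacharyya coefficient of two normalized histograms, and the abstract weights candidate regions by the derived Bhattacharyya distance. A sketch of both quantities:

```python
import math

def bhattacharyya_coefficient(p, q):
    # Similarity of two unit-sum histograms; 1.0 means identical.
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

def bhattacharyya_distance(p, q):
    # Distance used to weight candidate regions; 0.0 means identical.
    # max(..., 0.0) guards against tiny negative rounding error.
    return math.sqrt(max(0.0, 1.0 - bhattacharyya_coefficient(p, q)))
```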
8. The target tracking method based on multi-feature adaptive fusion and on-line learning according to claim 1, wherein:
updating the recognizer specifically is: the positive samples consist of new candidate target regions verified by the detector, and the negative samples are regions of the same size as the new candidate target region picked at random from the background;
updating the template feature specifically is: taking the feature of the new candidate target region with the maximum weight as the updated template feature;
updating the target region specifically is: taking the new candidate target region with the maximum weight as the updated target region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310121576.9A CN103198493B (en) | 2013-04-09 | 2013-04-09 | Target tracking method based on multi-feature adaptive fusion and on-line learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103198493A true CN103198493A (en) | 2013-07-10 |
CN103198493B CN103198493B (en) | 2015-10-28 |
Family
ID=48720998
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310121576.9A Expired - Fee Related CN103198493B (en) | Target tracking method based on multi-feature adaptive fusion and on-line learning | 2013-04-09 | 2013-04-09 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103198493B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2037410A1 (en) * | 2007-09-14 | 2009-03-18 | Thomson Licensing | Method for tracking an object in a sequence of images and device implementing said method |
CN102982340A (en) * | 2012-10-31 | 2013-03-20 | 中国科学院长春光学精密机械与物理研究所 | Target tracking method based on semi-supervised learning and random fern classifier |
CN102999920A (en) * | 2012-10-25 | 2013-03-27 | 西安电子科技大学 | Target tracking method based on nearest neighbor classifier and mean shift |
Non-Patent Citations (2)
Title |
---|
LIU, Shirong et al.: "Particle filter target tracking algorithm based on multi-feature fusion", Information and Control, vol. 41, no. 6, 30 June 2012 (2012-06-30) * |
YIN, Hongpeng et al.: "A moving target tracking algorithm based on multi-feature adaptive fusion", Journal of Optoelectronics · Laser, vol. 21, no. 6, 30 June 2010 (2010-06-30) * |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103473288A (en) * | 2013-08-29 | 2013-12-25 | 西北工业大学 | Image search method based on hybrid micro-structure descriptor |
CN103996046A (en) * | 2014-06-11 | 2014-08-20 | 北京邮电大学 | Personnel recognition method based on multi-visual-feature fusion |
CN103996046B (en) * | 2014-06-11 | 2017-07-21 | 北京邮电大学 | The personal identification method merged based on many visual signatures |
CN106033613B (en) * | 2015-03-16 | 2019-04-30 | 北京大学 | Method for tracking target and device |
CN106033613A (en) * | 2015-03-16 | 2016-10-19 | 北京大学 | Object tracking method and device |
CN105069488A (en) * | 2015-09-25 | 2015-11-18 | 南京信息工程大学 | Tracking method based on template on-line clustering |
CN105069488B (en) * | 2015-09-25 | 2018-06-29 | 南京信息工程大学 | Tracking based on template on-line talking |
CN105513090B (en) * | 2015-11-24 | 2018-06-05 | 浙江宇视科技有限公司 | A kind of observing and nursing construction method and device |
CN105513090A (en) * | 2015-11-24 | 2016-04-20 | 浙江宇视科技有限公司 | Construction method and apparatus for observation model |
CN105844647A (en) * | 2016-04-06 | 2016-08-10 | 哈尔滨伟方智能科技开发有限责任公司 | Kernel-related target tracking method based on color attributes |
CN105957107A (en) * | 2016-04-27 | 2016-09-21 | 北京博瑞空间科技发展有限公司 | Pedestrian detecting and tracking method and device |
CN106991395A (en) * | 2017-03-31 | 2017-07-28 | 联想(北京)有限公司 | Information processing method, device and electronic equipment |
CN106991395B (en) * | 2017-03-31 | 2020-05-26 | 联想(北京)有限公司 | Information processing method and device and electronic equipment |
CN108876841A (en) * | 2017-07-25 | 2018-11-23 | 成都通甲优博科技有限责任公司 | The method and system of interpolation in a kind of disparity map parallax refinement |
CN107633226A (en) * | 2017-09-19 | 2018-01-26 | 北京师范大学珠海分校 | A kind of human action Tracking Recognition method and system |
CN107633226B (en) * | 2017-09-19 | 2021-12-24 | 北京师范大学珠海分校 | Human body motion tracking feature processing method |
CN108846854A (en) * | 2018-05-07 | 2018-11-20 | 中国科学院声学研究所 | A kind of wireless vehicle tracking based on motion prediction and multiple features fusion |
CN110147768B (en) * | 2019-05-22 | 2021-05-28 | 云南大学 | Target tracking method and device |
CN110147768A (en) * | 2019-05-22 | 2019-08-20 | 云南大学 | A kind of method for tracking target and device |
CN110232703A (en) * | 2019-06-12 | 2019-09-13 | 中国矿业大学 | A kind of motion estimate device and method based on color and texture information |
CN110232703B (en) * | 2019-06-12 | 2023-07-25 | 中国矿业大学 | Moving object recognition device and method based on color and texture information |
CN110414535A (en) * | 2019-07-02 | 2019-11-05 | 绵阳慧视光电技术有限责任公司 | A kind of manual initial block modification method and system based on background differentiation |
CN110414535B (en) * | 2019-07-02 | 2023-04-28 | 绵阳慧视光电技术有限责任公司 | Manual initial frame correction method and system based on background distinction |
CN110443320A (en) * | 2019-08-13 | 2019-11-12 | 北京明略软件系统有限公司 | The determination method and device of event similarity |
CN110569840A (en) * | 2019-08-13 | 2019-12-13 | 浙江大华技术股份有限公司 | Target detection method and related device |
CN114692788A (en) * | 2022-06-01 | 2022-07-01 | Tianjin University | Early warning method and device for El Niño extreme weather based on incremental learning |
CN114692788B (en) * | 2022-06-01 | 2022-08-19 | Tianjin University | Early warning method and device for El Niño extreme weather based on incremental learning |
Also Published As
Publication number | Publication date |
---|---|
CN103198493B (en) | 2015-10-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103198493B (en) | Target tracking method based on multi-feature adaptive fusion and on-line learning | |
CN103914702B (en) | System and method for improving the object detection performance in video | |
Ogale | A survey of techniques for human detection from video | |
Yao et al. | Traffic sign recognition using HOG-SVM and grid search | |
CN110135296A (en) | Airfield runway FOD detection method based on convolutional neural networks | |
CN103246896B (en) | Robust real-time vehicle detection and tracking method | |
KR102373753B1 (en) | Method, and System for Vehicle Recognition Tracking Based on Deep Learning | |
CN103839279A (en) | Adhesion object segmentation method based on VIBE in object detection | |
CN102521565A (en) | Garment identification method and system for low-resolution video | |
CN110298297A (en) | Flame identification method and device | |
Dib et al. | A review on negative road anomaly detection methods | |
CN104978567A (en) | Vehicle detection method based on scenario classification | |
CN103870818A (en) | Smog detection method and device | |
CN105654505B (en) | Superpixel-based collaborative tracking algorithm and system | |
CN111191535B (en) | Pedestrian detection model construction method based on deep learning and pedestrian detection method | |
CN109993061A (en) | Face detection and tracking method, system and terminal device | |
Malhi et al. | Vision based intelligent traffic management system | |
Masmoudi et al. | Vision based system for vacant parking lot detection: Vpld | |
Javed et al. | Automated multi-camera surveillance: algorithms and practice | |
Mammeri et al. | North-American speed limit sign detection and recognition for smart cars | |
Al-Heety | Moving vehicle detection from video sequences for traffic surveillance system | |
Tao et al. | Smoky vehicle detection based on multi-feature fusion and ensemble neural networks | |
CN115620090A (en) | Model training method, low-illumination target re-recognition method and device and terminal equipment | |
Muchtar et al. | A unified smart surveillance system incorporating adaptive foreground extraction and deep learning-based classification | |
Kim et al. | Development of a real-time automatic passenger counting system using head detection based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | | Granted publication date: 20151028; Termination date: 20200409 |