CN103310466A - Single target tracking method and achievement device thereof - Google Patents
- Publication number
- CN103310466A, CN2013102688346A, CN201310268834A
- Authority
- CN
- China
- Prior art keywords
- sample
- target
- span
- frame
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
Provided are a single target tracking method and an implementation device thereof. In a video V = {F_0, F_1, ..., F_N} formed by N frames of grayscale images, a target O_0 is selected in frame F_0, and the image is converted to grayscale and normalized in width and height to obtain the initialization parameters; classifier initialization and updating, comprising training set construction, feature extraction and model updating, are then carried out; and target tracking is performed on frame F_{t+1} with the model f_{t+1}. The method represents the target appearance with a compressed-sensing dimensionality reduction based on binary features, which can effectively express target deformation and improves robustness to occlusion and illumination, so that the target can be tracked robustly; at the same time it has the advantages of low memory consumption and a small amount of computation, reaching real-time tracking speed.
Description
Technical field
The present invention relates to tracking a target from a given initial position and to an implementation device thereof, and in particular to a single target tracking method and an implementation device thereof.
Background art
Research on tracking techniques based on moving-target features has been a focus of computer vision in recent years. Although biometric features such as fingerprints, palm prints and veins have been widely studied and preliminarily applied in the security field, these features are contact-based, which greatly limits their range of application. By comparison, "contactless" recognition techniques such as gait and face recognition, which ingeniously combine human motion with biometric features, have become a key area of intelligent scene video surveillance. Gait recognition in particular collects the features of the moving human body while the person walks; the accuracy and real-time performance of the earlier moving-human detection and tracking stages are therefore the prerequisite of the overall recognition performance. This poses a great challenge to video surveillance: given the security requirements of such systems, traditional manually operated video surveillance cannot meet the needs of real-world security monitoring. Achieving target tracking under realistic, complex backgrounds is not only the key to intelligent video surveillance systems, but also plays an important role in intelligent transportation and human-computer interaction, so target tracking algorithms have been developed very widely. However, the success of most pedestrian tracking algorithms depends on the complexity of the background and on the similarity between the pedestrian target and the background; good results are obtained only when the color difference between target and background is large. To solve the pedestrian tracking problem in complex scenes, increasingly robust algorithms are needed that can cope with illumination variation, noise, occlusion and the other problems that are unavoidable in practical applications. Detecting and tracking the moving target from a video sequence accurately and quickly is extremely important and is one of the key technologies of identity recognition and abnormal-behavior recognition. At present there are two main classes of moving-target tracking methods: 1. statistical learning methods; 2. algorithms based on color features. The first class has gradually become one of the mainstream technologies in pattern recognition, with successful applications to many classical problems, moving-target tracking being one example. The Adaboost algorithm proposed by Freund et al. is a cascade tracking algorithm whose goal is to automatically pick several weak classifiers from the weak-classifier space and combine them into a strong classifier. The Haar-feature-based Adaboost algorithm proposed by Viola et al. is a successful application of Adaboost to face detection. Grabner et al. proposed the online Adaboost algorithm and applied Adaboost to the target tracking domain with good tracking results. Unlike offline Adaboost, the training samples of online Adaboost are one or several data items obtained in real time. This algorithm adapts better to problems such as changes in the moving target's features, but online Adaboost relies purely on the classifier for tracking; under large-area occlusion in a complex background it is prone to classification errors that cause the track to be lost.

The Camshift tracking algorithm proposed by Bradski is another widely studied algorithm owing to its good real-time performance and robustness. Camshift takes the Meanshift algorithm as its core and removes Meanshift's inability to adapt the tracking window size; it narrows the target search range, improves accuracy and efficiency, and obtains good tracking results when the background is simple. However, the Camshift tracker is strongly affected by other moving targets in the surroundings, easily mistakes non-target points for target points, and fails when the target size changes, so the track is lost. The traditional Camshift tracker uses color information as its feature, so tracking is also lost when the target color is close to the background or to non-targets; it likewise fails easily on fast-moving targets and cannot recover from failure. Since neither online Adaboost nor Camshift alone can obtain good tracking results, invention patent CN201210487250.3, entitled "A moving target tracking method", discloses a moving-target tracking method that combines the online Adaboost algorithm with the Camshift algorithm: a confidence map is first obtained from the feature matrix and classifier computation of the online Adaboost tracker, the chosen features fusing local orientation histogram features and color features; the Camshift algorithm is then applied on the confidence map, so that the features used by Camshift fuse texture and color information. The method comprises the following steps: first, the moving target is accurately detected with a fast moving-target detection method based on the codebook model; second, the online Adaboost weak classifier group is initialized to obtain a strong classifier, the chosen moving-target features fusing local orientation histogram features and color features; third, the feature matrix and weak classifier computation of the online Adaboost tracker yield a confidence map, the Camshift tracker is applied on the confidence map, the weak classifiers are updated according to the obtained target position, and finally the tracking result of the whole video sequence is obtained. This method uses conventional means to solve the tracking problem and has two problems: first, the extracted features are not robust enough, since local orientation histograms and color features are often rather sensitive to noise, so the target appearance is not represented robustly; second, the Camshift procedure it adopts tends to drift when the target undergoes illumination or color changes, which reduces tracking accuracy and cannot cope with the harsh environments of real surveillance video. Designing robust features and a classifier with strong generalization ability are therefore the two key issues of target tracking.
The patent entitled "Single target tracking method based on weighted least squares" (publication number 103093482A, published 2013-05-08) discloses a single target tracking method based on weighted least squares that tracks the target by means of reconstruction error. Its drawback is that solving the sparse reconstruction consumes a certain amount of time, so real-time tracking is difficult to achieve.

Suppose a known video V = {F_0, F_1, ..., F_N} is formed by N frames of grayscale images, with frame width w and height h. The problem the present invention addresses is: select a target O_0 in frame F_0 and then propose a tracking method that tracks O_0 over N consecutive frames.
Current methods for tracking a target from a given initial position mainly express the target appearance by extracting color, contour and texture feature information, and then learn the target appearance model with a classification learning method; the target position is then detected in the next frame by the appearance model, or tracked by a simple tracking algorithm such as mean shift or optical flow, after which the tracking and detection results are integrated to obtain the most credible tracking position; finally the appearance model is adaptively updated by some update strategy.
Summary of the invention
To address the above problems, the object of the present invention is to provide a compressed-sensing feature extraction method together with an online kernel-learning update method and device that overcome the deficiencies of the above tracking techniques: highly robust compressed-sensing features are extracted from the target to improve the expressive power of the target appearance, and the appearance model is then updated by online kernel learning.

To achieve the above object, the technical solution of the present invention is:

A single target tracking method comprises:
Step 1, parameter initialization: in frame F_0, obtain the rectangle B_0 = [x_0, y_0, w_0, h_0] of the initial position of target O_0, whose entries are respectively the abscissa of the top-left corner, the ordinate of the top-left corner, the box width and the box height; on an image block I of fixed width and height, generate a set CP of L random point pairs, the l-th pair consisting of the abscissa and ordinate of its first point and the abscissa and ordinate of its second point, with point pairs restricted to horizontal or vertical orientation; generate a sparse random matrix A used for feature dimensionality reduction;
Step 2, classifier initialization and update: iterate for t = 0, 1, ..., N-1; the t-th frame F_t is processed through three stages: training set construction, feature extraction and model update;
Step 3, target tracking: use the model f_{t+1} to track the target in frame F_{t+1}. The tracking step comprises: prediction sample construction, feature extraction, sample classification, selection of the several samples with the highest confidence, generation of the final (and also best) target bounding box, and output of the tracking box; then set t = t+1; if t > N the tracking ends, otherwise return to Step 2.
In Step 1, the sparse random matrix A = [a_ij] has H rows, with H in the range 50-300 (preferably 100), and L columns. A uniform random function rand returns, with equal probability, an element of {1, 2, 3, ..., 2024}; if rand ∈ {1, 2, ..., 16}, then a_ij takes the first nonzero value; if rand ∈ {17, 18, ..., 32}, then a_ij takes the second nonzero value; otherwise a_ij = 0.
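The values of the two nonzero entries are given in the original only as formulas that did not survive reproduction here, so the sketch below is only a minimal illustration of the sampling scheme just described: it uses +1 and -1 as placeholder nonzero values and H = 100 as a default; only the sparsity pattern (probability 16/2024 for each sign) should be read as coming from the text.

```python
import numpy as np

def generate_sparse_matrix(n_rows=100, n_cols=200, rng=None):
    """Sparse random projection matrix A (H x L) following the sparsity pattern above.

    Each entry draws rand uniformly from {1, ..., 2024}:
      rand in {1..16}   -> first nonzero value  (placeholder +1 here)
      rand in {17..32}  -> second nonzero value (placeholder -1 here)
      otherwise         -> 0
    """
    rng = np.random.default_rng() if rng is None else rng
    rand = rng.integers(1, 2025, size=(n_rows, n_cols))    # uniform over {1, ..., 2024}
    A = np.zeros((n_rows, n_cols))
    A[rand <= 16] = 1.0                                     # placeholder positive value
    A[(rand >= 17) & (rand <= 32)] = -1.0                   # placeholder negative value
    return A
```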
In Step 2, the training set construction comprises the following steps:

A) Positive sample set: from the neighborhood of the target bounding box B_t = [x_t, y_t, w_t, h_t], randomly extract 50-500 (preferably 100) positive sample pictures. The extraction method is translation and scaling, with the following steps:

i. Generation formula for a positive sample bounding box:

[x', y', w', h'] = scale·[x, y, w_t, h_t] + shift    (1)

where scale is the scaling factor with value range [0.8, 1.2], shift is a positive integer offset with value range [0, 20], and x, y take values in the neighborhood of the target box.

ii. Perform the following operation 80-150 times (preferably 100): draw random values of x, y, scale and shift within their value ranges; substitute them into formula (1) to compute the sample bounding box [x', y', w', h']; crop the sub-image I of F_t inside [x', y', w', h']; normalize I to a fixed width and height (for example 32x32). After 80-150 repetitions, a set of 80-150 positive sample pictures has been generated.

B) Negative sample set: from the outer peripheral region of B_t, randomly obtain 50-1000 (preferably 200) negative samples. The extraction method is translation or scaling, with the following steps:

i. Generation formula for a negative sample bounding box:

[x', y', w', h'] = scale·[x, y, w_t, h_t] + shift    (2)

where scale is the scaling factor with value range [0.8, 1.2], shift is a positive integer offset with value range [0, 20], and x, y take values in the outer peripheral region of the target box.

ii. Perform the following operation 150-500 times (preferably 200): draw random values of x, y, scale and shift within their value ranges; substitute them into formula (2) to compute the sample bounding box [x', y', w', h']; crop the sub-image I of F_t inside [x', y', w', h']; normalize I to a fixed width and height (for example 32x32). After 150-500 repetitions, 150-500 negative sample pictures have been generated.

C) Merge the positive and negative samples into the training set D_t, where y_i ∈ {-1, 1} is the class label of each sample, -1 denoting a negative sample and 1 a positive sample.
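A minimal sketch of the sampling in A) and B): boxes are generated by formulas (1)/(2) with random scale and shift, cropped from the frame and normalized to 32x32. The exact neighborhood and outer peripheral regions for x, y are defined by formulas not reproduced here, so the sketch simply draws x, y inside a caller-supplied region; the `region` argument and the helper name are assumptions.

```python
import numpy as np
import cv2

def sample_patches(frame, box, region, n_samples=100, patch_size=32, rng=None):
    """Draw boxes via [x', y', w', h'] = scale*[x, y, w, h] + shift (formulas (1)-(3)),
    crop them from the grayscale frame and normalize each crop to patch_size x patch_size."""
    rng = np.random.default_rng() if rng is None else rng
    (x_lo, x_hi), (y_lo, y_hi) = region          # sampling region for x, y (assumption)
    _, _, w, h = box
    H, W = frame.shape[:2]
    patches, boxes = [], []
    for _ in range(n_samples):
        scale = rng.uniform(0.8, 1.2)            # scaling factor, value range [0.8, 1.2]
        shift = rng.integers(0, 21)              # positive integer offset, value range [0, 20]
        x = rng.uniform(x_lo, x_hi)
        y = rng.uniform(y_lo, y_hi)
        xp, yp, wp, hp = (np.array([x, y, w, h]) * scale + shift).astype(int)
        xp, yp = max(xp, 0), max(yp, 0)
        crop = frame[yp:min(yp + hp, H), xp:min(xp + wp, W)]
        if crop.size == 0:
            continue
        patches.append(cv2.resize(crop, (patch_size, patch_size)))
        boxes.append((xp, yp, wp, hp))
    return patches, boxes
```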
In Step 2, the feature extraction extracts the features of all sample images in D_t. The steps for extracting the feature of a sample {I_t, y_t} are as follows:

A) Initialize the feature of the sample {I_t, y_t}; the feature length is the number of elements of CP, i.e. L.

B) Compute each feature element from the gray values at the two points of the corresponding point pair, where I_t(p, q) denotes the gray pixel value at point (p, q) of image I_t.

C) Use the sparse random matrix A to reduce the dimension of this feature to 50-300 (preferably 100), obtaining the new feature z as the product of A and the initial feature.
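The per-element feature formula did not survive reproduction; consistent with the abstract's "binary feature based" description, the sketch below uses a binary comparison of the gray values at the two points of each pair as an assumed placeholder, followed by the dimensionality reduction z = A·f with the sparse random matrix.

```python
import numpy as np

def extract_feature(patch, point_pairs, A):
    """Compressed-sensing feature of one normalized patch.

    patch       : patch_size x patch_size grayscale image
    point_pairs : list of ((x1, y1), (x2, y2)) pairs CP (length L)
    A           : H x L sparse random matrix

    The per-pair value below is an assumed binary comparison of the two gray values;
    the patent's exact per-pair formula is not reproduced here.
    """
    f = np.array([1.0 if patch[y1, x1] > patch[y2, x2] else 0.0
                  for (x1, y1), (x2, y2) in point_pairs])
    return A @ f   # dimensionality reduction z = A f; dim(z) = number of rows of A
```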
In Step 2, the model update uses the training set Z_t to update the classifier model f_t, i.e. to update the model parameter w_t ∈ R^(1×101), where R denotes the real numbers. The steps are as follows:

B) Perform the following iterative steps for t = 1, ..., T:

i. Randomly select k samples from Z_t to form the subset A_t.

ii. From A_t, find the sample subset that satisfies the required condition.

iii. Compute the step size η_t = 1/(λt).

iv. Update the parameter a first time.

v. Update the parameter a second time, where min denotes the minimum of the elements and || · || denotes the 2-norm.

C) Output w_{T+1}.
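The update steps above (random mini-batch, a condition on the selected samples, step size η_t = 1/(λt), two parameter updates involving a min and a 2-norm) closely resemble the mini-batch Pegasos online SVM solver. The original update formulas are not reproduced here, so the sketch below implements standard Pegasos as an assumption about what those steps compute; the 101-dimensional w in the text suggests a 100-dimensional feature plus a bias term, but that too is an inference.

```python
import numpy as np

def pegasos_update(w, Z, y, lam=0.01, n_iters=50, k=10, rng=None):
    """Mini-batch Pegasos-style update of the linear classifier parameters w (assumption).

    Z : (n_samples, d) feature matrix (features z, augmented with a bias term by the caller)
    y : (n_samples,) labels in {-1, +1}
    """
    rng = np.random.default_rng() if rng is None else rng
    for t in range(1, n_iters + 1):
        idx = rng.choice(len(Z), size=min(k, len(Z)), replace=False)   # i. random subset A_t
        Zt, yt = Z[idx], y[idx]
        mask = yt * (Zt @ w) < 1.0                                     # ii. margin-violating subset
        eta = 1.0 / (lam * t)                                          # iii. step size 1/(lambda t)
        # iv. first update: gradient step on hinge loss + L2 regularizer
        w = (1.0 - eta * lam) * w + (eta / len(idx)) * (yt[mask] @ Zt[mask])
        # v. second update: projection onto the ball of radius 1/sqrt(lambda)
        norm = np.linalg.norm(w)
        if norm > 0:
            w = min(1.0, (1.0 / np.sqrt(lam)) / norm) * w
    return w
```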
In Step 3, the steps for using the model f_{t+1} to track the target in frame F_{t+1} are as follows:

1) Sample set extraction. From the neighborhood of the target bounding box B_t = [x_t, y_t, w_t, h_t], randomly extract 150-300 (preferably 200) candidate sample pictures. The extraction method is translation or scaling, with the following steps:

i. Generation formula for a sample bounding box:

[x', y', w', h'] = scale·[x, y, w_t, h_t] + shift    (3)

where scale is the scaling factor with value range [0.8, 1.2], shift is a positive integer offset with value range [0, 20], and x, y take values in the neighborhood of the target box.

ii. Perform the following operation 150-500 times (preferably 200): draw random values of x, y, scale and shift within their value ranges; substitute them into formula (3) to compute the sample bounding box [x', y', w', h']; crop the sub-image I of F_{t+1} inside [x', y', w', h']; normalize I to a fixed width and height (for example 32x32).

iii. Following step ii, a set of 150-500 (preferably 200) sample pictures has been generated; the corresponding set of sample boxes in the image is also recorded.

2) Compute the feature of every picture in the set; the method for computing the feature of a sample {I_t, y_t} is as follows:

A) Initialize the feature of the sample {I_t, y_t}; the feature length is the number of elements of CP, i.e. L.

B) Compute each feature element from the gray values at the two points of the corresponding point pair, where I_t(p, q) denotes the gray pixel value at point (p, q) of image I_t.

C) Use the sparse random matrix A to reduce the dimension of this feature to 50-300 (preferably 100), obtaining the new feature z as the product of A and the initial feature.

The features obtained in this way constitute the sample set U_{t+1} to be classified.

3) Use the model f_{t+1} to classify all samples in U_{t+1}; each sample z_i ∈ U_{t+1} produces a corresponding confidence Conf_{t+1}(z_i).

4) According to Conf_{t+1}, select the several bounding boxes with the highest confidence (this number is preferably 1/20 of the number of samples) and generate a final target bounding box B_{t+1} = [x_{t+1}, y_{t+1}, w_{t+1}, h_{t+1}].

5) Set t = t+1; if t > N the tracking ends; otherwise return to Step 2.
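The confidence formula in step 3) is not reproduced above; a linear score w·z is a common choice for this kind of classifier and is used below as an assumption, together with the "top 1/20 of samples" selection from step 4). The fusion of the kept boxes into one output box is left to the caller (the embodiment uses weighted Meanshift clustering, sketched later in the text).

```python
import numpy as np

def score_and_select(w, features, boxes, keep_fraction=1.0 / 20.0):
    """Score every candidate with the linear model and keep the highest-confidence boxes.

    features : (n_samples, d) feature matrix of the candidate patches (set U_{t+1})
    boxes    : (n_samples, 4) candidate boxes [x, y, w, h]
    """
    conf = features @ w                                  # assumed confidence: linear score w . z
    n_keep = max(1, int(len(conf) * keep_fraction))      # preferably 1/20 of the sample count
    top = np.argsort(conf)[-n_keep:]                     # indices of the highest-confidence samples
    return np.asarray(boxes, dtype=float)[top], conf[top]
```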
To implement the above method, the present invention also provides an implementation device of the single target tracking method, comprising:

an image acquisition device, used to obtain a frame image from the video and to apply grayscale conversion and width/height normalization to the image;

a classifier initialization and update device, used to initialize the model and to update the model online;

a target tracking device, used to search for the target in a new image so that the search result is as consistent with the target as possible.

The image acquisition device comprises a random point pair generation unit and a sparse random matrix generation unit.

The classifier initialization and update device comprises a sample picture set construction unit, a feature extraction unit and a model update unit, wherein the sample picture set construction unit constructs the positive and negative sample sub-image sets from the sample pictures; the feature extraction unit performs compressed-sensing feature extraction on the positive and negative sample images; and the model update unit updates the classifier model with the feature sample set obtained by the feature extraction unit.

The target tracking device comprises a sample picture set construction unit, a feature extraction unit and a target tracking unit, wherein the sample picture set construction unit constructs the positive and negative sample sub-image sets from the sample pictures; the feature extraction unit performs compressed-sensing feature extraction on the positive and negative sample images; and the target tracking unit classifies all the samples, selects the several bounding boxes with the highest confidence, and then clusters them to generate one best target bounding box.
In summary, the single target tracking method provided by the invention creatively proposes a feature extraction method and a feature dimensionality reduction method, extracts highly robust compressed-sensing features of the target and improves the expressive power of the target appearance; then, through the classifier initialization and update steps, the target appearance model is learned and updated online, which greatly improves the precision and speed of target tracking.

Furthermore, the implementation device provided by the invention expresses the target appearance with a compressed-sensing dimensionality reduction method based on binary features, which can effectively express target deformation and improves the resistance to occlusion and illumination, so that the target can be tracked robustly; at the same time it has the advantages of low memory consumption and a small amount of computation, reaching real-time tracking speed. It therefore has good value in practical applications.
Description of drawings
Fig. 1 is a schematic diagram of the randomly generated point pairs of the present invention;

Fig. 2 is an example diagram of the grid lattice of an image block of the present invention;

Fig. 3 is a schematic diagram of the feature extraction method of the present invention;

Fig. 4 is an example diagram of the feature dimensionality reduction of the present invention;

Fig. 5 is a flow chart of the single target tracking method of the present invention;

Fig. 6 is a structural diagram of the implementation device of the single target tracking method of the present invention;

Fig. 7 is a flow chart of the sample construction unit of the present invention;

Fig. 8 is a flow chart of the model update unit of the present invention;

Fig. 9 is a flow chart of the target tracking unit of the present invention;

Fig. 10 and Fig. 11 are tracking result figures of the present invention.
Embodiment
In order to make the purpose, technical solution and advantages of the present invention clearer, the present invention is further described below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
The invention provides a single target tracking method, shown in Fig. 5. Suppose a known video V = {F_0, F_1, ..., F_N} is formed by N frames of grayscale pedestrian images, with frame width w and height h. The method comprises the following steps:
Step 1: obtain the parameters. As shown at mark 1 in Fig. 5, the concrete initialization steps are as follows:
1) In frame F_0, manually obtain the rectangle B_0 = [x_0, y_0, w_0, h_0] of the initial position of target O_0, where x_0, y_0, w_0, h_0 denote respectively the abscissa of the top-left corner, the ordinate of the top-left corner, the box width and the box height.
2) On an image block I of fixed width and height (for example 32x32), generate a set CP of L random point pairs, the l-th pair consisting of the abscissa and ordinate of its first point and the abscissa and ordinate of its second point. Point pairs are restricted to horizontal or vertical orientation; Fig. 1 shows two schematic examples of random point pairs, and a code sketch of this pair generation is given after step 3) below. The concrete steps are as follows:

a) Generate a grid point set S in the image block I.

b) From S, a set of point pairs for the vertical direction is obtained: for each point, a random number is added to its ordinate to obtain the ordinate of the second point of the pair, where rand denotes a random number in the interval [0, 1]. This generates the set of point pairs in the vertical direction.

c) From S, a set of point pairs for the horizontal direction is obtained: for each point, a random number is added to its abscissa to obtain the abscissa of the second point of the pair, where rand denotes a random number in the interval [0, 1]. This generates the set of point pairs in the horizontal direction.

d) Merge the point pair sets of the vertical and horizontal directions; repeated point pairs are removed, and the number of elements of the set CP is L.
3) Generate the sparse random matrix A = [a_ij] with H rows and L columns, where H is in the range 50-300 with optimal value 100. A uniform random function rand returns, with equal probability, an element of {1, 2, 3, ..., 2024}; if rand ∈ {1, 2, ..., 16}, then a_ij takes the first nonzero value; if rand ∈ {17, 18, ..., 32}, then a_ij takes the second nonzero value; otherwise a_ij = 0. The sparse random matrix A is used for feature dimensionality reduction, which reduces the amount of computation and improves noise resistance.
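The offset formulas for the second point of each pair in step 2) are not reproduced above; the sketch below draws the offset as rand in [0, 1] scaled by a placeholder maximum, which is an assumption, and keeps only the grounded structure: grid points, vertical pairs (same abscissa), horizontal pairs (same ordinate), merge and de-duplicate.

```python
import numpy as np

def generate_point_pairs(patch_size=32, grid_step=4, max_offset=8, rng=None):
    """Horizontal and vertical random point pairs CP on a patch_size x patch_size block.

    grid_step and max_offset are placeholder values; the patent defines the grid S and
    the random offset by formulas not reproduced here.
    """
    rng = np.random.default_rng() if rng is None else rng
    grid = [(x, y) for x in range(0, patch_size, grid_step)
                   for y in range(0, patch_size, grid_step)]         # grid point set S
    pairs = set()
    for x, y in grid:
        dy = int(rng.random() * max_offset)                          # rand in [0, 1] times a scale
        pairs.add(((x, y), (x, min(y + dy, patch_size - 1))))        # vertical pair: same abscissa
        dx = int(rng.random() * max_offset)
        pairs.add(((x, y), (min(x + dx, patch_size - 1), y)))        # horizontal pair: same ordinate
    return sorted(pairs)                                             # duplicates removed; |CP| = L
```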
Step 2: classifier initialization and update. As shown at marks 3, 4 and 5 in Fig. 5, suppose t = 0, 1, ..., N-1 iterations are performed; the t-th frame F_t is processed, and the iteration proceeds as follows:
1) Training set construction

A) Positive sample set: from the neighborhood of the target bounding box B_t = [x_t, y_t, w_t, h_t], randomly extract 50-500, optimally 100, positive sample pictures. The extraction method is translation and scaling. As shown in the flow chart of Fig. 7, the loop count T is set to 100. The steps are as follows:

i. Generation formula for a positive sample bounding box:

[x', y', w', h'] = scale·[x, y, w_t, h_t] + shift    (1)

where scale is the scaling factor with value range [0.8, 1.2], shift is a positive integer offset with value range [0, 20], and x, y take values in the neighborhood of the target box.

ii. In the present embodiment the following operation is performed 100 times: draw random values of x, y, scale and shift within their value ranges; substitute them into formula (1) to compute the sample bounding box [x', y', w', h']; crop the sub-image I_i of F_t inside [x', y', w', h']; normalize I_i to a 32x32 image I_i. After 100 repetitions, a set of 100 positive sample pictures has been generated.
B) Negative sample set: from the outer peripheral region of B_t, randomly obtain 50-1000 (optimally 100) negative samples. The extraction method is translation and scaling. As shown in the flow chart of Fig. 7, the loop count T is set to 150-500 (optimally 200). The steps are as follows:

i. Generation formula for a negative sample bounding box:

[x', y', w', h'] = scale·[x, y, w_t, h_t] + shift    (2)

where scale is the scaling factor with value range [0.8, 1.2], shift is a positive integer offset with value range [0, 20], and x, y take values in the outer peripheral region of the target box.

ii. In the present embodiment the following operation is performed 200 times: draw random values of x, y, scale and shift within their value ranges; substitute them into formula (2) to compute the sample bounding box [x', y', w', h']; crop the sub-image I of F_t inside [x', y', w', h']; normalize I to a fixed width and height (for example 32x32). After 200 repetitions, 200 negative sample pictures have been generated.
C) Merge the positive and negative samples into the training set D_t, where y_i ∈ {-1, 1} is the class label of each sample, -1 denoting a negative sample and 1 a positive sample.
2) Feature extraction, as shown in Fig. 3. Extract the features of all sample images in D_t; the steps for extracting the feature of a sample {I_i, y_i} are as follows:

A) Initialize the feature of the sample {I_i, y_i}; the feature length is the number of elements of CP, i.e. L.

B) Compute each feature element from the gray values at the two points of the corresponding point pair, where I_i(p, q) denotes the gray pixel value at point (p, q) of image I_i.

C) As shown in Fig. 4, use the sparse random matrix A to reduce the dimension of this feature to 50-300 (optimally 100), obtaining the new feature z as the product of A and the initial feature.
3) Classifier initialization or update, as shown in Fig. 8: use the training set Z_t to update the classifier model f_t, i.e. update the model parameter w_t ∈ R^(1×101), where R denotes the real numbers. The steps are as follows:

Perform the following iterative steps for t = 1, ..., T:

i. Randomly select k samples from Z_t to form the subset A_t.

ii. From A_t, find the sample subset that satisfies the required condition.

iii. Compute the step size η_t = 1/(λt).

iv. Update the parameter a first time.

v. Update the parameter a second time, where min denotes the minimum of the elements and || · || denotes the 2-norm.

Output w_{T+1}, i.e. the model f_{t+1}.
Step 3: track the target. As shown at marks 6-9 in Fig. 5, and in particular in the flow chart of Fig. 9, use the model f_{t+1} to track the target in frame F_{t+1}. The tracking steps are as follows:
1) Sample set extraction. From the neighborhood of the target bounding box B_t = [x_t, y_t, w_t, h_t], randomly extract 50-500 (optimally 200) candidate sample pictures. The extraction method is translation and scaling. The steps are as follows:

i. Generation formula for a sample bounding box:

[x', y', w', h'] = scale·[x, y, w_t, h_t] + shift    (3)

where scale is the scaling factor with value range [0.8, 1.2], shift is a positive integer offset with value range [0, 20], and x, y take values in the neighborhood of the target box.

ii. Perform the following operation 150-500 times (200 times in the present embodiment): draw random values of x, y, scale and shift within their value ranges; substitute them into formula (3) to compute the sample bounding box [x', y', w', h']; crop the sub-image I of F_{t+1} inside [x', y', w', h']; normalize I to a fixed width and height (for example 32x32).

iii. Following step ii, a set of 200 sample pictures has been generated; the corresponding set of sample boxes in the image is also recorded.
2) Compute the feature of every picture in the set, with the same feature extraction method as shown in Fig. 3. These features constitute the sample set U_{t+1} to be classified.
3) Use the model f_{t+1} to classify all samples in U_{t+1}. Each sample z_i ∈ U_{t+1} produces a corresponding confidence Conf_{t+1}(z_i).
4) According to Conf_{t+1}, select the 10 bounding boxes with the highest confidence and use the weighted Meanshift clustering method (Dalal N. Finding people in images and videos [D]. Institut National Polytechnique de Grenoble - INPG, 2006) to generate a final target bounding box B_{t+1} = [x_{t+1}, y_{t+1}, w_{t+1}, h_{t+1}]; a hedged sketch of this fusion step is given after step 5) below.
5) Set t = t+1; if t > N the tracking ends; otherwise return to Step 2, classifier initialization and update.
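The cited weighted Meanshift clustering (Dalal, 2006) is not reproduced here; as a stand-in, the sketch below fuses the 10 highest-confidence boxes with a confidence-weighted mean, which is only an assumption about the fusion step and not the patent's exact procedure.

```python
import numpy as np

def fuse_top_boxes(boxes, confidences, n_top=10):
    """Fuse the n_top highest-confidence boxes into one output box B_{t+1}.

    A confidence-weighted average stands in here for the weighted Meanshift
    clustering used in the embodiment.
    """
    boxes = np.asarray(boxes, dtype=float)
    conf = np.asarray(confidences, dtype=float)
    top = np.argsort(conf)[-n_top:]
    w = conf[top] - conf[top].min() + 1e-6      # shift the weights so they are positive
    return (w[:, None] * boxes[top]).sum(axis=0) / w.sum()
```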
Fig. 10 shows the tracking results for a pedestrian across different video frames; the pedestrian changes clothes, turns around and deforms, and the method proposed in this patent handles these problems effectively. Fig. 11 shows the tracking results for a pedestrian across different video frames with blur, illumination variation, a small target and a tracking box containing much background; the method proposed in this patent effectively overcomes these problems. The method has very strong tracking ability and is robust to illumination variation, target deformation and appearance change, with low sensitivity to background interference.
In summary, the single target tracking method provided by the invention creatively proposes a feature extraction method and a feature dimensionality reduction method, extracts highly robust compressed-sensing features of the target and improves the expressive power of the target appearance; then, through the classifier initialization and update steps, the target appearance model is learned and updated online, which greatly improves the precision and speed of target tracking.
For the target tracking method set forth above, the present invention also provides an implementation device of this method, as shown in Fig. 6. The image acquisition device is used to obtain a frame image from the video and to apply grayscale conversion and width/height normalization to the image;

the classifier initialization and update device is used to initialize the model and to update the model online;

the target tracking device is used to search for the target in a new image so that the search result is as consistent with the target as possible. The image acquisition device comprises a random point pair generation unit and a sparse random matrix generation unit.

The classifier initialization and update device comprises a sample picture set construction unit, a feature extraction unit and a model update unit, wherein the sample picture set construction unit constructs the positive and negative sample sub-image sets from the sample pictures; the feature extraction unit performs compressed-sensing feature extraction on the positive and negative sample images; and the model update unit updates the classifier model with the feature sample set obtained by the feature extraction unit.

The target tracking device comprises a sample picture set construction unit, a feature extraction unit and a target tracking unit, wherein the sample picture set construction unit constructs the positive and negative sample sub-image sets from the sample pictures; the feature extraction unit performs compressed-sensing feature extraction on the positive and negative sample images; and the target tracking unit classifies all the samples, selects the several bounding boxes with the highest confidence, and then clusters them to generate one best target bounding box.
The workflow of each unit of the implementation device of the single target tracking method of the present invention is further described below.

As shown in Fig. 6, the image acquisition device first selects the i-th frame image and obtains the rectangle B_0 = [x_0, y_0, w_0, h_0] of the initial position of target O_0, whose entries are respectively the abscissa of the top-left corner, the ordinate of the top-left corner, the box width and the box height;
on an image block I of width and height 32x32, a set CP of L random point pairs is generated, the l-th pair consisting of the abscissa and ordinate of its first point and the abscissa and ordinate of its second point; point pairs are restricted to horizontal or vertical orientation, specifically as follows:

a grid point set S is generated in the image block I;

vertical point pair generation: from S, a set of point pairs is obtained by adding, for each point, a random number to its ordinate to obtain the ordinate of the second point of the pair, where rand denotes a random number in the interval [0, 1]; this generates the set of point pairs in the vertical direction;

horizontal point pair generation: from S, a set of point pairs is obtained by adding, for each point, a random number to its abscissa to obtain the abscissa of the second point of the pair, where rand denotes a random number in the interval [0, 1]; this generates the set of point pairs in the horizontal direction;

the point pair sets of the vertical and horizontal directions are merged, repeated point pairs are removed, and the number of elements of the set CP is L;
the sparse random matrix A = [a_ij]_{100×L} is generated, with 100 rows and L columns. A uniform random function rand returns, with equal probability, an element of {1, 2, 3, ..., 2024}; if rand ∈ {1, 2, ..., 16}, then a_ij takes the first nonzero value; if rand ∈ {17, 18, ..., 32}, then a_ij takes the second nonzero value; otherwise a_ij = 0.
The classifier initialization and update device, shown at marks 2-4 in Fig. 6, comprises the sample picture set construction unit, the feature extraction unit and the model update unit. The positive/negative sample picture set construction unit is mainly used to construct the positive and negative sample sub-image sets from the sample pictures. The feature extraction unit is used to perform compressed-sensing feature extraction on the positive and negative sample images. The model update unit is used to update the classifier model with the feature sample set obtained by the feature extraction unit.
Suppose t = 0, 1, ..., N-1 iterations are performed; the t-th frame F_t is processed, and the iteration proceeds as follows:
1) Training set construction

A) Positive sample set: from the neighborhood of the target bounding box B_t = [x_t, y_t, w_t, h_t], randomly extract 100 positive sample pictures. The extraction method is translation and scaling. As shown in the flow chart of Fig. 7, the loop count T is set to 100.

i. Generation formula for a positive sample bounding box:

[x', y', w', h'] = scale·[x, y, w_t, h_t] + shift    (1)

where scale is the scaling factor with value range [0.8, 1.2], shift is a positive integer offset with value range [0, 20], and x, y take values in the neighborhood of the target box.

ii. Perform the following operation 80-150 times, 100 times in the present embodiment: draw random values of x, y, scale and shift within their value ranges; substitute them into formula (1) to compute the sample bounding box [x', y', w', h']; crop the sub-image I_i of F_t inside [x', y', w', h']; normalize I_i to a 32x32 image I_i. After 100 repetitions, a set of 100 positive sample pictures has been generated.
B) Negative sample set: from the outer peripheral region of B_t, randomly obtain 100 negative samples. The extraction method is translation and scaling. As shown in the flow chart of Fig. 7, the loop count T is set to 200. The steps are as follows:

i. Generation formula for a negative sample bounding box:

[x', y', w', h'] = scale·[x, y, w_t, h_t] + shift    (2)

where scale is the scaling factor with value range [0.8, 1.2], shift is a positive integer offset with value range [0, 20], and x, y take values in the outer peripheral region of the target box.

ii. Perform the following operation 150-500 times, 200 times in the present embodiment: draw random values of x, y, scale and shift within their value ranges; substitute them into formula (2) to compute the sample bounding box [x', y', w', h']; crop the sub-image I of F_t inside [x', y', w', h']; normalize it to a 32x32 image. After 200 repetitions, 200 negative sample pictures have been generated.
C) Merge the positive and negative samples into the training set D_t, where y_i ∈ {-1, 1} is the class label of each sample, -1 denoting a negative sample and 1 a positive sample.
2) Feature extraction. Extract the features of all sample images in D_t; the steps for extracting the feature of a sample {I_i, y_i} are as follows:

A) Initialize the feature of the sample {I_i, y_i}; the feature length is the number of elements of CP, i.e. L.

B) Compute each feature element from the gray values at the two points of the corresponding point pair, where I_i(p, q) denotes the gray pixel value at point (p, q) of image I_i.

C) As shown in Fig. 4, use the sparse random matrix A to reduce the dimension of this feature to 50-300, preferably 100 in the present embodiment, obtaining the new feature z as the product of A and the initial feature.
3) The model update unit uses the training set Z_t to update the classifier model f_t, i.e. it updates the model parameter w_t ∈ R^(1×101), where R denotes the real numbers. The steps are as follows:

Perform the following iterative steps for t = 1, ..., T:

i. Randomly select k samples from Z_t to form the subset A_t.

ii. From A_t, find the sample subset that satisfies the required condition.

iii. Compute the step size η_t = 1/(λt).

iv. Update the parameter a first time.

v. Update the parameter a second time, where min denotes the minimum of the elements and || · || denotes the 2-norm.

Output w_{T+1}, i.e. the model f_{t+1}.
This classifier initialization and update device mainly provides the online update function for the model: during target tracking, as the target shape, illumination and size change, the target appearance is continuously learned, improving the robustness of tracking.
The target tracking device is used to search for the target in a new image so that the search result is as consistent with the target as possible; the model classifier classifies the feature sample set obtained by the positive/negative sample image feature set construction module and obtains the best target box position. The target tracking device comprises a sample picture set construction unit, a feature extraction unit and a target tracking unit, shown at marks 5-7 in Fig. 6, wherein the sample picture set construction unit and the feature extraction unit use the same methods and flows as the corresponding units in the classifier initialization and update device. Its workflow is as follows:
1) Sample set extraction. From the neighborhood of the target bounding box B_t = [x_t, y_t, w_t, h_t], randomly extract 200 candidate sample pictures. The extraction method is translation and scaling. The steps are as follows:

i. Generation formula for a sample bounding box:

[x', y', w', h'] = scale·[x, y, w_t, h_t] + shift    (3)

where scale is the scaling factor with value range [0.8, 1.2], shift is a positive integer offset with value range [0, 20], and x, y take values in the neighborhood of the target box.

ii. Perform the following operation 200 times: draw random values of x, y, scale and shift within their value ranges; substitute them into formula (3) to compute the sample bounding box [x', y', w', h']; crop the sub-image I of F_{t+1} inside [x', y', w', h']; normalize I to a 32x32 image.

iii. Following step ii, a set of 200 sample pictures has been generated; the corresponding set of sample boxes in the image is also recorded.
2) Compute the feature of every picture in the set, with the same feature extraction method as shown in Fig. 3. These features constitute the sample set U_{t+1} to be classified.
3) Use the model f_{t+1} to classify all samples in U_{t+1}. Each sample z_i ∈ U_{t+1} produces a corresponding confidence Conf_{t+1}(z_i).
4) According to Conf_{t+1}, select the 10 bounding boxes with the highest confidence and use the weighted Meanshift method to generate a final target bounding box B_{t+1} = [x_{t+1}, y_{t+1}, w_{t+1}, h_{t+1}];
5) Set t = t+1; if t > N the tracking ends; otherwise return to the model update unit of the classifier initialization and update device.
To sum up, the implementation device provided by the invention expresses the target appearance with a compressed-sensing dimensionality reduction method based on binary features, which can effectively express target deformation and improves the resistance to occlusion and illumination, so that the target can be tracked robustly; at the same time it has the advantages of low memory consumption and a small amount of computation, reaching real-time tracking speed. It therefore has good value in practical applications.
The above is only a preferred embodiment of the present invention and is not intended to limit the present invention; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (10)
1. A single target tracking method, comprising:

a first step of parameter initialization, namely: in frame F_0, obtaining the rectangle B_0 = [x_0, y_0, w_0, h_0] of the initial position of target O_0, whose entries are respectively the abscissa of the top-left corner, the ordinate of the top-left corner, the box width and the box height; on an image block I of fixed width and height, generating a set CP of L random point pairs, the l-th pair consisting of the abscissa and ordinate of its first point and the abscissa and ordinate of its second point, with point pairs restricted to horizontal or vertical orientation; and generating a sparse random matrix A used for feature dimensionality reduction;

a second step of classifier initialization and update, in which t = 0, 1, ..., N-1 iterations are performed and the t-th frame F_t is processed through three stages: training set construction, feature extraction and model update;

a third step of target tracking, in which the model f_{t+1} is used to track the target in frame F_{t+1}, the tracking step comprising: prediction sample construction, feature extraction, sample classification, selection of the several samples with the highest confidence, generation of a final target bounding box, and output of the tracking box; t = t+1; if t > N the tracking ends, otherwise return to the second step.
2. The single target tracking method as claimed in claim 1, characterized in that the generated sparse random matrix A has H rows, with H in the range 50-300, and L columns; a uniform random function rand returns, with equal probability, an element of {1, 2, 3, ..., 2024}; if rand ∈ {1, 2, ..., 16}, then a_ij takes the first nonzero value; if rand ∈ {17, 18, ..., 32}, then a_ij takes the second nonzero value; otherwise a_ij = 0.
3. The single target tracking method as claimed in claim 1 or 2, characterized in that the training set construction comprises the following steps:

A) positive sample set: from the neighborhood of the target bounding box B_t = [x_t, y_t, w_t, h_t], randomly extract 50-500 positive sample pictures, with the following steps:

i. generation formula for a positive sample bounding box:

[x', y', w', h'] = scale·[x, y, w_t, h_t] + shift    (1)

where scale is the scaling factor with value range [0.8, 1.2], shift is a positive integer offset with value range [0, 20], and x, y take values in the neighborhood of the target box;

ii. perform the following operation 80-150 times: draw random values of x, y, scale and shift within their value ranges; substitute them into formula (1) to compute the sample bounding box [x', y', w', h']; crop the sub-image I of F_t inside [x', y', w', h']; normalize I to a fixed width and height; after 80-150 repetitions, 80-150 positive sample pictures have been generated;

B) negative sample set: from the outer peripheral region of B_t, randomly obtain 50-1000 negative samples; the extraction method is translation or scaling, with the following steps:

i. generation formula for a negative sample bounding box:

[x', y', w', h'] = scale·[x, y, w_t, h_t] + shift    (2)

where scale is the scaling factor with value range [0.8, 1.2], shift is a positive integer offset with value range [0, 20], and x, y take values in the outer peripheral region of the target box;

ii. perform the following operation 150-500 times: draw random values of x, y, scale and shift within their value ranges; substitute them into formula (2) to compute the sample bounding box [x', y', w', h']; crop the sub-image I of F_t inside [x', y', w', h']; normalize I to a fixed width and height; after 150-500 repetitions, 150-500 negative sample pictures have been generated;

C) merge the positive and negative samples into the training set D_t.
4. The single target tracking method as claimed in claim 1 or 2, characterized in that the feature extraction extracts the features of all sample images in D_t, and the steps for extracting the feature of a sample {I_i, y_i} are as follows:

A) initialize the feature of the sample {I_i, y_i}; the feature length is the number of elements of CP, i.e. L;

B) compute each feature element from the gray values at the two points of the corresponding point pair, where I_i(p, q) denotes the gray pixel value at point (p, q) of image I_i;

C) use the sparse random matrix A to reduce the dimension of this feature to 50-300, obtaining the new feature z as the product of A and the initial feature.
5. The single target tracking method as claimed in claim 1 or 2, characterized in that the model update uses the training set Z_t to update the classifier model f_t, i.e. to update the model parameter w_t ∈ R^(1×101), where R denotes the real numbers, with the following steps:

B) perform the following iterative steps for t = 1, ..., T:

i. randomly select k samples from Z_t to form a subset;

ii. from A_t, find the sample subset that satisfies the required condition;

iii. compute the step size η_t = 1/(λt);

iv. update the parameter a first time;

v. update the parameter a second time, where min denotes the minimum of the elements and || · || denotes the 2-norm;

C) output w_{T+1}.
6. The single target tracking method as claimed in claim 1 or 2, characterized in that the steps for using the model f_{t+1} to track the target in frame F_{t+1} are as follows:

1) sample set extraction: from the neighborhood of the target bounding box B_t = [x_t, y_t, w_t, h_t], randomly extract 150-300 candidate sample pictures; the extraction method is translation or scaling, with the following steps:

i. generation formula for a sample bounding box:

[x', y', w', h'] = scale·[x, y, w_t, h_t] + shift    (3)

where scale is the scaling factor with value range [0.8, 1.2], shift is a positive integer offset with value range [0, 20], and x, y take values in the neighborhood of the target box;

ii. perform the following operation 150-500 times: draw random values of x, y, scale and shift within their value ranges; substitute them into formula (3) to compute the sample bounding box [x', y', w', h']; crop the sub-image I of F_{t+1} inside [x', y', w', h']; normalize I to a fixed width and height (for example 32x32);

iii. following step ii, 150-500 sample pictures have been generated;

2) compute the feature of every picture; the method for computing the feature of a sample {I_i, 1} is as follows:

A) initialize the feature of the sample {I_i, 1}; the feature length is the number of elements of CP, i.e. L;

B) compute each feature element from the gray values at the two points of the corresponding point pair, where I_i(p, q) denotes the gray pixel value at point (p, q) of image I_i;

C) use the sparse random matrix A to reduce the dimension of this feature, the reduced dimension being the number of rows of matrix A, obtaining the new feature z as the product of A and the initial feature;

3) use the model f_{t+1} to classify all samples in U_{t+1}; each sample z_i ∈ U_{t+1} produces a corresponding confidence;

4) according to Conf_{t+1}, select the several bounding boxes with the highest confidence, this number preferably being 1/20 of the number of samples, and generate a final target bounding box B_{t+1} = [x_{t+1}, y_{t+1}, w_{t+1}, h_{t+1}];

5) t = t+1; if t > N the tracking ends; otherwise return to the second step.
7. An implementation device of a single target tracking method, comprising:

an image acquisition device, used to obtain a frame image from the video and to apply grayscale conversion and width/height normalization to the image;

a classifier initialization and update device, used to initialize the model and to update the model online;

a target tracking device, used to search for the target in a new image so that the search result is as consistent with the target as possible.
8. The implementation device of the single target tracking method as claimed in claim 7, characterized in that the image acquisition device comprises a random point pair generation unit and a sparse random matrix generation unit.
9. The implementation device of the single target tracking method as claimed in claim 7 or 8, characterized in that the classifier initialization and update device comprises a sample picture set construction unit, a feature extraction unit and a model update unit, wherein the sample picture set construction unit is used to construct the positive and negative sample sub-image sets from the sample pictures; the feature extraction unit is used to perform compressed-sensing feature extraction on the positive and negative sample images; and the model update unit updates the classifier model with the feature sample set obtained by the feature extraction unit.
10. The implementation device of the single target tracking method as claimed in claim 7 or 8, characterized in that the target tracking device comprises a sample picture set construction unit, a feature extraction unit and a target tracking unit, wherein the sample picture set construction unit is used to construct the positive and negative sample sub-image sets from the sample pictures; the feature extraction unit is used to perform compressed-sensing feature extraction on the positive and negative sample images; and the target tracking unit is used to classify all the samples, select the several bounding boxes with the highest confidence, and then cluster them to generate one best target bounding box.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310268834.6A CN103310466B (en) | 2013-06-28 | 2013-06-28 | A kind of monotrack method and implement device thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103310466A true CN103310466A (en) | 2013-09-18 |
CN103310466B CN103310466B (en) | 2016-02-17 |
Family
ID=49135643
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310268834.6A Expired - Fee Related CN103310466B (en) | 2013-06-28 | 2013-06-28 | A kind of monotrack method and implement device thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103310466B (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103632143A (en) * | 2013-12-05 | 2014-03-12 | 冠捷显示科技(厦门)有限公司 | Cloud computing combined object identifying system on basis of images |
CN103632382A (en) * | 2013-12-19 | 2014-03-12 | 中国矿业大学(北京) | Compressive sensing-based real-time multi-scale target tracking method |
CN103870839A (en) * | 2014-03-06 | 2014-06-18 | 江南大学 | Online video target multi-feature tracking method |
CN104008397A (en) * | 2014-06-09 | 2014-08-27 | 华侨大学 | Target tracking algorithm based on image set |
CN104463192A (en) * | 2014-11-04 | 2015-03-25 | 中国矿业大学(北京) | Dark environment video target real-time tracking method based on textural features |
CN104599289A (en) * | 2014-12-31 | 2015-05-06 | 安科智慧城市技术(中国)有限公司 | Target tracking method and device |
CN104820998A (en) * | 2015-05-27 | 2015-08-05 | 成都通甲优博科技有限责任公司 | Human body detection and tracking method and device based on unmanned aerial vehicle mobile platform |
CN105354252A (en) * | 2015-10-19 | 2016-02-24 | 联想(北京)有限公司 | Information processing method and apparatus |
CN105631462A (en) * | 2014-10-28 | 2016-06-01 | 北京交通大学 | Behavior identification method through combination of confidence and contribution degree on the basis of space-time context |
CN106570490A (en) * | 2016-11-15 | 2017-04-19 | 华南理工大学 | Pedestrian real-time tracking method based on fast clustering |
CN108492314A (en) * | 2018-01-24 | 2018-09-04 | 浙江科技学院 | Wireless vehicle tracking based on color characteristics and structure feature |
CN108537825A (en) * | 2018-03-26 | 2018-09-14 | 西南交通大学 | A kind of method for tracking target based on transfer learning Recurrent networks |
CN108830204A (en) * | 2018-06-01 | 2018-11-16 | 中国科学技术大学 | The method for detecting abnormality in the monitor video of target |
CN109472812A (en) * | 2018-09-29 | 2019-03-15 | 深圳市锦润防务科技有限公司 | A kind of method, system and the storage medium of target following template renewal |
CN109521419A (en) * | 2017-09-20 | 2019-03-26 | 比亚迪股份有限公司 | Method for tracking target and device based on Radar for vehicle |
CN109902623A (en) * | 2019-02-27 | 2019-06-18 | 浙江大学 | A kind of gait recognition method based on perception compression |
CN111479062A (en) * | 2020-04-15 | 2020-07-31 | 上海摩象网络科技有限公司 | Target object tracking frame display method and device and handheld camera |
CN112269401A (en) * | 2020-09-04 | 2021-01-26 | 河南大学 | Self-adaptive active sensor tracking method based on tracking precision and risk control |
CN113591607A (en) * | 2021-07-12 | 2021-11-02 | 辽宁科技大学 | Station intelligent epidemic prevention and control system and method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1794264A (en) * | 2005-12-31 | 2006-06-28 | 北京中星微电子有限公司 | Method and system for real-time detection and continuous tracking of human faces in video sequences |
CN101221620A (en) * | 2007-12-20 | 2008-07-16 | 北京中星微电子有限公司 | Human face tracking method |
US7840061B2 (en) * | 2007-02-28 | 2010-11-23 | Mitsubishi Electric Research Laboratories, Inc. | Method for adaptively boosting classifiers for object tracking |
KR20120129301A (en) * | 2011-05-19 | 2012-11-28 | 수원대학교산학협력단 | Method and apparatus for extracting and tracking moving objects |
2013-06-28: CN application CN201310268834.6A filed, later granted as patent CN103310466B; current status: not active (Expired - Fee Related)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1794264A (en) * | 2005-12-31 | 2006-06-28 | 北京中星微电子有限公司 | Method and system for real-time detection and continuous tracking of human faces in video sequences |
US7840061B2 (en) * | 2007-02-28 | 2010-11-23 | Mitsubishi Electric Research Laboratories, Inc. | Method for adaptively boosting classifiers for object tracking |
CN101221620A (en) * | 2007-12-20 | 2008-07-16 | 北京中星微电子有限公司 | Human face tracking method |
KR20120129301A (en) * | 2011-05-19 | 2012-11-28 | 수원대학교산학협력단 | Method and apparatus for extracting and tracking moving objects |
Non-Patent Citations (1)
Title |
---|
Kaihua Zhang et al.: "Real-Time Compressive Tracking", Computer Vision - ECCV 2012, 13 October 2012, pages 864-877, XP047019010 *
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103632143A (en) * | 2013-12-05 | 2014-03-12 | 冠捷显示科技(厦门)有限公司 | Cloud computing combined object identifying system on basis of images |
CN103632143B (en) * | 2013-12-05 | 2017-02-08 | 冠捷显示科技(厦门)有限公司 | Cloud computing combined object identifying system on basis of images |
CN103632382B (en) * | 2013-12-19 | 2016-06-22 | 中国矿业大学(北京) | Compressive sensing-based real-time multi-scale target tracking method |
CN103632382A (en) * | 2013-12-19 | 2014-03-12 | 中国矿业大学(北京) | Compressive sensing-based real-time multi-scale target tracking method |
CN103870839A (en) * | 2014-03-06 | 2014-06-18 | 江南大学 | Online video target multi-feature tracking method |
CN104008397A (en) * | 2014-06-09 | 2014-08-27 | 华侨大学 | Target tracking algorithm based on image set |
CN104008397B (en) * | 2014-06-09 | 2017-05-03 | 华侨大学 | Target tracking algorithm based on image set |
CN105631462A (en) * | 2014-10-28 | 2016-06-01 | 北京交通大学 | Behavior identification method through combination of confidence and contribution degree on the basis of space-time context |
CN104463192A (en) * | 2014-11-04 | 2015-03-25 | 中国矿业大学(北京) | Dark environment video target real-time tracking method based on textural features |
CN104463192B (en) * | 2014-11-04 | 2018-01-05 | 中国矿业大学(北京) | Dark environment video target real-time tracking method based on textural features |
CN104599289A (en) * | 2014-12-31 | 2015-05-06 | 安科智慧城市技术(中国)有限公司 | Target tracking method and device |
CN104599289B (en) * | 2014-12-31 | 2018-12-07 | 南京七宝机器人技术有限公司 | Target tracking method and device |
CN104820998A (en) * | 2015-05-27 | 2015-08-05 | 成都通甲优博科技有限责任公司 | Human body detection and tracking method and device based on unmanned aerial vehicle mobile platform |
CN104820998B (en) * | 2015-05-27 | 2019-11-26 | 成都通甲优博科技有限责任公司 | Human body detection and tracking method and device based on unmanned aerial vehicle mobile platform |
CN105354252A (en) * | 2015-10-19 | 2016-02-24 | 联想(北京)有限公司 | Information processing method and apparatus |
CN106570490B (en) * | 2016-11-15 | 2019-07-16 | 华南理工大学 | Pedestrian real-time tracking method based on fast clustering |
CN106570490A (en) * | 2016-11-15 | 2017-04-19 | 华南理工大学 | Pedestrian real-time tracking method based on fast clustering |
CN109521419A (en) * | 2017-09-20 | 2019-03-26 | 比亚迪股份有限公司 | Target tracking method and device based on vehicle radar |
CN108492314A (en) * | 2018-01-24 | 2018-09-04 | 浙江科技学院 | Vehicle tracking method based on color characteristics and structural features |
CN108492314B (en) * | 2018-01-24 | 2020-05-19 | 浙江科技学院 | Vehicle tracking method based on color characteristics and structural features |
CN108537825A (en) * | 2018-03-26 | 2018-09-14 | 西南交通大学 | Target tracking method based on transfer-learning recurrent networks |
CN108830204A (en) * | 2018-06-01 | 2018-11-16 | 中国科学技术大学 | Method for detecting abnormality in target-oriented surveillance video |
CN108830204B (en) * | 2018-06-01 | 2021-10-19 | 中国科学技术大学 | Method for detecting abnormality in target-oriented surveillance video |
CN109472812A (en) * | 2018-09-29 | 2019-03-15 | 深圳市锦润防务科技有限公司 | Method, system and storage medium for updating target tracking template |
CN109472812B (en) * | 2018-09-29 | 2021-11-02 | 深圳市锦润防务科技有限公司 | Method, system and storage medium for updating target tracking template |
CN109902623A (en) * | 2019-02-27 | 2019-06-18 | 浙江大学 | Gait recognition method based on perception compression |
CN111479062A (en) * | 2020-04-15 | 2020-07-31 | 上海摩象网络科技有限公司 | Target object tracking frame display method and device and handheld camera |
CN112269401A (en) * | 2020-09-04 | 2021-01-26 | 河南大学 | Self-adaptive active sensor tracking method based on tracking precision and risk control |
CN113591607A (en) * | 2021-07-12 | 2021-11-02 | 辽宁科技大学 | Station intelligent epidemic prevention and control system and method |
CN113591607B (en) * | 2021-07-12 | 2023-07-04 | 辽宁科技大学 | Station intelligent epidemic situation prevention and control system and method |
Also Published As
Publication number | Publication date |
---|---|
CN103310466B (en) | 2016-02-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103310466B (en) | Single-target tracking method and implementation device thereof | |
Wang et al. | Automatic building extraction from high-resolution aerial imagery via fully convolutional encoder-decoder network with non-local block | |
Enzweiler et al. | Monocular pedestrian detection: Survey and experiments | |
Zhan et al. | Face detection using representation learning | |
Timofte et al. | Multi-view traffic sign detection, recognition, and 3D localisation | |
Jun et al. | Robust face detection using local gradient patterns and evidence accumulation | |
Vishwakarma et al. | Hybrid classifier based human activity recognition using the silhouette and cells | |
Cheng et al. | Gait analysis for human identification through manifold learning and HMM | |
CN102609686B (en) | Pedestrian detection method | |
CN106529499A (en) | Fourier descriptor and gait energy image fusion feature-based gait identification method | |
CN105528794A (en) | Moving object detection method based on Gaussian mixture model and superpixel segmentation | |
CN103020986A (en) | Method for tracking moving object | |
CN109033954A (en) | Aerial handwriting recognition system and method based on machine vision | |
CN102496001A (en) | Method of video monitor object automatic detection and system thereof | |
CN103295016A (en) | Behavior recognition method based on depth and RGB information and multi-scale and multidirectional rank and level characteristics | |
Armanfard et al. | TED: A texture-edge descriptor for pedestrian detection in video sequences | |
Gao et al. | Extended compressed tracking via random projection based on MSERs and online LS-SVM learning | |
Wei et al. | Pedestrian detection in underground mines via parallel feature transfer network | |
Gao et al. | Robust visual tracking using exemplar-based detectors | |
CN104050460B (en) | The pedestrian detection method of multiple features fusion | |
Yılmaz et al. | Recurrent binary patterns and cnns for offline signature verification | |
Hou et al. | Human detection and tracking over camera networks: A review | |
Li et al. | Foldover features for dynamic object behaviour description in microscopic videos | |
Haselhoff et al. | An evolutionary optimized vehicle tracker in collaboration with a detection system | |
CN114612506B (en) | Simple, efficient and anti-interference high-altitude parabolic track identification and positioning method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 2017-11-06
Address after: Room 912, No. 504 Gonghe Road, Jingan District, Shanghai 200070
Patentee after: SHANGHAI QINGTIAN ELECTRONIC TECHNOLOGY Co.,Ltd.
Address before: Room 1306, Press Plaza, Shennan Road, Futian District, Shenzhen, Guangdong 518034
Patentee before: ANKE SMART CITY TECHNOLOGY (PRC) Co.,Ltd.
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2016-02-17