CN105117720A - Object scale self-adaption tracking method based on spatial-temporal model - Google Patents

Object scale self-adaption tracking method based on spatial-temporal model

Info

Publication number
CN105117720A
CN105117720A (application CN201510632255.4A; granted publication CN105117720B)
Authority
CN
China
Prior art keywords
target
scale
omega
space
template
Prior art date
Legal status
Granted
Application number
CN201510632255.4A
Other languages
Chinese (zh)
Other versions
CN105117720B (en)
Inventor
蒋敏
吴佼
孔军
柳晨华
皮昕鑫
Current Assignee
Jiangnan University
Original Assignee
Jiangnan University
Priority date
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN201510632255.4A priority Critical patent/CN105117720B/en
Publication of CN105117720A publication Critical patent/CN105117720A/en
Application granted granted Critical
Publication of CN105117720B publication Critical patent/CN105117720B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/42: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an object scale self-adaption tracking method based on a spatial-temporal model, comprising the following steps: start the video and read in the first frame, manually specifying the rectangular position of the tracked object; based on the context spatial region, initialize the spatial-temporal model and the multi-scale historical object template library; read in the next frame, build the spatial-temporal model iteratively, compute the confidence map, and estimate the object centre position; then, by comparison with the historical object template library, determine the optimal template scale, fix the rectangular position of the object, complete tracking of the object in the current frame, and update the scale parameters of the spatial-temporal model and the multi-scale historical object template library; finally, check whether the video has ended, reading in the next frame if it has not, and otherwise finishing tracking. The method copes effectively with changes in the apparent scale of the object under illumination change, partial occlusion and rapid motion, achieving robust tracking.

Description

Target scale adaptive tracking method based on a spatio-temporal model
Technical field:
The invention belongs to the field of machine vision, and relates in particular to a target scale adaptive tracking method based on a spatio-temporal model.
Background technology:
Target tracking is a high-level visual processing task in video surveillance systems: using techniques from computer vision and image/video processing, the targets in the image sequence captured by a camera are described, processed and analysed without human intervention, so that moving targets in a dynamic scene can be detected, tracked and identified, and the motion trajectories of interest extracted from the processed target features. Tracking and detection of moving targets are core research topics in video surveillance, and their quality directly affects the accuracy of higher-level processing in subsequent stages, such as target behaviour detection, event understanding and analysis [1].
In recent years, scholars at home and abroad have continually proposed new tracking algorithms and improvements for tracking under real-world disturbances such as target scale change, illumination variation and partial occlusion [2]. Many trackers, however, restrict their attention to the target itself and ignore the information surrounding it, which limits tracking accuracy. To strengthen accuracy, Zhang et al. [3] introduced the spatial context relation around the target, modelling the relationship between the target and its surroundings to predict the target position and improving robustness against partial occlusion, illumination variation and similar interference. But when the apparent scale of the target changes significantly, the features extracted by that method can no longer model the target effectively, and the tracking window easily drifts off the target or loses it entirely.
Addressing the loss of tracking accuracy caused by target scale change, the present invention introduces a multi-scale historical target template library built with a clustering idea, and proposes a target scale adaptive tracking algorithm based on a spatio-temporal model, achieving robust real-time tracking of targets under scale variation.
Summary of the invention:
The main purpose of the present invention is to propose a target scale adaptive tracking algorithm based on a spatio-temporal model that locates the target region rapidly and accurately under disturbances such as target scale change, illumination variation and partial occlusion.
To achieve this goal, the invention provides the following technical scheme:
Step 1: read in the first frame image $\mathrm{Image}_1$ and manually specify the tracking target rectangle $Z$;
Step 2: based on the context region $\Omega_c$, initialize the spatio-temporal model $H_1^{stc}$, letting
$$H_1^{stc}(X) = h_1^{sc}(X) = F^{-1}\left(\frac{F\left(e^{-\left|\frac{X-x^*}{\alpha}\right|^{\beta}}\right)}{F\left(I_1(X)\,\omega_\sigma(X-x^*)\right)}\right);$$
Step 3: compute the histogram of oriented gradients $f_{HOG\_Z}$ of the target region $Z_1$, let $m=\langle I_Z, f_{HOG\_Z}\rangle$ and $M=M\cup m$, initializing the multi-scale historical target template library $M$;
Step 4: read in the next frame image $\mathrm{Image}_{t+1}$ $(t\ge 1)$;
Step 5: iteratively build the spatio-temporal model $H_{t+1}^{stc} = (1-\eta)H_t^{stc} + \eta h_t^{sc}$ $(t\ge 1)$, where
$$h_t^{sc}(X) = F^{-1}\left(\frac{F\left(e^{-\left|\frac{X-x^*}{\alpha}\right|^{\beta}}\right)}{F\left(I_t(X)\,\omega_\sigma(X-x^*)\right)}\right);$$
Step 6: compute the confidence map $G_{t+1}$, letting $G_{t+1}(X) = H_{t+1}^{stc}(X) \otimes \left(I_{t+1}(X)\,\omega_\sigma(X-x_t^*)\right)$, and estimate the target centre $x_{t+1}^* = \arg\max_{X\in\Omega_c(x_t^*)} G_{t+1}(X)$;
Step 7: centred on the target centre estimated in step 6, extract samples at $n$ different scales, normalize each to an 8 × 8 image patch, compare them against the historical target template library $M$, determine the optimal template scale, and fix the target rectangle, completing tracking of the target in frame $t+1$;
Step 8: update the scale parameters of the spatio-temporal model according to the optimal scale estimated in step 7;
Step 9: update the multi-scale historical target template library $M$ according to the optimal target region $Z_{t+1}$ estimated in step 7;
Step 10: if the video has not ended, go to step 4 and read in the next frame; otherwise tracking ends.
Compared with the prior art, the present invention has the following beneficial effects:
1. Step 7 uses the optimal scale of the target to update the spatio-temporal model, achieving real-time tracking under scale variation.
2. Step 9 uses a clustering idea to build a dynamically updated multi-scale historical target template library, constructing a template space that represents the target state and improving the accuracy of that representation.
3. By combining the spatio-temporal model with the dynamically updated multi-scale historical target template library built by clustering and with optimal scale estimation, the present invention copes effectively with changes in apparent target scale when the target is disturbed by illumination variation, partial occlusion and rapid motion, achieving robust tracking.
The invention therefore has broad application prospects in public safety surveillance.
Brief description of the drawings:
Fig. 1: flow chart of the algorithm of the present invention;
Fig. 2: schematic diagram of the context region around the target;
Fig. 3: illustration of samples at different scales;
Fig. 4: illustration of the target template space;
Fig. 5: tracking results of the algorithm on the Singer1 image sequence;
Fig. 6: centre-position error curves of the algorithm on the Singer1 image sequence;
Fig. 7: tracking results of the algorithm on the David image sequence;
Fig. 8: centre-position error curves of the algorithm on the David image sequence;
Fig. 9: tracking results of the algorithm on the CarScale image sequence;
Fig. 10: centre-position error curves of the algorithm on the CarScale image sequence.
Embodiments:
To better explain the purpose, concrete steps and features of the present invention, the invention is described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, the target scale adaptive tracking method based on a spatio-temporal model proposed by the present invention mainly comprises the following steps:
Step 1: read in the first frame image $\mathrm{Image}_1$ and manually specify the tracking target rectangle $Z$;
Step 2: based on the context region $\Omega_c$, initialize the spatio-temporal model $H_1^{stc}$, letting
$$H_1^{stc}(X) = h_1^{sc}(X) = F^{-1}\left(\frac{F\left(e^{-\left|\frac{X-x^*}{\alpha}\right|^{\beta}}\right)}{F\left(I_1(X)\,\omega_\sigma(X-x^*)\right)}\right);$$
Step 3: compute the histogram of oriented gradients $f_{HOG\_Z}$ of the target region $Z_1$, let $m=\langle I_Z, f_{HOG\_Z}\rangle$ and $M=M\cup m$, initializing the multi-scale historical target template library $M$;
Step 4: read in the next frame image $\mathrm{Image}_{t+1}$ $(t\ge 1)$;
Step 5: iteratively build the spatio-temporal model $H_{t+1}^{stc} = (1-\eta)H_t^{stc} + \eta h_t^{sc}$ $(t\ge 1)$, where
$$h_t^{sc}(X) = F^{-1}\left(\frac{F\left(e^{-\left|\frac{X-x^*}{\alpha}\right|^{\beta}}\right)}{F\left(I_t(X)\,\omega_\sigma(X-x^*)\right)}\right);$$
Step 6: compute the confidence map $G_{t+1}$, letting $G_{t+1}(X) = H_{t+1}^{stc}(X) \otimes \left(I_{t+1}(X)\,\omega_\sigma(X-x_t^*)\right)$, and estimate the target centre $x_{t+1}^* = \arg\max_{X\in\Omega_c(x_t^*)} G_{t+1}(X)$;
Step 7: centred on the target centre estimated in step 6, extract samples at $n$ different scales, normalize each to an 8 × 8 image patch, compare them against the historical target template library $M$, determine the optimal template scale, and fix the target rectangle, completing tracking of the target in frame $t+1$;
Step 8: update the scale parameters of the spatio-temporal model according to the optimal scale estimated in step 7;
Step 9: update the multi-scale historical target template library $M$ according to the optimal target region $Z_{t+1}$ estimated in step 7;
Step 10: if the video has not ended, go to step 4 and read in the next frame; otherwise tracking ends.
In the above technical scheme, the target rectangle $Z$ in step 1 is shown as the solid box in Fig. 2, where $x^*$ is the geometric centre of the target region and $W_Z$ and $H_Z$ denote the width and height of the target region $Z$ respectively.
In the above technical scheme, the context region $\Omega_c$ in step 2, shown as the dashed box in Fig. 2, represents the target together with its surroundings. $\Omega_c$ is centred on the geometric centre $x^*$ of the target region, and its width and height are defined as
$$W_{\Omega_c} = 2W_Z, \qquad H_{\Omega_c} = 2H_Z$$
In the above technical scheme, the spatio-temporal model $H_1^{stc}$ in step 2 is initialized as follows:
1. Define the context feature set of the target, $X^c = \{c(m)=(I(m),m) \mid m\in\Omega_c(x^*)\}$, where $\Omega_c(x^*)$ denotes the context region centred on $x^*$, $m$ denotes a pixel in $\Omega_c(x^*)$, and $I(m)$ denotes the grey value of pixel $m$;
2. Build the spatial context model characterizing the target and its surroundings:
$$h_1^{sc}(X) = F^{-1}\left(\frac{F\left(e^{-\left|\frac{X-x^*}{\alpha}\right|^{\beta}}\right)}{F\left(I_1(X)\,\omega_\sigma(X-x^*)\right)}\right)$$
where $X\in\mathbb{R}^2$ ranges over the pixels of the context region $\Omega_c(x^*)$; $F(\cdot)$ denotes the Fourier transform, $F^{-1}(\cdot)$ the inverse Fourier transform, and $\|\cdot\|$ the Euclidean distance. Recommended parameter values are $\alpha=2.25$ and $\beta=1$. $\omega_\sigma(\cdot)$ is a Gaussian function, defined as
$$\omega_\sigma(X-x_1^*) = e^{-\frac{\|X-x_1^*\|^2}{\sigma_1^2}}, \qquad \sigma_1 = \frac{W_{\Omega_c}+H_{\Omega_c}}{2}$$
3. Let $H_1^{stc} = h_1^{sc}$, completing the initialization of the spatio-temporal model.
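For concreteness, the initialization above can be sketched in Python with NumPy. This is a minimal illustration rather than the patent's implementation: it assumes a grayscale frame as a 2-D array, ignores boundary handling, and adds a small epsilon to keep the Fourier-domain division finite; all function and variable names are illustrative.

```python
import numpy as np

def learn_spatial_context(frame_gray, center, ctx_size, alpha=2.25, beta=1.0, eps=1e-8):
    """Learn a spatial context model h^sc over the context region Omega_c.
    frame_gray: 2-D grayscale array; center: (row, col) of x*;
    ctx_size: (H, W) of Omega_c. Boundary handling is ignored."""
    H, W = ctx_size
    yy, xx = np.meshgrid(np.arange(H) - H / 2.0,
                         np.arange(W) - W / 2.0, indexing="ij")
    dist = np.sqrt(xx ** 2 + yy ** 2)                # ||X - x*||
    conf = np.exp(-np.abs(dist / alpha) ** beta)     # desired confidence prior
    sigma = (W + H) / 2.0                            # sigma_1 = (W_c + H_c) / 2
    weight = np.exp(-dist ** 2 / sigma ** 2)         # omega_sigma(X - x*)
    r0, c0 = int(center[0]) - H // 2, int(center[1]) - W // 2
    patch = frame_gray[r0:r0 + H, c0:c0 + W].astype(float)
    prior = patch * weight                           # I(X) * omega_sigma(X - x*)
    # h^sc = F^-1( F(conf) / F(prior) ); eps stabilizes the division
    return np.real(np.fft.ifft2(np.fft.fft2(conf) / (np.fft.fft2(prior) + eps)))
```

The returned array plays the role of $h_1^{sc}$, and hence of $H_1^{stc}$, for the first frame.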
In the above technical scheme, the multi-scale historical target template library $M$ in step 3 is initialized as follows:
1. Let $M=\varnothing$;
2. Convert the target region $Z_1$ to grey scale and normalize $Z_1$ to an 8 × 8-pixel image patch $I_Z$;
3. Compute the histogram of oriented gradients $f_{HOG\_Z}$ of $I_Z$;
4. Let $m=\langle I_Z, f_{HOG\_Z}\rangle$ and $M=M\cup m$;
5. The initialization of the multi-scale historical target template library $M$ is complete.
In the above technical scheme, the iterative construction of the spatio-temporal model $H_{t+1}^{stc}$ in step 5 is
$$H_{t+1}^{stc} = (1-\eta)H_t^{stc} + \eta h_t^{sc} \quad (t\ge 1)$$
where $H_t^{stc}$ and $H_{t+1}^{stc}$ denote the spatio-temporal model at times $t$ and $t+1$ respectively, and $\eta$ is the update learning rate ($\eta=0.075$ is suggested). $h_t^{sc}$ denotes the spatial context model at time $t$, characterizing the target and its surroundings, computed as
$$h_t^{sc}(X) = F^{-1}\left(\frac{F\left(e^{-\left|\frac{X-x^*}{\alpha}\right|^{\beta}}\right)}{F\left(I_t(X)\,\omega_\sigma(X-x^*)\right)}\right)$$
where $X\in\mathbb{R}^2$ ranges over the pixels of the context region $\Omega_c(x^*)$; $F(\cdot)$ denotes the Fourier transform, $F^{-1}(\cdot)$ the inverse Fourier transform, and $\|\cdot\|$ the Euclidean distance. Recommended parameter values are $\alpha=2.25$ and $\beta=1$. $\omega_\sigma(\cdot)$ is a Gaussian function, defined as
$$\omega_\sigma(X-x_t^*) = e^{-\frac{\|X-x_t^*\|^2}{\sigma_t^2}}$$
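The temporal update in step 5 is a single exponential-smoothing step; a one-function sketch (names illustrative, not from the patent):

```python
import numpy as np

def update_stc_model(H_stc, h_sc, eta=0.075):
    """One smoothing step of the spatio-temporal model:
    H_{t+1}^stc = (1 - eta) * H_t^stc + eta * h_t^sc,
    with eta = 0.075 as suggested in the description."""
    return (1.0 - eta) * np.asarray(H_stc) + eta * np.asarray(h_sc)
```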
In the above technical scheme, the confidence map $G_{t+1}$ in step 6 is computed as
$$G_{t+1}(X) = H_{t+1}^{stc}(X) \otimes \left(I_{t+1}(X)\,\omega_\sigma(X-x_t^*)\right)$$
where $\otimes$ denotes the convolution operator and $X\in\mathbb{R}^2$ ranges over the pixels of the context region $\Omega_c$. $G_{t+1}(X)$ gives the confidence at time $t+1$ of each pixel of the time-$t$ context region, i.e. the probability that the point lies in the target region $Z$. The point of highest probability is the likely target centre at time $t+1$:
$$x_{t+1}^* = \arg\max_{X\in\Omega_c(x_t^*)} G_{t+1}(X)$$
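A sketch of this localization step under the same assumptions as above: a grayscale NumPy frame, the target well inside the image, and circular FFT convolution standing in for $\otimes$ (shift conventions and boundary effects are ignored; all names are illustrative).

```python
import numpy as np

def locate_target(H_stc, frame_gray, prev_center, ctx_size, sigma):
    """Compute G_{t+1} = H^stc convolved with I * omega_sigma via the
    Fourier domain and return the argmax as the new target centre."""
    Hc, Wc = ctx_size
    r0, c0 = int(prev_center[0]) - Hc // 2, int(prev_center[1]) - Wc // 2
    patch = frame_gray[r0:r0 + Hc, c0:c0 + Wc].astype(float)
    yy, xx = np.meshgrid(np.arange(Hc) - Hc / 2.0,
                         np.arange(Wc) - Wc / 2.0, indexing="ij")
    weight = np.exp(-(xx ** 2 + yy ** 2) / sigma ** 2)  # omega_sigma(X - x_t*)
    # circular convolution via the convolution theorem
    G = np.real(np.fft.ifft2(np.fft.fft2(H_stc) * np.fft.fft2(patch * weight)))
    dr, dc = np.unravel_index(np.argmax(G), G.shape)    # x_{t+1}* = argmax G
    return (r0 + int(dr), c0 + int(dc)), G
```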
In the above technical scheme, the optimal template scale in step 7 is determined as follows:
1. Centred on the target centre $x_{t+1}^*$ estimated in step 6, extract samples at $n$ different scales and normalize each to an 8 × 8 image patch (as shown in Fig. 3); $n=20$ is proposed in the present invention;
2. Build the sample space to be matched, $D=\{d_j\}_{j\in[1,n]}$, where $d_j$ denotes the $j$-th sample, $f_{HOG}^{d_j}$ denotes its histogram of oriented gradients, and $s_j$ denotes its scale;
3. Cross-compare the HOG similarity of the $n$ samples at different scales against the $k$ templates in the multi-scale historical target template library $M$, obtaining the similarity matrix $S_{DM}\in\mathbb{R}^{n\times k}$:
$$S_{DM} = \left\{ s(m_i,d_j) \mid s(m_i,d_j) = 1-\left\|f_{HOG}^{m_i}-f_{HOG}^{d_j}\right\|_2,\ m_i\in M,\ d_j\in D\right\}$$
The sample with the highest similarity to any template is the optimal estimate of the target region $Z_{t+1}$ at time $t+1$, and its corresponding scale $s$ is the optimal scale.
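A sketch of this scale search. The patent uses HOG features; here a crude single-cell orientation histogram stands in for $f_{HOG}$, and nearest-neighbour subsampling stands in for the 8 × 8 normalization, so the code illustrates only the matching rule $s = 1 - \|f^{m}-f^{d}\|_2$; all names are illustrative.

```python
import numpy as np

def orientation_histogram(patch, bins=9):
    """Crude stand-in for f_HOG: one L2-normalised histogram of
    gradient orientations over the whole 8x8 patch."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

def best_scale(frame_gray, center, base_size, templates, scales):
    """Crop a candidate region at each scale, shrink it to 8x8 by
    nearest-neighbour sampling, and keep the scale whose feature is most
    similar (s = 1 - ||f_m - f_d||_2) to any template in the library."""
    best_s, best_sim = None, -np.inf
    for s in scales:
        h = max(2, int(round(base_size[0] * s)))
        w = max(2, int(round(base_size[1] * s)))
        r0, c0 = int(center[0]) - h // 2, int(center[1]) - w // 2
        crop = frame_gray[max(r0, 0):r0 + h, max(c0, 0):c0 + w]
        if crop.size == 0:
            continue
        ri = (np.arange(8) * crop.shape[0] // 8).clip(0, crop.shape[0] - 1)
        ci = (np.arange(8) * crop.shape[1] // 8).clip(0, crop.shape[1] - 1)
        feat = orientation_histogram(crop[np.ix_(ri, ci)])
        sim = max(1.0 - np.linalg.norm(np.asarray(m) - feat) for m in templates)
        if sim > best_sim:
            best_s, best_sim = s, sim
    return best_s
```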
In the above technical scheme, the scale parameters of the spatio-temporal model $H^{stc}$ in step 8 are updated as follows, where $s$ is the optimal scale estimated in step 7:
1. Width and height of the target region $Z_{t+1}$:
$$W_Z(t+1) = W_Z(t)\cdot s, \qquad H_Z(t+1) = H_Z(t)\cdot s$$
2. Width and height of the target context region $\Omega_c$:
$$W_{\Omega_c}(t+1) = W_{\Omega_c}(t)\cdot s, \qquad H_{\Omega_c}(t+1) = H_{\Omega_c}(t)\cdot s$$
3. Scale parameter $\sigma_t$ of the Gaussian function $\omega_\sigma(X-x_t^*)=e^{-\frac{\|X-x_t^*\|^2}{\sigma_t^2}}$:
$$\sigma_{t+1} = \sigma_t\cdot s$$
In the above technical scheme, the multi-scale historical target template library $M$ in step 9 is updated as follows:
1. Convert the optimal target region $Z_{t+1}$ estimated in step 7 to grey scale and normalize $Z_{t+1}$ to an 8 × 8-pixel image patch $I_Z$;
2. Compute the histogram of oriented gradients $f_{HOG\_Z}$ of $I_Z$;
3. Let $m=\langle I_Z, f_{HOG\_Z}\rangle$ and $M=M\cup m$;
4. If $t\le k$ ($k=10$ is recommended), the update of the multi-scale historical target template library $M$ is complete and the procedure ends; otherwise, go to step 5;
5. Compute the similarity matrix $S_M$:
$$S_M = \left\{ s(m_i,m_j) \mid s(m_i,m_j) = 1-\left\|f_{HOG}^{m_i}-f_{HOG}^{m_j}\right\|_2,\ m_i\in M,\ m_j\in M\right\}$$
where $f_{HOG}^{m_i}$ and $f_{HOG}^{m_j}$ denote the histograms of oriented gradients of templates $m_i$ and $m_j$;
6. Find the template pair of minimum similarity, $(m_{min1},m_{min2}) = \arg\min_{m_i,m_j\in M} s(m_i,m_j)$;
7. Compute for each of $m_{min1}$ and $m_{min2}$ the summed similarity to the other templates, $S_{sum\_p} = \sum_{m_j\in M} s(m_p,m_j)$, $m_p\in\{m_{min1},m_{min2}\}$;
8. If $S_{sum\_min1}\ge S_{sum\_min2}$, adjust the template space by $M=M-m_{min1}$; otherwise $M=M-m_{min2}$;
9. The update of the multi-scale historical target template library $M$ is complete.
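The library update above can be sketched as follows, holding the library as a list of feature vectors; the $t\le k$ test is replaced by a simple size check, and all names are illustrative.

```python
import numpy as np

def update_template_library(library, new_feat, k=10):
    """Append the new template; once the library exceeds k entries, find
    the minimum-similarity pair (m_min1, m_min2) and remove the one with
    the larger summed similarity to the rest, as in steps 6-8 above.
    `library` is a list of 1-D feature vectors."""
    library = list(library) + [np.asarray(new_feat, dtype=float)]
    if len(library) <= k:              # stand-in for the t <= k test
        return library
    n = len(library)
    # pairwise similarity s(m_i, m_j) = 1 - ||f_i - f_j||_2
    S = np.array([[1.0 - np.linalg.norm(library[i] - library[j])
                   for j in range(n)] for i in range(n)])
    iu = np.triu_indices(n, 1)
    p = int(np.argmin(S[iu]))          # least-similar template pair
    i, j = int(iu[0][p]), int(iu[1][p])
    # summed similarity to the other templates (self-similarity excluded)
    sum_i, sum_j = S[i].sum() - S[i, i], S[j].sum() - S[j, j]
    drop = i if sum_i >= sum_j else j
    return [f for t, f in enumerate(library) if t != drop]
```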
In the above technical scheme, after $K\ge k$ updates of the multi-scale historical target template library $M$ in step 9, the number of templates in the library, $|M|$, remains $k$, as shown in Fig. 4; the $k$ most representative target states since time $t=1$ are retained, and the library continues to be updated dynamically as tracking proceeds.
The effect of the target scale adaptive tracking method based on the spatio-temporal model is shown in Figs. 5 to 10. Fig. 5 shows the tracking results of the algorithm on the Singer1 image sequence, in which the target undergoes strong illumination variation and a continually shrinking scale. Fig. 6 plots the error between the tracked centre position and the ground-truth centre on the Singer1 sequence. Fig. 7 shows the tracking results on the David sequence, in which the target undergoes illumination variation, in-plane rotation, non-rigid deformation and irregular scale change; Fig. 8 plots the corresponding centre-position errors. Fig. 9 shows the tracking results on the CarScale sequence, with large changes in apparent target scale, partial occlusion and rapid motion; Fig. 10 plots the corresponding centre-position errors. Across these three sequences, the experimental results, presented both as qualitative tracking figures and as quantitative error curves, verify the precision and robustness of the algorithm.
This patent exploits the spatial context information between the target and its surroundings, together with the sequential relation of the target along the time axis, to establish a spatio-temporal model. Target features are extracted effectively, avoiding the drift caused by interference such as partial occlusion and illumination variation. A clustering idea with a screening rule is used to dynamically maintain the most representative templates and build the template space, representing the target state accurately. Similarity between templates and samples is analysed with histogram-of-oriented-gradients features, further improving matching accuracy. Finally, the optimal target scale obtained from matching is used to update the spatio-temporal model, realizing real-time tracking under scale variation and improving the robustness of the algorithm. Experiments verify that the method proposed in this patent copes effectively with changes in apparent target scale when the target is disturbed by illumination variation, partial occlusion and other factors.
The specific embodiments of the present invention have been described above in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; within the knowledge possessed by a person of ordinary skill in the art, various changes may also be made without departing from the concept of the invention.
[1] Felzenszwalb, P. F., Girshick, R. B., McAllester, D., et al. Object Detection with Discriminatively Trained Part-Based Models [J]. Pattern Analysis and Machine Intelligence, 2010, 32(9): 1627-1645.
[2] Wu Yi, Lim Jongwoo, Yang M-H. Object Tracking Benchmark [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, in press, 2014.
[3] Zhang Kai-hua, Zhang Lei, Yang M-H. Fast Visual Tracking via Dense Spatio-Temporal Context Learning [C]. Proceedings of the 13th European Conference on Computer Vision, 2014.

Claims (9)

1. A target scale adaptive tracking method based on a spatio-temporal model, characterized in that it comprises the following steps:
Step 1: read in the first frame image $\mathrm{Image}_1$ and manually specify the tracking target rectangle $Z$;
Step 2: based on the context region $\Omega_c$, initialize the spatio-temporal model $H_1^{stc}$, letting
$$H_1^{stc}(X) = h_1^{sc}(X) = F^{-1}\left(\frac{F\left(e^{-\left|\frac{X-x^*}{\alpha}\right|^{\beta}}\right)}{F\left(I_1(X)\,\omega_\sigma(X-x^*)\right)}\right);$$
Step 3: compute the histogram of oriented gradients $f_{HOG\_Z}$ of the target region $Z_1$, let $m=\langle I_Z, f_{HOG\_Z}\rangle$ and $M=M\cup m$, initializing the multi-scale historical target template library $M$;
Step 4: read in the next frame image $\mathrm{Image}_{t+1}$ $(t\ge 1)$;
Step 5: iteratively build the spatio-temporal model $H_{t+1}^{stc} = (1-\eta)H_t^{stc} + \eta h_t^{sc}$ $(t\ge 1)$, where
$$h_t^{sc}(X) = F^{-1}\left(\frac{F\left(e^{-\left|\frac{X-x^*}{\alpha}\right|^{\beta}}\right)}{F\left(I_t(X)\,\omega_\sigma(X-x^*)\right)}\right);$$
Step 6: compute the confidence map $G_{t+1}$, letting $G_{t+1}(X) = H_{t+1}^{stc}(X) \otimes \left(I_{t+1}(X)\,\omega_\sigma(X-x_t^*)\right)$, and estimate the target centre $x_{t+1}^* = \arg\max_{X\in\Omega_c(x_t^*)} G_{t+1}(X)$;
Step 7: centred on the target centre estimated in step 6, extract samples at $n$ different scales, normalize each to an 8 × 8 image patch, compare them against the historical target template library $M$, determine the optimal template scale, and fix the target rectangle, completing tracking of the target in frame $t+1$;
Step 8: update the scale parameters of the spatio-temporal model according to the optimal scale estimated in step 7;
Step 9: update the multi-scale historical target template library $M$ according to the optimal target region $Z_{t+1}$ estimated in step 7;
Step 10: if the video has not ended, go to step 4 and read in the next frame; otherwise tracking ends.
2. The target scale adaptive tracking method based on a spatio-temporal model according to claim 1, characterized in that the spatio-temporal model $H_1^{stc}$ in step 2 is initialized as follows:
1) define the context feature set of the target, $X^c = \{c(m)=(I(m),m) \mid m\in\Omega_c(x^*)\}$, where $\Omega_c(x^*)$ denotes the context region centred on $x^*$, $m$ denotes a pixel in $\Omega_c(x^*)$, and $I(m)$ denotes the grey value of pixel $m$;
2) build the spatial context model characterizing the target and its surroundings,
$$h_1^{sc}(X) = F^{-1}\left(\frac{F\left(e^{-\left|\frac{X-x^*}{\alpha}\right|^{\beta}}\right)}{F\left(I_1(X)\,\omega_\sigma(X-x^*)\right)}\right)$$
where $X\in\mathbb{R}^2$ ranges over the pixels of the context region $\Omega_c(x^*)$; $F(\cdot)$ denotes the Fourier transform, $F^{-1}(\cdot)$ the inverse Fourier transform, and $\|\cdot\|$ the Euclidean distance; recommended parameter values are $\alpha=2.25$ and $\beta=1$; $\omega_\sigma(\cdot)$ is a Gaussian function, defined as
$$\omega_\sigma(X-x_1^*) = e^{-\frac{\|X-x_1^*\|^2}{\sigma_1^2}}, \qquad \sigma_1 = \frac{W_{\Omega_c}+H_{\Omega_c}}{2}$$
3) let $H_1^{stc} = h_1^{sc}$, completing the initialization of the spatio-temporal model.
3. The target scale adaptive tracking method based on a spatio-temporal model according to claim 1, characterized in that the multi-scale historical target template library $M$ in step 3 is initialized as follows:
1) let $M=\varnothing$;
2) convert the target region $Z_1$ to grey scale and normalize $Z_1$ to an 8 × 8-pixel image patch $I_Z$;
3) compute the histogram of oriented gradients $f_{HOG\_Z}$ of $I_Z$;
4) let $m=\langle I_Z, f_{HOG\_Z}\rangle$ and $M=M\cup m$;
5) the initialization of the multi-scale historical target template library $M$ is complete.
4. The target scale adaptive tracking method based on a spatio-temporal model according to claim 1, characterized in that the iterative construction of the spatio-temporal model $H_{t+1}^{stc}$ in step 5 is
$$H_{t+1}^{stc} = (1-\eta)H_t^{stc} + \eta h_t^{sc} \quad (t\ge 1)$$
where $H_t^{stc}$ and $H_{t+1}^{stc}$ denote the spatio-temporal model at times $t$ and $t+1$ respectively, and $\eta$ is the update learning rate ($\eta=0.075$ is suggested); $h_t^{sc}$ denotes the spatial context model at time $t$, characterizing the target and its surroundings, computed as
$$h_t^{sc}(X) = F^{-1}\left(\frac{F\left(e^{-\left|\frac{X-x^*}{\alpha}\right|^{\beta}}\right)}{F\left(I_t(X)\,\omega_\sigma(X-x^*)\right)}\right)$$
where $X\in\mathbb{R}^2$ ranges over the pixels of the context region $\Omega_c(x^*)$; $F(\cdot)$ denotes the Fourier transform, $F^{-1}(\cdot)$ the inverse Fourier transform, and $\|\cdot\|$ the Euclidean distance; recommended parameter values are $\alpha=2.25$ and $\beta=1$; $\omega_\sigma(\cdot)$ is a Gaussian function, defined as
$$\omega_\sigma(X-x_t^*) = e^{-\frac{\|X-x_t^*\|^2}{\sigma_t^2}}.$$
5. The target scale adaptive tracking method based on a spatio-temporal model according to claim 1, characterized in that the confidence map $G_{t+1}$ in step 6 is computed as
$$G_{t+1}(X) = H_{t+1}^{stc}(X) \otimes \left(I_{t+1}(X)\,\omega_\sigma(X-x_t^*)\right)$$
where $\otimes$ denotes the convolution operator and $X\in\mathbb{R}^2$ ranges over the pixels of the context region $\Omega_c$; $G_{t+1}(X)$ gives the confidence at time $t+1$ of each pixel of the time-$t$ context region, i.e. the probability that the point lies in the target region $Z$; the point of highest probability is the likely target centre at time $t+1$:
$$x_{t+1}^* = \arg\max_{X\in\Omega_c(x_t^*)} G_{t+1}(X).$$
6. The target scale adaptive tracking method based on a spatio-temporal model according to claim 1, characterized in that the optimal template scale in step 7 is determined as follows:
1) centred on the target centre $x_{t+1}^*$ estimated in step 6, extract samples at $n$ different scales and normalize each to an 8 × 8 image patch (as shown in Fig. 3); $n=20$ is proposed in the present invention;
2) build the sample space to be matched, $D=\{d_j\}_{j\in[1,n]}$, where $d_j$ denotes the $j$-th sample, $f_{HOG}^{d_j}$ denotes its histogram of oriented gradients, and $s_j$ denotes its scale;
3) cross-compare the HOG similarity of the $n$ samples at different scales against the $k$ templates in the multi-scale historical target template library $M$, obtaining the similarity matrix $S_{DM}\in\mathbb{R}^{n\times k}$:
$$S_{DM} = \left\{ s(m_i,d_j) \mid s(m_i,d_j) = 1-\left\|f_{HOG}^{m_i}-f_{HOG}^{d_j}\right\|_2,\ m_i\in M,\ d_j\in D\right\}$$
the sample with the highest similarity to any template is the optimal estimate of the target region $Z_{t+1}$ at time $t+1$, and its corresponding scale $s$ is the optimal scale.
7. The target scale adaptive tracking method based on a spatio-temporal model according to claim 1, characterized in that the scale parameters of the spatio-temporal model $H^{stc}$ in step 8 are updated as follows:
1) width and height of the target region $Z_{t+1}$:
$$W_Z(t+1) = W_Z(t)\cdot s, \qquad H_Z(t+1) = H_Z(t)\cdot s;$$
2) width and height of the target context region $\Omega_c$:
$$W_{\Omega_c}(t+1) = W_{\Omega_c}(t)\cdot s, \qquad H_{\Omega_c}(t+1) = H_{\Omega_c}(t)\cdot s;$$
3) scale parameter $\sigma_t$ of the Gaussian function $\omega_\sigma(X-x_t^*)=e^{-\frac{\|X-x_t^*\|^2}{\sigma_t^2}}$:
$$\sigma_{t+1} = \sigma_t\cdot s.$$
8. The target scale adaptive tracking method based on a spatio-temporal model according to claim 1, characterized in that the multi-scale historical target template library $M$ in step 9 is updated as follows:
1) convert the optimal target region $Z_{t+1}$ estimated in step 7 to grey scale and normalize $Z_{t+1}$ to an 8 × 8-pixel image patch $I_Z$;
2) compute the histogram of oriented gradients $f_{HOG\_Z}$ of $I_Z$;
3) let $m=\langle I_Z, f_{HOG\_Z}\rangle$ and $M=M\cup m$;
4) if $t\le k$ ($k=10$ is recommended), the update of the multi-scale historical target template library $M$ is complete and the algorithm ends; otherwise, go to step 5;
5) compute the similarity matrix $S_M$:
$$S_M = \left\{ s(m_i,m_j) \mid s(m_i,m_j) = 1-\left\|f_{HOG}^{m_i}-f_{HOG}^{m_j}\right\|_2,\ m_i\in M,\ m_j\in M\right\}$$
where $f_{HOG}^{m_i}$ and $f_{HOG}^{m_j}$ denote the histograms of oriented gradients of templates $m_i$ and $m_j$;
6) find the template pair of minimum similarity, $(m_{min1},m_{min2}) = \arg\min_{m_i,m_j\in M} s(m_i,m_j)$;
7) compute for each of $m_{min1}$ and $m_{min2}$ the summed similarity to the other templates, $S_{sum\_p} = \sum_{m_j\in M} s(m_p,m_j)$, $m_p\in\{m_{min1},m_{min2}\}$;
8) if $S_{sum\_min1}\ge S_{sum\_min2}$, adjust the template space by $M=M-m_{min1}$; otherwise $M=M-m_{min2}$;
9) the update of the multi-scale historical target template library $M$ is complete.
9. The target scale adaptive tracking method based on a spatio-temporal model according to claim 1, characterized in that, after $K\ge k$ updates of the multi-scale historical target template library $M$ in step 9, the number of templates in the library, $|M|$, remains $k$; the $k$ most representative target states since time $t=1$ are retained, and the library continues to be updated dynamically as tracking proceeds.
CN201510632255.4A 2015-09-29 2015-09-29 Target scale adaptive tracking method based on space-time model Active CN105117720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510632255.4A CN105117720B (en) 2015-09-29 2015-09-29 Target scale adaptive tracking method based on space-time model

Publications (2)

Publication Number Publication Date
CN105117720A true CN105117720A (en) 2015-12-02
CN105117720B CN105117720B (en) 2018-08-28

Family

ID=54665703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510632255.4A Active CN105117720B (en) 2015-09-29 2015-09-29 Target scale adaptive tracking method based on space-time model

Country Status (1)

Country Link
CN (1) CN105117720B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7362885B2 (en) * 2004-04-20 2008-04-22 Delphi Technologies, Inc. Object tracking and eye state identification method
CN101458816A (en) * 2008-12-19 2009-06-17 西安电子科技大学 Target matching method in digital video target tracking
CN101673403A (en) * 2009-10-10 2010-03-17 安防制造(中国)有限公司 Target following method in complex interference scene
CN103218825A (en) * 2013-03-15 2013-07-24 华中科技大学 Quick detection method of spatio-temporal interest points with invariable scale

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WU, JIAO: "Research on Robust Target Tracking Algorithms Based on a Spatio-Temporal Model", China Master's Theses Full-text Database, Information Science and Technology *
JIANG, MIN et al.: "Scale-Adaptive Tracking Algorithm Based on a Spatio-Temporal Model", Journal of Chinese Computer Systems *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678806B (en) * 2016-01-07 2019-01-08 中国农业大学 A kind of live pig action trail automatic tracking method differentiated based on Fisher
CN105654518B (en) * 2016-03-23 2018-10-23 上海博康智能信息技术有限公司 A kind of trace template adaptive approach
CN105654518A (en) * 2016-03-23 2016-06-08 上海博康智能信息技术有限公司 Trace template self-adaption method based on variance estimation
CN105931273A (en) * 2016-05-04 2016-09-07 江南大学 Local sparse representation object tracking method based on LO regularization
CN105931273B (en) * 2016-05-04 2019-01-25 江南大学 Local rarefaction representation method for tracking target based on L0 regularization
CN106127798A (en) * 2016-06-13 2016-11-16 重庆大学 Dense space-time contextual target tracking based on adaptive model
CN106127798B (en) * 2016-06-13 2019-02-22 重庆大学 Dense space-time contextual target tracking based on adaptive model
CN106127811A (en) * 2016-06-30 2016-11-16 西北工业大学 Target scale adaptive tracking method based on context
CN106251364A (en) * 2016-07-19 2016-12-21 北京博瑞爱飞科技发展有限公司 Method for tracking target and device
CN106485732A (en) * 2016-09-09 2017-03-08 南京航空航天大学 A kind of method for tracking target of video sequence
CN106485732B (en) * 2016-09-09 2019-04-16 南京航空航天大学 A kind of method for tracking target of video sequence
CN107346548A (en) * 2017-07-06 2017-11-14 电子科技大学 A kind of tracking for electric transmission line isolator
CN107680119A (en) * 2017-09-05 2018-02-09 燕山大学 A kind of track algorithm based on space-time context fusion multiple features and scale filter
CN108093153A (en) * 2017-12-15 2018-05-29 深圳云天励飞技术有限公司 Method for tracking target, device, electronic equipment and storage medium
CN109903281A (en) * 2019-02-28 2019-06-18 中科创达软件股份有限公司 It is a kind of based on multiple dimensioned object detection method and device
CN109903281B (en) * 2019-02-28 2021-07-27 中科创达软件股份有限公司 Multi-scale-based target detection method and device
CN111311641A (en) * 2020-02-25 2020-06-19 重庆邮电大学 Target tracking control method for unmanned aerial vehicle
CN111311641B (en) * 2020-02-25 2023-06-09 重庆邮电大学 Unmanned aerial vehicle target tracking control method
CN113516713A (en) * 2021-06-18 2021-10-19 广西财经学院 Unmanned aerial vehicle self-adaptive target tracking method based on pseudo twin network
CN113516713B (en) * 2021-06-18 2022-11-22 广西财经学院 Unmanned aerial vehicle self-adaptive target tracking method based on pseudo twin network
CN113436228A (en) * 2021-06-22 2021-09-24 中科芯集成电路有限公司 Anti-blocking and target recapturing method of correlation filtering target tracking algorithm
CN113436228B (en) * 2021-06-22 2024-01-23 中科芯集成电路有限公司 Anti-shielding and target recapturing method of related filtering target tracking algorithm

Also Published As

Publication number Publication date
CN105117720B (en) 2018-08-28

Similar Documents

Publication Publication Date Title
CN105117720A (en) Object scale self-adaption tracking method based on spatial-temporal model
CN108470332B (en) Multi-target tracking method and device
CN109961009B (en) Pedestrian detection method, system, device and storage medium based on deep learning
Yang et al. Robust superpixel tracking
US9798923B2 (en) System and method for tracking and recognizing people
CN109145766B (en) Model training method and device, recognition method, electronic device and storage medium
CN108288051B (en) Pedestrian re-recognition model training method and device, electronic equipment and storage medium
CN110084836B (en) Target tracking method based on deep convolution characteristic hierarchical response fusion
CN103793926B (en) Method for tracking target based on sample reselection procedure
CN109544592B (en) Moving object detection algorithm for camera movement
CN106778687A (en) Method for viewing points detecting based on local evaluation and global optimization
CN103514441A (en) Facial feature point locating tracking method based on mobile platform
WO2022218396A1 (en) Image processing method and apparatus, and computer readable storage medium
CN112766218B (en) Cross-domain pedestrian re-recognition method and device based on asymmetric combined teaching network
CN116935447B (en) Self-adaptive teacher-student structure-based unsupervised domain pedestrian re-recognition method and system
CN103886325A (en) Cyclic matrix video tracking method with partition
CN107909053B (en) Face detection method based on hierarchical learning cascade convolution neural network
CN111046856A (en) Parallel pose tracking and map creating method based on dynamic and static feature extraction
CN111680753A (en) Data labeling method and device, electronic equipment and storage medium
CN108898623A (en) Method for tracking target and equipment
CN114240997A (en) Intelligent building online cross-camera multi-target tracking method
CN103413149A (en) Method for detecting and identifying static target in complicated background
CN104036528A (en) Real-time distribution field target tracking method based on global search
CN117292338B (en) Vehicle accident identification and analysis method based on video stream analysis
CN103996207A (en) Object tracking method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant