CN105117720B - Target scale adaptive tracking method based on space-time model - Google Patents

Target scale adaptive tracking method based on space-time model

Info

Publication number
CN105117720B
CN105117720B (application CN201510632255.4A; also published as CN105117720A)
Authority
CN
China
Prior art keywords
target
space
scale
time model
indicate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510632255.4A
Other languages
Chinese (zh)
Other versions
CN105117720A (en
Inventor
蒋敏
吴佼
孔军
柳晨华
皮昕鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangnan University
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN201510632255.4A priority Critical patent/CN105117720B/en
Publication of CN105117720A publication Critical patent/CN105117720A/en
Application granted granted Critical
Publication of CN105117720B publication Critical patent/CN105117720B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target-scale-adaptive tracking method based on a space-time model, comprising the following steps: when the video starts, read in the first frame image and manually specify the tracking target rectangle; then, based on the contextual spatial domain, initialize the space-time model and the multi-scale historical target template library; then read in the next frame, iteratively build the space-time model, compute the confidence map, and estimate the target centre position; then, according to the historical target template library, judge the optimal template scale, determine the target rectangle position, complete tracking for the current frame, and update the scale parameters of the space-time model and the multi-scale historical target template library; finally, detect whether the video has ended, continuing with the next frame if not, and otherwise stopping tracking. The present invention copes with changes in the target's apparent scale and achieves robust tracking under interference such as illumination variation, partial occlusion, and fast motion.

Description

Target scale adaptive tracking method based on space-time model
Technical field:
The invention belongs to the field of machine vision, and more particularly relates to a target-scale-adaptive tracking method based on a space-time model.
Background technology:
Target tracking is a high-level visual processing task in video surveillance systems: using computer vision, image/video processing, and related techniques, the targets in an image sequence captured by a camera are described, processed, and analysed without human intervention, so that moving targets in a dynamic scene can be detected, tracked, and identified, and trajectories of interest obtained from the processed and analysed target features. Moving-target tracking and detection are important research topics in video surveillance; the quality of tracking and detection in video images directly affects the accuracy of higher-level processing in subsequent stages, such as target behaviour detection and event understanding and analysis [1].
In recent years, scholars at home and abroad have continually proposed new tracking algorithms and improvements for tracking under interference such as target scale change, illumination variation, and partial occlusion in real environments [2]. However, many tracking methods confine the research object to the target itself and ignore the information surrounding the target, which limits their accuracy. To improve tracking accuracy, Zhang Kai-hua et al. [3] introduced the spatial context around the target, building a model from the relationship between the target and its surrounding information to predict the target position, which improves the algorithm's robustness to interference such as partial occlusion and illumination variation. However, when the apparent scale of the target changes significantly, the features extracted by this method cannot establish an effective model; the tracking window tends to drift off the target, and the target may even be lost.
To address the loss of tracking accuracy caused by target scale change, the present invention introduces a multi-scale historical target template library built with a clustering-inspired rule and proposes a target-scale-adaptive tracking algorithm based on a space-time model, achieving robust real-time tracking under scale variation.
Summary of the invention:
The main object of the present invention is to propose a target-scale-adaptive tracking algorithm based on a space-time model that locates the target region quickly and accurately under interference such as target scale change, illumination variation, and partial occlusion.
To achieve the above object, the present invention provides the following technical solution:
Step 1: read in the first frame image Image1 and manually specify the tracking target rectangle position Z;
Step 2: based on the contextual spatial domain Ωc, initialize the space-time model h^stc;
Step 3: compute the oriented-gradient feature histogram f_HOG_Z of the target region Z1, let m = <I_Z, f_HOG_Z> and M = M ∪ m, initializing the multi-scale historical target template library M;
Step 4: read in the next frame image Image_{t+1} (t ≥ 1);
Step 5: iteratively build the space-time model h^stc_{t+1};
Step 6: compute the confidence map G_{t+1} and estimate the target centre position x*_{t+1};
Step 7: centred on the target centre point estimated in step 6, extract samples at n different scales, normalize each to an 8 × 8 image block, compare against the historical target template library M, judge the optimal template scale, and determine the target rectangle position, completing target tracking for frame t+1;
Step 8: according to the optimal scale estimated in step 7, update the scale parameters of the space-time model h^stc;
Step 9: according to the optimal target region Z_{t+1} estimated in step 7, update the multi-scale historical target template library M;
Step 10: if the video has not ended, go to step 4 and read in the next frame; otherwise tracking ends.
Compared with the prior art, the invention has the following advantages:
1. By updating the space-time model with the optimal target scale estimated in step 7, real-time target tracking under scale variation is achieved.
2. By dynamically updating the multi-scale historical target template library with a clustering-inspired rule in step 9, a template space that best represents the target's states is constructed, improving the accuracy with which the target state is represented.
3. By combining the space-time model, the dynamically updated multi-scale historical target template library built with the clustering-inspired rule, and optimal-scale estimation, the present invention copes with changes in the target's apparent scale under interference such as illumination variation, partial occlusion, and fast motion, achieving robust tracking.
Therefore, the present invention will find wide application in public-safety surveillance.
Description of the drawings:
Fig. 1 Flowchart of the algorithm of the present invention;
Fig. 2 Schematic diagram of the contextual spatial domain built around the target;
Fig. 3 Illustration of samples at different scales;
Fig. 4 Illustration of the target template space;
Fig. 5 Tracking results of the algorithm on the Singer1 image sequence;
Fig. 6 Error-curve analysis of the algorithm on the Singer1 image sequence;
Fig. 7 Tracking results of the algorithm on the David image sequence;
Fig. 8 Error-curve analysis of the algorithm on the David image sequence;
Fig. 9 Tracking results of the algorithm on the CarScale image sequence;
Fig. 10 Error-curve analysis of the algorithm on the CarScale image sequence.
Specific embodiments:
In order to better illustrate the purpose, specific steps, and features of the present invention, the invention is described in further detail below with reference to the accompanying drawings:
Referring to Fig. 1, the target-scale-adaptive tracking method based on a space-time model proposed by the present invention mainly comprises the following steps:
Step 1: read in the first frame image Image1 and manually specify the tracking target rectangle position Z;
Step 2: based on the contextual spatial domain Ωc, initialize the space-time model h^stc;
Step 3: compute the oriented-gradient feature histogram f_HOG_Z of the target region Z1, let m = <I_Z, f_HOG_Z> and M = M ∪ m, initializing the multi-scale historical target template library M;
Step 4: read in the next frame image Image_{t+1} (t ≥ 1);
Step 5: iteratively build the space-time model h^stc_{t+1};
Step 6: compute the confidence map G_{t+1} and estimate the target centre position x*_{t+1};
Step 7: centred on the target centre point estimated in step 6, extract samples at n different scales, normalize each to an 8 × 8 image block, compare against the historical target template library M, judge the optimal template scale, and determine the target rectangle position, completing target tracking for frame t+1;
Step 8: according to the optimal scale estimated in step 7, update the scale parameters of the space-time model h^stc;
Step 9: according to the optimal target region Z_{t+1} estimated in step 7, update the multi-scale historical target template library M;
Step 10: if the video has not ended, go to step 4 and read in the next frame; otherwise tracking ends.
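Expressed as control flow, steps 4 to 10 above amount to a per-frame loop. The following Python sketch is an illustration only, not part of the patent: every step function is an injected placeholder, and only the loop structure of the method is shown.

```python
def track(frames, init_center, init_scale,
          estimate_center, estimate_scale, update_model, update_library):
    """Control-flow skeleton of steps 4-10: for each new frame, estimate the
    target centre (steps 5-6), judge the optimal scale against the template
    library (step 7), then update the model and the library (steps 8-9)."""
    center, scale = init_center, init_scale
    results = []
    for frame in frames:                                # step 4 / step 10 loop
        center = estimate_center(frame, center)         # steps 5-6
        scale = estimate_scale(frame, center, scale)    # step 7
        update_model(scale)                             # step 8
        update_library(frame, center, scale)            # step 9
        results.append((center, scale))
    return results
```

In use, the four callables would wrap the confidence-map, scale-judgment, and library-update computations described below; here they are arbitrary stand-ins.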
In the above technical solution, the target rectangle position Z in step 1 is shown as the solid box in Fig. 2; x* is the geometric centre of the target region, and W_Z and H_Z denote the width and height of the target region Z, respectively.
In the above technical solution, the contextual spatial domain Ωc in step 2 is shown as the dashed box in Fig. 2 and represents the target together with its surrounding information. The contextual spatial domain Ωc is centred on the target-region geometric centre x*, and the width and height of Ωc are defined as
In the above technical solution, the initialization of the space-time model h^stc in step 2 is as follows:
1. Define the feature set of the target's contextual spatial domain Ωc as X^c = {c(m) = (I(m), m) | m ∈ Ωc(x*)}, where Ωc(x*) denotes the contextual spatial domain of the target centred on x*, m denotes a pixel in Ωc(x*), and I(m) denotes the grey value of pixel m;
2. Build the spatial-domain model h^sc characterizing the target and its surroundings,
where X ∈ R² denotes the pixel set in the contextual spatial domain Ωc(x*), F(·) denotes the Fourier transform, F⁻¹(·) the inverse Fourier transform, and ‖·‖ the Euclidean distance. The recommended parameter values are α = 2.25 and β = 1. ω_σ(·) is a Gaussian function, defined as
3. Let the space-time model at the first frame equal the spatial-domain model just built, completing the initialization of h^stc.
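As a concrete illustration of sub-steps 1 to 3, the spatial model h^sc can be obtained in the frequency domain, following the dense spatio-temporal context formulation the patent builds on [3]. The sketch below is a hedged reading of the formulas, not the patent's exact implementation: the confidence prior exp(-(‖x - x*‖/α)^β) and the small ε added for numerical stability are choices of this sketch.

```python
import numpy as np

def gaussian_weight(shape, center, sigma):
    """Gaussian weighting w_sigma(x - x*) over the context region."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def init_spatial_model(gray, center, sigma, alpha=2.25, beta=1.0):
    """h^sc = F^{-1}( F(confidence prior) / F(I * w_sigma) ), solved per pixel
    in the Fourier domain (deconvolution)."""
    ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    dist = np.sqrt((ys - center[0]) ** 2 + (xs - center[1]) ** 2)
    conf = np.exp(-((dist / alpha) ** beta))            # target location prior
    prior = gray * gaussian_weight(gray.shape, center, sigma)
    eps = 1e-8                                          # avoid division by zero
    h_sc = np.fft.ifft2(np.fft.fft2(conf) / (np.fft.fft2(prior) + eps))
    return np.real(h_sc)
```

Per the text, the space-time model of the first frame would then be set to this h^sc.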
In the above technical solution, the initialization of the multi-scale historical target template library M in step 3 is as follows:
1. Initialize M as the empty library;
2. Convert the target region Z1 to a grey-scale image and normalize Z1 to an 8 × 8-pixel image block I_Z;
3. Compute the oriented-gradient feature histogram f_HOG_Z of I_Z;
4. Let m = <I_Z, f_HOG_Z> and M = M ∪ m;
5. The initialization of the multi-scale historical target template library M is complete.
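Sub-steps 2 to 4 (grey-scale conversion, 8 × 8 normalization, and an oriented-gradient histogram) can be sketched as follows. This is illustrative only: the patent does not specify the resize method or the histogram binning, so nearest-neighbour subsampling and a 9-bin unsigned-orientation histogram are assumptions of this sketch.

```python
import numpy as np

def make_template(patch):
    """Normalize a grey-scale target region to 8x8 and compute a simple
    oriented-gradient histogram (a stand-in for the patent's f_HOG_Z)."""
    h, w = patch.shape
    ys = np.arange(8) * h // 8                # nearest-neighbour subsampling
    xs = np.arange(8) * w // 8
    small = patch[np.ix_(ys, xs)].astype(float)
    gy, gx = np.gradient(small)
    mag = np.hypot(gx, gy)                    # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=9, range=(0.0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    if norm > 0:
        hist = hist / norm                    # L2-normalize the histogram
    return small, hist
```

The template m would then pair the 8 × 8 block with its histogram, as in m = <I_Z, f_HOG_Z>.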
In the above technical solution, the iterative construction of the space-time model h^stc_{t+1} in step 5 is:
where h^stc_t and h^stc_{t+1} denote the space-time models at times t and t+1, respectively, and η is the update learning rate; η = 0.075 is recommended. h^sc_t denotes the spatial-domain model of the target at time t, characterizing the target and its surroundings, and is computed as follows:
where X ∈ R² denotes the pixel set in the contextual spatial domain Ωc(x*), F(·) denotes the Fourier transform, F⁻¹(·) the inverse Fourier transform, and ‖·‖ the Euclidean distance. The recommended parameter values are α = 2.25 and β = 1. ω_σ(·) is a Gaussian function, defined as
In the above technical solution, the confidence map G_{t+1} in step 6 is computed as:
where ⊗ denotes the convolution operator, X ∈ R² denotes the pixel set in the contextual spatial domain Ωc, and G_{t+1}(X) denotes the confidence, at time t+1, of each pixel within the contextual spatial domain Ωc of time t; its value indicates the probability that the point falls inside the target region Z. The point with the highest probability is taken as the target centre at time t+1, x*_{t+1}.
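The convolution in step 6 is typically evaluated in the frequency domain. Below is a minimal sketch under the assumptions that the model and frame are already cropped to the same context-region grid and that circular convolution via FFT is acceptable; neither assumption comes from the patent text.

```python
import numpy as np

def confidence_map(h_stc, gray, center, sigma):
    """G_{t+1} = h^stc (x) (I . w_sigma), evaluated with FFTs; returns the
    confidence map and its argmax, taken as the new target centre x*_{t+1}."""
    ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    wsig = np.exp(-d2 / (2.0 * sigma ** 2))             # Gaussian weighting
    G = np.real(np.fft.ifft2(np.fft.fft2(h_stc) * np.fft.fft2(gray * wsig)))
    new_center = np.unravel_index(np.argmax(G), G.shape)
    return G, new_center
```

The returned centre is the highest-confidence pixel, matching the rule that the point with maximal probability is the target centre at time t+1.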
In the above technical solution, the optimal template scale in step 7 is judged as follows:
1. Centred on the target centre x*_{t+1} estimated in step 6, extract samples at n different scales and normalize each to an 8 × 8 image block (as shown in Fig. 3); n = 20 is recommended in the present invention;
2. Build the sample space to be matched D = {d_j}, j ∈ [1, n], where d_j denotes the j-th sample with its corresponding oriented-gradient feature histogram, and s_j denotes the scale of the corresponding sample;
3. Cross-compute the oriented-gradient histogram similarities between the n samples at different scales and the k templates in the multi-scale historical target template library M, obtaining the similarity matrix S_DM ∈ R^{n×k};
the sample with the highest similarity to any template is taken as the optimal estimate of the target region Z_{t+1} at time t+1, and its scale s is the optimal scale.
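Sub-steps 2 and 3 reduce to a similarity-matrix lookup. The patent does not name its histogram-similarity measure, so cosine similarity is an assumption of this sketch; candidate and template features are plain vectors here.

```python
import numpy as np

def best_scale(cand_feats, scales, template_feats):
    """Build the similarity matrix S_DM in R^{n x k} between n candidate
    samples and k library templates, and return the scale of the
    best-matching candidate (cosine similarity assumed)."""
    D = np.asarray(cand_feats, dtype=float)
    M = np.asarray(template_feats, dtype=float)
    Dn = D / (np.linalg.norm(D, axis=1, keepdims=True) + 1e-12)
    Mn = M / (np.linalg.norm(M, axis=1, keepdims=True) + 1e-12)
    S = Dn @ Mn.T                                   # similarity matrix S_DM
    j, _ = np.unravel_index(np.argmax(S), S.shape)  # row of the best candidate
    return scales[j], S
```

The highest entry of S_DM picks both the best sample (its row gives the optimal scale s) and the template it matched.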
In the above technical solution, the scale parameters of the space-time model h^stc in step 8 are updated as follows:
1. Width and height of the target region Z_{t+1}:
W_Z(t+1) = W_Z(t) * s
H_Z(t+1) = H_Z(t) * s;
2. The width and height of the target's contextual spatial domain Ωc are scaled by s accordingly;
3. The scale parameter of the Gaussian function ω_σ(·):
σ_{t+1} = σ_t * s.
In the above technical solution, the multi-scale historical target template library M is updated in step 9 as follows:
1. Convert the optimal target region Z_{t+1} estimated in step 7 to a grey-scale image and normalize Z_{t+1} to an 8 × 8-pixel image block I_Z;
2. Compute the oriented-gradient feature histogram f_HOG_Z of I_Z;
3. Let m = <I_Z, f_HOG_Z> and M = M ∪ m;
4. If t ≤ k (k = 10 is recommended), the update of the multi-scale historical target template library M is complete and the procedure ends; otherwise go to sub-step 5;
5. Compute the similarity matrix S_M,
where f_HOG_mi and f_HOG_mj denote the oriented-gradient feature histograms of templates m_i and m_j;
6. Obtain the template pair (m_min1, m_min2) with the minimum similarity;
7. Compute separately, for m_min1 and m_min2, the sum of similarities to the other templates: S_sum_p = Σ_{m_j ∈ M} s(m_p, m_j), m_p ∈ {m_min1, m_min2};
8. If S_sum_min1 ≥ S_sum_min2, adjust the template space as M = M - m_min1; otherwise M = M - m_min2;
9. The update of the multi-scale historical target template library M is complete.
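The pruning rule in sub-steps 5 to 8 can be sketched as follows: after appending the new template, find the pair with the lowest mutual similarity and drop the member of that pair whose total similarity to the rest of the library is larger. Cosine similarity over the templates' feature vectors is an assumption of this sketch.

```python
import numpy as np

def prune_library(feats, k=10):
    """Return indices of templates to keep, enforcing |M| <= k with the
    clustering-style rule of step 9 (sub-steps 5-8)."""
    F = np.asarray(feats, dtype=float)
    n = len(F)
    if n <= k:
        return list(range(n))
    Fn = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)
    S = Fn @ Fn.T                           # similarity matrix S_M
    iu = np.triu_indices(n, 1)
    flat = np.argmin(S[iu])                 # pair with minimum similarity
    i, j = iu[0][flat], iu[1][flat]
    sums = S.sum(axis=0) - np.diag(S)       # each member's similarity to others
    drop = i if sums[i] >= sums[j] else j   # remove the larger-sum member
    return [t for t in range(n) if t != drop]
```

Because the library grows by one template per frame, a single removal per call suffices to keep |M| at k.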
In the above technical solution, after K ≥ k updates the multi-scale historical target template library M behaves as shown in Fig. 4: the number of templates |M| stays at k, and the library retains the k most representative target states since time t = 1; as tracking continues, the library keeps being updated dynamically.
In the above technical solution, the effect of the target-scale-adaptive tracking method based on the space-time model is shown in Figs. 5 to 10. Fig. 5 shows the visual tracking results of the algorithm on the Singer1 image sequence, where the target undergoes interference such as drastic illumination variation and a continual decrease in scale. Fig. 6 is the error-curve analysis, on the Singer1 sequence, between the tracked centre position and the ground-truth centre. Fig. 7 shows the tracking results on the David image sequence, where the target undergoes interference such as illumination variation, in-plane rotation, non-rigid deformation, and irregular scale change. Fig. 8 is the corresponding error-curve analysis for the David sequence. Fig. 9 shows the tracking results on the CarScale image sequence, where the target undergoes interference such as significant change in apparent scale, partial occlusion, and fast motion. Fig. 10 is the corresponding error-curve analysis for the CarScale sequence. Across these three sequence experiments, the results are presented both as qualitative tracking snapshots and as quantitative error curves, verifying the accuracy and robustness of the algorithm.
This patent establishes a space-time model using the spatial context information around the target together with the target's serial relationship along the time axis. It extracts target features effectively, avoiding drift caused by interference such as partial occlusion and illumination variation. A clustering-inspired screening rule dynamically keeps the most representative templates, building a template space that accurately represents the target's state. Oriented-gradient histogram features are used to analyse the similarity between templates and samples, further improving matching accuracy. Finally, the space-time model is updated with the optimal target scale obtained from matching, realizing real-time tracking under scale variation and improving the robustness of the algorithm. Experiments show that the method proposed in this patent copes with changes in the target's apparent scale under interference such as illumination variation and partial occlusion.
The specific embodiments of the present invention have been described above in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; various changes may be made within the knowledge of a person skilled in the art without departing from the concept of the invention.
[1] Felzenszwalb P. F., Girshick R. B., McAllester D., et al. Object Detection with Discriminatively Trained Part-Based Models [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(9): 1627-1645.
[2] Wu Y., Lim J., Yang M.-H. Object Tracking Benchmark [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, in press, 2014.
[3] Zhang K., Zhang L., Yang M.-H. Fast Visual Tracking via Dense Spatio-Temporal Context Learning [C]. Proceedings of the 13th European Conference on Computer Vision, 2014.

Claims (8)

1. A target-scale-adaptive tracking method based on a space-time model, characterized in that it comprises the following steps:
Step 1: read in the first frame image Image1 and manually specify the tracking target rectangle position Z;
Step 2: based on the contextual spatial domain Ωc, initialize the space-time model h^stc;
Step 3: compute the oriented-gradient feature histogram f_HOG_Z of the target region Z1, let m = <I_Z, f_HOG_Z> and M = M ∪ m, initializing the multi-scale historical target template library M;
Step 4: read in the next frame image Image_{t+1} (t ≥ 1);
Step 5: iteratively build the space-time model h^stc_{t+1};
Step 6: compute the confidence map G_{t+1} and estimate the target centre x*_{t+1};
Step 7: centred on the target centre point estimated in step 6, extract samples at n different scales, normalize each to an 8 × 8 image block, compare against the historical target template library M, judge the optimal template scale, and determine the target rectangle position, completing target tracking for frame t+1;
Step 8: according to the optimal scale estimated in step 7, update the scale parameters of the space-time model h^stc;
Step 9: according to the optimal target region Z_{t+1} estimated in step 7, update the multi-scale historical target template library M;
Step 10: if the video has not ended, go to step 4 and read in the next frame; otherwise tracking ends;
where Ωc denotes the contextual spatial domain, η is the update learning rate, and x* is the geometric centre of the target region;
the space-time model h^stc in step 2 is initialized as follows:
1) define the feature set of the target's contextual spatial domain Ωc as X^c = {c(m) = (I(m), m) | m ∈ Ωc(x*)}, where Ωc(x*) denotes the contextual spatial domain of the target centred on x*, m denotes a pixel in Ωc(x*), and I(m) denotes the grey value of pixel m;
2) build the spatial-domain model h^sc characterizing the target and its surroundings,
where X ∈ R² denotes the pixel set in the contextual spatial domain Ωc(x*), F(·) denotes the Fourier transform, F⁻¹(·) the inverse Fourier transform, and ‖·‖ the Euclidean distance; parameters α = 2.25, β = 1; ω_σ(·) is a Gaussian function, defined as
3) let the space-time model at the first frame equal the spatial-domain model just built, completing the initialization of h^stc;
where W_Ωc denotes the width of Ωc and H_Ωc its height.
2. The target-scale-adaptive tracking method based on a space-time model according to claim 1, characterized in that the multi-scale historical target template library M in step 3 is initialized as follows:
1) initialize M as the empty library;
2) convert the target region Z1 to a grey-scale image and normalize Z1 to an 8 × 8-pixel image block I_Z;
3) compute the oriented-gradient feature histogram f_HOG_Z of I_Z;
4) let m = <I_Z, f_HOG_Z> and M = M ∪ m;
5) the initialization of the multi-scale historical target template library M is complete.
3. The target-scale-adaptive tracking method based on a space-time model according to claim 1, characterized in that the iterative construction of the space-time model h^stc_{t+1} in step 5 is:
where h^stc_t and h^stc_{t+1} denote the space-time models at times t and t+1, respectively, and η is the update learning rate, η = 0.075; h^sc_t denotes the spatial-domain model of the target at time t, characterizing the target and its surroundings, computed as follows:
where X ∈ R² denotes the pixel set in the contextual spatial domain Ωc(x*), F(·) denotes the Fourier transform, F⁻¹(·) the inverse Fourier transform, and ‖·‖ the Euclidean distance; parameters α = 2.25, β = 1; ω_σ(·) is a Gaussian function, defined as
4. The target-scale-adaptive tracking method based on a space-time model according to claim 1, characterized in that the confidence map G_{t+1} in step 6 is computed as:
where ⊗ denotes the convolution operator, X ∈ R² denotes the pixel set in the contextual spatial domain Ωc, and G_{t+1}(X) denotes the confidence, at time t+1, of each pixel within the contextual spatial domain Ωc of time t; its value indicates the probability that the point falls inside the target region Z; the point with the highest probability is the target centre at time t+1, x*_{t+1}.
5. The target-scale-adaptive tracking method based on a space-time model according to claim 1, characterized in that the optimal template scale in step 7 is judged as follows:
1) centred on the target centre x*_{t+1} estimated in step 6, extract samples at n different scales and normalize each to an 8 × 8 image block, with parameter n = 20;
2) build the sample space to be matched D = {d_j}, j ∈ [1, n], where d_j denotes the j-th sample with its corresponding oriented-gradient feature histogram, and s_j denotes the scale of the corresponding sample;
3) cross-compute the oriented-gradient histogram similarities between the n samples at different scales and the k templates in the multi-scale historical target template library M, obtaining the similarity matrix S_DM ∈ R^{n×k};
the sample with the highest similarity to any template is the optimal estimate of the target region Z_{t+1} at time t+1, and its scale s is the optimal scale.
6. The target-scale-adaptive tracking method based on a space-time model according to claim 5, characterized in that the scale parameters of the space-time model h^stc in step 8 are updated as follows:
1) width and height of the target region Z_{t+1}:
W_Z(t+1) = W_Z(t) * s
H_Z(t+1) = H_Z(t) * s;
2) width and height of the target's contextual spatial domain Ωc:
3) scale parameter σ_t of the Gaussian function ω_σ(·):
σ_{t+1} = σ_t * s.
7. The target-scale-adaptive tracking method based on a space-time model according to claim 1, characterized in that the multi-scale historical target template library M is updated in step 9 as follows:
1) convert the optimal target region Z_{t+1} estimated in step 7 to a grey-scale image and normalize Z_{t+1} to an 8 × 8-pixel image block I_Z;
2) compute the oriented-gradient feature histogram f_HOG_Z of I_Z;
3) let m = <I_Z, f_HOG_Z> and M = M ∪ m;
4) if t ≤ k, with k = 10, the update of the multi-scale historical target template library M is complete and the procedure ends; otherwise go to sub-step 5;
5) compute the similarity matrix S_M,
where f_HOG_mi and f_HOG_mj denote the oriented-gradient feature histograms of templates m_i and m_j;
6) obtain the template pair (m_min1, m_min2) with the minimum similarity;
7) compute separately, for m_min1 and m_min2, the sum of similarities to the other templates,
m_p ∈ {m_min1, m_min2};
8) if S_sum_min1 ≥ S_sum_min2, adjust the template space as M = M - m_min1; otherwise M = M - m_min2;
9) the update of the multi-scale historical target template library M is complete.
8. The target-scale-adaptive tracking method based on a space-time model according to claim 1, characterized in that after K ≥ k updates of the multi-scale historical target template library M in step 9, the number of templates |M| stays at k; the library retains the k most representative target states since time t = 1, and as tracking continues the library keeps being updated dynamically.
CN201510632255.4A 2015-09-29 2015-09-29 Target scale adaptive tracking method based on space-time model Active CN105117720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510632255.4A CN105117720B (en) 2015-09-29 2015-09-29 Target scale adaptive tracking method based on space-time model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510632255.4A CN105117720B (en) 2015-09-29 2015-09-29 Target scale adaptive tracking method based on space-time model

Publications (2)

Publication Number Publication Date
CN105117720A CN105117720A (en) 2015-12-02
CN105117720B true CN105117720B (en) 2018-08-28

Family

ID=54665703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510632255.4A Active CN105117720B (en) 2015-09-29 2015-09-29 Target scale adaptive tracking method based on space-time model

Country Status (1)

Country Link
CN (1) CN105117720B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678806B (en) * 2016-01-07 2019-01-08 中国农业大学 A kind of live pig action trail automatic tracking method differentiated based on Fisher
CN105654518B (en) * 2016-03-23 2018-10-23 上海博康智能信息技术有限公司 A kind of trace template adaptive approach
CN105931273B (en) * 2016-05-04 2019-01-25 江南大学 Local rarefaction representation method for tracking target based on L0 regularization
CN106127798B (en) * 2016-06-13 2019-02-22 重庆大学 Dense space-time contextual target tracking based on adaptive model
CN106127811A (en) * 2016-06-30 2016-11-16 西北工业大学 Target scale adaptive tracking method based on context
CN106251364A (en) * 2016-07-19 2016-12-21 北京博瑞爱飞科技发展有限公司 Method for tracking target and device
CN106485732B (en) * 2016-09-09 2019-04-16 南京航空航天大学 A kind of method for tracking target of video sequence
CN107346548A (en) * 2017-07-06 2017-11-14 电子科技大学 A kind of tracking for electric transmission line isolator
CN107680119A (en) * 2017-09-05 2018-02-09 燕山大学 A kind of track algorithm based on space-time context fusion multiple features and scale filter
CN108093153B (en) * 2017-12-15 2020-04-14 深圳云天励飞技术有限公司 Target tracking method and device, electronic equipment and storage medium
CN109903281B (en) * 2019-02-28 2021-07-27 中科创达软件股份有限公司 Multi-scale-based target detection method and device
CN111311641B (en) * 2020-02-25 2023-06-09 重庆邮电大学 Unmanned aerial vehicle target tracking control method
CN113516713B (en) * 2021-06-18 2022-11-22 广西财经学院 Unmanned aerial vehicle self-adaptive target tracking method based on pseudo twin network
CN113436228B (en) * 2021-06-22 2024-01-23 中科芯集成电路有限公司 Anti-shielding and target recapturing method of related filtering target tracking algorithm

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7362885B2 (en) * 2004-04-20 2008-04-22 Delphi Technologies, Inc. Object tracking and eye state identification method
CN101458816A (en) * 2008-12-19 2009-06-17 西安电子科技大学 Target matching method in digital video target tracking
CN101673403A (en) * 2009-10-10 2010-03-17 安防制造(中国)有限公司 Target following method in complex interference scene
CN103218825A (en) * 2013-03-15 2013-07-24 华中科技大学 Quick detection method of spatio-temporal interest points with invariable scale

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7362885B2 (en) * 2004-04-20 2008-04-22 Delphi Technologies, Inc. Object tracking and eye state identification method
CN101458816A (en) * 2008-12-19 2009-06-17 西安电子科技大学 Target matching method in digital video target tracking
CN101673403A (en) * 2009-10-10 2010-03-17 安防制造(中国)有限公司 Target following method in complex interference scene
CN103218825A (en) * 2013-03-15 2013-07-24 华中科技大学 Quick detection method of spatio-temporal interest points with invariable scale

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
基于时空模型的尺度自适应跟踪算法 (Scale-adaptive tracking algorithm based on a space-time model); Jiang Min et al.; Journal of Chinese Computer Systems (小型微型计算机系统); July 2016; Vol. 37, No. 7; pp. 1522-1525 *
基于时空模型的鲁棒目标跟踪算法研究 (Research on robust target tracking algorithms based on space-time models); Wu Jiao; China Master's Theses Full-text Database, Information Science and Technology; 15 February 2017; No. 02; I138-3490 *

Also Published As

Publication number Publication date
CN105117720A (en) 2015-12-02

Similar Documents

Publication Publication Date Title
CN105117720B (en) Target scale adaptive tracking method based on space-time model
Wang et al. Automatic laser profile recognition and fast tracking for structured light measurement using deep learning and template matching
CN108256394B (en) Target tracking method based on contour gradient
CN113592845A (en) Defect detection method and device for battery coating and storage medium
CN108346159A (en) A kind of visual target tracking method based on tracking-study-detection
CN106570490B (en) 2019-06-14 A pedestrian real-time tracking method based on fast clustering
US20150347804A1 (en) Method and system for estimating fingerprint pose
CN106778687A (en) Method for viewing points detecting based on local evaluation and global optimization
CN103985143A (en) Discriminative online target tracking method based on videos in dictionary learning
CN111046856A (en) Parallel pose tracking and map creating method based on dynamic and static feature extraction
JP7136500B2 (en) Pedestrian Re-identification Method for Random Occlusion Recovery Based on Noise Channel
CN111931654A (en) Intelligent monitoring method, system and device for personnel tracking
CN113763424B (en) Real-time intelligent target detection method and system based on embedded platform
CN110047063A (en) A kind of detection method that material is fallen, device, equipment and storage medium
Pei et al. Consistency guided network for degraded image classification
CN104036528A (en) Real-time distribution field target tracking method based on global search
CN109241932B (en) Thermal infrared human body action identification method based on motion variance map phase characteristics
CN107886060A (en) 2018-04-06 Automatic pedestrian detection and tracking based on video
CN109993090B (en) Iris center positioning method based on cascade regression forest and image gray scale features
Zheng et al. Shadow removal for pedestrian detection and tracking in indoor environments
CN112488985A (en) Image quality determination method, device and equipment
CN106372650B (en) A kind of compression tracking based on motion prediction
CN106940786B (en) Iris reconstruction method using iris template based on LLE and PSO
CN106778831B (en) Rigid body target on-line feature classification and tracking method based on Gaussian mixture model
CN115272967A (en) Cross-camera pedestrian real-time tracking and identifying method, device and medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant