CN110738685A - space-time context tracking method with color histogram response fusion - Google Patents

space-time context tracking method with color histogram response fusion

Info

Publication number: CN110738685A (application CN201910864988.9A; granted as CN110738685B)
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 林彬, 郑浩岚, 罗旋, 王华通, 陈华舟
Original and current assignee: Guilin University of Technology
Application filed by Guilin University of Technology; priority/filing date: 2019-09-09
Publication date: 2020-01-31 (CN110738685A); grant date: 2023-05-05 (CN110738685B)
Legal status: Granted, Active
Prior art keywords: target, color, color histogram, point, model

Classifications

    • G06T 7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 5/40 — Image enhancement or restoration by the use of histogram techniques
    • G06T 7/90 — Determination of colour characteristics
    • G06T 2207/10016 — Video; image sequence
    • G06T 2207/20056 — Discrete and fast Fourier transform [DFT, FFT]

Abstract

The invention relates to a space-time context tracking method fusing color histogram responses. Based on a space-time context tracking framework, a color histogram model is introduced during target-center prediction and fused with the space-time context model at the response level, thereby overcoming the weak feature-expression capability of the space-time context model and achieving more accurate and stable target localization.

Description

space-time context tracking method with color histogram response fusion
Technical Field
The invention relates to a target tracking method, belongs to the field of computer vision, and particularly relates to a space-time context tracking method fusing color histogram responses.
Background
Target tracking is an important branch and research hotspot in the field of computer vision, with wide application in military areas such as unmanned aerial vehicle reconnaissance and missile guidance, and in civilian areas such as video surveillance and autonomous driving.
The spatio-temporal context (STC) tracking method proposed in the document "Fast visual tracking via dense spatio-temporal context learning" (Proceedings of the European Conference on Computer Vision, 2014: 127-141) models the spatial correlation between the target and its surrounding context and carries out its main computations with the fast Fourier transform, which makes it simple and fast. However, the method relies only on grayscale intensity features, whose expression capability is weak, so tracking is prone to drift or failure under interference factors such as target deformation and motion blur.
Disclosure of Invention
Technical problem to be solved
In order to overcome the defects of the prior art, the invention provides a space-time context tracking method fusing color histogram responses, which addresses the insufficient adaptability of the space-time context tracking model when dealing with interference factors such as target deformation and motion blur in a video.
Technical scheme
A spatio-temporal context tracking method with fused color histogram responses, comprising the steps of:
Step 1: read the 1st frame of image data in the video. Denote the initial target center coordinate as x*_1 and the target size as s_w × s_h, where s_w and s_h are the width and height of the target, and denote the whole target region as Ω_o. Then, with x*_1 as the center, determine the local context search region Ω_c, which contains the target region Ω_o and the background region Ω_b; Ω_c is of size M × N, with M = 2s_w and N = 2s_h.
Step 2: for the search region Ω_c, compute the initial spatio-temporal context model h_1^stc(x):

h_1^stc(x) = F^(-1)( F( exp(-|x - x*_1| / σ′) ) / F( I_1(x)·exp(-|x - x*_1|² / σ²) ) )   (1)

where F and F^(-1) denote the fast Fourier transform (FFT) and inverse fast Fourier transform (IFFT) respectively, I_1(x) is the gray value at point x in frame 1 (for a color image the RGB values are first converted to gray values), |x - x*_1| is the Euclidean distance between point x and the target center x*_1, and σ and σ′ are scale parameters, with σ′ set to 2.25.
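As an illustrative sketch (not the patent's reference implementation), step 2 can be computed with NumPy FFTs. The confidence term exp(-|x - x*|/σ′) and context-prior term I(x)·exp(-|x - x*|²/σ²) follow the standard STC formulation cited in the Background; the stabilizing epsilon in the Fourier-domain division and the grid layout are assumptions:

```python
import numpy as np

def initial_stc_model(gray, center, sigma, sigma_p):
    """Sketch of step 2: h1_stc = IFFT( FFT(confidence) / FFT(prior) ).

    gray    -- M x N grayscale search region (float array)
    center  -- (row, col) of the target center inside the region
    sigma   -- scale of the Gaussian weighting in the context prior
    sigma_p -- sigma' scale of the confidence map (2.25 in the patent)
    """
    M, N = gray.shape
    r = np.arange(M)[:, None] - center[0]
    c = np.arange(N)[None, :] - center[1]
    dist = np.sqrt(r ** 2 + c ** 2)                  # |x - x*|
    conf = np.exp(-dist / sigma_p)                   # confidence map
    prior = gray * np.exp(-dist ** 2 / sigma ** 2)   # context prior I(x) * w_sigma
    eps = 1e-8                                       # keeps the division stable
    h = np.fft.ifft2(np.fft.fft2(conf) / (np.fft.fft2(prior) + eps))
    return np.real(h)
```

The model has the same size as the search region, so it can later be applied by a single element-wise product in the Fourier domain.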
Step 3: compute the color histograms of the target region Ω_o and the background region Ω_b. The number of color bins is denoted β; β is set to 32 for grayscale images and to 32³ for color images. The initial color histogram models corresponding to Ω_o and Ω_b are denoted H_1^o(y) and H_1^b(y) respectively, where y denotes the color bin index, y = 1, 2, ..., β.
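A minimal sketch of step 3 for the grayscale case, assuming bins of equal width over [0, 255] and histograms normalized to probabilities (as their use in step 6 implies):

```python
import numpy as np

def color_histogram(region, beta=32):
    """Sketch of step 3: normalized beta-bin histogram of a grayscale region.

    region -- array of gray values in [0, 255]
    beta   -- number of color bins (32 for grayscale in the patent)
    Returns H with H[y] = probability of bin y, y = 0..beta-1.
    """
    bins = (region.astype(np.int64) * beta) // 256   # bin index per pixel
    H = np.bincount(bins.ravel(), minlength=beta).astype(float)
    return H / H.sum()
```

For color images the same idea applies with a 32³-bin joint RGB histogram.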
Step 4: read the next frame of the image and assume the current frame is the t-th frame. With the previous target center x*_{t-1} as the center, extract the search region Ω_c, the target region Ω_o and the background region Ω_b as in step 1.
Step 5: apply the spatio-temporal context model h_{t-1}^stc to the current frame to obtain the spatio-temporal context model response f_t^stc(x):

f_t^stc(x) = h_{t-1}^stc(x) ⊗ ( I_t(x)·exp(-|x - x*_{t-1}|² / σ²) )   (2)

where ⊗ denotes the convolution operation (evaluated via the FFT), I_t(x) is the gray value at point x in the t-th frame (for a color image the RGB values are first converted to gray values), and σ is the same as in step 2.
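Step 5's convolution can likewise be evaluated in the Fourier domain; this sketch assumes the same grid layout as the step 2 example and circular boundary handling:

```python
import numpy as np

def stc_response(h_stc, gray, center, sigma):
    """Sketch of step 5: f_stc = h_stc (*) (I_t(x) * exp(-|x - x*|^2 / sigma^2)),
    with the circular convolution computed as an FFT product."""
    M, N = gray.shape
    r = np.arange(M)[:, None] - center[0]
    c = np.arange(N)[None, :] - center[1]
    prior = gray * np.exp(-(r ** 2 + c ** 2) / sigma ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(h_stc) * np.fft.fft2(prior)))
```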
Step 6: in the search region Ω_c, let y_t(x) denote the color bin index corresponding to the gray value I_t(x) at point x (or to the RGB value R_t(x) for a color image). Then H_{t-1}^o(y_t(x)) and H_{t-1}^b(y_t(x)) are the probabilities that I_t(x) (or R_t(x)) occurs in the target region Ω_o and the background region Ω_b respectively, and the target color probability score map p_t(x) over Ω_c is obtained as:

p_t(x) = H_{t-1}^o(y_t(x)) / ( H_{t-1}^o(y_t(x)) + H_{t-1}^b(y_t(x)) + λ )   (3)

where λ is an adjustment parameter, set to 10⁻⁴.
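A sketch of step 6 for the grayscale case, reusing the equal-width binning assumed in the step 3 example:

```python
import numpy as np

def color_score_map(gray_region, H_obj, H_bg, beta=32, lam=1e-4):
    """Sketch of step 6: per-pixel target color probability
    p(x) = H_obj[y(x)] / (H_obj[y(x)] + H_bg[y(x)] + lambda)."""
    y = (gray_region.astype(np.int64) * beta) // 256  # bin index per pixel
    num = H_obj[y]
    return num / (num + H_bg[y] + lam)
```

Pixels whose color is frequent in the target region but rare in the background score near 1; background-colored pixels score near 0.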
Step 7: compute the integral image of p_t(x), P_t(x) = Σ_{x′ ≤ x} p_t(x′), where x′ ≤ x means that the abscissa of point x′ is no greater than the abscissa of point x and the ordinate of point x′ is no greater than the ordinate of point x; from P_t, obtain the color histogram model response f_t^hist(x) of the current frame as the sum of p_t over the s_w × s_h target-sized window centered at x.   (4)
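Step 7's window response can be evaluated in constant time per pixel with a summed-area table; treating f_t^hist as the box sum of p_t over the target-sized window, with zero padding at the borders, is an assumption of this sketch:

```python
import numpy as np

def hist_response(p, win_h, win_w):
    """Sketch of step 7: box sum of p over a win_h x win_w window centered at
    every pixel, using an integral image P(x) = sum_{x' <= x} p(x')."""
    H, W = p.shape
    ph, pw = win_h // 2, win_w // 2
    # Zero-pad so every output pixel has a full window around it.
    q = np.pad(p, ((ph, win_h - 1 - ph), (pw, win_w - 1 - pw)))
    # Integral image with a leading zero row/column.
    S = np.zeros((q.shape[0] + 1, q.shape[1] + 1))
    S[1:, 1:] = q.cumsum(axis=0).cumsum(axis=1)
    # Four-corner evaluation of the summed-area table.
    return (S[win_h:win_h + H, win_w:win_w + W]
            - S[:H, win_w:win_w + W]
            - S[win_h:win_h + H, :W]
            + S[:H, :W])
```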
Step 8: fuse the spatio-temporal context model response f_t^stc(x) obtained in step 5 with the color histogram model response f_t^hist(x) obtained in step 7 to obtain the final response:

f_t(x) = α·f_t^stc(x) + (1 - α)·f_t^hist(x)   (5)

where α is a weight parameter, set to 0.55.
Step 9: find the coordinate point corresponding to the maximum response value of f_t(x) and take it as the new target center x*_t of the current t-th frame.
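Steps 8 and 9 can be sketched as follows; min-max normalizing each response before fusion is an added assumption (the two responses otherwise live on different numeric scales):

```python
import numpy as np

def fuse_and_locate(f_stc, f_hist, alpha=0.55):
    """Sketch of steps 8-9: linear response fusion and peak localization."""
    def norm(f):
        rng = f.max() - f.min()
        return (f - f.min()) / rng if rng > 0 else np.zeros_like(f)
    f = alpha * norm(f_stc) + (1 - alpha) * norm(f_hist)
    center = np.unravel_index(np.argmax(f), f.shape)  # new target center x*_t
    return f, center
```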
Step 10: with x*_t as the center, re-extract the search region Ω_c, the target region Ω_o and the background region Ω_b as in step 1. For Ω_c, compute the spatial context model h_t^sc(x) as in equation (1) and update the spatio-temporal context model:

h_t^stc(x) = (1 - ρ)·h_{t-1}^stc(x) + ρ·h_t^sc(x)   (6)

where ρ is the learning rate of the spatio-temporal context model update, set to 0.035; σ and σ′ are the same as in step 2.
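The update in equation (6) is a plain exponential moving average; a one-line sketch:

```python
import numpy as np

def update_stc(h_prev, h_sc, rho=0.035):
    """Sketch of step 10 / equation (6): exponential moving average of the
    spatio-temporal context model with learning rate rho."""
    return (1 - rho) * h_prev + rho * h_sc
```

The small ρ makes the model change slowly, which damps the effect of any single noisy frame.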
Step 11: for the target region Ω_o and the background region Ω_b, compute the color histogram models H_t^o′(y) and H_t^b′(y) as in step 3, and update them as follows to obtain the new color histogram models H_t^o(y) and H_t^b(y):

H_t^o(y) = (1 - η)·H_{t-1}^o(y) + η·H_t^o′(y)
H_t^b(y) = (1 - η)·H_{t-1}^b(y) + η·H_t^b′(y)   (7)

where η is the learning rate of the color histogram model update, set to 0.04.
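Step 11 applies the same interpolation to both color histogram models; note that a convex combination of normalized histograms remains normalized:

```python
import numpy as np

def update_histograms(H_obj_prev, H_bg_prev, H_obj_new, H_bg_new, eta=0.04):
    """Sketch of step 11 / equation (7): both color histogram models are
    updated with learning rate eta."""
    H_obj = (1 - eta) * H_obj_prev + eta * H_obj_new
    H_bg = (1 - eta) * H_bg_prev + eta * H_bg_new
    return H_obj, H_bg
```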
Step 12: mark the new target region in the image with a rectangular box of size s_w × s_h centered at x*_t; this is the tracking result of the current frame (s_w and s_h are the initial width and height of the target from step 1). Finally, judge whether all image frames in the video have been processed; if so, the algorithm ends; otherwise, continue executing from step 4.
Advantageous effects
The space-time context tracking method fusing color histogram responses provided by the invention introduces color histogram information to assist the space-time context model in target localization: first, a space-time context model and a color histogram model are constructed and their respective response maps are computed; the two response maps are then fused into a final response map; and the target center is determined from the result of the final response map.
The advantages are as follows: based on a space-time context tracking framework, a color histogram model is introduced during target-center prediction and fused with the space-time context model at the response level, which overcomes the weak feature-expression capability of the space-time context model and achieves more accurate and stable target localization. In tests on tracking scenes with target deformation, motion blur and other interference, the tracking quality is greatly improved over the original space-time context method, while the average tracking speed reaches 134 frames per second on ordinary PC hardware, giving the method high practical value.
Drawings
FIG. 1: time-space context tracking method flow chart fusing color histogram response
Detailed Description
The present invention will now be described in further detail with reference to the following examples and accompanying drawings:
The embodiment carries out steps 1 to 12 exactly as described in the technical scheme above, with the parameters set as follows: σ′ = 2.25; β = 32 for grayscale images (32³ for color images); λ = 10⁻⁴; α = 0.55; ρ = 0.035; η = 0.04.

Claims (7)

1. A space-time context tracking method fusing color histogram responses, characterized by comprising the following steps:
Step 1: read the 1st frame of image data in the video; denote the initial target center coordinate as x*_1 and the target size as s_w × s_h, where s_w and s_h are the width and height of the target, and denote the whole target region as Ω_o; with x*_1 as the center, determine the local context search region Ω_c containing the target region Ω_o and the background region Ω_b; Ω_c is of size M × N, with M = 2s_w and N = 2s_h;
Step 2: for the search region Ω_c, compute the initial spatio-temporal context model h_1^stc(x):
h_1^stc(x) = F^(-1)( F( exp(-|x - x*_1| / σ′) ) / F( I_1(x)·exp(-|x - x*_1|² / σ²) ) )   (1)
where F and F^(-1) respectively represent the fast Fourier transform and inverse fast Fourier transform, I_1(x) is the gray value at point x in frame 1 (for a color image the RGB values are first converted to gray values), |x - x*_1| is the Euclidean distance between point x and the target center x*_1, and σ and σ′ are scale parameters;
Step 3: compute the color histograms of the target region Ω_o and the background region Ω_b; the number of color bins is denoted β; the initial color histogram models corresponding to Ω_o and Ω_b are denoted H_1^o(y) and H_1^b(y) respectively, where y denotes the color bin index, y = 1, 2, ..., β;
Step 4: read the next frame of the image and assume the current frame is the t-th frame; with x*_{t-1} as the center, extract the search region Ω_c, the target region Ω_o and the background region Ω_b as in step 1;
Step 5: apply the spatio-temporal context model h_{t-1}^stc to the current frame to obtain the spatio-temporal context model response f_t^stc(x):
f_t^stc(x) = h_{t-1}^stc(x) ⊗ ( I_t(x)·exp(-|x - x*_{t-1}|² / σ²) )   (2)
where ⊗ represents the convolution operation, I_t(x) is the gray value at point x in the t-th frame (for a color image the RGB values are first converted to gray values), and σ is the same as in step 2;
Step 6: in the search region Ω_c, let y_t(x) denote the color bin index corresponding to the gray value I_t(x) at point x (or to the RGB value R_t(x) for a color image); let H_{t-1}^o(y_t(x)) and H_{t-1}^b(y_t(x)) be the probabilities that I_t(x) (or R_t(x)) occurs in the target region Ω_o and the background region Ω_b respectively, and obtain the target color probability score map p_t(x) over Ω_c:
p_t(x) = H_{t-1}^o(y_t(x)) / ( H_{t-1}^o(y_t(x)) + H_{t-1}^b(y_t(x)) + λ )   (3)
where λ is an adjustment parameter;
Step 7: compute the integral image P_t(x) = Σ_{x′ ≤ x} p_t(x′), where x′ ≤ x means that the abscissa of point x′ is no greater than the abscissa of point x and the ordinate of point x′ is no greater than the ordinate of point x, and from it obtain the color histogram model response f_t^hist(x) of the current frame as the sum of p_t over the s_w × s_h window centered at x;   (4)
Step 8: fuse the spatio-temporal context model response f_t^stc(x) obtained in step 5 with the color histogram model response f_t^hist(x) obtained in step 7 to obtain the final response f_t(x):
f_t(x) = α·f_t^stc(x) + (1 - α)·f_t^hist(x)   (5)
where α is a weight parameter;
Step 9: find the coordinate point corresponding to the maximum response value of f_t(x) and take it as the new target center x*_t of the current t-th frame;
Step 10: with x*_t as the center, re-extract the search region Ω_c, the target region Ω_o and the background region Ω_b as in step 1; for Ω_c, compute the spatial context model h_t^sc(x) as in equation (1) and update the spatio-temporal context model:
h_t^stc(x) = (1 - ρ)·h_{t-1}^stc(x) + ρ·h_t^sc(x)   (6)
where ρ is the learning rate of the spatio-temporal context model update; σ and σ′ are the same as in step 2;
Step 11: for the target region Ω_o and the background region Ω_b, compute the color histogram models H_t^o′(y) and H_t^b′(y) as in step 3, and update them as follows to obtain the new color histogram models H_t^o(y) and H_t^b(y):
H_t^o(y) = (1 - η)·H_{t-1}^o(y) + η·H_t^o′(y), H_t^b(y) = (1 - η)·H_{t-1}^b(y) + η·H_t^b′(y)   (7)
where η is the learning rate of the color histogram model update;
Step 12: mark the new target region in the image with a rectangular box of size s_w × s_h centered at x*_t as the tracking result of the current frame, s_w and s_h being the initial width and height of the target in step 1; finally, judge whether all image frames in the video have been processed; if so, end the algorithm, otherwise continue executing from step 4.
2. The space-time context tracking method fusing color histogram responses according to claim 1, characterized in that: the scale parameter σ′ is set to 2.25.
3. The space-time context tracking method fusing color histogram responses according to claim 1, characterized in that: the number of color bins β is set to 32 for grayscale images and to 32³ for color images.
4. The space-time context tracking method fusing color histogram responses according to claim 1, characterized in that: the adjustment parameter λ is set to 10⁻⁴.
5. The space-time context tracking method fusing color histogram responses according to claim 1, characterized in that: the weight parameter α is set to 0.55.
6. The space-time context tracking method fusing color histogram responses according to claim 1, characterized in that: the learning rate ρ of the spatio-temporal context model update is set to 0.035.
7. The space-time context tracking method fusing color histogram responses according to claim 1, characterized in that: the learning rate η of the color histogram model update is set to 0.04.
Priority Applications (1)

CN201910864988.9A — priority/filing date 2019-09-09 — "Space-time context tracking method integrating color histogram response" — Active — granted as CN110738685B.

Publications (2)

CN110738685A (application), published 2020-01-31
CN110738685B (grant), published 2023-05-05

Family ID: 69267918 (one family application: CN201910864988.9A)

Country: China (CN)

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1939797A1 (en) * 2006-12-23 2008-07-02 NTT DoCoMo, Inc. Method and apparatus for automatically determining a semantic classification of context data
US20110150329A1 (en) * 2009-12-18 2011-06-23 Nxp B.V. Method of and system for determining an average colour value for pixels
CN103237155A (en) * 2013-04-01 2013-08-07 北京工业大学 Tracking and positioning method of single-view-blocked target
JP2013210844A (en) * 2012-03-30 2013-10-10 Secom Co Ltd Image collation device
CN103679756A (en) * 2013-12-26 2014-03-26 北京工商大学 Automatic target tracking method and system based on color and shape features
CN104537692A (en) * 2014-12-30 2015-04-22 中国人民解放军国防科学技术大学 Key point stabilization tracking method based on time-space contextual information assisting
CN105631895A (en) * 2015-12-18 2016-06-01 重庆大学 Temporal-spatial context video target tracking method combining particle filtering
TW201633278A (en) * 2015-03-10 2016-09-16 威盛電子股份有限公司 Adaptive contrast enhancement apparatus and method
CN106023246A (en) * 2016-05-05 2016-10-12 江南大学 Spatiotemporal context tracking method based on local sensitive histogram
CN106296620A (en) * 2016-08-14 2017-01-04 遵义师范学院 A kind of color rendition method based on rectangular histogram translation
CN106651913A (en) * 2016-11-29 2017-05-10 开易(北京)科技有限公司 Target tracking method based on correlation filtering and color histogram statistics and ADAS (Advanced Driving Assistance System)
CN107093189A (en) * 2017-04-18 2017-08-25 山东大学 Method for tracking target and system based on adaptive color feature and space-time context
CN107146240A (en) * 2017-05-05 2017-09-08 西北工业大学 The video target tracking method of taking photo by plane detected based on correlation filtering and conspicuousness
CN107209931A (en) * 2015-05-22 2017-09-26 华为技术有限公司 Color correction device and method
CN107424175A (en) * 2017-07-20 2017-12-01 西安电子科技大学 A kind of method for tracking target of combination spatio-temporal context information
CN107610159A (en) * 2017-09-03 2018-01-19 西安电子科技大学 Infrared small object tracking based on curvature filtering and space-time context
CN107680119A (en) * 2017-09-05 2018-02-09 燕山大学 A kind of track algorithm based on space-time context fusion multiple features and scale filter
US20180268559A1 (en) * 2017-03-16 2018-09-20 Electronics And Telecommunications Research Institute Method for tracking object in video in real time in consideration of both color and shape and apparatus therefor
CN108805902A (en) * 2018-05-17 2018-11-13 重庆邮电大学 A kind of space-time contextual target tracking of adaptive scale
CN109314773A (en) * 2018-03-06 2019-02-05 香港应用科技研究院有限公司 The generation method of high-quality panorama sketch with color, brightness and resolution balance
CN109325966A (en) * 2018-09-05 2019-02-12 华侨大学 A method of vision tracking is carried out by space-time context
CN109544600A (en) * 2018-11-23 2019-03-29 南京邮电大学 It is a kind of based on it is context-sensitive and differentiate correlation filter method for tracking target
CN109584271A (en) * 2018-11-15 2019-04-05 西北工业大学 High speed correlation filtering tracking based on high confidence level more new strategy
CN110070562A (en) * 2019-04-02 2019-07-30 西北工业大学 A kind of context-sensitive depth targets tracking

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG K ET AL.: "Fast visual tracking via dense spatio-temporal context learning" *
郭春梅 et al.: "Superpixel tracking algorithm fusing saliency and spatio-temporal context" (融合显著度时空上下文的超像素跟踪算法) *

Also Published As

Publication number Publication date
CN110738685B (en) 2023-05-05

Similar Documents

Publication Publication Date Title
CN109753913B (en) Multi-mode video semantic segmentation method with high calculation efficiency
CN107452015B (en) Target tracking system with re-detection mechanism
CN111260688A (en) Twin double-path target tracking method
CN109993775B (en) Single target tracking method based on characteristic compensation
CN110766723B (en) Unmanned aerial vehicle target tracking method and system based on color histogram similarity
CN111696110B (en) Scene segmentation method and system
CN111008996B (en) Target tracking method through hierarchical feature response fusion
CN113034545A (en) Vehicle tracking method based on CenterNet multi-target tracking algorithm
CN109446978B (en) Method for tracking moving target of airplane based on staring satellite complex scene
CN111192294A (en) Target tracking method and system based on target detection
CN111429485B (en) Cross-modal filtering tracking method based on self-adaptive regularization and high-reliability updating
CN111222502B (en) Infrared small target image labeling method and system
CN112767440B (en) Target tracking method based on SIAM-FC network
CN110544267A (en) correlation filtering tracking method for self-adaptive selection characteristics
CN116665097A (en) Self-adaptive target tracking method combining context awareness
CN117011381A (en) Real-time surgical instrument pose estimation method and system based on deep learning and stereoscopic vision
CN110738685A (en) space-time context tracking method with color histogram response fusion
CN113269808B (en) Video small target tracking method and device
CN116051601A (en) Depth space-time associated video target tracking method and system
CN113379787B (en) Target tracking method based on 3D convolution twin neural network and template updating
CN110751671A (en) Target tracking method based on kernel correlation filtering and motion estimation
CN114067240A (en) Pedestrian single-target tracking method based on online updating strategy and fusing pedestrian characteristics
KR20230046818A (en) Data learning device and method for semantic image segmentation
CN108875630B (en) Moving target detection method based on video in rainy environment
CN112069997A (en) Unmanned aerial vehicle autonomous landing target extraction method and device based on DenseHR-Net

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant