CN110738685A - space-time context tracking method with color histogram response fusion - Google Patents
- Publication number
- CN110738685A (application CN201910864988.9A)
- Authority
- CN
- China
- Prior art keywords
- target
- color
- color histogram
- point
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration by the use of histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20056—Discrete and fast Fourier transform, [DFT, FFT]
Abstract
The invention relates to a spatio-temporal context tracking method fusing color histogram responses. Based on a spatio-temporal context tracking framework, a color histogram model is introduced during target center prediction and fused with the spatio-temporal context model at the response level, thereby overcoming the weak feature expression capability of the spatio-temporal context model and achieving more accurate and stable target positioning.
Description
Technical Field
The invention relates to a target tracking method, belongs to the field of computer vision, and particularly relates to a spatio-temporal context tracking method fusing color histogram responses.
Background
Target tracking is an important branch and research hotspot in the field of computer vision, with wide application in military fields such as unmanned aerial vehicle reconnaissance and missile guidance, and in civil fields such as video surveillance and autonomous driving.
The spatio-temporal context (STC) tracking method proposed in "Fast visual tracking via dense spatio-temporal context learning" (Proceedings of the European Conference on Computer Vision, 2014: 127-141) models the spatio-temporal relationship between the target and its surrounding context and locates the target efficiently in the Fourier domain. However, it relies only on grayscale intensity features, so its feature expression capability is weak and its adaptability is insufficient when the target undergoes deformation or motion blur.
Disclosure of Invention
Technical problem to be solved
In order to avoid the defects of the prior art, the invention provides a spatio-temporal context tracking method fusing color histogram responses, solving the problem that the spatio-temporal context tracking model adapts poorly to interference factors such as target deformation and motion blur in video.
Technical scheme
A spatio-temporal context tracking method with fused color histogram responses, comprising the steps of:
Step 1: Read the 1st frame of image data in the video. Denote the initial target center position by its coordinate, and the target size by s_w × s_h, where s_w and s_h are the width and height of the target, respectively; the whole target area forms the target region. Then, with the target center as the center, determine a local context search area that contains the target region and a background region; the search area is of size M × N, with M = 2s_w and N = 2s_h.
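As a concrete illustration of step 1, the region extraction can be sketched as follows. This is a minimal sketch, not code from the patent: the function name and the clamping behavior at image borders are illustrative assumptions.

```python
import numpy as np

def extract_regions(frame, center, sw, sh):
    """Crop the local context search area (M x N = 2*sw x 2*sh) around the
    target center, as in step 1. Border handling (clamping) is an assumption."""
    cx, cy = center
    M, N = 2 * sw, 2 * sh                      # search-area width and height
    x0 = max(int(cx - M // 2), 0)
    y0 = max(int(cy - N // 2), 0)
    x1 = min(x0 + M, frame.shape[1])
    y1 = min(y0 + N, frame.shape[0])
    search = frame[y0:y1, x0:x1]
    # Target region: the sw x sh box at the centre of the search area;
    # the remainder of the search area is the background region.
    ty0, tx0 = (search.shape[0] - sh) // 2, (search.shape[1] - sw) // 2
    target = search[ty0:ty0 + sh, tx0:tx0 + sw]
    return search, target

frame = np.zeros((240, 320), dtype=np.uint8)
search, target = extract_regions(frame, center=(160, 120), sw=40, sh=30)
```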
Step 2: From the 1st frame, compute the initial spatio-temporal context model. In the formula, F and F^-1 denote the fast Fourier transform (FFT) and the inverse fast Fourier transform (IFFT), respectively; I_1(x) denotes the gray value at point x in frame 1 (a color image is first converted from RGB values to gray values); the distance term denotes the Euclidean distance between point x and the target center; σ and σ' are scale parameters, with σ' set to 2.25.
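The patent's equation images for step 2 are not reproduced here, so the sketch below follows the standard dense STC formulation of the cited ECCV 2014 paper: the context model is learned by deconvolution in the Fourier domain from a confidence map and a weighted intensity context. The exact confidence-map shape and the epsilon regularizer are assumptions.

```python
import numpy as np

def learn_stc_model(I, sigma=10.0, sigma_p=2.25, eps=1e-8):
    """Sketch of step 2: learn a spatio-temporal context model by Fourier-domain
    deconvolution, following the STC formulation of the cited ECCV 2014 paper.
    The confidence map shape and eps are illustrative assumptions."""
    N, M = I.shape
    ys, xs = np.mgrid[0:N, 0:M]
    cy, cx = (N - 1) / 2.0, (M - 1) / 2.0
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2         # squared distance to centre
    conf = np.exp(-np.sqrt(d2) / sigma_p)        # assumed confidence map c(x)
    w = np.exp(-d2 / (sigma ** 2))               # Gaussian weight w_sigma
    ctx = I * w                                  # context prior I(x) * w
    # h = F^-1( F(c) / F(ctx) ); eps guards against division by zero
    H = np.fft.fft2(conf) / (np.fft.fft2(ctx) + eps)
    return np.real(np.fft.ifft2(H))

I = np.random.default_rng(0).random((60, 80))
h = learn_stc_model(I)
```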
Step 3: Compute the color histograms of the target region and the background region. The number of color bins is denoted β, with β = 32 for grayscale images and β = 32^3 for color images. The corresponding initial color histogram models of the target region and the background region are recorded separately; y denotes the color bin index number, y = 1, 2, ..., β.
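A sketch of the histogram models of step 3 follows. The binning scheme (uniform quantization of 8-bit values into β bins, β³ joint bins for color) matches the bin counts stated above; the function name and normalization are illustrative assumptions.

```python
import numpy as np

def color_histogram(region, beta=32):
    """Step 3 sketch: a normalised colour histogram model. A grayscale region
    uses beta bins; a colour region uses beta**3 joint RGB bins."""
    if region.ndim == 2:                         # grayscale: beta bins
        idx = (region.astype(np.int64) * beta) // 256
        hist = np.bincount(idx.ravel(), minlength=beta).astype(float)
    else:                                        # colour: beta**3 joint bins
        q = (region.astype(np.int64) * beta) // 256          # per-channel bin
        idx = (q[..., 0] * beta + q[..., 1]) * beta + q[..., 2]
        hist = np.bincount(idx.ravel(), minlength=beta ** 3).astype(float)
    return hist / hist.sum()                     # normalise to probabilities

gray = np.full((30, 40), 128, dtype=np.uint8)    # uniform gray patch
hist = color_histogram(gray)
```

One histogram of this form is kept for the target region and one for the background region.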
Step 4: Read the next frame of the image sequence. Assuming the current frame is the t-th frame, extract the search area, target region, and background region as in step 1, centered on the target center of frame t−1.
Step 5: Apply the spatio-temporal context model to the current frame to obtain the spatio-temporal context model response f_t^stc(x):
In the formula, ⊗ denotes the convolution operation; I_t(x) denotes the gray value at point x in the t-th frame (a color image is first converted from RGB values to gray values); σ is the same as in step 2.
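Since steps 2 and 5 both work through the FFT, the convolution of step 5 can be evaluated as an element-wise product in the Fourier domain. A minimal sketch, assuming circular convolution and the same Gaussian weight as in step 2:

```python
import numpy as np

def stc_response(h, I, sigma=10.0):
    """Step 5 sketch: convolve the learned context model h with the weighted
    grayscale context of the current frame via FFT. Circular convolution is
    an assumption consistent with the FFT/IFFT usage of step 2."""
    N, M = I.shape
    ys, xs = np.mgrid[0:N, 0:M]
    cy, cx = (N - 1) / 2.0, (M - 1) / 2.0
    w = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / sigma ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(I * w)))

rng = np.random.default_rng(1)
h = rng.random((60, 80))
resp = stc_response(h, rng.random((60, 80)))
```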
Step 6: Within the search area, determine the color bin index number corresponding to the gray value I_t(x) at point x (for a color image, the RGB value R_t(x)). Then take the probabilities with which I_t(x) (or R_t(x)) occurs in the target region and in the background region, and from these obtain the target color probability score map p_t(x) over the search area:
In the formula, λ is an adjustment parameter, set to 10^-4.
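A sketch of the per-pixel score map of step 6 follows. The patent's exact equation is not reproduced here, so the ratio form below (a standard discriminative color model, target mass over target-plus-background mass, regularized by λ) is an assumption; λ = 10⁻⁴ is the patent's value.

```python
import numpy as np

def color_score_map(search_gray, hist_target, hist_bg, beta=32, lam=1e-4):
    """Step 6 sketch: per-pixel target colour probability from the two
    histogram models. The exact ratio form is an assumption."""
    bins = (search_gray.astype(np.int64) * beta) // 256   # bin index per pixel
    num = hist_target[bins]                               # target probability
    return num / (num + hist_bg[bins] + lam)

search = np.full((60, 80), 200, dtype=np.uint8)   # all pixels fall in bin 25
h_t = np.zeros(32); h_t[25] = 1.0                 # target mass in bin 25
h_b = np.zeros(32); h_b[5] = 1.0                  # background mass elsewhere
p = color_score_map(search, h_t, h_b)
```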
Step 7: Compute an integral image over p_t(x) to obtain the color histogram model response f_t^hist(x) of the current frame:
In the formula, x' ≤ x means that the abscissa of point x' is not greater than the abscissa of point x and the ordinate of point x' is not greater than the ordinate of point x.
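The integral image of step 7 allows the score p_t to be summed over a target-sized box at every pixel in constant time per pixel. A minimal sketch; the "same-size output" border handling is an illustrative assumption.

```python
import numpy as np

def integral_box_response(p, sw, sh):
    """Step 7 sketch: for every pixel, sum p over an sw x sh box using an
    integral image, giving the colour-histogram response map."""
    # ii[i, j] = sum of p[:i, :j] (one row/column of zero padding on top/left)
    ii = np.pad(p, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    H, W = p.shape
    resp = np.zeros_like(p, dtype=float)
    for y in range(H):
        for x in range(W):
            y0, x0 = max(y - sh // 2, 0), max(x - sw // 2, 0)
            y1, x1 = min(y0 + sh, H), min(x0 + sw, W)
            # box sum from four integral-image corners
            resp[y, x] = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return resp

p = np.ones((6, 8))
r = integral_box_response(p, sw=4, sh=2)
```

For an all-ones map, each interior response equals the box area (4 × 2 = 8), which makes the corner arithmetic easy to check by hand.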
Step 8: Fuse the spatio-temporal context model response f_t^stc(x) obtained in step 5 with the color histogram model response f_t^hist(x) obtained in step 7 to obtain the final response:
f_t(x) = α·f_t^stc(x) + (1 − α)·f_t^hist(x)    (5)
In the formula, α is a weight parameter, set to 0.55.
Step 9: In f_t(x), find the coordinate point corresponding to the maximum response value and take it as the new target center of the current t-th frame.
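Steps 8 and 9 together reduce to a weighted sum of the two response maps followed by a peak search, as in equation (5). A minimal sketch with the patent's α = 0.55; the function name is an illustrative assumption.

```python
import numpy as np

def fuse_and_locate(f_stc, f_hist, alpha=0.55):
    """Steps 8-9: f = alpha*f_stc + (1-alpha)*f_hist (equation (5)); the new
    target centre is the coordinate of the maximum fused response."""
    f = alpha * f_stc + (1.0 - alpha) * f_hist
    y, x = np.unravel_index(np.argmax(f), f.shape)
    return f, (int(x), int(y))                   # centre returned as (x, y)

f_stc = np.zeros((60, 80)); f_stc[20, 30] = 1.0   # STC peak at (x=30, y=20)
f_hist = np.zeros((60, 80)); f_hist[20, 30] = 0.5 # colour response agrees
f, center = fuse_and_locate(f_stc, f_hist)
```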
Step 10: With the new target center as the center, re-extract the search area, target region, and background region in the manner of step 1, and then compute a new spatio-temporal context model:
In the formula, ρ is the learning rate for updating the spatio-temporal context model, set to 0.035; σ and σ' are the same as in step 2.
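The patent's update equation for step 10 is an image not reproduced here; a linear-interpolation (exponential moving average) update with learning rate ρ is the standard STC form and is sketched below as an assumption.

```python
def update_stc_model(h_prev, h_new, rho=0.035):
    """Step 10 sketch: blend the previous spatio-temporal context model with
    the model learned on the current frame, learning rate rho = 0.035.
    Linear interpolation is an assumed (standard STC) update form."""
    return (1.0 - rho) * h_prev + rho * h_new

h = update_stc_model(1.0, 2.0)   # scalar stand-in for a model array
```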
Step 11: Compute the color histogram models of the target region and the background region in the manner of step 3, and update the models according to the following formula to obtain the new color histogram models:
In the formula, η denotes the learning rate of the color histogram model update, set to 0.04.
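As with step 10, the step 11 update formula is an image; a linear-interpolation update with learning rate η = 0.04 is assumed here, applied to both the target and background histogram models.

```python
import numpy as np

def update_histograms(h_prev, h_new, eta=0.04):
    """Step 11 sketch: update a colour histogram model with learning rate
    eta = 0.04; linear interpolation is assumed, matching the step-10 form."""
    return (1.0 - eta) * h_prev + eta * h_new

h_prev = np.array([1.0, 0.0])    # old model: all mass in bin 0
h_new = np.array([0.0, 1.0])     # current frame: all mass in bin 1
h = update_histograms(h_prev, h_new)
```

Note that a convex combination of normalized histograms stays normalized, so no re-normalization step is needed after the update.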
Step 12: Mark the new target area in the image with a rectangular box of size s_w × s_h centered on the new target center; this is the tracking result of the current frame (s_w and s_h are the initial width and height of the target from step 1). Finally, judge whether all image frames in the video have been processed; if so, the algorithm ends; otherwise, return to step 4 and continue.
Advantageous effects
The spatio-temporal context tracking method fusing color histogram responses provided by the invention introduces color histogram information to assist the spatio-temporal context model in target positioning: first, a spatio-temporal context model and a color histogram model are constructed and their respective response maps are computed; the two response maps are then fused into a final response map; finally, the target center is determined from the fused response.
The advantages are as follows: based on a spatio-temporal context tracking framework, a color histogram model is introduced during target center prediction and fused with the spatio-temporal context model at the response level, which overcomes the weak feature expression capability of the spatio-temporal context model and achieves more accurate and stable target positioning. In tests on tracking scenes involving target deformation, motion blur, and other interference, the tracking performance is greatly improved over the original spatio-temporal context method, and the average tracking speed reaches 134 frames per second on ordinary PC hardware, so the method has high practical application value.
Drawings
FIG. 1: Flow chart of the spatio-temporal context tracking method fusing color histogram responses
Detailed Description
The present invention will now be described in further detail with reference to the examples and the accompanying drawing. The specific embodiment performs steps 1 to 12 exactly as set forth in the technical scheme above, with the parameter settings given there (σ' = 2.25; β = 32 for grayscale images and 32^3 for color images; λ = 10^-4; α = 0.55; ρ = 0.035; η = 0.04).
Claims (7)
1. A spatio-temporal context tracking method fusing color histogram responses, characterized by comprising the following steps:
Step 1: Read the 1st frame of image data in the video. Denote the initial target center position by its coordinate, and the target size by s_w × s_h, where s_w and s_h are the width and height of the target, respectively; the whole target area forms the target region. Then, with the target center as the center, determine a local context search area that contains the target region and a background region; the search area is of size M × N, with M = 2s_w and N = 2s_h;
Step 2: From the 1st frame, compute the initial spatio-temporal context model; in the formula, F and F^-1 respectively denote the fast Fourier transform and the inverse fast Fourier transform; I_1(x) denotes the gray value at point x in frame 1 (a color image is first converted from RGB values to gray values); the distance term denotes the Euclidean distance between point x and the target center; σ and σ' are scale parameters;
Step 3: Compute the color histograms of the target region and the background region; the number of color bins is denoted β; the corresponding initial color histogram models of the target region and the background region are recorded separately; y denotes the color bin index number, y = 1, 2, ..., β;
Step 4: Read the next frame of the image sequence; assuming the current frame is the t-th frame, extract the search area, target region, and background region as in step 1, centered on the target center of frame t−1;
Step 5: Apply the spatio-temporal context model to the current frame to obtain the spatio-temporal context model response; in the formula, ⊗ denotes the convolution operation; I_t(x) denotes the gray value at point x in the t-th frame (a color image is first converted from RGB values to gray values); σ is the same as in step 2;
Step 6: Within the search area, determine the color bin index number corresponding to the gray value I_t(x) at point x (for a color image, the RGB value R_t(x)); then take the probabilities with which I_t(x) (or R_t(x)) occurs in the target region and in the background region, and from these obtain the target color probability score map p_t(x) over the search area; in the formula, λ is an adjustment parameter;
Step 7: Compute an integral image over p_t(x) to obtain the color histogram model response of the current frame; in the formula, x' ≤ x means that the abscissa of point x' is not greater than the abscissa of point x and the ordinate of point x' is not greater than the ordinate of point x;
Step 8: Fuse the spatio-temporal context model response obtained in step 5 with the color histogram model response obtained in step 7 to obtain the final response result f_t(x); in the formula, α is a weight parameter;
Step 9: In f_t(x), find the coordinate point corresponding to the maximum response value and take it as the new target center of the current t-th frame;
Step 10: With the new target center as the center, re-extract the search area, target region, and background region in the manner of step 1, and then compute a new spatio-temporal context model; in the formula, ρ is the learning rate for updating the spatio-temporal context model; σ and σ' are the same as in step 2;
Step 11: Compute the color histogram models of the target region and the background region in the manner of step 3, and update the models according to the following formula to obtain the new color histogram models; η is the learning rate of the color histogram model update;
Step 12: Mark the new target area in the image with a rectangular box of size s_w × s_h centered on the new target center; this is the tracking result of the current frame, s_w and s_h being the initial width and height of the target from step 1; finally, judge whether all image frames in the video have been processed; if so, the algorithm ends; otherwise, return to step 4 and continue.
2. The spatio-temporal context tracking method fusing color histogram responses according to claim 1, characterized in that: the scale parameter σ' is set to 2.25.
3. The spatio-temporal context tracking method fusing color histogram responses according to claim 1, characterized in that: the number of color bins β is set to 32 for grayscale images and to 32^3 for color images.
4. The spatio-temporal context tracking method fusing color histogram responses according to claim 1, characterized in that: the adjustment parameter λ is set to 10^-4.
5. The spatio-temporal context tracking method fusing color histogram responses according to claim 1, characterized in that: the weight parameter α is set to 0.55.
6. The spatio-temporal context tracking method fusing color histogram responses according to claim 1, characterized in that: the learning rate ρ of the spatio-temporal context model update is set to 0.035.
7. The spatio-temporal context tracking method fusing color histogram responses according to claim 1, characterized in that: the learning rate η of the color histogram model update is set to 0.04.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910864988.9A CN110738685B (en) | 2019-09-09 | 2019-09-09 | Space-time context tracking method integrating color histogram response |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910864988.9A CN110738685B (en) | 2019-09-09 | 2019-09-09 | Space-time context tracking method integrating color histogram response |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110738685A true CN110738685A (en) | 2020-01-31 |
CN110738685B CN110738685B (en) | 2023-05-05 |
Family
ID=69267918
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910864988.9A Active CN110738685B (en) | 2019-09-09 | 2019-09-09 | Space-time context tracking method integrating color histogram response |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110738685B (en) |
Citations (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1939797A1 (en) * | 2006-12-23 | 2008-07-02 | NTT DoCoMo, Inc. | Method and apparatus for automatically determining a semantic classification of context data |
US20110150329A1 (en) * | 2009-12-18 | 2011-06-23 | Nxp B.V. | Method of and system for determining an average colour value for pixels |
CN103237155A (en) * | 2013-04-01 | 2013-08-07 | 北京工业大学 | Tracking and positioning method of single-view-blocked target |
JP2013210844A (en) * | 2012-03-30 | 2013-10-10 | Secom Co Ltd | Image collation device |
CN103679756A (en) * | 2013-12-26 | 2014-03-26 | 北京工商大学 | Automatic target tracking method and system based on color and shape features |
CN104537692A (en) * | 2014-12-30 | 2015-04-22 | 中国人民解放军国防科学技术大学 | Key point stabilization tracking method based on time-space contextual information assisting |
CN105631895A (en) * | 2015-12-18 | 2016-06-01 | 重庆大学 | Temporal-spatial context video target tracking method combining particle filtering |
TW201633278A (en) * | 2015-03-10 | 2016-09-16 | 威盛電子股份有限公司 | Adaptive contrast enhancement apparatus and method |
CN106023246A (en) * | 2016-05-05 | 2016-10-12 | 江南大学 | Spatiotemporal context tracking method based on local sensitive histogram |
CN106296620A (en) * | 2016-08-14 | 2017-01-04 | 遵义师范学院 | A kind of color rendition method based on rectangular histogram translation |
CN106651913A (en) * | 2016-11-29 | 2017-05-10 | 开易(北京)科技有限公司 | Target tracking method based on correlation filtering and color histogram statistics and ADAS (Advanced Driving Assistance System) |
CN107093189A (en) * | 2017-04-18 | 2017-08-25 | 山东大学 | Method for tracking target and system based on adaptive color feature and space-time context |
CN107146240A (en) * | 2017-05-05 | 2017-09-08 | 西北工业大学 | The video target tracking method of taking photo by plane detected based on correlation filtering and conspicuousness |
CN107209931A (en) * | 2015-05-22 | 2017-09-26 | 华为技术有限公司 | Color correction device and method |
CN107424175A (en) * | 2017-07-20 | 2017-12-01 | 西安电子科技大学 | A kind of method for tracking target of combination spatio-temporal context information |
CN107610159A (en) * | 2017-09-03 | 2018-01-19 | 西安电子科技大学 | Infrared small object tracking based on curvature filtering and space-time context |
CN107680119A (en) * | 2017-09-05 | 2018-02-09 | 燕山大学 | A kind of track algorithm based on space-time context fusion multiple features and scale filter |
US20180268559A1 (en) * | 2017-03-16 | 2018-09-20 | Electronics And Telecommunications Research Institute | Method for tracking object in video in real time in consideration of both color and shape and apparatus therefor |
CN108805902A (en) * | 2018-05-17 | 2018-11-13 | 重庆邮电大学 | A kind of space-time contextual target tracking of adaptive scale |
CN109314773A (en) * | 2018-03-06 | 2019-02-05 | 香港应用科技研究院有限公司 | The generation method of high-quality panorama sketch with color, brightness and resolution balance |
CN109325966A (en) * | 2018-09-05 | 2019-02-12 | 华侨大学 | A method of vision tracking is carried out by space-time context |
CN109544600A (en) * | 2018-11-23 | 2019-03-29 | 南京邮电大学 | It is a kind of based on it is context-sensitive and differentiate correlation filter method for tracking target |
CN109584271A (en) * | 2018-11-15 | 2019-04-05 | 西北工业大学 | High speed correlation filtering tracking based on high confidence level more new strategy |
CN110070562A (en) * | 2019-04-02 | 2019-07-30 | 西北工业大学 | A kind of context-sensitive depth targets tracking |
Non-Patent Citations (2)
Title |
---|
ZHANG K ET AL.: "Fast visual tracking via dense spatio-temporal context learning" *
GUO Chunmei et al.: "Superpixel tracking algorithm fusing saliency and spatio-temporal context" *
Also Published As
Publication number | Publication date |
---|---|
CN110738685B (en) | 2023-05-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109753913B (en) | Multi-mode video semantic segmentation method with high calculation efficiency | |
CN107452015B (en) | Target tracking system with re-detection mechanism | |
CN111260688A (en) | Twin double-path target tracking method | |
CN109993775B (en) | Single target tracking method based on characteristic compensation | |
CN110766723B (en) | Unmanned aerial vehicle target tracking method and system based on color histogram similarity | |
CN111696110B (en) | Scene segmentation method and system | |
CN111008996B (en) | Target tracking method through hierarchical feature response fusion | |
CN113034545A (en) | Vehicle tracking method based on CenterNet multi-target tracking algorithm | |
CN109446978B (en) | Method for tracking moving target of airplane based on staring satellite complex scene | |
CN111192294A (en) | Target tracking method and system based on target detection | |
CN111429485B (en) | Cross-modal filtering tracking method based on self-adaptive regularization and high-reliability updating | |
CN111222502B (en) | Infrared small target image labeling method and system | |
CN112767440B (en) | Target tracking method based on SIAM-FC network | |
CN110544267A (en) | correlation filtering tracking method for self-adaptive selection characteristics | |
CN116665097A (en) | Self-adaptive target tracking method combining context awareness | |
CN117011381A (en) | Real-time surgical instrument pose estimation method and system based on deep learning and stereoscopic vision | |
CN110738685A (en) | space-time context tracking method with color histogram response fusion | |
CN113269808B (en) | Video small target tracking method and device | |
CN116051601A (en) | Depth space-time associated video target tracking method and system | |
CN113379787B (en) | Target tracking method based on 3D convolution twin neural network and template updating | |
CN110751671A (en) | Target tracking method based on kernel correlation filtering and motion estimation | |
CN114067240A (en) | Pedestrian single-target tracking method based on online updating strategy and fusing pedestrian characteristics | |
KR20230046818A (en) | Data learning device and method for semantic image segmentation | |
CN108875630B (en) | Moving target detection method based on video in rainy environment | |
CN112069997A (en) | Unmanned aerial vehicle autonomous landing target extraction method and device based on DenseHR-Net |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |