CN112330716B - Space-time channel constraint correlation filtering tracking method based on abnormal suppression - Google Patents
Space-time channel constraint correlation filtering tracking method based on abnormal suppression
- Publication number
- CN112330716B CN112330716B CN202011251969.8A CN202011251969A CN112330716B CN 112330716 B CN112330716 B CN 112330716B CN 202011251969 A CN202011251969 A CN 202011251969A CN 112330716 B CN112330716 B CN 112330716B
- Authority
- CN
- China
- Prior art keywords
- frame
- filter
- feature
- formula
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/262—Analysis of motion using transform domain methods, e.g. Fourier domain methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20056—Discrete and fast Fourier transform, [DFT, FFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a space-time channel constraint correlation filtering tracking method based on abnormal suppression, which comprises the following steps: step S1, extracting the HOG feature, the first depth feature and the second depth feature of the t-th frame; step S2, fusing the HOG feature, the first depth feature and the second depth feature to obtain a first fusion feature X, and then determining the position and scale of the target in the t-th frame image based on the first fusion feature X and a filter; step S3, updating the filter according to the feature map of the t-th frame, based on a spatio-temporal channel constraint correlation filtering model capable of suppressing anomalies; and step S4, repeating steps S2-S4 until all frames have been tracked, finally obtaining the tracking result. The invention combines hand-crafted features with depth features to significantly improve the feature representation capability of the target template, realizes adaptive channel feature selection through the $\ell_{2,1}$ norm, and effectively alleviates the boundary effect and background clutter problems.
Description
Technical Field
The invention relates to the technical field of computer vision image processing, and in particular to a spatio-temporal channel constraint correlation filtering tracking method based on anomaly suppression.
Background
Target tracking is a very popular research topic in the field of computer vision and has been widely applied to video surveillance, autonomous driving, human-computer interaction and the like. In target tracking, the position and scale of the target are given in the first frame, and the tracker must predict the position of the target in subsequent video frames.
Trackers based on discriminative correlation filters have achieved excellent results on many common video benchmark datasets and in competitions. Starting from the seminal MOSSE filter, discriminative correlation filter trackers have achieved very good performance in visual tracking. KCF uses multi-channel HOG features to build an appearance model of the target, significantly improving the performance of the algorithm. HOG features track well under illumination and color change, while CN features handle target deformation and motion blur well; fusion algorithms have therefore been proposed that track the target with combined HOG + CN features. BACF multiplies the target center region by a fixed binary matrix to obtain real samples, and proposes an efficient ADMM method to learn the filter. CACF proposes a new framework that incorporates more context information into the learned filter. AutoTrack trains the tracker with adaptive spatio-temporal regularization, providing a correlation filter tracker that adaptively adjusts the hyper-parameters of its spatio-temporal constraint terms.
Recently, convolutional neural networks have been widely used in correlation filters to achieve a more accurate and comprehensive representation of the target appearance. Feature representations combining hand-crafted and deep features have been widely used in trackers. C-COT performs sub-grid tracking by learning discriminative continuous convolution operators. ECO, a lightweight version of C-COT, uses factorized convolution operators and a generative sample space model to reduce model complexity and increase computation speed. LADCF uses adaptive spatial feature selection, keeping only 5% of hand-crafted and 20% of deep features for filter learning. ASRCF provides an adaptive spatial regularization method that effectively acquires spatial weights to adapt to changes in target appearance, achieving stronger tracking performance. GFS-DCF introduces a tracking framework based on ResNet-50, which contains rich semantic information, and additionally introduces a group-sparse feature selection method to learn adaptive feature selection, achieving superior performance.
However, when the tracked target is disturbed by occlusion, fast motion or background noise, the samples used to train the filter may be contaminated, causing the discriminative power of the filter template to degrade gradually.
Aiming at these problems, the invention therefore provides a spatio-temporal channel constraint correlation filtering tracking method based on anomaly suppression.
Disclosure of Invention
In view of this, in order to solve the problem that the learning capability of the filter degrades when the target template is disturbed in existing target tracking methods, the invention provides a spatio-temporal channel constraint correlation filtering tracking method based on anomaly suppression.
In order to achieve the above object, the invention provides a spatio-temporal channel constraint correlation filtering tracking method based on anomaly suppression, which comprises the following steps:
step S1, acquiring the region of the target in the t-th frame according to the position and scale of the target in the (t-1)-th frame image, taking this region as the target region, and extracting the HOG feature, the first depth feature and the second depth feature of the target region;
step S2, fusing the HOG feature, the first depth feature and the second depth feature of the t-th frame to obtain a first fusion feature X, and then determining the position and scale of the target in the t-th frame image based on the first fusion feature X and a filter, wherein the filter has been trained in advance on the target image in the (t-1)-th frame image;
step S3, updating the filter to obtain a new filter according to the feature map of the t-th frame, based on a spatio-temporal channel constraint correlation filtering model capable of suppressing anomalies;
and step S4, repeating steps S2-S4 until all frames have been tracked, finally obtaining the tracking result.
Further, in step S1, a preset ResNet50 depth model is used to extract Conv4-3 as the first depth feature and Conv4-6 as the second depth feature.
Further, in step S2, determining the position and scale of the target in the t-th frame image specifically comprises:
acquiring, from the first fusion feature X, 7 second fusion features with sequentially increasing scales; transforming the 7 second fusion features into the Fourier domain and convolving them there with the filter to obtain response maps; taking the position of the maximum response value among the response maps as the position of the target in the t-th frame image, and the scale corresponding to that maximum response value as the scale of the target in the t-th frame image.
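The detection computation described above can be sketched with NumPy: per-channel Fourier-domain correlations are summed into a single response map whose argmax gives the target position. This is a minimal single-scale illustration under assumed array shapes, not the patented multi-scale implementation (which would repeat the same computation for the 7 scaled features and keep the best-scoring scale).

```python
import numpy as np

def detect(feat_channels, filt_channels):
    """Sum per-channel Fourier-domain correlations into one response map
    and return the argmax location and peak value."""
    resp = np.zeros(feat_channels[0].shape)
    for x, h in zip(feat_channels, filt_channels):
        # cross-correlation in the spatial domain = conj(X) * H in the Fourier domain
        resp += np.real(np.fft.ifft2(np.conj(np.fft.fft2(x)) * np.fft.fft2(h)))
    return np.unravel_index(np.argmax(resp), resp.shape), resp.max()

# toy check: correlating an impulse feature against an impulse filter
x = np.zeros((32, 32)); x[10, 12] = 1.0
h = np.zeros((32, 32)); h[0, 0] = 1.0
pos, score = detect([x], [h])  # peak lands at the circular shift (-10, -12)
```

With the impulses above, the response peaks at the circular offset between the two spikes, illustrating how the argmax of the response map encodes the target displacement.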
Further, step S3 specifically includes:
Step 301, the spatial-domain expression of the anomaly-suppressing spatio-temporal channel constraint correlation filtering model is:

$$E(h_t)=\frac{1}{2}\Big\|y-\sum_{k=1}^{K}x_t^k\star h_t^k\Big\|_2^2+\lambda_1\|h_t\|_{2,1}+\frac{\lambda_2}{2}\sum_{k=1}^{K}\big\|h_t^k-h_{t-1}^k\big\|_2^2+\frac{\gamma}{2}\Big\|\sum_{k=1}^{K}x_t^k\star h_t^k-M_{t-1}[\psi_{p,q}]\Big\|_2^2\qquad(1)$$

In formula (1), $y$ is the ideal Gaussian label, $\star$ is the spatial correlation operator, $p$ and $q$ denote the position difference in two-dimensional space between the two peaks of the two response maps, $[\psi_{p,q}]$ denotes the shift operation performed to make the two peaks coincide, $h_{t-1}^k$ denotes the filter on the k-th channel at frame t-1, $h_t^k$ denotes the filter on the k-th channel at frame t, $x_t$ and $h_t$ denote the feature samples and filters over the K channels, $\lambda_1$ and $\lambda_2$ are regularization parameters, and $\gamma$ is the anomaly penalty parameter;
Step 302, converting the anomaly-suppressing spatio-temporal channel constraint correlation filtering model into the Fourier domain by Parseval's theorem, obtaining the expression:

$$E(h_t,\hat g_t)=\frac{1}{2}\Big\|\hat y-\sum_{k=1}^{K}\hat x_t^k\odot\hat g_t^k\Big\|_2^2+\lambda_1\|h_t\|_{2,1}+\frac{\lambda_2}{2}\sum_{k=1}^{K}\big\|\hat g_t^k-\hat g_{t-1}^k\big\|_2^2+\frac{\gamma}{2}\Big\|\sum_{k=1}^{K}\hat x_t^k\odot\hat g_t^k-\hat M_{t-1}\Big\|_2^2,\quad\text{s.t. }\hat g_t^k=\sqrt{T}Fh_t^k\qquad(2)$$

In formula (2), the superscript $\wedge$ denotes the discrete Fourier transform of the given signal, $\hat g_t$ is the discrete Fourier transform of the introduced filter auxiliary variable, $T$ is the size of the input data, $F$ is an orthogonal matrix of size $T\times T$ that maps any $T$-dimensional vectorized signal into the Fourier domain, and $\hat M_{t-1}$ is defined as the discrete Fourier transform of $M_{t-1}[\psi_{p,q}]$;
Step 303, constructing formula (2) of step S302 as an augmented Lagrangian function:

$$L(h_t,\hat g_t,\hat\zeta)=E(h_t,\hat g_t)+\hat\zeta^{\mathrm T}\big(\hat g_t-\sqrt{T}(I_K\otimes F)h_t\big)+\frac{\mu}{2}\big\|\hat g_t-\sqrt{T}(I_K\otimes F)h_t\big\|_2^2\qquad(3)$$

In formula (3), $\zeta$ is the Lagrange multiplier, $\hat\zeta$ is the Fourier domain transform of the Lagrange multiplier, $\hat\zeta^{\mathrm T}$ denotes the transpose of the Fourier domain transform of the Lagrange multiplier, $\mu$ is a penalty factor, $\hat g_t$ is the discrete Fourier transform of the introduced filter auxiliary variable, and $\hat g_{t-1}$ is the discrete Fourier transform of the filter auxiliary variable introduced in frame t-1.
Step 304, given $\hat g$ and $\hat\zeta$, converting the augmented Lagrangian function of step S303 into a subproblem solving the filter $h$:

In formula (4), $\hat\zeta^{\mathrm T}$ denotes the transpose of the Fourier domain transform of the Lagrange multiplier, and $h_{t+1}$ denotes the solution of the filter at frame t+1.
Solving for each channel yields:

In formula (5), $\hat g_t^k$ denotes the filter auxiliary variable on the k-th channel at frame t.
Step 305, given $h$ and $\hat\zeta$, converting the augmented Lagrangian function of step S303 into a subproblem solving the auxiliary variable $\hat g$:

Formula (6) is then decomposed into N subproblems, each of which is expressed as:
solving the formula (7) and optimizing by using a Sherman Morrison formula to obtain:
in the formula (8), the first and second groups,andit should be noted that the above variables are not of practical significance, but are merely combined for computational convenience.
Step 306, updating the Lagrange multiplier:

$$\hat\zeta^{(i+1)}=\hat\zeta^{(i)}+\mu\big(\hat g_t^{(i+1)}-\hat h_t^{(i+1)}\big)\qquad(9)$$

In formula (9), $\hat\zeta^{(i)}$ is the Lagrange multiplier term at the i-th iteration, and $\hat g_t^{(i+1)}$ and $\hat h_t^{(i+1)}$ are solved by the formulas of steps S305 and S304 at the (i+1)-th iteration;
Step 307, updating the regularization penalty factor $\mu$:

$$\mu^{(i+1)}=\min\big(\beta\mu^{(i)},\,\mu_{\max}\big)\qquad(10)$$

In formula (10), $\beta=1.5$ and $\mu_{\max}=1$;
The filter template is then updated online:

$$\hat h_t^{\mathrm{model}}=(1-\eta)\,\hat h_{t-1}^{\mathrm{model}}+\eta\,\hat h_t\qquad(11)$$

In formula (11), $\hat h_t^{\mathrm{model}}$ denotes the updated filter template at frame t, $\hat h_{t-1}^{\mathrm{model}}$ denotes the filter template at frame t-1, $\hat h_t$ denotes the result of solving the formula of step S304, and $\eta$ is the online update rate.
Further, before performing step S1, it is determined whether the t-th frame is the first frame of the video sequence; if not, step S1 is performed; if so, the target position and the correlation filter are initialized. After initialization is completed, it is determined whether tracking of all frames has been completed; if so, the tracking result is output; otherwise, the process returns to step S1.
The beneficial effects of the invention are:
In terms of feature representation, the invention significantly improves the feature representation capability of the target template by combining hand-crafted features with depth features. In the anomaly-suppressing spatio-temporal channel constraint correlation filtering model, the $\ell_{2,1}$ norm realizes adaptive channel feature selection and effectively alleviates the boundary effect and background clutter problems; the temporal constraint term imposes similarity constraints on the target template over time, retaining historical frame information for filter learning and thereby alleviating filter degradation; the anomaly suppression constraint term makes target tracking more robust and accurate; and optimizing the model with the ADMM algorithm significantly reduces time complexity and increases computation speed.
Drawings
Fig. 1 is a flow chart of a first embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
For better illustration of the present invention, two terms are explained first: the full name of HOG is Histogram of Oriented Gradients; the full name of ADMM is Alternating Direction Method of Multipliers.
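For readers unfamiliar with the descriptor, a rough single-cell HOG sketch in NumPy is given below; the 9-bin unsigned-orientation layout is the common convention, but the cell size, binning and lack of block normalization here are illustrative choices, not the exact descriptor used by the invention.

```python
import numpy as np

def cell_hog(patch, n_bins=9):
    """Histogram of gradient orientations for one image cell,
    weighted by gradient magnitude (unsigned orientations, 0-180 deg)."""
    gy, gx = np.gradient(patch.astype(float))   # row (y) and column (x) gradients
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())  # magnitude-weighted voting
    return hist

patch = np.tile(np.arange(8.0), (8, 1))  # horizontal ramp: pure horizontal gradient
hist = cell_hog(patch)
```

On the ramp patch all gradient energy falls into the first orientation bin, which is the behavior a HOG cell is designed to capture.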
Example 1
Referring to fig. 1, the present embodiment provides a spatio-temporal channel constraint correlation filtering tracking method based on anomaly suppression, which uses the Lemming sequence in the target tracking benchmark dataset OTB100 as a verification set; the video size is 640 × 480 with 1336 frames in total, containing illumination change, scale variation, occlusion, fast motion, background clutter and other significant appearance changes. Before the method is used, i.e., in the first frame, the position of the target to be tracked and the correlation filter are initialized, and target tracking of the subsequent frames is completed based on the method. The general flow is shown in fig. 1, and the method comprises the following steps:
Step S1, acquiring the region of the target in the t-th frame according to the position and scale of the target in the (t-1)-th frame image, taking this region as the target region, and extracting the HOG feature, the first depth feature and the second depth feature of the target region.
Specifically, in the embodiment, a preset ResNet50 depth model is adopted to extract Conv4-3 as a first depth feature and Conv4-6 as a second depth feature.
Step S2, fusing the HOG feature, the first depth feature and the second depth feature of the t-th frame to obtain a first fusion feature X, and then determining the position and scale of the target in the t-th frame image based on the first fusion feature X and a filter, the filter having been trained in advance on the target image in the (t-1)-th frame image.
Specifically, in this embodiment, determining the position and scale of the target in the t-th frame image comprises: acquiring, from the first fusion feature X, 7 second fusion features with sequentially increasing scales; transforming the 7 second fusion features into the Fourier domain and convolving them there with the filter to obtain response maps; taking the position of the maximum response value among the response maps as the position of the target in the t-th frame image, and the scale corresponding to that maximum response value as the scale of the target in the t-th frame image.
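The feature fusion in step S2 above is not spelled out in this excerpt; a common scheme, assumed here, resizes each feature map to a shared spatial size and concatenates along the channel axis. The shapes, channel counts and the nearest-neighbor resize below are illustrative stand-ins.

```python
import numpy as np

def resize_nn(fmap, out_h, out_w):
    """Nearest-neighbor resize of an (H, W, C) feature map."""
    h, w, _ = fmap.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return fmap[rows][:, cols]

def fuse(features, out_h, out_w):
    """Resize every feature map to (out_h, out_w) and stack channels."""
    return np.concatenate([resize_nn(f, out_h, out_w) for f in features], axis=2)

hog = np.ones((50, 50, 31))    # hypothetical HOG map, 31 channels
d1  = np.ones((25, 25, 256))   # hypothetical first deep feature
d2  = np.ones((13, 13, 512))   # hypothetical second deep feature
X = fuse([hog, d1, d2], 50, 50)
```

After fusion, X carries all channels at one spatial resolution, which is the form the per-channel correlation filter expects.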
Step S3, updating the filter to obtain a new filter according to the feature map of the t-th frame, based on the spatio-temporal channel constraint correlation filtering model capable of suppressing anomalies;
specifically, step S3 includes the following sub-steps:
Step 301, the spatial-domain expression of the anomaly-suppressing spatio-temporal channel constraint correlation filtering model is:

$$E(h_t)=\frac{1}{2}\Big\|y-\sum_{k=1}^{K}x_t^k\star h_t^k\Big\|_2^2+\lambda_1\|h_t\|_{2,1}+\frac{\lambda_2}{2}\sum_{k=1}^{K}\big\|h_t^k-h_{t-1}^k\big\|_2^2+\frac{\gamma}{2}\Big\|\sum_{k=1}^{K}x_t^k\star h_t^k-M_{t-1}[\psi_{p,q}]\Big\|_2^2\qquad(1)$$

In formula (1), $y$ is the ideal Gaussian label, $\star$ is the spatial correlation operator, $p$ and $q$ denote the position difference in two-dimensional space between the two peaks of the two response maps, $[\psi_{p,q}]$ denotes the shift operation performed to make the two peaks coincide, $h_{t-1}^k$ denotes the filter on the k-th channel at frame t-1, $h_t^k$ denotes the filter on the k-th channel at frame t, $x_t$ and $h_t$ denote the feature samples and filters over the K channels, $\lambda_1$ and $\lambda_2$ are regularization parameters, and $\gamma$ is the anomaly penalty parameter;
In the above formula, the first term is a ridge regression term that fits the response map to the label y as closely as possible; the second term is a channel regularization penalty on the filter $h_t$, which uses the $\ell_{2,1}$ norm to realize adaptive feature selection; the third term is a temporal regularization term, where $h_{t-1}^k$ denotes the filter on the k-th channel at frame t-1; the fourth term is an anomaly suppression term, where the response map of frame t-1 is denoted for brevity by $M_{t-1}[\psi_{p,q}]$.
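To make the four terms of the model concrete, the toy NumPy sketch below evaluates each term on one-dimensional data: circular correlation plays the role of $\star$, the $\ell_{2,1}$ norm is computed as the sum of per-channel $\ell_2$ norms, and a zero array stands in for the previous response map $M$; the values $\lambda_1=\lambda_2=\gamma=1$ are arbitrary, not the patent's settings.

```python
import numpy as np

def l21_norm(H):
    """l2,1 norm of a (channels x taps) filter matrix:
    sum over channels of the per-channel l2 norm."""
    return np.sum(np.sqrt(np.sum(H ** 2, axis=1)))

def objective(X, H, H_prev, y, M_prev, lam1=1.0, lam2=1.0, gamma=1.0):
    """Four terms of the anomaly-suppressed model on toy 1-D data."""
    resp = sum(np.real(np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(h)))
               for x, h in zip(X, H))                     # circular correlation
    ridge    = 0.5 * np.sum((y - resp) ** 2)              # data-fitting term
    channel  = lam1 * l21_norm(H)                         # l2,1 channel penalty
    temporal = 0.5 * lam2 * np.sum((H - H_prev) ** 2)     # temporal smoothness
    anomaly  = 0.5 * gamma * np.sum((resp - M_prev) ** 2) # response consistency
    return ridge + channel + temporal + anomaly

K, T = 2, 8
X = np.zeros((K, T)); X[:, 0] = 1.0   # unit-impulse features per channel
H = np.ones((K, T)) / T
E = objective(X, H, H.copy(), np.zeros(T), np.zeros(T))
```

With identical current and previous filters the temporal term vanishes, leaving the ridge, channel and anomaly terms, which makes the role of each penalty easy to inspect.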
Step 302, converting the space-time channel constraint related filtering model capable of suppressing the abnormity into a Fourier domain by using Parseval's theorem, and obtaining an expression as follows:
in the formula (2), the upper criterion Λ is the discrete fourier transform of a given signal,for the discrete Fourier transform of the introduced filter auxiliary variables, T is the size of the input data, F is an orthogonal matrix of size T x T, FisAny T-dimension vectorized signal is mapped to the fourier domain,is defined as M t-1 [ψ p,q ]Discrete fourier transform of (d);
Step 303, constructing formula (2) of step S302 as an augmented Lagrangian function:

$$L(h_t,\hat g_t,\hat\zeta)=E(h_t,\hat g_t)+\hat\zeta^{\mathrm T}\big(\hat g_t-\sqrt{T}(I_K\otimes F)h_t\big)+\frac{\mu}{2}\big\|\hat g_t-\sqrt{T}(I_K\otimes F)h_t\big\|_2^2\qquad(3)$$

In formula (3), $\zeta$ is the Lagrange multiplier, $\hat\zeta$ is the Fourier domain transform of the Lagrange multiplier, $\hat\zeta^{\mathrm T}$ denotes the transpose of the Fourier domain transform of the Lagrange multiplier, $\mu$ is a penalty factor, $\hat g_t$ is the discrete Fourier transform of the introduced filter auxiliary variable, and $\hat g_{t-1}$ is the discrete Fourier transform of the filter auxiliary variable introduced in frame t-1.
Step 304, given $\hat g$ and $\hat\zeta$, converting the augmented Lagrangian function of step S303 into a subproblem solving the filter $h$:

In formula (4), $\hat\zeta^{\mathrm T}$ denotes the transpose of the Fourier domain transform of the Lagrange multiplier, and $h_{t+1}$ denotes the solution of the filter at frame t+1.
Solving for each channel yields:

In formula (5), $\hat g_t^k$ denotes the filter auxiliary variable on the k-th channel at frame t.
Step 305, given $h$ and $\hat\zeta$, converting the augmented Lagrangian function of step S303 into a subproblem solving the auxiliary variable $\hat g$:

Formula (6) is then decomposed into N subproblems, each of which is expressed as:
Solving formula (7) and optimizing it with the Sherman-Morrison formula yields:

In formula (8), the intermediate variables are obtained by combining terms; it should be noted that these variables have no physical meaning and are combined merely for computational convenience.
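The Sherman-Morrison formula invoked here gives the inverse of a rank-one-updated matrix from the inverse of the original matrix, which is what makes each subproblem cheap to solve. The snippet below verifies the identity numerically on generic data; it is a generic check, not the patent's specific subproblem.

```python
import numpy as np

def sherman_morrison_inv(A_inv, u, v):
    """(A + u v^T)^{-1} computed from A^{-1} via Sherman-Morrison:
    A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)."""
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

n = 5
A = 3.0 * np.eye(n)              # easy-to-invert base matrix
u = np.arange(1.0, n + 1.0)      # rank-one update vectors
v = np.ones(n)
fast = sherman_morrison_inv(np.linalg.inv(A), u, v)
slow = np.linalg.inv(A + np.outer(u, v))
err = np.abs(fast - slow).max()  # agreement with the direct inverse
```

When A is diagonal, as in many correlation-filter subproblems, A⁻¹ is trivial, so the whole update costs O(n²) instead of the O(n³) of a fresh inversion.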
Step 306, updating the Lagrange multiplier:

$$\hat\zeta^{(i+1)}=\hat\zeta^{(i)}+\mu\big(\hat g_t^{(i+1)}-\hat h_t^{(i+1)}\big)\qquad(9)$$

In formula (9), $\hat\zeta^{(i)}$ is the Lagrange multiplier term at the i-th iteration, and $\hat g_t^{(i+1)}$ and $\hat h_t^{(i+1)}$ are solved by the formulas of steps S305 and S304 at the (i+1)-th iteration;
Step 307, updating the regularization penalty factor $\mu$:

$$\mu^{(i+1)}=\min\big(\beta\mu^{(i)},\,\mu_{\max}\big)\qquad(10)$$

In formula (10), $\beta=1.5$ and $\mu_{\max}=1$;
The filter template is then updated online:

$$\hat h_t^{\mathrm{model}}=(1-\eta)\,\hat h_{t-1}^{\mathrm{model}}+\eta\,\hat h_t\qquad(11)$$

In formula (11), $\hat h_t^{\mathrm{model}}$ denotes the updated filter template at frame t, $\hat h_{t-1}^{\mathrm{model}}$ denotes the filter template at frame t-1, $\hat h_t$ denotes the result of solving the formula of step S304, and $\eta$ is the online update rate.
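Formulas (10) and (11) are one-liners in code: a capped geometric schedule for the ADMM penalty and a linear interpolation of the filter template. The sketch below uses β = 1.5 and μ_max = 1 from the text; the online update rate η is not given numerically in this excerpt, so the value used here is an arbitrary stand-in.

```python
import numpy as np

def update_mu(mu, beta=1.5, mu_max=1.0):
    """Formula (10): grow the ADMM penalty geometrically, capped at mu_max."""
    return min(beta * mu, mu_max)

def update_template(h_model_prev, h_new, eta=0.25):
    """Formula (11)-style update: blend the previous template with the
    newly solved filter at rate eta (eta is an assumed value here)."""
    return (1.0 - eta) * h_model_prev + eta * h_new

mu = 0.1
schedule = []
for _ in range(8):            # a few ADMM iterations
    mu = update_mu(mu)
    schedule.append(mu)

h = update_template(np.zeros(4), np.ones(4))  # template drifts toward the new filter
```

The cap on μ keeps the penalty from dominating late iterations, while the small η makes the template change slowly, which is what preserves historical frame information.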
Step S4, repeating steps S2-S4 until all frames have been tracked, finally obtaining the tracking result.
When target tracking is performed on the video sequence according to the above technical scheme, the tracked target never drifts in scenes with occlusion, fast motion or background clutter, and the tracking precision is high.
In terms of feature representation, the invention significantly improves the feature representation capability of the target template by combining hand-crafted features with depth features. In the anomaly-suppressing spatio-temporal channel constraint correlation filtering model, the $\ell_{2,1}$ norm realizes adaptive channel feature selection and effectively alleviates the boundary effect and background clutter problems; the temporal constraint term alleviates the filter degradation problem; the anomaly suppression constraint term makes target tracking more robust and accurate; and optimizing the model with the ADMM algorithm significantly reduces time complexity and increases computation speed.
It should be noted that the above are only specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any changes and substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are also included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (4)
1. A spatio-temporal channel constraint correlation filtering tracking method based on anomaly suppression, characterized by comprising the following steps:
step S1, acquiring the region of the target in the t-th frame according to the position and scale of the target in the (t-1)-th frame image, taking this region as the target region, and extracting the HOG feature, the first depth feature and the second depth feature of the target region;
step S2, fusing the HOG feature, the first depth feature and the second depth feature of the t-th frame to obtain a first fusion feature X, and then determining the position and scale of the target in the t-th frame image based on the first fusion feature X and a filter, wherein the filter has been trained in advance on the target image in the (t-1)-th frame image;
step S3, updating the filter to obtain a new filter according to the feature map of the t-th frame, based on a spatio-temporal channel constraint correlation filtering model capable of suppressing anomalies;
step S4, repeating steps S2-S4 until all frames have been tracked, finally obtaining a tracking result;
wherein, the step S3 specifically includes:
Step 301, the spatial-domain expression of the anomaly-suppressing spatio-temporal channel constraint correlation filtering model is:

$$E(h_t)=\frac{1}{2}\Big\|y-\sum_{k=1}^{K}x_t^k\star h_t^k\Big\|_2^2+\lambda_1\|h_t\|_{2,1}+\frac{\lambda_2}{2}\sum_{k=1}^{K}\big\|h_t^k-h_{t-1}^k\big\|_2^2+\frac{\gamma}{2}\Big\|\sum_{k=1}^{K}x_t^k\star h_t^k-M_{t-1}[\psi_{p,q}]\Big\|_2^2\qquad(1)$$

In formula (1), $y$ is the ideal Gaussian label, $\star$ is the spatial correlation operator, $p$ and $q$ denote the position difference in two-dimensional space between the two peaks of the two response maps, $[\psi_{p,q}]$ denotes the shift operation performed to make the two peaks coincide, $h_{t-1}^k$ denotes the filter on the k-th channel at frame t-1, $h_t^k$ denotes the filter on the k-th channel at frame t, $x_t$ and $h_t$ denote the feature samples and filters over the K channels, $\lambda_1$ and $\lambda_2$ are regularization parameters, and $\gamma$ is the anomaly penalty parameter;
Step 302, converting the anomaly-suppressing spatio-temporal channel constraint correlation filtering model into the Fourier domain by Parseval's theorem, obtaining the expression:

$$E(h_t,\hat g_t)=\frac{1}{2}\Big\|\hat y-\sum_{k=1}^{K}\hat x_t^k\odot\hat g_t^k\Big\|_2^2+\lambda_1\|h_t\|_{2,1}+\frac{\lambda_2}{2}\sum_{k=1}^{K}\big\|\hat g_t^k-\hat g_{t-1}^k\big\|_2^2+\frac{\gamma}{2}\Big\|\sum_{k=1}^{K}\hat x_t^k\odot\hat g_t^k-\hat M_{t-1}\Big\|_2^2,\quad\text{s.t. }\hat g_t^k=\sqrt{T}Fh_t^k\qquad(2)$$

In formula (2), the superscript $\wedge$ denotes the discrete Fourier transform of the given signal, $\hat g_t$ is the discrete Fourier transform of the filter auxiliary variable introduced in the t-th frame, $\hat g_{t-1}$ is the discrete Fourier transform of the filter auxiliary variable introduced in the (t-1)-th frame, $T$ is the size of the input data, $F$ is an orthogonal matrix of size $T\times T$ that maps any $T$-dimensional vectorized signal into the Fourier domain, and $\hat M_{t-1}$ is defined as the discrete Fourier transform of $M_{t-1}[\psi_{p,q}]$;
Step 303, constructing formula (2) of step S302 as an augmented Lagrangian function:

$$L(h_t,\hat g_t,\hat\zeta)=E(h_t,\hat g_t)+\hat\zeta^{\mathrm T}\big(\hat g_t-\sqrt{T}(I_K\otimes F)h_t\big)+\frac{\mu}{2}\big\|\hat g_t-\sqrt{T}(I_K\otimes F)h_t\big\|_2^2\qquad(3)$$

In formula (3), $\zeta$ is the Lagrange multiplier, $\hat\zeta$ is the Fourier domain transform of the Lagrange multiplier, $\hat\zeta^{\mathrm T}$ denotes the transpose of the Fourier domain transform of the Lagrange multiplier, $\mu$ is a penalty factor, $\hat g_t$ is the discrete Fourier transform of the introduced filter auxiliary variable, and $\hat g_{t-1}$ is the discrete Fourier transform of the filter auxiliary variable introduced in frame t-1;
Step 304, given $\hat g$ and $\hat\zeta$, converting the augmented Lagrangian function of step S303 into a subproblem solving the filter $h$:

In formula (4), $\hat\zeta^{\mathrm T}$ denotes the transpose of the Fourier domain transform of the Lagrange multiplier, and $h_{t+1}$ denotes the solution of the filter at frame t+1;
Solving for each channel yields:

In formula (5), $\hat g_t^k$ denotes the filter auxiliary variable on the k-th channel at frame t;
Step 305, given $h$ and $\hat\zeta$, converting the augmented Lagrangian function of step S303 into a subproblem solving the auxiliary variable $\hat g$:

Formula (6) is then decomposed into N subproblems, each of which is expressed as:
Solving formula (7) and optimizing it with the Sherman-Morrison formula yields:

In formula (8), the intermediate variables are obtained by combining terms and have no physical meaning; they are combined merely for computational convenience;
Step 306, updating the Lagrange multiplier:

$$\hat\zeta^{(i+1)}=\hat\zeta^{(i)}+\mu\big(\hat g_t^{(i+1)}-\hat h_t^{(i+1)}\big)\qquad(9)$$

In formula (9), $\hat\zeta^{(i)}$ is the Lagrange multiplier term at the i-th iteration, and $\hat g_t^{(i+1)}$ and $\hat h_t^{(i+1)}$ are solved by the formulas of steps S305 and S304 at the (i+1)-th iteration;
Step 307, updating the regularization penalty factor $\mu$:

$$\mu^{(i+1)}=\min\big(\beta\mu^{(i)},\,\mu_{\max}\big)\qquad(10)$$

In formula (10), $\beta=1.5$ and $\mu_{\max}=1$.
2. The spatio-temporal channel constraint correlation filtering tracking method based on anomaly suppression according to claim 1, characterized in that in step S1, a preset ResNet50 depth model is used to extract Conv4-3 as the first depth feature and Conv4-6 as the second depth feature.
3. The spatio-temporal channel constraint correlation filtering tracking method based on anomaly suppression according to claim 2, characterized in that in step S2, determining the position and scale of the target in the t-th frame image specifically comprises:
acquiring, from the first fusion feature X, 7 second fusion features with sequentially increasing scales; transforming the 7 second fusion features into the Fourier domain and convolving them there with the filter to obtain response maps; taking the position of the maximum response value among the response maps as the position of the target in the t-th frame image, and the scale corresponding to that maximum response value as the scale of the target in the t-th frame image.
4. The method according to claim 3, characterized in that before step S1 it is determined whether the t-th frame is the first frame of the video sequence; if not, step S1 is performed; if it is the first frame, the position of the target and the correlation filter are initialized. After initialization is completed, it is determined whether tracking of all frames has been completed; if so, the tracking result is output; otherwise, the process returns to step S1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011251969.8A CN112330716B (en) | 2020-11-11 | 2020-11-11 | Space-time channel constraint correlation filtering tracking method based on abnormal suppression |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011251969.8A CN112330716B (en) | 2020-11-11 | 2020-11-11 | Space-time channel constraint correlation filtering tracking method based on abnormal suppression |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112330716A CN112330716A (en) | 2021-02-05 |
CN112330716B true CN112330716B (en) | 2022-08-19 |
Family
ID=74318899
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011251969.8A Active CN112330716B (en) | 2020-11-11 | 2020-11-11 | Space-time channel constraint correlation filtering tracking method based on abnormal suppression |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112330716B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113128558B (en) * | 2021-03-11 | 2022-07-19 | 重庆邮电大学 | Target detection method based on shallow space feature fusion and adaptive channel screening |
CN113344973B (en) * | 2021-06-09 | 2023-11-24 | 南京信息工程大学 | Target tracking method based on space-time regularization and feature reliability evaluation |
CN113538340A (en) * | 2021-06-24 | 2021-10-22 | 武汉中科医疗科技工业技术研究院有限公司 | Target contour detection method and device, computer equipment and storage medium |
CN113470074B (en) * | 2021-07-09 | 2022-07-29 | 天津理工大学 | Self-adaptive space-time regularization target tracking method based on block discrimination |
CN113838093B (en) * | 2021-09-24 | 2024-03-19 | 重庆邮电大学 | Self-adaptive multi-feature fusion tracking method based on spatial regularization correlation filter |
CN115018906A (en) * | 2022-04-22 | 2022-09-06 | 国网浙江省电力有限公司 | Power grid power transformation overhaul operator tracking method based on combination of group feature selection and discrimination related filtering |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108280808A (en) * | 2017-12-15 | 2018-07-13 | 西安电子科技大学 | The method for tracking target of correlation filter is exported based on structuring |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108280808A (en) * | 2017-12-15 | 2018-07-13 | 西安电子科技大学 | The method for tracking target of correlation filter is exported based on structuring |
Non-Patent Citations (1)
Title |
---|
Learning Aberrance Repressed Correlation Filters for Real-Time UAV Tracking; Ziyuan Huang et al.; ICCV 2019; 2019-11-02; entire document *
Also Published As
Publication number | Publication date |
---|---|
CN112330716A (en) | 2021-02-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112330716B (en) | Space-time channel constraint correlation filtering tracking method based on abnormal suppression | |
CN111860162B (en) | Video crowd counting system and method | |
CN108549839B (en) | Adaptive feature fusion multi-scale correlation filtering visual tracking method | |
CN112560695A (en) | Underwater target tracking method, system, storage medium, equipment, terminal and application | |
CN107369166B (en) | Target tracking method and system based on multi-resolution neural network | |
US9111375B2 (en) | Evaluation of three-dimensional scenes using two-dimensional representations | |
CN109859241B (en) | Adaptive feature selection and time consistency robust correlation filtering visual tracking method | |
Huang et al. | Robust visual tracking via constrained multi-kernel correlation filters | |
CN110349190B (en) | Adaptive learning target tracking method, device, equipment and readable storage medium | |
CN109685037B (en) | Real-time action recognition method and device and electronic equipment | |
Lu et al. | Learning transform-aware attentive network for object tracking | |
CN109166139B (en) | Scale self-adaptive target tracking method combined with rapid background suppression | |
CN113344973B (en) | Target tracking method based on space-time regularization and feature reliability evaluation | |
CN110992401A (en) | Target tracking method and device, computer equipment and storage medium | |
EP1801731B1 (en) | Adaptive scene dependent filters in online learning environments | |
CN114359347A (en) | Space-time regularization self-adaptive correlation filtering target tracking algorithm based on sample reliability | |
Mocanu et al. | Single object tracking using offline trained deep regression networks | |
CN110706253B (en) | Target tracking method, system and device based on apparent feature and depth feature | |
Sun et al. | Adaptive kernel correlation filter tracking algorithm in complex scenes | |
Zeng et al. | Deep stereo matching with hysteresis attention and supervised cost volume construction | |
Zhang et al. | Learning target-aware background-suppressed correlation filters with dual regression for real-time UAV tracking | |
Huang et al. | SVTN: Siamese visual tracking networks with spatially constrained correlation filter and saliency prior context model | |
CN111161323B (en) | Complex scene target tracking method and system based on correlation filtering | |
CN110580712B (en) | Improved CFNet video target tracking method using motion information and time sequence information | |
CN115100740B (en) | Human motion recognition and intention understanding method, terminal equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||