CN112150511A - Target tracking algorithm based on combination of image matching and improved kernel correlation filter - Google Patents

Target tracking algorithm based on combination of image matching and improved kernel correlation filter

Info

Publication number
CN112150511A
Authority
CN
China
Prior art keywords
target
frame
image matching
correlation filter
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011201207.7A
Other languages
Chinese (zh)
Inventor
罗欣
李卓韬
吴禹萱
王枭
许文波
贾海涛
赫熙煦
张民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202011201207.7A priority Critical patent/CN112150511A/en
Publication of CN112150511A publication Critical patent/CN112150511A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning


Abstract

The invention provides a target tracking algorithm based on the combination of image matching and an improved kernel correlation filter. A video sequence to be tracked and a target picture are acquired, and an improved SIFT algorithm is used to match the first frame of the video sequence against the target picture so as to initialize the correlation filter; the correlation filter then performs target tracking on the video sequence. The invention can track even when the initial position of the target is unknown, widening the application range of target tracking, while improving the precision and robustness of the tracking algorithm and preserving speed to the greatest extent.

Description

Target tracking algorithm based on combination of image matching and improved kernel correlation filter
Technical Field
The invention relates to a target tracking algorithm.
Background
Target tracking is currently one of the most active research directions in computer vision and plays an important role in practical applications such as military systems and smart cities. With the development of artificial intelligence, target tracking algorithms have made great progress over the last decade. A major challenge nevertheless remains: in complex scenes with severe occlusion or violent motion, tracking accuracy is still affected to a certain extent, and no method fully meets this challenge yet. The mainstream tracking algorithms are based mainly on deep neural networks (CNNs) and correlation filtering; the former locate and track the target by extracting CNN features, possibly fused with hand-crafted features, and training a network. These algorithms achieve high tracking precision, but because the process is complex, requires a suitable data set and involves a large amount of training, they suffer in real-time performance and tracking speed.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a target tracking algorithm based on the combination of image matching and an improved kernel correlation filter. A SIFT algorithm performs image matching to initialize the kernel correlation filter, and an APCE confidence measure with a corresponding learning mechanism is introduced into the improved kernel correlation filter to improve its tracking performance in complex tracking scenes such as occlusion.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
step 1, acquiring a video sequence to be tracked and a target picture;
step 2, performing image matching between the first frame of the video sequence to be tracked and the target picture by using an improved SIFT algorithm so as to initialize the correlation filter;
step 3, performing target tracking on the video sequence to be tracked by using the correlation filter.
In step 1, a target picture satisfying a set similarity with the target to be tracked in the video sequence is selected. The set similarity requires that the PSNR between the target picture and the first frame of the video sequence to be tracked is greater than 30.
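As a sketch of this similarity check, the PSNR between the target picture and the corresponding region of the first frame can be computed as follows (a minimal sketch; the function names and the 8-bit dynamic range are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def psnr(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Peak signal-to-noise ratio (dB) between two same-sized 8-bit images."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)

def similar_enough(target_pic, frame_region, threshold=30.0):
    """Accept the target picture when PSNR exceeds the set threshold of 30."""
    return psnr(target_pic, frame_region) > threshold
```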
In step 2, image matching is performed between the target picture and the video sequence to be tracked using a SIFT operator to determine the position of the target in the first frame; this position information is passed to the kernel correlation filter to initialize the target position.
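The patent performs this localization with an improved SIFT operator, whose details are not reproduced here. As a stand-in, the sketch below localizes the target by exhaustive zero-mean normalized cross-correlation and returns the top-left corner used to initialize the kernel correlation filter (the function name and search strategy are illustrative assumptions):

```python
import numpy as np

def locate_target(frame: np.ndarray, template: np.ndarray):
    """Return (row, col) of the best match of `template` inside `frame`.

    Stand-in for the improved-SIFT matching of the first frame: exhaustive
    zero-mean normalized cross-correlation over all placements.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            w = frame[r:r + th, c:c + tw] - frame[r:r + th, c:c + tw].mean()
            denom = np.linalg.norm(w) * t_norm
            if denom == 0:
                continue  # flat window, correlation undefined
            score = float((w * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

A SIFT-based matcher would replace the inner loop with keypoint detection and descriptor matching, but the output (a first-frame position for the filter) is the same.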
The step 3 comprises the following steps:
(1) determining the position of the target in the next frame from the position and feature information of the target in the previous frame of the video sequence to be tracked, and calculating the APCE value of each frame from the response map generated by the algorithm:

APCE = |Fmax - Fmin|^2 / mean( sum_{w,h} (F_{w,h} - Fmin)^2 )

wherein Fmax is the maximum response of the current frame, Fmin is the minimum response of the current frame, and F_{w,h} is the response value at position (w, h) in the response map of the current frame;
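The APCE formula above can be transcribed directly (a minimal sketch; the function name is an assumption):

```python
import numpy as np

def apce(response: np.ndarray) -> float:
    """Average peak-to-correlation energy of a response map:
    |Fmax - Fmin|^2 / mean((F_{w,h} - Fmin)^2)."""
    f_min = response.min()
    f_max = response.max()
    return (f_max - f_min) ** 2 / np.mean((response - f_min) ** 2)
```

A sharp single peak over a flat background yields a high APCE; occlusion flattens the response map and drives the value down.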
(2) computing the average APCE value and the average response maximum from the first frame to the current frame, denoted mean_apce and mean_Fmax respectively; whether tracking of the target is lost is judged by whether the APCE value and the response maximum of the current frame simultaneously satisfy the set condition

(apce >= beta1 × mean_apce) && (Fmax >= beta2 × mean_Fmax)

wherein apce is the APCE value of the current frame, and beta1 and beta2 are set thresholds;
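The reliability condition can be written directly (beta1 = 0.5 and beta2 = 0.6 follow the thresholds suggested in the description; the function name is an assumption):

```python
def tracking_reliable(apce_val: float, f_max: float,
                      mean_apce: float, mean_fmax: float,
                      beta1: float = 0.5, beta2: float = 0.6) -> bool:
    """Both ratio tests must hold simultaneously for the frame to be trusted."""
    return apce_val >= beta1 * mean_apce and f_max >= beta2 * mean_fmax
```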
(3) for video frames satisfying the condition, updating the model with the update strategy of the original KCF algorithm:

a = (1 - β) × a_p + β × a_x
x = (1 - β) × x_p + β × x_x

wherein a denotes the ridge regression coefficient, x the feature information of the target, a_p the current ridge regression coefficient of the target, a_x the ridge regression coefficient to be updated from the current frame, x_p the current feature information of the target, x_x the feature information to be updated from the current frame, and β the learning factor, taken as 0.02 when the condition formula is satisfied;
for video frames that do not satisfy the condition, the learning factor β in the model update strategy is dynamically adjusted according to the APCE value (the piecewise correspondence is given as an image in the original publication; in the experiments below, β is halved under moderately reduced confidence and set to 0 under severe occlusion or loss), thereby suppressing model drift while retaining part of the useful feature information.
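The update step above can be sketched as follows. The interpolation form matches the original KCF update; the exact piecewise mapping from APCE to β is shown only as an image in the source, so the halving/zeroing branches and the cut-off ratio below are inferred from the experiments described later and should be read as assumptions:

```python
import numpy as np

def update_model(alpha_p, x_p, alpha_x, x_x,
                 reliable: bool, apce_ratio: float = 0.0, beta0: float = 0.02):
    """Linear-interpolation update of the ridge regression coefficients and
    the target template, with the learning factor gated by confidence."""
    if reliable:
        beta = beta0            # condition satisfied: normal KCF update
    elif apce_ratio > 0.25:     # moderately low confidence (assumed cut-off)
        beta = beta0 / 2.0      # keep part of the new information
    else:
        beta = 0.0              # severe occlusion or loss: freeze the model
    alpha = (1.0 - beta) * np.asarray(alpha_p) + beta * np.asarray(alpha_x)
    x = (1.0 - beta) * np.asarray(x_p) + beta * np.asarray(x_x)
    return alpha, x
```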
The set thresholds beta1 and beta2 are set to 0.5 and 0.6, respectively.
The invention has the following beneficial effects. The target position is initialized by image matching, removing the requirement of existing algorithms that the initial position of the target be marked manually; tracking is therefore possible even when the initial position of the target is unknown, widening the application range of target tracking. In addition, by introducing a confidence measure into the classical kernel correlation filtering algorithm and modifying the model update strategy, the method overcomes the trade-off in existing algorithms between high precision at low speed and high speed at low precision, improving the precision and robustness of the tracking algorithm while preserving speed to the greatest extent.
Drawings
FIG. 1 compares the tracking results of 4 algorithms on 4 sample videos. The first row shows the Box video sequence at frames 236, 489 and 783; the second row shows the Sylvester video sequence at frames 666, 1116 and 1345; the third row shows the Basketball video sequence at frames 155, 277 and 725; the fourth row shows the Kitesurf video sequence at frames 13, 48 and 80. The bounding box drawn with a thickened line is the proposed algorithm; the other boxes are the three comparison algorithms.
FIG. 2 is a flow chart of the present invention.
Detailed Description
The present invention will be further described below with reference to the drawings and examples; the invention includes, but is not limited to, the following examples.
The invention combines the improved kernel correlation filter with image matching to obtain a target tracking algorithm with higher precision and wider application. Because the KCF kernel correlation filtering algorithm has high running speed, the method improves the KCF kernel correlation filtering algorithm, introduces confidence coefficient and a corresponding tracking strategy adjusting mechanism, improves tracking precision and keeps tracking speed. In addition, the method of adding image matching as filter initialization widens the application field of target tracking, thereby overcoming the defects of the traditional target tracking algorithm.
The present invention uses video sequences from the OTB-2015 data set to test tracking results. The data set comprises 100 annotated video sequences (26 grayscale and 74 color), 58,897 frames in total, covering both long-term and short-term tracking. The videos involve a variety of tracking scenarios such as illumination change, scale change and occlusion.
1) Obtaining input picture and finishing initialization of kernel correlation filter
And carrying out SIFT image matching on the target picture meeting the conditions and the first frame of the video so as to complete the initialization of the kernel correlation filter.
2) Complete target tracking, evaluate the results, and compare with other algorithms; the proposed algorithm is called KCFAPCE.
3) Algorithmic analysis
The evaluation indexes adopted herein are as follows:
(1) precision graph: the tracking algorithm estimates the center point of the target position (bounding box) and the center point of the artificially labeled (ground-route) target, and the distance between the center point and the center point is less than the percentage of video frames of a given threshold value. Different thresholds, resulting in different percentages, may result in a curve.
(2) Success rate: an overlap score (OS) is defined. Let the target position (bounding box) produced by the tracking algorithm be a and the actual (ground-truth) position of the target be b; then OS = |a ∩ b| / |a ∪ b|, where |·| denotes the number of pixels in the region. A frame whose OS exceeds a set threshold is regarded as a success, and the success rate is the percentage of successful frames among all frames. Since OS ranges from 0 to 1, a curve can be drawn.
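The overlap score and success rate can be sketched as follows for axis-aligned boxes given as (x, y, w, h) (a minimal sketch; the box convention and function names are assumptions):

```python
def overlap_score(box_a, box_b):
    """OS = |a ∩ b| / |a ∪ b| for axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def success_rate(pred_boxes, gt_boxes, threshold=0.5):
    """Fraction of frames whose overlap score exceeds the threshold."""
    hits = sum(overlap_score(p, g) > threshold
               for p, g in zip(pred_boxes, gt_boxes))
    return hits / len(gt_boxes)
```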
(3) Robustness evaluation (SRE): the initialization (bounding box) is perturbed spatially and the tracker is then evaluated. SRE (Spatial Robustness Evaluation) slightly translates the bounding box obtained by image matching in the first frame and varies its scale, to evaluate whether the tracking algorithm is sensitive to initialization. The translation is 10% of the target size, and the scale ranges from 80% to 120% of the ground-truth in steps of 10%. The average of these results is taken as the SRE score.
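The SRE perturbations described above can be generated as follows (a sketch; the eight shift directions and the exact set of scale factors are assumptions consistent with the 10% shift and 80%-120% scale range stated in the text):

```python
def sre_initializations(box, shift_frac=0.1, scales=(0.8, 0.9, 1.0, 1.1, 1.2)):
    """Perturbed initial boxes for spatial robustness evaluation:
    8-direction shifts of 10% of the target size plus 80%-120% rescalings."""
    x, y, w, h = box
    dx, dy = shift_frac * w, shift_frac * h
    shifted = [(x + sx * dx, y + sy * dy, w, h)
               for sx in (-1, 0, 1) for sy in (-1, 0, 1)
               if (sx, sy) != (0, 0)]
    scaled = [(x, y, w * s, h * s) for s in scales if s != 1.0]
    return shifted + scaled
```

Each perturbed box seeds one tracking run; the SRE score is the average performance over all runs.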
Four tracking algorithms were tested on the whole OTB-2015 data set; the resulting running speed, precision and robustness are shown in the following table:
TABLE 1 run frame rate, accuracy and robustness (SRE) comparisons of different algorithms on OTB-2015 data
[Table 1 is provided as an image in the original publication.]
As can be seen from Table 1, the precision of the proposed algorithm is higher than that of the other three algorithms. Its running speed is somewhat lower than that of the original KCF algorithm, because every frame is checked for tracking quality and the learning factor of the correlation filter is dynamically adjusted; in exchange, the mechanism for judging whether the target is lost improves tracking robustness. Overall, the algorithm holds an advantage in every respect compared with the other algorithms.
(1) The Box video sequence mainly tests tracking performance under relatively severe occlusion. At frame 489 the target is severely occluded; the APCE value is 46.5790 and the Fmax value is 0.4143, so formulas (2) and (4) set the learning factor β to 0 and updating of the kernel correlation filter is stopped for the current frame, reducing the accumulation of erroneous information; real-time updating resumes once the target is again tracked well. By frame 783 the original KCF algorithm and DSST have drifted and lost the target owing to accumulated errors and erroneous template updates, whereas the proposed algorithm still tracks well. The SAMF algorithm also tracks well because it combines CN (color name) and HOG features, enriching the feature information used for detection, but its running rate is far lower, about 1/5 of the frame rate of the proposed algorithm; the proposed algorithm therefore has the advantage in both precision and speed.
(2) The Sylvester video sequence mainly tests tracking performance under interference and violent shaking of the target. The target suffers strong background interference, and at frame 1345 its orientation flips sharply; the APCE value is 62.5684 and the Fmax value is 0.5304, so formulas (2) and (4) set the learning factor to β/2, retaining part of the target information of the current frame. This reduces the accumulated model error while still updating the kernel correlation filter to a degree, which benefits subsequent tracking. The other three algorithms lack a corresponding mechanism for poor tracking conditions, and only the proposed algorithm maintains a good tracking result in the subsequent frames.
(3) The Basketball video sequence mainly tests tracking performance under changing shooting angles and target deformation. Because the target suffers little interference throughout the sequence, all four algorithms track well, but only the proposed algorithm and the KCF algorithm maintain a high tracking speed.
(4) The Kitesurf video sequence mainly tests tracking performance under target rotation and rapid motion. The target moves rapidly throughout, with particularly severe rotation and angle change at frame 48; the APCE value is 5.8333 and the Fmax value is 0.1776. Substituting into judgment formula (2) shows that confidence drops sharply at this moment and the target may be lost; formula (4) then sets the learning factor β to 0, so the positional feature information of this frame is not stored, and the kernel correlation filter is not updated until the tracking condition again satisfies the judgment formula. Although the tracking box of the proposed algorithm shows some offset, the known feature information of the target allows it to be quickly re-acquired and tracked subsequently, giving a better tracking result than the other algorithms.
In summary, a mechanism that initializes the KCF filter by image matching is added on top of the kernel correlation filter, removing the step of manually calibrating the target and giving target tracking wider applicability. The KCF kernel correlation filter itself is improved: APCE confidence is evaluated on the response map of the extracted HOG features, and the learning factor is dynamically adjusted according to the confidence, improving robustness and accuracy. Tests on the OTB-2015 data set show a measurable improvement over the traditional algorithms.
The experimental process of the invention is shown in figure 2 and comprises the following steps:
step 1: acquiring a video sequence to be tracked and a picture of a target;
step 2: performing image matching between the first frame of the video sequence to be tracked and the target picture using an improved SIFT algorithm so as to initialize the correlation filter;
step 3: performing target tracking on the video sequence;
further, the specific method of step 1 is as follows: a picture with a certain similarity to the target in the video sequence is selected as input, where the similarity index suggests that the PSNR is larger than 30.
Further, the specific method in step 2 is as follows:
(1) and performing image matching on the input picture and the video sequence by using a SIFT operator to determine the position of the target in the first frame and transmitting the position information into the kernel correlation filter for initialization.
Further, the specific method of step 3 is as follows:
(1) Determine the position of the target in the next frame from the position and feature information of the target in the previous frame (extracted automatically by the correlation filter), and calculate the APCE value of each frame from the response map generated by the algorithm. The APCE confidence is computed as:

APCE = |Fmax - Fmin|^2 / mean( sum_{w,h} (F_{w,h} - Fmin)^2 )    (1)

where Fmax is the maximum value in the response map of the current frame, Fmin is the minimum value, and F_{w,h} is the response value at position (w, h) in the response map of the current frame.
(2) At the same time, compute the average APCE value from the first frame to the current frame, denoted mean_apce, and the average response maximum, denoted mean_Fmax.
Whether tracking of the target is lost is judged by whether the APCE value and the response maximum of the current frame simultaneously satisfy the set condition:

(apce >= beta1 × mean_apce) && (Fmax >= beta2 × mean_Fmax)    (2)

where apce is the APCE value of the current frame, and beta1 and beta2 are thresholds suggested to be set to 0.5 and 0.6, respectively.
(3) Update the model of video frames satisfying the condition with the update strategy of the original KCF algorithm:

a = (1 - β) × a_p + β × a_x
x = (1 - β) × x_p + β × x_x    (3)

where a denotes the ridge regression coefficient, x the feature information of the template, a_p the current ridge regression coefficient of the model, a_x the ridge regression coefficient to be updated from the current frame, x_p the current feature information of the model, x_x the feature information to be updated from the current frame, and β the learning factor, taken as 0.02 when formula (2) is satisfied.
(4) For video frames that do not satisfy the condition, dynamically adjust the learning factor β in the model update strategy according to the APCE value (the piecewise correspondence, formula (4), is given as an image in the original publication; in the experiments above, β is halved under moderately reduced confidence and set to 0 under severe occlusion or loss), thereby suppressing model drift while retaining part of the useful feature information.

Claims (6)

1. A target tracking algorithm based on image matching in combination with an improved kernel correlation filter, comprising the steps of:
step 1, acquiring a video sequence to be tracked and a target picture;
step 2, performing image matching between the first frame of the video sequence to be tracked and the target picture by using an improved SIFT algorithm so as to initialize the correlation filter;
and 3, carrying out target tracking on the video sequence to be tracked by utilizing the correlation filter.
2. The target tracking algorithm based on image matching combined with an improved kernel correlation filter as claimed in claim 1, wherein said step 1 selects a target picture satisfying a set similarity with a target to be tracked in the video sequence.
3. The image matching and improved kernel correlation filter based target tracking algorithm of claim 2, wherein the set similarity means that the PSNR between the target picture and the first frame of the video sequence to be tracked is greater than 30.
4. The target tracking algorithm based on image matching combined with the improved kernel correlation filter as claimed in claim 1, wherein the step 2 uses a SIFT operator to perform image matching between the target picture and the video sequence to be tracked, determine the position of the target in the first frame, and transmit the position information into the kernel correlation filter to initialize the target position.
5. The image matching based target tracking algorithm in combination with the improved kernel correlation filter as claimed in claim 1, wherein said step 3 comprises the steps of:
(1) determining the position of the target in the next frame by using the position and the feature information of the target in the previous frame of the video sequence to be tracked, and calculating the APCE value of each frame according to a response map generated by the algorithm:

APCE = |Fmax - Fmin|^2 / mean( sum_{w,h} (F_{w,h} - Fmin)^2 )

wherein Fmax is the maximum response of the current frame, Fmin is the minimum response of the current frame, and F_{w,h} is the response value at position (w, h) in the response map of the current frame;
(2) calculating the average APCE value and the average response maximum from the first frame to the current frame, denoted mean_apce and mean_Fmax respectively; and judging whether tracking of the target is lost by judging whether the APCE value and the response maximum of the current frame simultaneously satisfy the set condition

(apce >= beta1 × mean_apce) && (Fmax >= beta2 × mean_Fmax)

wherein apce is the APCE value of the current frame, and beta1 and beta2 are set thresholds;
(3) updating the model of video frames satisfying the condition according to the update strategy of the original KCF algorithm:

a = (1 - β) × a_p + β × a_x
x = (1 - β) × x_p + β × x_x

wherein a denotes the ridge regression coefficient, x the feature information of the target, a_p the current ridge regression coefficient of the target, a_x the ridge regression coefficient to be updated from the current frame, x_p the current feature information of the target, x_x the feature information to be updated from the current frame, and β the learning factor, taken as 0.02 when the condition formula is satisfied;
for the video frames that do not satisfy the condition, the learning factor β in the model update strategy is dynamically adjusted according to the APCE value (the correspondence is given as an image in the original publication), thereby suppressing model drift and retaining part of the useful feature information.
6. The image matching and kernel correlation filter based target tracking algorithm of claim 5 wherein the set thresholds beta1 and beta2 are set to 0.5 and 0.6, respectively.
CN202011201207.7A 2020-11-02 2020-11-02 Target tracking algorithm based on combination of image matching and improved kernel correlation filter Pending CN112150511A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011201207.7A CN112150511A (en) 2020-11-02 2020-11-02 Target tracking algorithm based on combination of image matching and improved kernel correlation filter

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011201207.7A CN112150511A (en) 2020-11-02 2020-11-02 Target tracking algorithm based on combination of image matching and improved kernel correlation filter

Publications (1)

Publication Number Publication Date
CN112150511A true CN112150511A (en) 2020-12-29

Family

ID=73955149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011201207.7A Pending CN112150511A (en) 2020-11-02 2020-11-02 Target tracking algorithm based on combination of image matching and improved kernel correlation filter

Country Status (1)

Country Link
CN (1) CN112150511A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115222776A (en) * 2022-09-19 2022-10-21 中国人民解放军国防科技大学 Matching auxiliary visual target tracking method and device, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2858008A2 (en) * 2013-09-27 2015-04-08 Ricoh Company, Ltd. Target detecting method and system
CN106338733A (en) * 2016-09-09 2017-01-18 河海大学常州校区 Forward-looking sonar object tracking method based on frog-eye visual characteristic
CN106559605A (en) * 2016-11-17 2017-04-05 天津大学 Digital video digital image stabilization method based on improved block matching algorithm
CN107368802A (en) * 2017-07-14 2017-11-21 北京理工大学 Motion target tracking method based on KCF and human brain memory mechanism
CN107657630A (en) * 2017-07-21 2018-02-02 南京邮电大学 A kind of modified anti-shelter target tracking based on KCF
CN108846855A (en) * 2018-05-24 2018-11-20 北京飞搜科技有限公司 Method for tracking target and equipment
CN109360224A (en) * 2018-09-29 2019-02-19 吉林大学 A kind of anti-shelter target tracking merging KCF and particle filter
CN110035329A (en) * 2018-01-11 2019-07-19 腾讯科技(北京)有限公司 Image processing method, device and storage medium
CN110796676A (en) * 2019-10-10 2020-02-14 太原理工大学 Target tracking method combining high-confidence updating strategy with SVM (support vector machine) re-detection technology

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2858008A2 (en) * 2013-09-27 2015-04-08 Ricoh Company, Ltd. Target detecting method and system
CN106338733A (en) * 2016-09-09 2017-01-18 河海大学常州校区 Forward-looking sonar object tracking method based on frog-eye visual characteristic
CN106559605A (en) * 2016-11-17 2017-04-05 天津大学 Digital video digital image stabilization method based on improved block matching algorithm
CN107368802A (en) * 2017-07-14 2017-11-21 北京理工大学 Motion target tracking method based on KCF and human brain memory mechanism
CN107657630A (en) * 2017-07-21 2018-02-02 南京邮电大学 A kind of modified anti-shelter target tracking based on KCF
CN110035329A (en) * 2018-01-11 2019-07-19 腾讯科技(北京)有限公司 Image processing method, device and storage medium
CN108846855A (en) * 2018-05-24 2018-11-20 北京飞搜科技有限公司 Method for tracking target and equipment
CN109360224A (en) * 2018-09-29 2019-02-19 吉林大学 A kind of anti-shelter target tracking merging KCF and particle filter
CN110796676A (en) * 2019-10-10 2020-02-14 太原理工大学 Target tracking method combining high-confidence updating strategy with SVM (support vector machine) re-detection technology

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BING ZHONG et al., "Image Feature Point Matching Based on Improved SIFT Algorithm", 2019 IEEE 4th International Conference on Image, Vision and Computing (ICIVC) *
WANG YANGPING et al., "Augmented Reality Tracking Registration Based on Improved KCF Tracking and ORB Feature Detection", 2019 7th International Conference on Information, Communication and Networks (ICICN) *
SUN JIAN et al., "Improved kernel correlation filter tracking algorithm", Computer Engineering and Applications *
LI KAIFENG, "Research on KCF-based target tracking algorithm and embedded system implementation", China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115222776A (en) * 2022-09-19 2022-10-21 中国人民解放军国防科技大学 Matching auxiliary visual target tracking method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US10885372B2 (en) Image recognition apparatus, learning apparatus, image recognition method, learning method, and storage medium
CN110796678B (en) Underwater multi-target tracking method based on IoU
CN110473231B (en) Target tracking method of twin full convolution network with prejudging type learning updating strategy
CN109598196B (en) Multi-form multi-pose face sequence feature point positioning method
CN110490907B (en) Moving target tracking method based on multi-target feature and improved correlation filter
CN110009060B (en) Robustness long-term tracking method based on correlation filtering and target detection
CN109685045B (en) Moving target video tracking method and system
CN112085765B (en) Video target tracking method combining particle filtering and metric learning
CN107609571B (en) Adaptive target tracking method based on LARK features
CN107563323A (en) A kind of video human face characteristic point positioning method
CN112862860B (en) Object perception image fusion method for multi-mode target tracking
CN111429485A (en) Cross-modal filtering tracking method based on self-adaptive regularization and high-reliability updating
Kuai et al. Masked and dynamic Siamese network for robust visual tracking
Noman et al. Avist: A benchmark for visual object tracking in adverse visibility
CN106250878B (en) Multi-modal target tracking method combining visible light and infrared images
Li et al. Robust visual tracking with occlusion judgment and re-detection
CN112150511A (en) Target tracking algorithm based on combination of image matching and improved kernel correlation filter
CN114973071A (en) Unsupervised video target segmentation method and system based on long-term and short-term time sequence characteristics
CN113888603A (en) Loop detection and visual SLAM method based on optical flow tracking and feature matching
CN108280845B (en) Scale self-adaptive target tracking method for complex background
JP6600288B2 (en) Integrated apparatus and program
CN113836980A (en) Face recognition method, electronic device and storage medium
CN109242885B (en) Correlation filtering video tracking method based on space-time non-local regularization
CN113763432B (en) Target detection tracking method based on image definition and tracking stability conditions
Rodríguez et al. SD-DefSLAM: semi-direct monocular SLAM for deformable and intracorporeal scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201229