CN112164093A - Automatic person tracking method based on edge features and correlation filtering

Automatic person tracking method based on edge features and correlation filtering

Info

Publication number
CN112164093A
CN112164093A (application CN202010880447.8A)
Authority
CN
China
Prior art keywords
frame
target
features
updating
feature
Prior art date
Legal status
Pending
Application number
CN202010880447.8A
Other languages
Chinese (zh)
Inventor
刘成菊 (Liu Chengju)
王乃佳 (Wang Naijia)
陈启军 (Chen Qijun)
Current Assignee
Tongji University
Original Assignee
Tongji University
Priority date
Filing date
Publication date
Application filed by Tongji University
Priority to CN202010880447.8A
Publication of CN112164093A
Current legal status: Pending

Classifications

    • G06T7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • G06F18/2411 Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06T5/70 Denoising; Smoothing
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT], using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
    • G06V10/56 Extraction of image or video features relating to colour
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20081 Training; Learning
    • G06T2207/30196 Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an automatic person tracking method based on edge features and correlation filtering, comprising the following steps: 1) acquiring the current input frame and extracting a training sample centered on the current target position; 2) extracting edge features and color features from the training sample to obtain the corresponding feature maps, and judging by a set rule whether the filter template needs updating; if so, executing step 3), otherwise executing step 4); 3) iteratively updating the filter template using the feature maps of the training sample; 4) correlating the feature maps of the edge features and color features with the filter template and predicting the target position in the next frame; 5) performing scale prediction at the target position with a scale filter to obtain the target box for the next frame, and returning to step 2).

Description

Automatic person tracking method based on edge features and correlation filtering
Technical Field
The invention relates to the field of machine vision, in particular to an automatic person tracking method based on edge features and correlation filtering.
Background
Automatic person tracking based on computer vision obtains the image coordinates of a person target from visual images and is the foundation of person-following and localization functions; a lightweight, fast, and accurate automatic person tracking system is of great significance for promoting the adoption of service robots. The technology also has broad application prospects in fields such as autonomous driving, smart cities, and intelligent surveillance.
Automatic tracking integrates detection and tracking, which belong to two independent research fields. Existing automatic tracking methods fall mainly into two classes:
the first method is a method for matching by data association: the method utilizes data association to realize optimal matching between a tracking prediction frame and an actually detected observation frame; the existing methods have the following problems: the method is based on detection, and tracking target switching is easy to occur when a target is shielded or interfered; target feature extraction is carried out on each frame to serve as a matching standard, the operation burden is large, and the requirement on instantaneity is difficult to meet.
The second class runs detection and tracking in tandem: a detector is used to initialize and correct tracking, the detector produces the corresponding coordinate positions, classes, and confidences, the tracker is initialized from the detection result, and in subsequent frames the positions given by the tracker are taken as the effective result. In such schemes, the performance of the detector determines the reliability of the tracked target, and the performance of the tracker determines the accuracy of the result.
Detectors mostly use machine learning to build a person-detection classifier that distinguishes the person target from the background. The problems to overcome include: similarity between the target and the background or among multiple targets, lighting changes and deformation while the target moves, and target occlusion.
The tracker solves a single-target tracking problem and can be divided, according to the target model, into:
1) generative model methods, which track by modeling the target; background information is not fully utilized and the methods rely excessively on appearance features, so appearance changes of the target strongly affect the result, and the whole picture must be processed, giving poor real-time performance;
2) discriminative model methods, which consider the target model and the background information jointly; those based on hand-crafted features lack robustness to background interference and occlusion and therefore suffer target drift, while those based on deep learning extract target features with deep convolutional networks, which mitigates target drift to some extent but gives poor real-time performance.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and to provide an automatic person tracking method based on edge features and correlation filtering that is both real-time and accurate.
The purpose of the invention is achieved by the following technical solution:
a person automatic tracking method based on edge features and relevant filtering comprises the following steps:
1) acquiring a current input frame, and taking a current target position as a center to extract a training sample;
2) extracting edge features and color features of the training samples to obtain corresponding feature maps, judging whether the filter template needs to be updated according to a set rule, if so, executing the step 3), and otherwise, executing the step 4);
3) iteratively updating the filter template by using the feature diagram of the training sample;
4) respectively carrying out correlation operation on feature graphs corresponding to the edge features and the color features and a filter template, and predicting the target position of the next frame;
5) and (5) carrying out scale prediction on the target position through a scale filter to obtain a target frame of the next frame, and returning to execute the step 2).
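The loop in steps 1) to 5) can be summarized by the following minimal sketch. Every callable here (detect, extract_sample, extract_features, and so on) is a hypothetical placeholder for the components described above, not an interface defined by the patent:

```python
def track(frames, detect, extract_sample, extract_features,
          needs_update, update_filters, predict_position, predict_scale):
    """Structural sketch of steps 1)-5); all callables are injected placeholders."""
    box = None
    for i, frame in enumerate(frames):
        if box is None:
            box = detect(frame)                  # detector supplies the initial target box
            continue
        sample = extract_sample(frame, box)      # 1) sample centered on the current target
        feats = extract_features(sample)         # 2) fHOG edge + color feature maps
        if needs_update(i):                      # set rule: initial frame, or interval + threshold
            update_filters(feats)                # 3) iterative filter-template update
        pos = predict_position(feats)            # 4) correlation and weighted response fusion
        box = predict_scale(frame, pos)          # 5) scale filter yields next frame's target box
        yield box
```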
Further, in step 1), when the input image is the initial frame, the target position is the initial target box detected by the detector; when the input image is a subsequent frame, the target position is the target box obtained in step 5).
Further, the detector obtains the initial target box by the following steps (sketched in code after the list):
11) acquiring the first color frame;
12) converting it to grayscale and extracting edge features;
13) feeding the edge features to a classifier to judge whether a person target is present in the input image; if so, executing step 14), otherwise acquiring the next color frame and returning to step 12);
14) the classifier outputting all person targets in the image;
15) traversing all detection boxes obtained in step 14), selecting the person closest to the picture center, and taking the rectangular region containing that person as the initial target box.
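A minimal sketch of steps 11) to 15), using OpenCV's stock HOG plus linear-SVM pedestrian detector as a stand-in for the patent's fHOG plus SVM classifier (the substitution is ours; the patent trains its own classifier):

```python
import cv2

def detect_initial_target(bgr_frame):
    """Detect persons and return the box nearest the picture center, or None."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)          # 12) grayscale
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _ = hog.detectMultiScale(gray, winStride=(8, 8))     # 13)-14) all person targets
    if len(boxes) == 0:
        return None                            # no person: the caller tries the next frame
    cx, cy = bgr_frame.shape[1] / 2, bgr_frame.shape[0] / 2
    def center_dist(b):                        # 15) pick the box nearest the picture center
        x, y, w, h = b
        return (x + w / 2 - cx) ** 2 + (y + h / 2 - cy) ** 2
    return min(boxes, key=center_dist)
```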
Further, extracting the training sample specifically comprises:
generating the sample distribution from the component model of a Gaussian mixture model, extracting in each frame a training sample centered on the current target position, initializing a new model component in the distribution, and setting weights to control the influence of each sample; the training sample is 5 × 5 times the size of the target box, as in the cropping sketch below.
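A sketch of the sample cropping only; the Gaussian-mixture sample-space management is omitted, and the edge-replication padding at image borders is our assumption (the patent does not specify a padding scheme):

```python
import numpy as np

def extract_training_sample(frame, box, scale=5):
    """Crop a region `scale` times the target box in each dimension, centered on the target."""
    x, y, w, h = box
    cx, cy = x + w // 2, y + h // 2
    sw, sh = int(w * scale), int(h * scale)
    x0, y0 = cx - sw // 2, cy - sh // 2
    # replicate edge pixels so the crop never leaves the image
    pad_x = max(0, -x0, x0 + sw - frame.shape[1])
    pad_y = max(0, -y0, y0 + sh - frame.shape[0])
    padded = np.pad(frame, ((pad_y, pad_y), (pad_x, pad_x), (0, 0)), mode="edge")
    return padded[y0 + pad_y: y0 + pad_y + sh, x0 + pad_x: x0 + pad_x + sw]
```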
Further, step 2) specifically comprises:
21) extracting edge features and color features of different resolutions from the training sample in the discrete domain, obtaining the number of channels and the overall dimensions of the discrete feature maps;
22) converting the discrete feature maps of the edge features and color features into a continuous spatial domain using a continuous convolution operator (see the interpolation formula after this list);
23) judging by the set rule whether the filter template needs updating; if so, executing step 3), otherwise executing step 4).
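The patent does not write out the continuous convolution operator of step 22). The formulation below is the interpolation model used by the C-COT/ECO family of continuous-domain correlation filters, which matches the description and is offered here as an assumption:

```latex
J_d\{x_d\}(t) \;=\; \sum_{n=0}^{N_d-1} x_d[n]\, b_d\!\left(t - \frac{T}{N_d}\,n\right),
\qquad t \in [0, T)
```

where x_d is feature channel d with N_d spatial samples, b_d is an interpolation kernel (a cubic spline in ECO), and T is the period of the continuous domain. Channels of different resolutions N_d are thereby lifted to one shared continuous domain, on which the correlation of step 42) below is evaluated.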
Further, step 4) specifically comprises (a simplified sketch follows the list):
41) interpolating the edge features and color features in the continuous domain;
42) correlating the interpolated edge features and color features with the filter template in the continuous domain to obtain continuous feature response maps;
43) weighting and summing the feature response maps;
44) obtaining the maximum response position by a grid search over the maxima followed by Newton iteration;
45) taking the maximum response position as the tracker's predicted target position for the next frame.
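A simplified sketch of steps 42) to 44) on a common discrete grid: per-channel FFT correlation, weighted summation, a grid search for the maximum, then one quadratic-fit refinement step standing in for the Newton iteration. Fixed weights and the shared grid are simplifications of the patent's continuous-domain formulation:

```python
import numpy as np

def fused_peak(feature_maps, filters, weights):
    """Return the sub-pixel (row, col) of the weighted-sum correlation response."""
    response = np.zeros(feature_maps[0].shape, dtype=float)
    for fmap, filt, w in zip(feature_maps, filters, weights):
        # circular correlation via the FFT: IFFT(FFT(x) * conj(FFT(h)))
        response += w * np.real(np.fft.ifft2(np.fft.fft2(fmap) * np.conj(np.fft.fft2(filt))))
    r, c = np.unravel_index(np.argmax(response), response.shape)   # grid search
    def quad_offset(v_minus, v_center, v_plus):                    # 1-D quadratic peak fit
        denom = v_minus - 2 * v_center + v_plus
        return 0.0 if denom == 0 else 0.5 * (v_minus - v_plus) / denom
    rows, cols = response.shape
    dr = quad_offset(response[r - 1, c], response[r, c], response[(r + 1) % rows, c])
    dc = quad_offset(response[r, c - 1], response[r, c], response[r, (c + 1) % cols])
    return r + dr, c + dc
```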
Further, in step 5), the position of the response maximum is taken as the predicted center position, a multi-scale search strategy is used for scale prediction to obtain the position and scale information of the target box, and the target box of the next frame is obtained with the fDSST adaptive scale estimation algorithm.
Further, the set rule is specifically:
if the current input frame is the initial frame, the filter template is judged to need updating;
if the set frame interval for updating has been reached and the threshold-supervised update mechanism is satisfied, the filter template is judged to need updating.
Further, the target box obtained in step 5) is used for model updating only after passing a response-peak distribution check, which takes the standard deviation as the measure of dispersion of the response peaks; model updating comprises updating the filter model, the scale filter, and the sample space.
The threshold-supervised update mechanism is specifically:
if the standard deviation in the response-peak distribution check is larger than the set threshold, the target box satisfies the threshold-supervised update mechanism.
Further, the edge feature is an fHOG feature, and the color feature is a multi-channel color feature.
Compared with the prior art, the invention has the following advantages:
1) The method applies a trained detector to the input picture to find person targets, then takes the person at the picture center as the initial target and tracks it. Combining the detector and the tracker realizes automatic detection and tracking of person targets; contour features of the person are extracted in the process, giving high accuracy at the detection stage, after which the tracking algorithm is applied, saving computing resources and ensuring the validity of the initial target;
2) Using a correlation-filtering-based tracking algorithm, the method fuses edge features and multi-channel color features in a continuous domain: color features robust to motion are combined with edge features robust to lighting, the responses of the feature layers are summed with weights, and the maximum is taken as the predicted position. The fused features have strong expressive power, cope with motion deformation and lighting change, and improve the accuracy of target tracking;
3) When the filter template is updated, a threshold check on the response-map distribution verifies the reliability of the current tracking result, and the update is performed only when the check passes. This additional verification step alleviates tracking-box drift and target loss and improves the robustness of the method; updating at frame intervals rather than every frame reduces the update frequency and thus raises the tracking speed;
4) The invention tracks persons automatically, localizes them accurately under occlusion and shape change, is computationally light, and is suitable for deployment on hardware platforms with limited computing power.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a flow chart of the processing of the first frame and of subsequent frames;
FIG. 3 is a schematic diagram of the person detection process;
FIG. 4 is a schematic diagram of the person tracking process;
FIG. 5 shows the person tracking results of the invention: FIG. 5a the initial frame, FIG. 5b frame 337, FIG. 5c frame 532, FIG. 5d frame 694, FIG. 5e frame 910, and FIG. 5f frame 1012.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
Embodiment
As shown in Figs. 1 to 4, the invention provides a fast automatic person tracking method based on edge features and correlation filtering, which comprises the following steps:
and S1, acquiring the initially input color image, extracting edge features after graying, and inputting the edge features into a classifier to obtain a character target candidate frame.
The classifier is a support vector machine; a linear SVM is used in this embodiment and trained on the INRIA dataset. Training comprises dataset cropping and preprocessing, edge-feature extraction, model training, and hard-negative optimization, finally yielding the classifier model; a sketch of such a pipeline follows.
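A sketch of the training pipeline under stated assumptions: skimage's plain HOG stands in for fHOG, all crops are assumed to be equal-size grayscale windows (e.g. the 64 × 128 INRIA convention), and a single hard-negative mining round represents the hard-case optimization:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def train_person_classifier(pos_imgs, neg_imgs):
    """Train a linear SVM on HOG features with one hard-negative mining pass.
    All images are assumed to be equal-size grayscale crops (e.g. 64x128)."""
    def feats(imgs):
        return np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                             cells_per_block=(2, 2)) for im in imgs])
    X = np.vstack([feats(pos_imgs), feats(neg_imgs)])
    y = np.array([1] * len(pos_imgs) + [0] * len(neg_imgs))
    clf = LinearSVC(C=0.01).fit(X, y)
    # hard negatives: background crops the first model wrongly scores as person
    hard = [im for im in neg_imgs if clf.decision_function(feats([im]))[0] > 0]
    if hard:
        X = np.vstack([X, feats(hard)])
        y = np.concatenate([y, np.zeros(len(hard), dtype=int)])
        clf = LinearSVC(C=0.01).fit(X, y)
    return clf
```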
S2: from the candidate person boxes detected in step S1, the tracker selects the box at the picture center as the initial tracking target.
Specifically, from the results of sliding-window detection over the image, the detected person closest to the picture center is taken as the target object. During online detection, from system start-up, if no person appears in the picture, the detection process runs until a target appears. The training and detection flow of the classifier is shown in Fig. 2.
S3: extract a training sample centered on the current target position in the input image, its length and width being 5 × 5 times the size of the target box; compute the edge features and multi-channel color features of the training sample; and on the initial frame, and whenever the filter template needs updating, train the corresponding filter templates with the feature maps.
This specifically comprises the following steps:
When the input image is the initial frame, the target position is the target box detected in step S2; when the input image is a subsequent frame, the target position is the target box predicted in step S5.
Centered on the target position, a training sample covering 5 × 5 times the area of the original target box is extracted. The sample distribution is generated from the component model of a Gaussian mixture model (GMM); multiple components enhance the diversity of the sample space, each new sample initializes a model component, and weights are set to control the influence of each sample.
Edge features and CN color features of different resolutions are extracted from the training sample in the discrete domain, yielding the number and overall dimensions of the discrete feature maps; the features are then continuously interpolated, converting the feature maps into a continuous spatial domain. The edge features are fHOG features, an improvement of HOG suited to the cyclic-shift processing of correlation filtering.
When the input image is the initial frame or the filter template needs updating, the coefficients of the correlation filters in the continuous domain are trained with the multi-channel feature maps of all historical samples in the sample space, giving one filter template per channel and thereby initializing or updating the filter template. The training is optimized with the conjugate gradient method, initialized from the target position obtained in the previous frame: 100 iterations are run on the initial frame and 5 iterations on subsequent frames. A much-simplified update sketch follows.
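The conjugate-gradient training itself is involved; as an illustration only, the following replaces it with a closed-form single-channel ridge-regression update in the Fourier domain (MOSSE-style), blended into the previous template. This is a deliberate simplification, not the patent's optimizer:

```python
import numpy as np

def update_filter_template(H_prev, sample, desired_response, lam=1e-2, lr=0.02):
    """One simplified template update: ridge regression in the Fourier domain,
    linearly blended into the previous template with learning rate `lr`.
    `desired_response` is a Gaussian-shaped label peaked on the target."""
    X = np.fft.fft2(sample)
    Y = np.fft.fft2(desired_response)
    H_new = (Y * np.conj(X)) / (X * np.conj(X) + lam)   # closed-form per-frequency solution
    return H_new if H_prev is None else (1 - lr) * H_prev + lr * H_new
```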
S4: correlate the feature maps of the edge features and color features with the filter template to obtain the feature response maps, sum them with weights, and select the position of the maximum response as the predicted target center position.
Specifically: after the feature response maps are summed with weights, the maximum response position of the current frame is found by a grid search over the maxima followed by Newton iteration, and is taken as the tracker's predicted target position. Because the edge features and the multi-channel color features have different resolutions, they are first converted to the continuous domain with a continuous convolution operator.
S5: obtain the optimal estimate of the target size with a scale filter, determine the final target box, and return to step S3 to continue tracking prediction for the next frame; a threshold-supervised update mechanism based on the distribution of the correlation response map is introduced, and the filter is iteratively updated at frame intervals.
the method specifically comprises the following steps: and taking the position of the maximum response value as the prediction center position, performing scale prediction by adopting a multi-scale search strategy (17 scales and a relative scale factor of 1.02) on the basis of the target position, finding the scale with the maximum response value, determining the position and scale information of a final target frame of the next frame, and taking the target frame as a training sample target frame predicted by the next frame.
Model updating is performed at frame intervals after the response-peak distribution check; it covers the correlation filter, the scale filter, and the sample space. In this embodiment the model is updated every 5 frames: the correlation filter is iteratively optimized with the conjugate gradient method, and the scale-filter parameters and the sample space are updated.
The response-peak distribution check uses the standard deviation as the measure of dispersion of the response peaks. Responses within the peak set are regarded as reliable tracking, and a threshold supervises updating so that the reliability of the tracked target is verified before each update: with the standard deviation as the index of the response-peak distribution, a response whose standard deviation is smaller than the threshold is regarded as unreliable, the model is not updated, and the filter template of the previous tracking step is kept. A sketch of this check follows.
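A sketch of the check. The construction of the peak set and the threshold value are not given in the patent, so the `k` strongest response values stand in for the peak set here:

```python
import numpy as np

def update_is_reliable(response, threshold, k=5):
    """Allow a model update only when the std of the strongest response peaks
    exceeds `threshold`; otherwise keep the previous filter template."""
    peaks = np.sort(response.ravel())[-k:]    # k strongest responses as the peak set
    return float(np.std(peaks)) > threshold
```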
Fig. 5 shows the effect of online person tracking with the method. Compared with existing target tracking techniques, the automatic person tracking method provided by the invention has two main innovations. First, it combines the person detection and tracking frameworks of current research, so that person targets in real-time video are tracked automatically with fast and accurate results. Second, it uses feature fusion: color features robust to motion and edge features robust to lighting are fused, the responses of the feature layers are summed with weights, and the maximum is taken as the predicted position, which gives strong robustness. These two innovations let the method track persons automatically, localize accurately under occlusion and shape change, and remain computationally light, making it suitable for deployment on hardware platforms with limited computing power.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and those skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An automatic person tracking method based on edge features and correlation filtering, characterized by comprising the following steps:
1) acquiring the current input frame and extracting a training sample centered on the current target position;
2) extracting edge features and color features from the training sample to obtain corresponding feature maps, and judging by a set rule whether the filter template needs updating; if so, executing step 3), otherwise executing step 4);
3) iteratively updating the filter template using the feature maps of the training sample;
4) correlating the feature maps of the edge features and color features with the filter template, and predicting the target position of the next frame;
5) performing scale prediction at the target position with a scale filter to obtain the target box of the next frame, and returning to step 2).
2. The automatic person tracking method based on edge features and correlation filtering according to claim 1, characterized in that, in step 1), when the input image is the initial frame, the target position is the initial target box detected by the detector, and when the input image is a subsequent frame, the target position is the target box obtained in step 5).
3. The automatic person tracking method based on edge features and correlation filtering according to claim 2, characterized in that the detector obtains the initial target box by:
11) acquiring the first color frame;
12) converting it to grayscale and extracting edge features;
13) feeding the edge features to a classifier to judge whether a person target is present in the input image; if so, executing step 14), otherwise acquiring the next color frame and returning to step 12);
14) the classifier outputting all person targets in the image;
15) traversing all detection boxes obtained in step 14), selecting the person closest to the picture center, and taking the rectangular region containing that person as the initial target box.
4. The automatic person tracking method based on edge features and correlation filtering according to claim 1, characterized in that extracting the training sample specifically comprises:
generating the sample distribution from the component model of a Gaussian mixture model, extracting in each frame a training sample centered on the current target position and sized 5 × 5 times the target box, initializing a new model component in the distribution, and setting weights.
5. The automatic person tracking method based on edge features and correlation filtering according to claim 1, characterized in that step 2) specifically comprises:
21) extracting edge features and color features of different resolutions from the training sample in the discrete domain, obtaining the number of channels and the overall dimensions of the feature maps;
22) converting the discrete feature maps of the edge features and color features into a continuous spatial domain using a continuous convolution operator;
23) judging by the set rule whether the filter template needs updating; if so, executing step 3), otherwise executing step 4).
6. The automatic person tracking method based on edge features and correlation filtering according to claim 1, characterized in that step 4) specifically comprises:
41) interpolating the edge features and color features in the continuous domain;
42) correlating the interpolated edge features and color features with the filter template in the continuous domain to obtain continuous feature response maps;
43) weighting and summing the feature response maps;
44) obtaining the maximum response position by a grid search over the maxima followed by Newton iteration;
45) taking the maximum response position as the tracker's predicted target position for the next frame.
7. The automatic person tracking method based on edge features and correlation filtering according to claim 6, characterized in that, in step 5), the position of the response maximum is taken as the predicted center position, a multi-scale search strategy is used for scale prediction to obtain the position and scale information of the target box, and the target box of the next frame is obtained with the fDSST adaptive scale estimation algorithm.
8. The automatic person tracking method based on edge features and correlation filtering according to claim 7, characterized in that the set rule is specifically:
if the current input frame is the initial frame, the filter template is judged to need updating;
if the set frame interval for updating has been reached and the threshold-supervised update mechanism is satisfied, the filter template is judged to need updating.
9. The automatic person tracking method based on edge features and correlation filtering according to claim 8, characterized in that the target box obtained in step 5) is used for model updating only after passing a response-peak distribution check, the check taking the standard deviation as the measure of dispersion of the response peaks, and the model updating comprising filter-model updating, scale-filter updating, and sample-space updating;
the threshold-supervised update mechanism being specifically:
if the standard deviation in the response-peak distribution check is larger than the set threshold, the target box satisfies the threshold-supervised update mechanism.
10. The automatic person tracking method based on edge features and correlation filtering according to claim 1, characterized in that the edge features are fHOG features and the color features are multi-channel color features.
CN202010880447.8A 2020-08-27 2020-08-27 Automatic person tracking method based on edge features and related filtering Pending CN112164093A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010880447.8A CN112164093A (en) 2020-08-27 2020-08-27 Automatic person tracking method based on edge features and related filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010880447.8A CN112164093A (en) 2020-08-27 2020-08-27 Automatic person tracking method based on edge features and related filtering

Publications (1)

Publication Number Publication Date
CN112164093A true CN112164093A (en) 2021-01-01

Family

ID=73860345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010880447.8A Pending CN112164093A (en) 2020-08-27 2020-08-27 Automatic person tracking method based on edge features and related filtering

Country Status (1)

Country Link
CN (1) CN112164093A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108346159A (en) * 2018-01-28 2018-07-31 北京工业大学 A kind of visual target tracking method based on tracking-study-detection
CN108986140A (en) * 2018-06-26 2018-12-11 南京信息工程大学 Target scale adaptive tracking method based on correlation filtering and color detection
CN109584269A (en) * 2018-10-17 2019-04-05 龙马智芯(珠海横琴)科技有限公司 A kind of method for tracking target
CN110414439A (en) * 2019-07-30 2019-11-05 武汉理工大学 Anti- based on multi-peak detection blocks pedestrian tracting method
CN110490907A (en) * 2019-08-21 2019-11-22 上海无线电设备研究所 Motion target tracking method based on multiple target feature and improvement correlation filter
CN110796676A (en) * 2019-10-10 2020-02-14 太原理工大学 Target tracking method combining high-confidence updating strategy with SVM (support vector machine) re-detection technology
CN111260738A (en) * 2020-01-08 2020-06-09 天津大学 Multi-scale target tracking method based on relevant filtering and self-adaptive feature fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李欣等: "基于ECO-HC改进的运动目标跟踪方法研究" [Li Xin et al., "Research on a moving-target tracking method based on improved ECO-HC"], 《南京大学学报》 [Journal of Nanjing University] *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658224A (en) * 2021-08-18 2021-11-16 中国人民解放军陆军炮兵防空兵学院 Target contour tracking method and system based on correlated filtering and Deep Snake
CN113658224B (en) * 2021-08-18 2024-02-06 中国人民解放军陆军炮兵防空兵学院 Target contour tracking method and system based on correlation filtering and Deep Snake
CN115018885A (en) * 2022-08-05 2022-09-06 四川迪晟新达类脑智能技术有限公司 Multi-scale target tracking algorithm suitable for edge equipment
CN115631216A (en) * 2022-12-21 2023-01-20 中航金城无人系统有限公司 Holder target tracking system and method based on multi-feature filter fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210101)