
Target tracking method with self-adaptive self-correction function

Info

Publication number
CN107194947B
CN107194947B (application CN201710354666.0A)
Authority
CN
China
Prior art keywords
self
target
appearance model
adaptive
static
Prior art date
Legal status
Active
Application number
CN201710354666.0A
Other languages
Chinese (zh)
Other versions
CN107194947A (en)
Inventor
王高峰
卫保国
高涛
Current Assignee
Guizhou Yupeng Technology Co ltd
Original Assignee
Guizhou Yupeng Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guizhou Yupeng Technology Co ltd filed Critical Guizhou Yupeng Technology Co ltd
Priority to CN201710354666.0A priority Critical patent/CN107194947B/en
Publication of CN107194947A publication Critical patent/CN107194947A/en
Application granted granted Critical
Publication of CN107194947B publication Critical patent/CN107194947B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a self-adaptive self-correcting target tracking method. Interference factors in target tracking are divided into environmental factors and factors of the target itself, and a static appearance model and an adaptive appearance model are provided for the two classes of interference factors respectively. The two models are then denoised and fused, and tracking accuracy is finally improved through a self-correcting tracking framework comprising a static module, an adaptive module, a denoising module and a target tracking algorithm module. The invention solves the problem of tracking loss caused by changes of the environment and the target during long-term tracking, and on this basis provides a tracking framework capable of self-correction.

Description

Target tracking method with self-adaptive self-correction function
Technical Field
The invention relates to a self-adaptive self-correcting target tracking method, belonging to the technical field of robust target tracking.
Background
Target tracking refers to estimating the motion trajectory of a specified target in a video image sequence. In 2012, Zdenek Kalal et al. proposed TLD (Tracking-Learning-Detection), a target tracking algorithm that combines tracking, detection and learning [1]. During tracking, the data observed for the target are used to train a classifier, while a tracker based on optical flow continuously corrects the detection result of the classifier; during detection, TLD uses a sample discrimination strategy based on structural constraints to ensure that the training samples of the classifier are sufficiently close to the real situation. Although TLD, as a single-target tracking algorithm, can to some extent solve the difficulty of tracking a target over long periods, the algorithm has poor real-time performance and relies on prior knowledge of the target in the initial tracking stage.
The key to long-term tracking is that the algorithm can resist the various interference factors that may appear in the actual environment. The difficulty is that when the target disappears and later reappears, the changes it underwent while absent cannot be obtained from the video image sequence, and its motion cannot be estimated by a motion estimation algorithm (such as an optical flow method); meanwhile, the target drifts during tracking. Based on this situation, a static appearance model and an adaptive appearance model are provided for the different classes of interference factors respectively, and are combined with an auxiliary tracking algorithm to form a self-correcting tracking framework that achieves efficient, real-time tracking under various interference factors.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a self-adaptive self-correcting target tracking method that solves the problem of tracking loss caused by changes of the environment and the target during long-term tracking, and on this basis to provide a tracking framework capable of self-correction, overcoming the defects of the prior art.
The technical scheme of the invention is as follows: a self-adaptive self-correcting target tracking method, characterized in that: interference factors in target tracking are divided into environmental factors and factors of the target itself; a static appearance model and an adaptive appearance model are provided for the two classes of interference factors respectively; the two models are denoised and then fused; and tracking accuracy is finally improved through a self-correcting tracking framework, the method comprising a static module, an adaptive module, a denoising module and a target tracking algorithm module;
The static module always keeps the initial information of the target; it is the set of all matching relations between the static appearance model of the initial-frame target and the static appearance model of the current-frame target;
the adaptive module is updated every frame to adapt to changes of the target, and the adaptive appearance model of the current frame consists of region blocks near all key points in the target region of the previous frame;
the denoising module removes noise of uncertain type from the static appearance model and the adaptive appearance model using a hierarchical clustering method, and obtains through the clustering three pieces of data about the tracked target: its center position, scale change and rotation angle;
the target tracking algorithm module comprises a main tracking algorithm based on the data of the static, adaptive and denoising modules, and an auxiliary tracking algorithm based on perceptual hashing; the main tracking algorithm provides suitable update samples for the auxiliary tracking algorithm; the auxiliary tracking algorithm performs global search and rapid target re-capture in the region formed by the static appearance model and the adaptive appearance model, while the static appearance model provides candidate search regions for the auxiliary tracking algorithm.
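The patent names perceptual hashing as the basis of the auxiliary tracker but does not fix the hash variant. Below is a minimal sketch assuming an average hash (aHash) over an 8x8 grayscale patch; candidate windows whose hash lies within a small Hamming distance of the template hash would be treated as matches.

```python
# Hedged sketch of a perceptual-hash similarity test; the average-hash
# variant used here is an assumption, not specified by the patent.

def average_hash(patch):
    """patch: 8x8 list of lists of grayscale values -> 64-bit int hash."""
    pixels = [v for row in patch for v in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for v in pixels:
        # each pixel contributes one bit: 1 if at or above the patch mean
        bits = (bits << 1) | (1 if v >= mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes (smaller = more similar)."""
    return bin(h1 ^ h2).count("1")
```

Because each bit only compares a pixel against the patch mean, the hash is invariant to uniform brightness shifts, which is one reason such hashes are cheap, robust global matchers.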
Both the static appearance model and the adaptive appearance model use the local invariant feature SURF as the feature description mode, which balances time complexity against practical effect, and use the Euclidean distance as the distance measure between SURF feature points.
Compared with the prior art, the self-adaptive self-correcting target tracking method of the invention divides interference factors in target tracking into environmental factors and factors of the target itself, provides a static appearance model and an adaptive appearance model for the two classes respectively, denoises and fuses the two models, and finally improves tracking accuracy through a self-correcting tracking framework comprising a static module, an adaptive module, a denoising module and a target tracking algorithm module. Results with this method and its four modules show that, compared with classical or currently popular algorithms, the algorithm achieves good tracking precision. This is particularly evident in the combination of the static and adaptive modules: to keep up with changes of the target during tracking, the adaptive appearance model must be updated every frame, but this inevitably brings in background information and causes tracking drift. To solve this problem, the static appearance model is used: it is relatively stable during tracking and always keeps the initial information of the target, so it can correct the tracking result of the adaptive appearance model and improve tracking accuracy. As for the use of the denoising module: directly fusing the matching results of the static and adaptive appearance models continuously introduces noise into the appearance model and degrades the expression of target features; this noise, whose type is uncertain, is therefore removed by hierarchical clustering, and the clustered appearance model can effectively estimate the scale change and rotation of the target during tracking, improving the accuracy of the tracking algorithm. The auxiliary tracking algorithm has high real-time performance and can mutually correct with the main tracking algorithm based on the data of the static, adaptive and denoising modules, achieving long-term stable tracking; it performs global search and rapid target re-capture in the region formed by the static and adaptive appearance models, which effectively addresses the low probability of re-capturing a moving target after it disappears and reappears. The static appearance model provides candidate search regions, reducing time complexity. Both appearance models use the local invariant feature SURF as the feature description mode, balancing time complexity against practical effect, and use the Euclidean distance as the distance measure between SURF feature points; since global features are not robust when the target is partially occluded, this ensures a good tracking effect.
Drawings
FIG. 1 is a flow chart of the algorithm of the present invention.
FIG. 2 is the initial-frame feature point map.
FIG. 3 is the feature point map for different frames.
FIG. 4 is the corrected feature point map for different frames.
FIG. 5 is a graph of the tracking results.
Detailed Description
Example 1.
As shown in fig. 1, a self-adaptive self-correcting target tracking method is characterized in that: interference factors in target tracking are divided into environmental factors and factors of the target itself; a static appearance model and an adaptive appearance model are provided for the two classes of interference factors respectively; the two models are denoised and then fused; and tracking accuracy is finally improved through a self-correcting tracking framework, the method comprising a static module, an adaptive module, a denoising module and a target tracking algorithm module;
the static module can always keep the initial information of the target, and is a set consisting of all matching relations in the static appearance model of the initial frame target and the static appearance model of the current frame target;
the self-adaptive module updates each frame for adapting to the change of the target, and the self-adaptive appearance model of the current frame consists of area blocks near all key points in the target area of the previous frame;
The denoising module removes noises of a static appearance model and a self-adaptive appearance model with uncertain types by using a hierarchical clustering method, and obtains three data information of the central position, the scale change and the rotation angle of a tracking target through hierarchical clustering;
the target tracking algorithm module comprises a main body tracking algorithm based on data information of a static module, a self-adaptive module and a denoising module and an auxiliary tracking algorithm based on perceptual hash, the main body tracking algorithm provides a proper updating sample for the auxiliary tracking algorithm, the auxiliary tracking algorithm carries out global search and fast target paving and grabbing in an area formed by the static appearance model and the self-adaptive appearance model, and meanwhile the auxiliary tracking algorithm is provided with an area provided with the static appearance model and used for candidate search.
Considering that global features are not robust when the target is partially occluded, the method adopts the local invariant feature SURF as the feature description mode of the appearance models, balancing time complexity against practical effect, and uses the Euclidean distance as the distance measure between SURF feature points.
I. Design of the appearance model
Both the static appearance model and the adaptive appearance model set an active region for the key points, so that each key point has enough degrees of freedom to perceive, as far as possible and without prior information, all changes of the variable object.
Since the initial condition of tracking is the position and scale of the target in the initial frame, a rectangular box is used to represent this information; assume the initial bounding box is b_0. First, feature point detection is carried out on the bounding-box region to obtain the key point set:

P_0 = {p_1^0, p_2^0, ..., p_n^0}

where each key point p_i^0 lies at position x_i^0 in the image.
Then in the t-th frame the target information can be represented by the bounding box b_t, and the set of key points in the target region by P_t:

P_t = {p_1^t, p_2^t, ..., p_n^t}
Because the number of frames between the initial frame and the current frame can be any non-negative value, in order to describe the relationship between the target in the initial frame and in any later frame, a matching relation is defined for each key point of the target in the initial frame, reflecting the change of the target between the two frames:

m_i = (p_i^0, x_i^t)

where x_i^t is the position in the t-th frame of the key point p_i^0 from the initial frame.
In the tracking problem, the objective is to find, in each frame, as accurately as possible the correspondence set that characterizes the change information of the target:
Lt={m1,m2,m3,...,mn}
How to obtain this correspondence set is the task of the proposed static-adaptive appearance model. The static appearance model yields a static correspondence set L_t^S, the adaptive appearance model yields an adaptive correspondence set L_t^A, and finally the two are combined according to certain rules to obtain L_t.
(1) Design of static appearance model
The static appearance model is built from the target appearance in the initial frame: it consists of the feature descriptors of all key points of the target region of interest in the initial frame (i.e., of P_0).
A static correspondence set is designed for the static appearance model; it is the set of all matching relations between the static appearance model of the initial-frame target and the static appearance model of the current-frame target.
During tracking, for each key point p_i^0 in the initial frame, a global search is conducted in the current frame for a candidate key point p̂_i^t that matches it. The candidate p̂_i^t and the initial-frame key point p_i^0 are required to satisfy:

d(f(p̂_i^t), f(p_i^0)) < θ

where f(·) denotes the SURF descriptor of a key point. That is, the Euclidean distance between the descriptors of p̂_i^t and of the initial-frame key point p_i^0 must be less than a specified threshold θ, while among all key points of the t-th frame, p̂_i^t must be the one whose descriptor distance to p_i^0 is the smallest, where θ and γ are empirical values.
In addition, candidate key points whose descriptors match key points of the background in the initial frame are removed from the static appearance model of the current frame. Finally, the static correspondence set L_t^S is obtained, representing the target appearance correspondence between the initial frame and the current frame provided by the static appearance model.
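A minimal sketch of the static-model matching rule: for each initial-frame descriptor, accept the current-frame candidate that is simultaneously the nearest neighbour and within the empirical threshold θ. The toy 2-D descriptors below are an assumption for illustration; real SURF descriptors are 64- or 128-dimensional.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_static(init_descs, cur_descs, theta):
    """Return {init_index: cur_index} for key points that satisfy both the
    nearest-neighbor condition and the distance threshold theta."""
    matches = {}
    for i, d0 in enumerate(init_descs):
        dists = [euclidean(d0, dt) for dt in cur_descs]
        j = min(range(len(dists)), key=dists.__getitem__)  # nearest neighbor
        if dists[j] < theta:  # d(f(p_i^0), f(p_j^t)) < theta
            matches[i] = j
    return matches
```

Background filtering would then drop any match whose current-frame descriptor is closer to a stored background descriptor than to the target one.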
(2) Design of adaptive appearance model
The adaptive appearance model of the current frame is not built from the image information of the current frame, but consists of region blocks near all key points in the target region of the previous frame. To unify the representation with the static correspondence set, after the adaptive correspondences of each frame are obtained and merged, they are referred back to the initial frame: from the adaptive correspondence m_i^{A,t-1} of the previous frame (frame t-1) and the key point position observed in the current frame (frame t), the adaptive correspondence m_i^{A,t} is obtained, and together these form the adaptive correspondence set L_t^A.
Comparing the static correspondence set with the adaptive correspondence set: since the static model does not drift under environmental interference during tracking, whenever the two sets give different results for the same key point, the adaptive correspondence is uniformly rejected.
The filtered static correspondence set and adaptive correspondence set are merged, and the merged set is denoted L_t, namely:

L_t = L_t^S ∪ L_t^A
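The fusion rule just described can be sketched as follows, representing each correspondence set as a map from key-point index to current-frame position (a representation assumed here for illustration): on disagreement the static result wins and the adaptive entry is rejected.

```python
def fuse_correspondences(static_set, adaptive_set):
    """static_set / adaptive_set: dict keypoint_id -> (x, y).
    Returns the merged set L_t with conflicting adaptive entries rejected."""
    fused = {}
    for k, pos in adaptive_set.items():
        if k in static_set and static_set[k] != pos:
            continue  # conflict with the static model: reject adaptive entry
        fused[k] = pos
    fused.update(static_set)  # static correspondences are always kept
    return fused
```

The asymmetry is deliberate: the static model cannot drift, so it is trusted whenever both models claim the same key point.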
denoising aiming at target appearance model
Formula (a) and principle
After the correspondences output by the static appearance model and the adaptive appearance model are merged, the region representing the target inevitably expands, and some areas that should be background are also brought into the target region.
According to a dissimilarity measure D, single-linkage hierarchical clustering is applied to the correspondence set L_t, with a threshold T on the dissimilarity measure, to obtain the clustering result.
Suppose d_ij denotes the dissimilarity between correspondence samples m_i and m_j, and D_ij denotes the dissimilarity between the clusters C_i and C_j containing m_i and m_j; then D_ij should be the distance between the nearest samples of the two clusters, i.e.

D_ij = min{ d_kl : m_k ∈ C_i, m_l ∈ C_j }
The correspondence set L_t output by the appearance model is clustered as follows:
1. first, each sample is treated as a cluster, cluster CiAnd CjThe shortest distance D betweenij=dij
2. Let t be 0, the shortest distance between all clusters may form a distance matrix m (t).
3. Finding the minimum element on the off-diagonal line in M (T) which is not more than the distance threshold T, and setting the minimum element as MpqMixing C withpAnd CqAre combined into a new cluster, which is marked as CrI.e. Cr={Cp,Cq}。
4. Computing a new cluster CrWith other clusters CkThe shortest distance Dkr=min{Dkp,DkqUpdating the distance matrix M (t), merging the p and q rows and the p and q columns into a new row and a new column corresponding to the class CrNew distance on new row and new column according to Dkr=min{Dkp,DkqThe calculation is performed and the resulting matrix is denoted as M (t + 1).
5. And assigning T +1 to T, and returning to the step 3 until all elements in M (T) are larger than T or all samples are gathered into a cluster, and finishing clustering.
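The five steps above can be sketched as a direct (unoptimized) single-linkage clustering over 2-D correspondence samples, stopping when no pair of clusters is within the threshold T:

```python
import math

def single_linkage(samples, T):
    """samples: list of (x, y) correspondence samples; T: distance threshold.
    Returns clusters as lists of sample indices."""
    def d(i, j):
        (x1, y1), (x2, y2) = samples[i], samples[j]
        return math.hypot(x1 - x2, y1 - y2)

    # step 1: every sample starts as its own cluster
    clusters = [[i] for i in range(len(samples))]
    while len(clusters) > 1:
        # steps 2-3: find the closest pair of clusters; single linkage means
        # cluster distance = distance between their nearest samples
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                dab = min(d(i, j) for i in clusters[a] for j in clusters[b])
                if best is None or dab < best[0]:
                    best = (dab, a, b)
        if best[0] > T:          # step 5: no pair within threshold, stop
            break
        _, a, b = best           # step 4: merge C_p and C_q into C_r
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters
```

The largest returned cluster then plays the role of L_t^*; the distance-matrix bookkeeping of steps 2 and 4 is implicit here because the pairwise distances are recomputed each round.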
After clustering L_t, it is assumed that the largest cluster contains all correspondences related to the object of interest, while the correspondences in the other clusters are related to the background. Therefore, when estimating the tracking result of each frame, only the largest cluster, denoted L_t^*, is used.
(II) State estimation of the target: center position, scale change and rotation angle
In the target tracking process, the tracking result of each frame consists of three quantities:
(1) The center position of the target.
The target displacement parameter μ describes the change of the target center position: assuming the center position of the target in the initial frame is x_0, the center position of the target in the current frame can be expressed as x_t = x_0 + μ.
To measure the center position of the target accurately, the average of the displacement parameters represented by all matching relations in the largest cluster L_t^* output by the clustering is taken:

μ = (1 / num) Σ_{i=1}^{num} μ_i

where num is the number of correspondences in the largest cluster L_t^*, i.e. num = |L_t^*|, and μ_i is the target displacement estimated from matching relation m_i, i.e. μ_i = x_i^t − x_i^0. In summary, the displacement parameter of the target center position in each frame is estimated by this average.
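The center estimate can be sketched directly from the formula above: average the per-correspondence displacements μ_i over the largest cluster and add the result to the initial center.

```python
def estimate_center(x0_pts, xt_pts, center0):
    """x0_pts / xt_pts: matched keypoint positions (initial / current frame)
    from the largest cluster; center0: target center in the initial frame."""
    num = len(x0_pts)
    # mu = mean of mu_i, with mu_i = x_i^t - x_i^0
    mu_x = sum(xt[0] - x0[0] for x0, xt in zip(x0_pts, xt_pts)) / num
    mu_y = sum(xt[1] - x0[1] for x0, xt in zip(x0_pts, xt_pts)) / num
    return (center0[0] + mu_x, center0[1] + mu_y)   # x_t = x_0 + mu
```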
(2) Scale of the target.
A scale coefficient s characterizes the scale change of the target: assuming the scale of the target in the initial frame is S_0 and the scale of the target in the current frame is S_t, then S_t = s × S_0.
The scale coefficient is estimated as:

s = med{ ‖x_i^t − x_j^t‖ / ‖x_i^0 − x_j^0‖ : i ≠ j }

where ‖x_i^t − x_j^t‖ is the distance between a pair of matched key points of the current frame taken from L_t^*, ‖x_i^0 − x_j^0‖ is the distance between the corresponding pair of key points in the initial frame, and med denotes the median. In summary, the target scale change factor is calculated as the median of these pairwise distance ratios.
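A minimal sketch of the scale estimate: the median of the ratios of pairwise key-point distances in the current frame to the corresponding distances in the initial frame. The median makes the estimate robust to a minority of bad matches.

```python
import math
from itertools import combinations
from statistics import median

def estimate_scale(x0_pts, xt_pts):
    """Matched keypoint positions from the largest cluster; returns s."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    # one ratio per key-point pair (i, j), i != j
    ratios = [dist(xt_pts[i], xt_pts[j]) / dist(x0_pts[i], x0_pts[j])
              for i, j in combinations(range(len(x0_pts)), 2)]
    return median(ratios)       # then S_t = s * S_0
```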
(3) Rotation angle of the target (in radians).
The rotation angle is expressed by α: assuming the rotation angle of the target in the initial frame is α_0 = 0, the rotation angle of the target in the current frame is α_t = α_0 + α = α.
The target rotation angle is estimated as:

α = med{ α_ij^t − α_ij^0 : i ≠ j }

where α_ij^0 is the angle of the target with respect to the horizontal direction, calculated from the relative positions of key points i and j in the initial frame, i.e. α_ij^0 = arctan2(y_j^0 − y_i^0, x_j^0 − x_i^0); α_ij^t is the corresponding angle calculated from the relative positions of the key points in the current frame; and med denotes the median. In summary, the rotation angle of the target in the current frame, relative to the initial frame, is the median of these angle differences.
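The rotation estimate can be sketched in the same style: for every key-point pair, compare the angle of the connecting line in the current frame with the angle in the initial frame, and take the median of the differences.

```python
import math
from itertools import combinations
from statistics import median

def estimate_rotation(x0_pts, xt_pts):
    """Matched keypoint positions from the largest cluster; returns alpha
    in radians (alpha_t = alpha_0 + alpha)."""
    def angle(p, q):
        # angle of the line p -> q with respect to the horizontal direction
        return math.atan2(q[1] - p[1], q[0] - p[0])
    diffs = [angle(xt_pts[i], xt_pts[j]) - angle(x0_pts[i], x0_pts[j])
             for i, j in combinations(range(len(x0_pts)), 2)]
    # note: differences are not wrapped to (-pi, pi] here; a full
    # implementation would normalize them before taking the median
    return median(diffs)
```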
III. Detailed procedure
As shown in fig. 2 to 5, the specific process is divided into three parts, i.e., input, target tracking, and output.
Input: the position of the target in the initial frame; the position of the target is represented by (x, y, w, h), which are the horizontal and vertical coordinates of the target in the image frame and the width and height of the target's rectangular box, respectively.
And (3) target tracking process:
(1) Static appearance model: extract the center of the rectangular box in the initial frame and use SURF to divide the image into target feature points and background feature points, forming a feature library, as shown in fig. 2. (2) Adaptive appearance model: obtain the feature points of the frame-t target through the forward optical flow (frame t-1 to frame t) and backward optical flow (frame t to frame t-1) of pyramid LK optical flow; this process inevitably includes various noise, as shown in fig. 3. (3) Correction of the target features obtained by the adaptive appearance model by the static appearance model: 1) globally match the feature points of the frame-t target against the feature library; 2) locally match the frame-t target features against the initial-frame target feature points, remove unstable feature points, and determine the final target feature points, which serve as the output of the appearance model for estimating appearance and scale information, as shown in fig. 4. (4) Denoising by single-linkage hierarchical clustering: denoise the output of the appearance model and take the result as the initial condition for computing the affine transformation matrix, effectively estimating the scale and rotation angle of the target, which realizes the robustness and real-time performance of the tracking algorithm, as shown in fig. 5.
Output: the target location in the next frame.
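Step (2) relies on a forward-backward consistency check: a point is tracked from frame t-1 to frame t and then back, and the track is treated as noise if it does not return close to its start. The sketch below abstracts the flow as callables; in practice they would be pyramid LK optical flow (e.g. OpenCV's calcOpticalFlowPyrLK), which is an assumption of this sketch.

```python
import math

def fb_filter(points, flow_fwd, flow_bwd, max_fb_error):
    """Keep (start, tracked) pairs whose forward-backward error is small.
    flow_fwd / flow_bwd: callables mapping an (x, y) point between frames."""
    kept = []
    for p in points:
        q = flow_fwd(p)            # frame t-1 -> frame t
        p_back = flow_bwd(q)       # frame t -> frame t-1
        err = math.hypot(p_back[0] - p[0], p_back[1] - p[1])
        if err <= max_fb_error:    # small round-trip error: keep the track
            kept.append((p, q))
    return kept
```

Tracks that survive the check become the adaptive appearance model's key points for frame t.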

Claims (2)

1. A self-adaptive self-correcting target tracking method, characterized in that: interference factors in target tracking are divided into environmental factors and factors of the target itself; a static appearance model and an adaptive appearance model are provided for the two classes of interference factors respectively; the two models are denoised and then fused; and tracking accuracy is finally improved through a self-correcting tracking framework, the method comprising a static module, an adaptive module, a denoising module and a target tracking algorithm module;
the static module always keeps the initial information of the target; it is the set of all matching relations between the static appearance model of the initial-frame target and the static appearance model of the current-frame target, the matching relations being obtained as follows: for each key point p_i^0 in the initial frame, a global search is conducted in the current frame for a candidate key point p̂_i^t that matches it, where p̂_i^t and the initial-frame key point p_i^0 are required to satisfy

d(f(p̂_i^t), f(p_i^0)) < θ

candidate key points whose descriptors match key points of the background in the initial frame are also removed from the static appearance model of the current frame, finally yielding the static correspondence set; here the Euclidean distance between the descriptors of p̂_i^t and of the initial-frame key point p_i^0 must be less than a specified threshold θ, while among all key points of the t-th frame, p̂_i^t must be the one whose descriptor distance to p_i^0 is the smallest, θ and γ being empirical values;
the adaptive module is updated every frame to adapt to changes of the target, and the adaptive appearance model of the current frame consists of region blocks near all key points in the target region of the previous frame;
the denoising module removes noise of uncertain type from the static appearance model and the adaptive appearance model using a hierarchical clustering method, and obtains through the clustering three pieces of data about the tracked target: its center position, scale change and rotation angle;
the target tracking algorithm module comprises a main tracking algorithm based on the data of the static, adaptive and denoising modules, and an auxiliary tracking algorithm based on perceptual hashing; the main tracking algorithm provides suitable update samples for the auxiliary tracking algorithm; the auxiliary tracking algorithm performs global search and rapid target re-capture in the region formed by the static appearance model and the adaptive appearance model, while the static appearance model provides candidate search regions for the auxiliary tracking algorithm;
and the static correspondence set is compared with the adaptive correspondence set: if the two sets give different results for the same key point, the adaptive correspondence is uniformly rejected; the filtered static correspondence set and adaptive correspondence set are merged, and the merged set is denoised by the denoising module.
2. The self-adaptive self-correcting target tracking method according to claim 1, characterized in that: both the static appearance model and the adaptive appearance model use the local invariant feature SURF as the feature description mode, which balances time complexity against practical effect, and use the Euclidean distance as the distance measure between SURF feature points.
CN201710354666.0A 2017-05-18 2017-05-18 Target tracking method with self-adaptive self-correction function Active CN107194947B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710354666.0A CN107194947B (en) 2017-05-18 2017-05-18 Target tracking method with self-adaptive self-correction function


Publications (2)

Publication Number Publication Date
CN107194947A CN107194947A (en) 2017-09-22
CN107194947B true CN107194947B (en) 2021-04-02

Family

ID=59875643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710354666.0A Active CN107194947B (en) 2017-05-18 2017-05-18 Target tracking method with self-adaptive self-correction function

Country Status (1)

Country Link
CN (1) CN107194947B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349177B (en) * 2019-07-03 2021-08-03 广州多益网络股份有限公司 Method and system for tracking key points of human face of continuous frame video stream
CN115276799B (en) * 2022-07-27 2023-07-11 西安理工大学 Decision threshold self-adaption method for undersampling modulation demodulation in optical imaging communication

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203423A (en) * 2016-06-26 2016-12-07 广东外语外贸大学 A kind of weak structure perception visual target tracking method of integrating context detection
CN106651909A (en) * 2016-10-20 2017-05-10 北京信息科技大学 Background weighting-based scale and orientation adaptive mean shift method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Perceptual hash tracking algorithm based on improved Adaboost feature detection; 齐苏敏 (Qi Sumin); 《通信技术》 (Communications Technology); 2017-03-30; Vol. 50, No. 3; pp. 430-435 *

Also Published As

Publication number Publication date
CN107194947A (en) 2017-09-22

Similar Documents

Publication Publication Date Title
JP4644248B2 (en) Simultaneous positioning and mapping using multi-view feature descriptors
Jiang et al. Multiscale locality and rank preservation for robust feature matching of remote sensing images
US11138742B2 (en) Event-based feature tracking
WO2019057179A1 (en) Visual slam method and apparatus based on point and line characteristic
CN104517275A (en) Object detection method and system
Dou et al. Robust visual tracking based on interactive multiple model particle filter by integrating multiple cues
CN109323697B (en) Method for rapidly converging particles during starting of indoor robot at any point
Michot et al. Bi-objective bundle adjustment with application to multi-sensor slam
CN105374049B (en) Multi-corner point tracking method and device based on sparse optical flow method
CN110910421A (en) Weak and small moving object detection method based on block characterization and variable neighborhood clustering
CN110942473A (en) Moving target tracking detection method based on characteristic point gridding matching
CN111914878A (en) Feature point tracking training and tracking method and device, electronic equipment and storage medium
Jiang et al. High speed long-term visual object tracking algorithm for real robot systems
CN107194947B (en) Target tracking method with self-adaptive self-correction function
GB2599947A (en) Visual-inertial localisation in an existing map
CN110147768B (en) Target tracking method and device
Zhang et al. Target tracking for mobile robot platforms via object matching and background anti-matching
US11830218B2 (en) Visual-inertial localisation in an existing map
CN106934818B (en) Hand motion tracking method and system
CN115170621A (en) Target tracking method and system under dynamic background based on relevant filtering framework
CN113920155A (en) Moving target tracking algorithm based on kernel correlation filtering
CN113129332A (en) Method and apparatus for performing target object tracking
CN111563489A (en) Target tracking method and device and computer storage medium
CN108346158B (en) Multi-target tracking method and system based on main block data association
Yan et al. Real-time tracking of deformable objects based on MOK algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant