CN107194947A - An adaptive self-correcting target tracking method - Google Patents
An adaptive self-correcting target tracking method
- Publication number
- CN107194947A (application CN201710354666.0A)
- Authority
- CN
- China
- Prior art keywords
- target
- self
- appearance model
- tracking
- adaptive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/207—Analysis of motion for motion estimation over a hierarchy of resolutions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The present invention discloses an adaptive self-correcting target tracking method. The method divides the interference factors in target tracking into environmental factors and factors of the target itself, proposes a static appearance model and an adaptive appearance model for the two classes of interference factors respectively, denoises the two models and then fuses them, and finally improves tracking accuracy through a self-correcting tracking framework comprising a static module, an adaptive module, a denoising module and a target tracking algorithm module. The invention solves the problem of tracking loss caused by environment and target changes during long-term tracking, and on this basis proposes a tracking framework capable of self-correction.
Description
Technical Field
The invention relates to an adaptive self-correcting target tracking method, and belongs to the technical field of robust target tracking.
Background
Target tracking refers to estimating the motion trajectory of a specified target in a video image sequence. In 2012, Zdenek Kalal et al. proposed a target tracking algorithm combining tracking, learning and detection, known as TLD (Tracking-Learning-Detection) [1]. During tracking, the data observed from the target are used to train a classifier, while an optical-flow-based tracker continuously corrects the classifier's detection results. During detection, TLD uses a sample discrimination strategy based on structural constraints to ensure that the training samples of the classifier are sufficiently close to the real situation. Although TLD, as a single-target tracking algorithm, can to some extent handle targets that are difficult to track over long periods, the algorithm has poor real-time performance and relies on prior knowledge of the target in the initial tracking stage.
The key to long-term tracking is that the algorithm can resist the various interference factors that may appear in a real environment. The difficulty is that a target may disappear and then reappear: the changes the target undergoes while it is out of view cannot be obtained from the video image sequence, and its motion cannot be estimated by a motion estimation algorithm (such as an optical flow method); meanwhile, the target may drift during tracking. Based on this, a static appearance model and an adaptive appearance model are proposed for the different types of interference factors, and are then combined with an auxiliary tracking algorithm to form a tracking framework capable of self-correction, achieving efficient, real-time tracking under various interference factors.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an adaptive self-correcting target tracking method that solves the problem of tracking loss caused by environment and target changes during long-term tracking, provides a tracking framework capable of self-correction on this basis, and overcomes the defects of the prior art.
The technical scheme of the invention is as follows: an adaptive self-correcting target tracking method, characterized in that: the method divides the interference factors in target tracking into environmental factors and factors of the target itself, proposes a static appearance model and an adaptive appearance model for the two classes of interference factors respectively, denoises the two models and then fuses them, and finally improves tracking accuracy through a self-correcting tracking framework, which comprises a static module, an adaptive module, a denoising module and a target tracking algorithm module;
the static module always keeps the initial information of the target; it is the set of all matching relations between the static appearance model of the target in the initial frame and the static appearance model of the target in the current frame;
the adaptive module is updated every frame to adapt to changes of the target; the adaptive appearance model of the current frame consists of the region blocks near all key points in the target region of the previous frame;
the denoising module removes noise of uncertain type from the static and adaptive appearance models using a hierarchical clustering method, and obtains through the clustering three pieces of data about the tracked target: its center position, scale change and rotation angle;
the target tracking algorithm module comprises a main tracking algorithm based on the data from the static, adaptive and denoising modules, and an auxiliary tracking algorithm based on perceptual hashing; the main tracking algorithm provides suitable update samples for the auxiliary tracking algorithm, the auxiliary tracking algorithm performs global search and rapid target re-capture in the region formed by the static and adaptive appearance models, and the static appearance model provides the auxiliary tracking algorithm with candidate search regions.
The feature description used by the static and adaptive appearance models is the local invariant feature SURF, chosen as a balance of time complexity and practical effect, and the Euclidean distance is used as the distance measure between SURF feature points.
Compared with the prior art, the adaptive self-correcting target tracking method of the invention divides the interference factors in target tracking into environmental factors and factors of the target itself, proposes a static appearance model and an adaptive appearance model for the classified interference factors respectively, denoises and fuses the two models, and finally improves tracking accuracy through a self-correcting tracking framework comprising a static module, an adaptive module, a denoising module and a target tracking algorithm module. Experiments with the method and its four modules show that, compared with classical or currently popular algorithms, the algorithm achieves good tracking precision. This is especially evident in the combination of the static and adaptive modules: to keep up with changes of the target, the adaptive appearance model must be updated every frame, but this inevitably introduces background information and causes tracking drift; the static appearance model, which is relatively stable during tracking and always keeps the initial information of the target, is therefore used to correct the tracking result of the adaptive appearance model and improve tracking accuracy. As for the denoising module: directly fusing the matching results of the static and adaptive appearance models continuously introduces noise into the appearance model and degrades the expression of target features, so noise of uncertain type is removed by hierarchical clustering; the clustered appearance model can effectively estimate the scale change and rotation of the target during tracking, improving the accuracy of the tracking algorithm. The auxiliary tracking algorithm has high real-time performance and mutually corrects with the main tracking algorithm based on the data from the static, adaptive and denoising modules, enabling long-term stable tracking; it performs global search and rapid target re-capture in the region formed by the static and adaptive appearance models, effectively addressing the low probability of re-capturing a moving target that disappears and reappears; the static appearance model provides candidate search regions, reducing time complexity. The feature description of both appearance models is the local invariant feature SURF, chosen as a balance of time complexity and practical effect, with the Euclidean distance as the distance measure between SURF feature points; since global features are not robust when the target is partially occluded, this choice ensures a good tracking effect.
Drawings
FIG. 1 is a flow chart of the algorithm of the present invention.
Fig. 2 is a feature point map of the initial frame.
Fig. 3 is a feature point diagram for different frames.
FIG. 4 shows the corrected feature point maps of different frames.
FIG. 5 is a graph of the tracking results.
Detailed Description
Example 1.
As shown in fig. 1, an adaptive self-correcting target tracking method, characterized in that: the method divides the interference factors in target tracking into environmental factors and factors of the target itself, proposes a static appearance model and an adaptive appearance model for the two classes of interference factors respectively, denoises the two models and then fuses them, and finally improves tracking accuracy through a self-correcting tracking framework, which comprises a static module, an adaptive module, a denoising module and a target tracking algorithm module;
the static module always keeps the initial information of the target; it is the set of all matching relations between the static appearance model of the target in the initial frame and the static appearance model of the target in the current frame;
the adaptive module is updated every frame to adapt to changes of the target; the adaptive appearance model of the current frame consists of the region blocks near all key points in the target region of the previous frame;
the denoising module removes noise of uncertain type from the static and adaptive appearance models using a hierarchical clustering method, and obtains through the clustering three pieces of data about the tracked target: its center position, scale change and rotation angle;
the target tracking algorithm module comprises a main tracking algorithm based on the data from the static, adaptive and denoising modules, and an auxiliary tracking algorithm based on perceptual hashing; the main tracking algorithm provides suitable update samples for the auxiliary tracking algorithm, the auxiliary tracking algorithm performs global search and rapid target re-capture in the region formed by the static and adaptive appearance models, and the static appearance model provides the auxiliary tracking algorithm with candidate search regions.
Considering that global features are not robust when the target is partially occluded, the method adopts the local invariant feature SURF as the feature description of the appearance models; balancing time complexity and practical effect, the Euclidean distance is used as the distance measure between SURF feature points.
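The auxiliary tracker described above is "based on perceptual hash" but the patent does not specify the variant. As an illustrative assumption, the sketch below uses the simple 8x8 average-hash: a candidate region is reduced to a 64-bit fingerprint, and two regions are compared by the Hamming distance between their fingerprints, which is cheap enough for global search.

```python
# A minimal perceptual-hash sketch (average-hash variant, an assumption;
# the patent only states the auxiliary tracker is hash-based).

def average_hash(gray_patch):
    """gray_patch: 8x8 list of lists of grayscale values (0-255).
    Returns a 64-bit integer: bit is 1 where the pixel exceeds the mean."""
    pixels = [v for row in gray_patch for v in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for v in pixels:
        bits = (bits << 1) | (1 if v > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means similar patches."""
    return bin(h1 ^ h2).count("1")

patch_a = [[10 * (r + c) for c in range(8)] for r in range(8)]
patch_b = [[10 * (r + c) + 3 for c in range(8)] for r in range(8)]  # uniformly brighter
print(hamming_distance(average_hash(patch_a), average_hash(patch_b)))  # -> 0
```

Because the hash thresholds each pixel against the patch mean, a uniform brightness shift (patch_b) leaves the fingerprint unchanged, which is exactly the robustness a global re-capture search needs.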
1. Design of the appearance models
Both the static appearance model and the adaptive appearance model set an active region for each key point, so that every key point has sufficient freedom to sense, as far as possible, all changes of an object for which no prior information is available.
Since the initial condition of tracking is the position and scale of the target in the initial frame, we represent this information with a rectangular box, assuming the initial bounding box is b_0. First, feature point detection is performed on the bounding-box region to obtain the key point set P_0 = {p_1, p_2, ..., p_n}, where each key point p_i is located at a position x_i in the image.
Then, in the t-th frame, the target information is represented by the bounding box b_t, and the set of key points in the target region is denoted P_t.
Since the number of frames between the initial frame and the current frame can be any non-negative value, in order to describe the relationship between the target in the initial frame and in any later frame, a matching relation m_i = (p_i, x_i^t) is defined for each key point of the target in the initial frame, reflecting how the target changes between the two frames, where x_i^t is the position in the t-th frame of the key point p_i from the initial frame.
In the tracking problem, the objective is to find, in each frame, the set of correspondences that characterizes the change of the target as accurately as possible:

L_t = {m_1, m_2, m_3, ..., m_n}
Obtaining this correspondence set is the task of the proposed static-adaptive appearance model. A static correspondence set L_t^s is obtained through the static appearance model, an adaptive correspondence set L_t^a is obtained through the adaptive appearance model, and finally the two are merged according to certain rules to obtain L_t.
(1) Design of static appearance model
The static appearance model is built from the target appearance in the initial frame; it consists of the feature descriptors corresponding to all key points of the target region of interest in the initial frame (i.e., P_0).
For the static appearance model we define a static correspondence set: the set of all matching relations between the static appearance model of the target in the initial frame and that of the target in the current frame.
During tracking, for each key point p_i in the initial frame we conduct a global search in the current frame for a candidate key point that matches it. A candidate in the t-th frame matches p_i only if the distance between their feature descriptors is less than a specified threshold θ and, among all key points of the current frame, the candidate is the closest one to p_i by a margin determined by γ, where θ and γ are empirical values. In addition, candidate key points whose descriptors match the background key points of the initial frame are removed from the static appearance model of the current frame. Finally, the static correspondence set L_t^s is obtained, representing the correspondence between the target appearance in the initial frame and in the current frame as provided by the static appearance model.
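The acceptance rule for a candidate key point can be sketched as follows. Reading the margin condition as a nearest-neighbor ratio test is an assumption; the descriptor vectors and the values of theta and gamma below are illustrative, not from the patent.

```python
# Sketch of the static-model matching rule: accept a candidate only if its
# descriptor distance to the initial-frame key point is below theta AND the
# best match beats the second-best by the ratio gamma (assumed ratio test).
import math

def euclidean(d1, d2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def match_keypoint(query_desc, candidate_descs, theta=0.5, gamma=0.8):
    """Return the index of the matched candidate, or None if rejected."""
    order = sorted(range(len(candidate_descs)),
                   key=lambda i: euclidean(query_desc, candidate_descs[i]))
    best, second = order[0], order[1]
    d_best = euclidean(query_desc, candidate_descs[best])
    d_second = euclidean(query_desc, candidate_descs[second])
    if d_best < theta and d_best < gamma * d_second:  # distance + ratio test
        return best
    return None

query = [0.1, 0.2, 0.3]
candidates = [[0.9, 0.9, 0.9], [0.12, 0.21, 0.29], [0.5, 0.5, 0.5]]
print(match_keypoint(query, candidates))  # -> 1
```

In a real tracker the descriptors would be 64-dimensional SURF vectors, but the decision logic is the same.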
(2) Design of adaptive appearance model
The adaptive appearance model of the current frame is not built from the image information of the current frame; it consists of the region blocks near all key points in the target region of the previous frame. To unify the representation with the static correspondence set, after the adaptive correspondences of each frame are obtained and merged, they are mapped back to the initial frame: from the adaptive correspondence of the previous frame (frame t-1) that of the current frame (frame t) is derived, yielding the adaptive correspondence set L_t^a.
Comparing the static correspondence set with the adaptive correspondence set: since the static model cannot drift under environmental interference during tracking, whenever the two sets give different results for the same key point, the adaptive correspondence is rejected.
The filtered static correspondence set and the adaptive correspondence set are then merged, and the merged set is recorded as the correspondence set L_t.
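The merge-with-rejection step above can be sketched as follows. Representing correspondences as a mapping from key-point id to matched position is an illustrative choice, not the patent's data structure.

```python
# Sketch of merging the static and adaptive correspondence sets: when both
# propose a match for the same initial key point but disagree, the adaptive
# one is dropped, because the static model does not drift.

def merge_correspondences(static_set, adaptive_set):
    merged = dict(static_set)  # static matches are trusted as-is
    for kp_id, pos in adaptive_set.items():
        if kp_id in static_set and static_set[kp_id] != pos:
            continue  # conflict: reject the adaptive correspondence
        merged[kp_id] = pos
    return merged

static_set = {1: (10, 10), 2: (20, 22)}
adaptive_set = {2: (25, 28), 3: (30, 31)}  # kp 2 conflicts, kp 3 is new
print(merge_correspondences(static_set, adaptive_set))
# -> {1: (10, 10), 2: (20, 22), 3: (30, 31)}
```

The adaptive set still contributes the key points the static model has lost (kp 3 here), which is what lets the tracker survive appearance change without drifting.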
denoising aiming at target appearance model
(1) Formula and principle
After the correspondences output by the static and adaptive appearance models are merged, the region representing the target inevitably expands, and some regions that should be background are also included in the target region.
Single-linkage hierarchical clustering is therefore applied to the correspondence set L_t according to a dissimilarity measure D, with a threshold T on the dissimilarity, to obtain the clustering result.
Suppose d_ij denotes the dissimilarity between correspondence samples m_i and m_j, and D_ij denotes the dissimilarity between the clusters C_i and C_j containing them; then D_ij is the distance between the nearest pair of samples of the two clusters, i.e. D_ij = min { d_kl : m_k ∈ C_i, m_l ∈ C_j }.
The output of the appearance model, L_t, is clustered as follows:
1. First, each sample is treated as a cluster of its own; the shortest distance between clusters C_i and C_j is D_ij = d_ij.
2. Let t = 0; the shortest distances between all clusters form a distance matrix M(t).
3. Find the minimum off-diagonal element of M(t) that does not exceed the distance threshold T; denote it M_pq. Merge C_p and C_q into a new cluster C_r = {C_p, C_q}.
4. Compute the shortest distance between the new cluster C_r and every other cluster C_k as D_kr = min{D_kp, D_kq}; update the distance matrix M(t) by merging rows p and q and columns p and q into a new row and column corresponding to C_r, with the new distances given by D_kr = min{D_kp, D_kq}; denote the resulting matrix M(t+1).
5. Assign t+1 to t and return to step 3, until every element of M(t) is larger than T or all samples have been merged into a single cluster; clustering is then complete.
After clustering L_t, it is assumed that the largest cluster contains all the correspondences related to the object of interest, while the correspondences in the other clusters are related to the background. Therefore, when estimating the tracking result of each frame, only the largest cluster, denoted L_t^max, is used.
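The five steps above can be sketched directly. For brevity the samples are one-dimensional positions and the threshold is an illustrative value; real correspondences would be 2-D positions with a Euclidean dissimilarity.

```python
# Sketch of single-linkage hierarchical clustering with a distance
# threshold T, followed by selecting the largest cluster as the target.

def single_linkage(samples, T):
    clusters = [[s] for s in samples]            # step 1: each sample is its own cluster
    while len(clusters) > 1:
        # step 3: find the closest pair of clusters (nearest-sample distance)
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best[0] > T:                          # step 5: stop when all gaps exceed T
            break
        d, i, j = best
        clusters[i] = clusters[i] + clusters[j]  # step 4: merge into a new cluster
        del clusters[j]
    return clusters

points = [1.0, 1.2, 1.1, 5.0, 5.1, 9.0]          # two tight groups plus an outlier
clusters = single_linkage(points, T=0.5)
largest = max(clusters, key=len)                 # keep only the largest cluster
print(sorted(largest))  # -> [1.0, 1.1, 1.2]
```

The background correspondences (5.0, 5.1, 9.0) end up in smaller clusters and are discarded, which is exactly the denoising effect the module relies on.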
(2) State estimation of the three target data: center position, scale change and rotation angle
In the target tracking process, the tracking result of each frame consists of three quantities:
(1) the center position of the target.
The target displacement parameter μ indicates the change of the target center position: assuming the center position of the target in the initial frame is x_0, the center position of the target in the current frame can be expressed as x_t = x_0 + μ.
To measure the center position accurately, μ is taken as the average of the displacement parameters represented by all matching relations in the largest cluster L_t^max output by the clustering:

μ = (1 / num) Σ μ_i

where num is the number of correspondences in the largest cluster L_t^max, and μ_i = x_i^t - x_i^0 is the estimate of the target displacement given by the i-th matching relation. The displacement parameter of the target center position is estimated in this way for each frame.
(2) The scale of the target.
A scale coefficient s characterizes the scale change of the target: assuming the scale of the target in the initial frame is S_0 and the scale of the target in the current frame is S_t, then S_t = s × S_0.
The scale coefficient is estimated from the relative positions of the matched key points: for every pair of key points, the distance between them in the current frame is divided by the distance between the same pair in the initial frame, and s is taken as the median of these ratios:

s = med { ||x_i^t - x_j^t|| / ||x_i^0 - x_j^0|| : i ≠ j }

where ||x_i^t - x_j^t|| is the target scale of the current frame estimated by a pair of matching relations, ||x_i^0 - x_j^0|| is the corresponding scale in the initial frame, and med denotes the median. The target scale variation coefficient is calculated in this way for each frame.
(3) The rotation angle of the target (in radians).
The rotation is characterized by α: assuming the rotation angle of the target in the initial frame is α_0 = 0, the rotation angle of the target in the current frame is α_t = α_0 + α = α.
The target rotation angle is estimated from the relative positions of the matched key points. Let θ_ij^0 denote the angle with respect to the horizontal direction of the line through key points i and j, calculated from their relative positions in the initial frame, and θ_ij^t the corresponding angle calculated from their relative positions in the current frame. The rotation angle of the target in the current frame relative to the initial frame is then

α = med { θ_ij^t - θ_ij^0 : i ≠ j }

where med denotes the median.
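The three state estimates can be computed together from the matched key-point positions. The sketch below follows the text's description (mean displacement for the center, medians over pairwise key-point geometry for scale and rotation); the key-point coordinates are illustrative.

```python
# Sketch of the per-frame state estimation: center displacement mu,
# scale coefficient s (median of pairwise distance ratios) and rotation
# angle alpha (median of pairwise angle differences).
import math
from statistics import median
from itertools import combinations

def estimate_state(initial_pts, current_pts):
    n = len(initial_pts)
    # (1) center displacement: mean of per-keypoint displacements mu_i
    mu_x = sum(c[0] - i[0] for i, c in zip(initial_pts, current_pts)) / n
    mu_y = sum(c[1] - i[1] for i, c in zip(initial_pts, current_pts)) / n
    ratios, angle_diffs = [], []
    for a, b in combinations(range(n), 2):
        # (2) scale: ratio of the pairwise distance in the current frame
        # to the same pair's distance in the initial frame
        d0 = math.dist(initial_pts[a], initial_pts[b])
        dt = math.dist(current_pts[a], current_pts[b])
        ratios.append(dt / d0)
        # (3) rotation: difference of the pairwise angles to the horizontal
        ang0 = math.atan2(initial_pts[b][1] - initial_pts[a][1],
                          initial_pts[b][0] - initial_pts[a][0])
        angt = math.atan2(current_pts[b][1] - current_pts[a][1],
                          current_pts[b][0] - current_pts[a][0])
        angle_diffs.append(angt - ang0)
    return (mu_x, mu_y), median(ratios), median(angle_diffs)

init = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
curr = [(5.0, 5.0), (7.0, 5.0), (5.0, 7.0)]  # pure translation by (5, 5)
mu, s, alpha = estimate_state(init, curr)
print(mu, s, alpha)  # -> (5.0, 5.0) 1.0 0.0
```

Using medians for s and α makes the estimates robust to the residual outlier correspondences that survive the clustering step.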
the third, detailed procedure
As shown in fig. 2 to 5, the specific process is divided into three parts, i.e., input, target tracking, and output.
Input: the position of the target in the initial frame, represented by (x, y, w, h): the horizontal and vertical coordinates of the target in the image frame, and the width and height of the target's rectangular box.
And (3) target tracking process:
(1) Static appearance model: extract the rectangular box region of the initial frame and use SURF (speeded-up robust features) to divide the image into target feature points and background feature points, forming a feature library, as shown in fig. 2; (2) adaptive appearance model: the feature points of the target in frame t are obtained through the forward optical flow (frame t-1 to frame t) and backward optical flow (frame t to frame t-1) of the pyramid LK optical flow; this process inevitably includes various noise, as shown in fig. 3; (3) correction by the static appearance model of the target features obtained by the adaptive appearance model: 1) globally match the feature points of the frame-t target against the feature library; 2) locally match the frame-t target features against the target feature points of the initial frame, remove unstable feature points, and determine the final target feature points, used as the output of the appearance model for estimating appearance and scale information, as shown in fig. 4; (4) denoising by single-linkage hierarchical clustering: denoise the output of the appearance model, take the result as the initial condition for computing an affine transformation matrix, and effectively estimate the scale and rotation angle of the target, achieving robustness and real-time performance of the tracking algorithm, as shown in fig. 5.
Output: the target location in the next frame.
Claims (2)
1. An adaptive self-correcting target tracking method, characterized in that: the method divides the interference factors in target tracking into environmental factors and factors of the target itself, proposes a static appearance model and an adaptive appearance model for the two classes of interference factors respectively, denoises the two models and then fuses them, and finally improves tracking accuracy through a self-correcting tracking framework, which comprises a static module, an adaptive module, a denoising module and a target tracking algorithm module;
the static module always keeps the initial information of the target; it is the set of all matching relations between the static appearance model of the target in the initial frame and the static appearance model of the target in the current frame;
the adaptive module is updated every frame to adapt to changes of the target; the adaptive appearance model of the current frame consists of the region blocks near all key points in the target region of the previous frame;
the denoising module removes noise of uncertain type from the static and adaptive appearance models using a hierarchical clustering method, and obtains through the clustering three pieces of data about the tracked target: its center position, scale change and rotation angle;
the target tracking algorithm module comprises a main tracking algorithm based on the data from the static, adaptive and denoising modules, and an auxiliary tracking algorithm based on perceptual hashing; the main tracking algorithm provides suitable update samples for the auxiliary tracking algorithm, the auxiliary tracking algorithm performs global search and rapid target re-capture in the region formed by the static and adaptive appearance models, and the static appearance model provides the auxiliary tracking algorithm with candidate search regions.
2. The adaptive self-correcting target tracking method according to claim 1, characterized in that: the feature description used by the static and adaptive appearance models is the local invariant feature SURF, chosen as a balance of time complexity and practical effect, and the Euclidean distance is used as the distance measure between SURF feature points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710354666.0A CN107194947B (en) | 2017-05-18 | 2017-05-18 | Target tracking method with self-adaptive self-correction function |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710354666.0A CN107194947B (en) | 2017-05-18 | 2017-05-18 | Target tracking method with self-adaptive self-correction function |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107194947A true CN107194947A (en) | 2017-09-22 |
CN107194947B CN107194947B (en) | 2021-04-02 |
Family
ID=59875643
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710354666.0A Active CN107194947B (en) | 2017-05-18 | 2017-05-18 | Target tracking method with self-adaptive self-correction function |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107194947B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110349177A (en) * | 2019-07-03 | 2019-10-18 | 广州多益网络股份有限公司 | Face key point tracking method and system for continuous-frame video streams |
CN115276799A (en) * | 2022-07-27 | 2022-11-01 | 西安理工大学 | Decision threshold self-adapting method for undersampling modulation and demodulation in optical imaging communication |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106203423A (en) * | 2016-06-26 | 2016-12-07 | 广东外语外贸大学 | A kind of weak structure perception visual target tracking method of integrating context detection |
CN106651909A (en) * | 2016-10-20 | 2017-05-10 | 北京信息科技大学 | Background weighting-based scale and orientation adaptive mean shift method |
-
2017
- 2017-05-18 CN CN201710354666.0A patent/CN107194947B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106203423A (en) * | 2016-06-26 | 2016-12-07 | 广东外语外贸大学 | A kind of weak structure perception visual target tracking method of integrating context detection |
CN106651909A (en) * | 2016-10-20 | 2017-05-10 | 北京信息科技大学 | Background weighting-based scale and orientation adaptive mean shift method |
Non-Patent Citations (1)
Title |
---|
齐苏敏 (Qi Sumin): "Perceptual hash tracking algorithm based on improved Adaboost feature detection", 《通信技术》 (Communications Technology) * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110349177A (en) * | 2019-07-03 | 2019-10-18 | 广州多益网络股份有限公司 | Face key point tracking method and system for continuous-frame video streams |
CN110349177B (en) * | 2019-07-03 | 2021-08-03 | 广州多益网络股份有限公司 | Method and system for tracking key points of human face of continuous frame video stream |
CN115276799A (en) * | 2022-07-27 | 2022-11-01 | 西安理工大学 | Decision threshold self-adapting method for undersampling modulation and demodulation in optical imaging communication |
Also Published As
Publication number | Publication date |
---|---|
CN107194947B (en) | 2021-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4644248B2 (en) | Simultaneous positioning and mapping using multi-view feature descriptors | |
Jiang et al. | Multiscale locality and rank preservation for robust feature matching of remote sensing images | |
US11138742B2 (en) | Event-based feature tracking | |
Dou et al. | Robust visual tracking based on interactive multiple model particle filter by integrating multiple cues | |
Choi et al. | Robust 3D visual tracking using particle filtering on the SE (3) group | |
Michot et al. | Bi-objective bundle adjustment with application to multi-sensor slam | |
US20230117498A1 (en) | Visual-inertial localisation in an existing map | |
Jiang et al. | High speed long-term visual object tracking algorithm for real robot systems | |
CN110942473A (en) | Moving target tracking detection method based on characteristic point gridding matching | |
CN107194947B (en) | Target tracking method with self-adaptive self-correction function | |
CN113129332A (en) | Method and apparatus for performing target object tracking | |
CN117870659A (en) | Visual inertial integrated navigation algorithm based on dotted line characteristics | |
Zhang et al. | Target tracking for mobile robot platforms via object matching and background anti-matching | |
CN113838072B (en) | High-dynamic star map image segmentation method | |
CN106934818B (en) | Hand motion tracking method and system | |
CN108596950B (en) | Rigid body target tracking method based on active drift correction | |
CN108346158B (en) | Multi-target tracking method and system based on main block data association | |
CN111563489A (en) | Target tracking method and device and computer storage medium | |
Lin et al. | Breaking of brightness consistency in optical flow with a lightweight CNN network | |
CN110264508B (en) | Vanishing point estimation method based on convex quadrilateral principle | |
CN117495900B (en) | Multi-target visual tracking method based on camera motion trend estimation | |
CN111462181B (en) | Video single-target tracking method based on rectangular asymmetric inverse layout model | |
Gomez-Ojeda et al. | Accurate stereo visual odometry with gamma distributions | |
Yan et al. | Real-time tracking of deformable objects based on MOK algorithm | |
Persson | Online monocular slam: Rittums |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |