CN111145221A - Target tracking algorithm based on multi-layer depth feature extraction - Google Patents

Target tracking algorithm based on multi-layer depth feature extraction

Info

Publication number
CN111145221A
CN111145221A
Authority
CN
China
Prior art keywords
target
layer depth
target tracking
depth feature
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911419269.2A
Other languages
Chinese (zh)
Inventor
许廷发
吴零越
吴凡
张语珊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Chongqing Innovation Center of Beijing University of Technology
Original Assignee
Beijing Institute of Technology BIT
Chongqing Innovation Center of Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT, Chongqing Innovation Center of Beijing University of Technology filed Critical Beijing Institute of Technology BIT
Priority to CN201911419269.2A priority Critical patent/CN111145221A/en
Publication of CN111145221A publication Critical patent/CN111145221A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target tracking algorithm based on multi-layer depth feature extraction, relating to the technical field of image processing, and comprising the following steps. S1: inputting an image; S2: extracting a feature map; S3: obtaining the best matching template; S4: updating the best matching template; S5: repeating step S4 until target tracking of the current video is completed. In S2, according to the position and size information of the target in the first frame image, a deep neural network is used to extract a multi-layer sample feature map, and the multi-layer depth features serve as the appearance representation of the sample for target tracking. Obtaining the multi-layer depth features of the target in this way reduces the number of parameters and improves the accuracy and robustness of the target tracking process.

Description

Target tracking algorithm based on multi-layer depth feature extraction
Technical Field
The invention relates to the technical field of image processing, in particular to a target tracking algorithm based on multi-layer depth feature extraction.
Background
Target tracking is one of the main research directions in the field of computer vision in recent years and is widely applied in areas such as automatic driving, intelligent monitoring, and human-computer interaction. Target tracking mainly refers to finding the target position in each frame according to the target position information given in the first frame of a video.
In recent years, as deep learning research in computer vision has advanced, an increasing number of target tracking algorithms have combined deep neural networks with traditional tracking methods.
Many algorithms based on depth feature extraction do not exploit the advantages of deeper neural networks: the number of extracted feature parameters is too large and the risk of overfitting is high, which reduces the tracking accuracy of the target tracking algorithm.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: how to reduce the number of parameters and improve the accuracy and robustness of the target tracking process.
To solve this technical problem, the invention provides a target tracking algorithm based on multi-layer depth feature extraction, comprising the following steps:
S1: inputting an image;
S2: extracting a feature map;
S3: obtaining the best matching template;
S4: updating the best matching template;
S5: repeating step S4 until target tracking of the current video is completed;
Specifically, in S2, according to the position and size information of the target in the first frame image, a multi-layer sample feature map is extracted using a deep neural network, exploiting the advantages of the deep network to obtain a more accurate tracking result.
Further, S1 specifically comprises:
inputting a video sequence to be tracked;
and initializing the position and size information of the target in the first frame image.
Further, S2 also includes performing adaptive PCA processing on the feature map. The adaptive PCA algorithm reduces the feature parameters and adaptively selects an appropriate feature dimension for different video sequences; selecting effective feature dimensions through PCA reduces the number of parameters, thereby reducing the risk of overfitting and improving robustness.
Further, S3 specifically comprises:
taking the center of the target in the feature map as the Gaussian label peak;
and obtaining the best matching template through the ADMM algorithm.
Further, S4 specifically comprises:
S41: inputting the next frame image;
S42: obtaining a multi-scale, multi-layer depth feature map;
S43: matching with the best matching template to obtain the target position and target scale in the image;
S44: updating the best matching template.
Further, S42 specifically comprises:
taking the target center position of the previous frame as the center, extracting sample feature maps of image patches of different sizes;
and performing adaptive PCA processing on the feature maps.
Further, S43 specifically comprises:
performing correlation matching between the multi-scale feature maps and the best matching template to obtain multi-scale confidence score maps;
selecting the confidence score map corresponding to the scale with the highest confidence score;
taking the point with the highest score in that confidence score map as the tracked target center position for this frame;
and obtaining the target size for this frame from the scale factor corresponding to that confidence score map.
Further, S44 specifically comprises:
according to the target feature map obtained in this frame, with the target center as the Gaussian label peak, performing online passive-aggressive learning using the matching template of the previous frame; using the previous frame's template for online passive-aggressive learning ensures similarity between the current frame's template and the previous frame's template while updating the template, effectively reducing template drift;
and updating the best matching template through the ADMM algorithm, so that the optimization of the best template is carried out quickly and efficiently.
By adopting the above technical solution, the invention has the following beneficial effects: the deep neural network is used to extract multi-layer depth features as the appearance representation of the sample for target tracking; obtaining the multi-layer depth features of the target in this way reduces the number of parameters and improves the accuracy and robustness of the target tracking process.
Drawings
The invention will now be described, by way of example, with reference to the accompanying drawings, in which:
fig. 1 is a flowchart of a target tracking algorithm based on multi-layer depth feature extraction according to the present invention.
Detailed Description
To clearly illustrate the objects, technical solutions, and advantages of the present invention, the invention is described in detail below in conjunction with the accompanying drawings and embodiments, to facilitate understanding by those skilled in the art. It should be understood that the embodiments described herein are illustrative only and do not limit the scope of the invention; the invention is to be given the full breadth of the appended claims, and any modifications that may occur to one skilled in the art and that fall within the spirit and scope of the invention as defined by those claims are intended to be protected.
As shown in fig. 1, the target tracking algorithm based on multi-layer depth feature extraction according to the present invention includes the following steps:
s1, image input: inputting a video sequence to be tracked, and initializing the position and size information of the target in the first frame image.
S2, feature extraction: according to the position and size information of the target in the first frame image, a deep neural network is used to extract multi-layer sample features, and the feature maps are processed with adaptive PCA to reduce the parameters.
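For illustration only (this sketch is not part of the patent), multi-layer depth features of the kind described in S2 could be extracted with a pretrained convolutional network as follows; the VGG-16 backbone, the selected layer indices, and the function names are assumptions, since the patent does not name a specific network:

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

def extract_multilayer_features(patch, layer_ids=(9, 16, 23)):
    """Hypothetical multi-layer depth feature extractor.

    patch:     HxWx3 uint8 image region cropped around the target.
    layer_ids: indices into vgg.features whose outputs are kept
               (an assumption; the patent does not specify layers).
    Returns a list of feature maps, one per selected layer.
    """
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
    x = TF.to_tensor(patch)                                   # 3xHxW in [0, 1]
    x = TF.normalize(x, mean=[0.485, 0.456, 0.406],
                     std=[0.229, 0.224, 0.225]).unsqueeze(0)  # ImageNet stats
    feats = []
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layer_ids:
                feats.append(x.squeeze(0).numpy())            # D_i x M_i x N_i
    return feats
```

In practice the layers would be chosen so that shallow maps keep spatial detail while deep maps carry semantics, which is the motivation for using multiple layers as the appearance representation.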
S3, template acquisition: according to the processed target feature map, with the target center as the Gaussian label peak, the best matching template is obtained through optimization calculation by the ADMM algorithm.
S4, next frame input: the next frame image is input; with the target center position of the previous frame as the center, sample feature maps of image patches of different sizes are extracted, and adaptive PCA processing is performed on the feature maps.
S5, template matching: the matching template is correlated with the multi-scale feature maps to obtain multi-scale confidence score maps, and the confidence score map corresponding to the scale with the highest confidence score is selected.
S6, target localization: in the confidence score map obtained in S5, the point with the highest score is the center position of the target tracked in this frame, and the target size of this frame is calculated from the scale factor corresponding to that confidence score map.
S7, template update: according to the target feature map obtained in this frame, with the target center as the Gaussian label peak, online passive-aggressive learning is performed using the matching template of the previous frame, and the best matching template is updated through optimization by the ADMM algorithm.
S8, loop: S4 to S7 are repeated until the video ends, completing target tracking of the current video.
Further, the specific steps of initializing the target to be tracked in the first frame image of the video sequence in S1 are as follows:
the first frame image of the video is input, the target position is given, and a rectangular box marking the target is displayed.
Further, the specific process in S2 is as follows:
and obtaining a multilayer characteristic diagram x of the D-dimensional target sample with the length and the width of M and N respectively by using the deep neural network. Using a matrix of size D × C (P ═ P)d,c) Converting feature x to P in C dimensionTx。
Further, the specific process in S3 is as follows:
and (3) obtaining a Gaussian label y by taking the target center as a Gaussian label peak value, and calculating to obtain an optimal matching template f by optimizing the following formula through an ADMM algorithm:
Figure BDA0002351934330000051
where w is the spatial weight matrix.
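For illustration, a two-dimensional Gaussian label peaked at the target center could be constructed as follows; the bandwidth sigma is an assumed parameter not given in the patent:

```python
import numpy as np

def gaussian_label(M, N, center, sigma=2.0):
    """2-D Gaussian label y with its peak at the target center.

    M, N:   feature map height and width.
    center: (row, col) of the target center in feature map coordinates.
    sigma:  label bandwidth (assumption; the patent gives no value).
    """
    rows = np.arange(M)[:, None] - center[0]
    cols = np.arange(N)[None, :] - center[1]
    return np.exp(-(rows**2 + cols**2) / (2.0 * sigma**2))
```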
Further, the specific process in S5 is as follows:
The PCA-processed feature map x is correlated with the template f to obtain the confidence score map S_f(x):

$$S_f(x)=(P^{T}x)*f$$

The point with the highest score in the confidence score map S_f(x) is the center position of the target tracked in this frame, and the target size of this frame is calculated from the scale factor corresponding to this confidence score map.
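A sketch of this matching step, combined with the multi-scale search of S4 to S6, might look as follows; the FFT-based correlation convention and the scale factors used here are assumptions for illustration:

```python
import numpy as np

def confidence_map(feat, template):
    """S_f(x): channel-summed correlation of the features with the template.

    feat, template: C x M x N arrays (PCA-processed features and filter f).
    The correlation is evaluated in the frequency domain for speed.
    """
    F = np.fft.fft2(feat, axes=(-2, -1))
    H = np.fft.fft2(template, axes=(-2, -1))
    resp = np.fft.ifft2(np.conj(F) * H, axes=(-2, -1)).real
    return resp.sum(axis=0)                     # M x N confidence score map

def locate_target(feats_per_scale, template, scales=(0.95, 1.0, 1.05)):
    """Pick the scale whose confidence map peaks highest; return peak and scale."""
    maps = [confidence_map(f, template) for f in feats_per_scale]
    best = max(range(len(scales)), key=lambda i: maps[i].max())
    pos = np.unravel_index(np.argmax(maps[best]), maps[best].shape)
    return pos, scales[best]
```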
Further, the specific process in S7 is as follows:
the target feature map P obtained from this frameTx, obtaining a Gaussian label y by taking the target center as the peak value of the Gaussian label, and utilizing the matching template f of the previous framet-1And (3) carrying out online passive attack learning, introducing a time regular term, wherein mu is a time regular parameter, optimizing the following formula through an ADMM algorithm, and updating an optimal matching template f:
Figure BDA0002351934330000052
To solve the above formula, an auxiliary variable g is introduced, converting it into an equality-constrained optimization problem:

$$\arg\min_{f,g}\ \frac{1}{2}\Big\|\sum_{c=1}^{C} x^{c}*f^{c}-y\Big\|^{2}+\frac{1}{2}\sum_{c=1}^{C}\big\|w\odot g^{c}\big\|^{2}+\frac{\mu}{2}\big\|f-f_{t-1}\big\|^{2}\qquad \text{s.t. } f=g$$
The above problem can be iteratively optimized using the ADMM algorithm. It is first converted to the augmented Lagrangian form:

$$L(f,g,h)=\frac{1}{2}\Big\|\sum_{c=1}^{C} x^{c}*f^{c}-y\Big\|^{2}+\frac{1}{2}\sum_{c=1}^{C}\big\|w\odot g^{c}\big\|^{2}+\frac{\mu}{2}\big\|f-f_{t-1}\big\|^{2}+h^{T}(f-g)+\frac{\gamma}{2}\big\|f-g\big\|^{2}$$
where h and γ are the Lagrange multiplier and the step-size parameter, respectively. Letting s = h/γ, the above formula can be converted into:

$$L(f,g,s)=\frac{1}{2}\Big\|\sum_{c=1}^{C} x^{c}*f^{c}-y\Big\|^{2}+\frac{1}{2}\sum_{c=1}^{C}\big\|w\odot g^{c}\big\|^{2}+\frac{\mu}{2}\big\|f-f_{t-1}\big\|^{2}+\frac{\gamma}{2}\big\|f-g+s\big\|^{2}$$
the closed-form solution of the above equation can be obtained by iteratively solving the following three sub-problems:
Figure BDA0002351934330000064
the detailed solution process for each sub-problem is as follows:
(1) Solving the f sub-problem
According to Parseval's theorem, the first sub-problem can be expressed in the frequency domain as:

$$\hat f^{(i+1)}=\arg\min_{\hat f}\ \Big\|\sum_{c=1}^{C}\hat x^{c}\odot\hat f^{c}-\hat y\Big\|^{2}+\mu\big\|\hat f-\hat f_{t-1}\big\|^{2}+\gamma\big\|\hat f-\hat g^{(i)}+\hat s^{(i)}\big\|^{2}$$

where the hat denotes the discrete Fourier transform. Considering the values of all C channels at each pixel, the above problem decomposes into MN independent sub-problems. Let $V_j(\hat f)\in\mathbb{C}^{C}$ denote the vector formed by the j-th pixel across all channels of $\hat f$; each sub-problem can then be defined as:

$$V_j(\hat f)=\arg\min_{V_j(\hat f)}\ \big\|V_j(\hat x)^{T}V_j(\hat f)-\hat y_j\big\|^{2}+\mu\big\|V_j(\hat f)-V_j(\hat f_{t-1})\big\|^{2}+\gamma\big\|V_j(\hat f)-V_j(\hat g)+V_j(\hat s)\big\|^{2}$$

Setting the derivative of the above formula with respect to $V_j(\hat f)$ to zero yields the closed-form solution:

$$V_j(\hat f)=\Big(V_j(\hat x)V_j(\hat x)^{T}+(\mu+\gamma)I\Big)^{-1}\rho_j,\qquad \rho_j=V_j(\hat x)\,\hat y_j+\mu\,V_j(\hat f_{t-1})+\gamma\big(V_j(\hat g)-V_j(\hat s)\big)$$

Since $V_j(\hat x)V_j(\hat x)^{T}$ is a rank-1 matrix, the matrix inverse can be computed quickly according to the Sherman-Morrison formula

$$\big(A+uv^{T}\big)^{-1}=A^{-1}-\frac{A^{-1}uv^{T}A^{-1}}{1+v^{T}A^{-1}u}$$

which gives:

$$V_j(\hat f)=\frac{1}{\mu+\gamma}\Big(I-\frac{V_j(\hat x)V_j(\hat x)^{T}}{\mu+\gamma+V_j(\hat x)^{T}V_j(\hat x)}\Big)\rho_j$$
(2) Solving the g sub-problem
From the second sub-problem, the closed-form solution for g is:

$$g=\big(W^{T}W+\gamma I\big)^{-1}\big(\gamma f+h\big)$$

where W = diag(w) denotes the diagonal matrix formed from the spatial weight w, and h = γs.
(3) Lagrange multiplier update
The Lagrange multiplier is updated by:

$$h^{(i+1)}=h^{(i)}+\gamma\big(f^{(i+1)}-g^{(i+1)}\big)$$

where f^{(i+1)} and g^{(i+1)} are the solutions of the two sub-problems above at the (i+1)-th iteration.
The step-size parameter is updated as:

$$\gamma^{(i+1)}=\min\big(\gamma_{\max},\ \rho\,\gamma^{(i)}\big)$$

where γ_max and ρ denote the maximum step size and the scale factor, respectively.
The tracking model of the invention is convex and satisfies the Eckstein-Bertsekas condition; therefore, the ADMM iteration converges to the global optimum, and each sub-problem has a closed-form solution.
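For illustration only, the ADMM iteration described above can be sketched in Python/NumPy under the reconstruction used in this section; the hyper-parameter values (μ, initial γ, ρ, iteration count) are assumptions, not values disclosed by the patent:

```python
import numpy as np

def admm_update(xf, yf, f_prev, w, mu=15.0, gamma=1.0,
                gamma_max=10000.0, rho=10.0, iters=4):
    """Sketch of the ADMM template update under the reconstruction above.

    xf:     C x M x N DFT of the PCA-processed sample features.
    yf:     M x N DFT of the Gaussian label y.
    f_prev: C x M x N previous template f_{t-1} (spatial domain).
    w:      M x N spatial weight map.
    mu, gamma, gamma_max, rho, iters: assumed hyper-parameter values.
    """
    g = f_prev.copy()
    h = np.zeros_like(f_prev)
    fpf = np.fft.fft2(f_prev, axes=(-2, -1))
    for _ in range(iters):
        gf = np.fft.fft2(g, axes=(-2, -1))
        hf = np.fft.fft2(h, axes=(-2, -1))
        # f sub-problem: per-pixel closed form via the Sherman-Morrison formula.
        rhs = xf * np.conj(yf) + mu * fpf + gamma * gf - hf   # rho_j, C x M x N
        sx = np.sum(np.conj(xf) * xf, axis=0).real            # x_j^H x_j per pixel
        proj = np.sum(np.conj(xf) * rhs, axis=0)              # x_j^H rho_j per pixel
        ff = (rhs - xf * proj / (mu + gamma + sx)) / (mu + gamma)
        f = np.fft.ifft2(ff, axes=(-2, -1)).real
        # g sub-problem: g = (W^T W + gamma I)^{-1}(gamma f + h), element-wise.
        g = (gamma * f + h) / (w**2 + gamma)
        # Lagrange multiplier and step-size updates.
        h = h + gamma * (f - g)
        gamma = min(gamma_max, rho * gamma)
    return f
```

Because the f sub-problem decomposes per pixel, the per-iteration cost is dominated by the FFTs, i.e. O(C·MN·log(MN)).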
Further, the specific process in S8 is as follows:
and processing all frames in the video by using S2 to S8 in sequence until all frames in the video are processed, namely completing the tracking of the target in the video.
The experimental hardware environment of the invention is an Intel i5-4570 CPU, 16 GB of memory, and an NVIDIA GTX 1080 GPU. The software environment is the Windows 7 x64 operating system and MATLAB 2014b, using the MatConvNet and AutoNN toolboxes. The invention was tested against other algorithms. A total of 2 published computer vision test videos were used in this test to validate the algorithm. The main information of the public videos used is shown in the following table:
[Table: main information of the two public test video sequences, CarScale and Skating2]
the tracking results of five different tracking algorithms on the CarScale video sequence show that as the scale of the target automobile is continuously increased, other algorithms cannot be well adapted to the change of the target scale, and the tracking effect of the invention is better.
The tracking results of the Skating2 video sequence by the five different tracking algorithms show that other algorithms cannot accurately track the target in the continuous rapid movement and deformation process of the target, and the method can be well adapted to the deformation of the target, and has better tracking effect.
The invention uses a deep neural network to extract multi-layer sample features, making full use of the advantages of deep networks. Adaptive PCA processing of the feature maps reduces the parameters and adaptively selects an appropriate feature dimension for different video sequences, thereby reducing the risk of overfitting, improving robustness, and obtaining a more accurate tracking effect. Online passive-aggressive learning with the matching template of the previous frame ensures similarity between the current and previous templates while updating the template, effectively reducing template drift; the ADMM algorithm carries out the optimization of the best template quickly and efficiently, improving the accuracy and robustness of target tracking.
While the foregoing shows and describes a preferred embodiment of the invention, it is to be understood that the invention is not limited to the form disclosed herein; the description is not intended to be exhaustive or to exclude other embodiments, and the invention may be used in various other combinations, modifications, and environments, and may be altered within the scope of the inventive concept described herein through the above teachings or the skill or knowledge of the relevant art. Modifications and variations made by those skilled in the art that do not depart from the spirit and scope of the invention are intended to fall within the protection of the appended claims.

Claims (8)

1. A target tracking algorithm based on multi-layer depth feature extraction, characterized by comprising the following steps:
S1: inputting an image;
S2: extracting a feature map;
S3: obtaining the best matching template;
S4: updating the best matching template;
S5: repeating step S4 until target tracking of the current video is completed;
wherein, in S2, according to the position and size information of the target in the first frame image, a multi-layer sample feature map is extracted using a deep neural network.
2. The multi-layer depth feature extraction-based target tracking algorithm of claim 1, wherein S1 specifically comprises:
inputting a video sequence to be tracked;
and initializing the position and size information of the target in the first frame image.
3. The multi-layer depth feature extraction-based target tracking algorithm of claim 1, wherein S2 further comprises:
performing adaptive PCA processing on the feature map.
4. The multi-layer depth feature extraction-based target tracking algorithm of claim 1, wherein S3 specifically comprises:
taking the center of the target in the feature map as the Gaussian label peak;
and obtaining the best matching template through the ADMM algorithm.
5. The multi-layer depth feature extraction-based target tracking algorithm of claim 1, wherein S4 specifically comprises:
S41: inputting the next frame image;
S42: obtaining a multi-scale, multi-layer depth feature map;
S43: matching with the best matching template to obtain the target position and target scale in the image;
S44: updating the best matching template.
6. The multi-layer depth feature extraction-based target tracking algorithm of claim 5, wherein S42 specifically comprises:
taking the target center position of the previous frame as the center, extracting sample feature maps of image patches of different sizes;
and performing adaptive PCA processing on the feature maps.
7. The multi-layer depth feature extraction-based target tracking algorithm of claim 5, wherein S43 specifically comprises:
performing correlation matching between the multi-scale feature maps and the best matching template to obtain multi-scale confidence score maps;
selecting the confidence score map corresponding to the scale with the highest confidence score;
taking the point with the highest score in that confidence score map as the tracked target center position for this frame;
and obtaining the target size for this frame from the scale factor corresponding to that confidence score map.
8. The multi-layer depth feature extraction-based target tracking algorithm of claim 5, wherein S44 specifically comprises:
according to the target feature map obtained in this frame, with the target center as the Gaussian label peak, performing online passive-aggressive learning using the matching template of the previous frame;
and updating the best matching template through the ADMM algorithm.
CN201911419269.2A 2019-12-31 2019-12-31 Target tracking algorithm based on multi-layer depth feature extraction Pending CN111145221A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911419269.2A CN111145221A (en) 2019-12-31 2019-12-31 Target tracking algorithm based on multi-layer depth feature extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911419269.2A CN111145221A (en) 2019-12-31 2019-12-31 Target tracking algorithm based on multi-layer depth feature extraction

Publications (1)

Publication Number Publication Date
CN111145221A true CN111145221A (en) 2020-05-12

Family

ID=70522843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911419269.2A Pending CN111145221A (en) 2019-12-31 2019-12-31 Target tracking algorithm based on multi-layer depth feature extraction

Country Status (1)

Country Link
CN (1) CN111145221A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116881A (en) * 2013-01-27 2013-05-22 Xidian University Remote sensing image fusion method based on PCA (principal component analysis) and Shearlet transform
CN107016689A (en) * 2017-02-04 2017-08-04 PLA University of Science and Technology A scale-adaptive correlation filter hedging method for target tracking
CN109410247A (en) * 2018-10-16 2019-03-01 China University of Petroleum (East China) A video tracking algorithm with multiple templates and adaptive feature selection
CN110349190A (en) * 2019-06-10 2019-10-18 Guangzhou Shiyuan Electronic Technology Co., Ltd. Adaptive-learning target tracking method, apparatus, device, and readable storage medium
CN110555864A (en) * 2019-08-02 2019-12-10 University of Electronic Science and Technology of China Adaptive target tracking method based on PSPCE
CN110533689A (en) * 2019-08-08 2019-12-03 Hohai University Kernel correlation filter underwater target tracking method based on spatially constrained adaptive scale

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
FENG LI et al.: "Learning Spatial-Temporal Regularized Correlation Filters for Visual Tracking", CVPR 2018 *
KOBY CRAMMER: "Online Passive-Aggressive Algorithms", Journal of Machine Learning Research 7 (2006) *
博博有个大大大的DREAM: "CVPR2018 tracking algorithm STRCF: principle and code analysis" (in Chinese), web post: https://blog.csdn.net/qq_17783559/article/details/89333509 *
博博有个大大大的DREAM: "The use of ADMM in correlation filter tracking algorithms" (in Chinese), web post: https://blog.csdn.net/qq_17783559/article/details/82965747 *
张海南: "Research on moving target tracking algorithms based on video sequences" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology Series *
李晓艳: "A correlation filter target tracking algorithm with continuous convolution and spatio-temporal regularization terms" (in Chinese), Journal of Northwestern Polytechnical University *
蔡自兴: "Visual tracking based on adaptive incremental principal component analysis with temporal characteristics" (in Chinese), Journal of Electronics & Information Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862863A (en) * 2021-03-04 2021-05-28 广东工业大学 Target tracking and positioning method based on state machine
CN115546219A (en) * 2022-12-05 2022-12-30 广州镭晨智能装备科技有限公司 Detection board type generation method, board card defect detection method, device and product
CN115546219B (en) * 2022-12-05 2023-10-20 广州镭晨智能装备科技有限公司 Detection plate type generation method, plate card defect detection method, device and product

Similar Documents

Publication Publication Date Title
CN111192292B (en) Target tracking method and related equipment based on attention mechanism and twin network
CN108256562B (en) Salient target detection method and system based on weak supervision time-space cascade neural network
CN108229479B (en) Training method and device of semantic segmentation model, electronic equipment and storage medium
CN109064514B (en) Projection point coordinate regression-based six-degree-of-freedom pose estimation method
US20220366576A1 (en) Method for target tracking, electronic device, and storage medium
CN107529650B (en) Closed loop detection method and device and computer equipment
CN109165735B (en) Method for generating sample picture based on generation of confrontation network and adaptive proportion
CN107633226B (en) Human body motion tracking feature processing method
CN110120064B (en) Depth-related target tracking algorithm based on mutual reinforcement and multi-attention mechanism learning
CN105678338B (en) Target tracking method based on local feature learning
CN111340824B (en) Image feature segmentation method based on data mining
US10943352B2 (en) Object shape regression using wasserstein distance
CN111709909A (en) General printing defect detection method based on deep learning and model thereof
CN111709435A (en) Countermeasure sample generation method based on discrete wavelet transform
Lu et al. Learning transform-aware attentive network for object tracking
CN110084201B (en) Human body action recognition method based on convolutional neural network of specific target tracking in monitoring scene
WO2019136591A1 (en) Salient object detection method and system for weak supervision-based spatio-temporal cascade neural network
CN113744311A (en) Twin neural network moving target tracking method based on full-connection attention module
Grudic et al. Outdoor Path Labeling Using Polynomial Mahalanobis Distance.
CN110276784B (en) Correlation filtering moving target tracking method based on memory mechanism and convolution characteristics
CN111178261A (en) Face detection acceleration method based on video coding technology
CN111310768A (en) Saliency target detection method based on robustness background prior and global information
CN111145221A (en) Target tracking algorithm based on multi-layer depth feature extraction
CN112364881B (en) Advanced sampling consistency image matching method
CN107798329B (en) CNN-based adaptive particle filter target tracking method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200512