CN107274437A - A visual tracking method based on convolutional neural networks - Google Patents

A visual tracking method based on convolutional neural networks

Info

Publication number
CN107274437A
Authority
CN
China
Prior art keywords
neural networks
convolutional neural
layer
carried out
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201710488018.4A
Other languages
Chinese (zh)
Inventor
胡硕
赵银妹
孙翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yanshan University
Original Assignee
Yanshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yanshan University filed Critical Yanshan University
Priority to CN201710488018.4A priority Critical patent/CN107274437A/en
Publication of CN107274437A publication Critical patent/CN107274437A/en
Withdrawn legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135 - Feature extraction based on approximation criteria, e.g. principal component analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G06F 18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a visual tracking method based on convolutional neural networks. The method comprises the following steps: 1. Offline training: the convolutional neural network is trained offline on the CIFAR-10 dataset, acquiring the ability to express deep features. 2. Multi-feature fusion: the feature map after each convolutional layer is extracted to obtain multiple features, and multi-layer feature fusion is carried out. 3. Tracking: on the basis of steps 1 and 2, tracking is completed using the particle filter method. The invention overcomes problems such as target occlusion and illumination variation during tracking; the feature description can accommodate the diverse and complex changes that occur during tracking without causing the tracker to lose the target, improving the accuracy of the features and hence the tracking precision.

Description

A visual tracking method based on convolutional neural networks
Technical field
The invention belongs to the technical field of visual tracking of moving targets, and relates to a visual tracking method based on convolutional neural networks.
Background art
With the development of society, video surveillance plays an increasingly important role, for example in the military field, aerospace, human-computer interaction, and traffic safety. To better accomplish monitoring tasks in the traffic field, computer vision methods have become an important avenue for solving this problem. During tracking, the background is complex and varied, and the target may be occluded, deformed, or subject to illumination changes; with common tracking methods, the descriptive power of the features can hardly accommodate the diverse and complex changes that occur during tracking, causing the tracker to lose the target.
People are therefore eager to find a new method that resolves the many difficulties in the target tracking process. With the development of deep learning, convolutional neural networks have become an indispensable part of the vision field. Convolutional neural networks can capture the structural characteristics of an image, and these features describe objects better than earlier features such as texture and color. For example, in the Chinese patent application No. 201610579388.4, "Tracking method and system fusing convolutional neural networks", a convolutional neural network is pre-trained on a predetermined training set to obtain a preliminary model CNN1; a user-supplied video stream containing the tracking target is received, CNN1 is fine-tuned to obtain CNN2, and the final model CNN2 replaces the classifier in the TLD algorithm, so that the tracking target in the surveillance video stream is automatically recognised and tracked. In the Chinese patent application No. 201610371378.1, "Target tracking method and system based on feature fusion in deep convolutional neural networks", multiple features are obtained through a convolutional neural network, and the filter weight of each feature is calculated by a filtering method; the current target position is obtained according to the weights, the prediction loss of each feature on the current frame is calculated, a stability model over time t is established for each feature, the stability of each feature in the current frame is calculated with this model, and the weight of each feature is updated according to its stability and its accumulated prediction loss; these steps are repeated to complete tracking. It can thus be seen that convolutional neural networks play a very important role in the field of visual tracking.
The present invention proposes a visual tracking method based on convolutional neural networks. Unlike traditional visual tracking with convolutional neural networks, the present invention extracts the feature map after each convolutional layer of the network, applies M2DPCA dimensionality reduction, extracts multi-level features, fuses them, feeds the result into a linear classifier, and then performs tracking under the particle filter framework. Because multi-level features are extracted, the features can be described more accurately, which largely overcomes problems such as target occlusion and illumination variation during tracking, improves the accuracy of the features, and thereby improves the tracking precision.
Content of the invention
The object of the present invention is to overcome the deficiencies of the prior art and to provide a visual tracking method based on convolutional neural networks that overcomes problems such as target occlusion and illumination variation during tracking, improves the accuracy of the features, and thereby improves the tracking precision.
To solve the above technical problem, the present invention is realised by the following technical solution:
A visual tracking method based on convolutional neural networks, the specific content of which comprises the following steps:
Step 1, offline training: the convolutional neural network is trained offline on the CIFAR-10 dataset, acquiring the ability to express deep features;
Step 2, multi-feature fusion: the feature map after each convolutional layer is extracted to obtain multiple features, and multi-layer feature fusion is carried out;
Step 3, tracking: on the basis of Steps 1 and 2, tracking is completed using the particle filter method.
Further, in Step 1, carrying out offline training of the convolutional neural network using the CIFAR-10 dataset means feeding the CIFAR-10 dataset into the convolutional neural network, training the network by forward propagation and error back-propagation to obtain deep features, and fine-tuning the network; the specific content comprises the following steps:
(1) the dataset images are fed into a 6-layer convolutional neural network;
(2) in the 6-layer convolutional neural network, the first 5 layers are convolutional layers and the last layer is a fully connected layer; every layer produces several feature maps, and the convolution kernel size is set to 5×5;
(3) max pooling is used;
(4) the ReLU activation function is chosen for the first four layers and the Sigmoid activation function for the fifth layer, as sketched below.
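By way of illustration only, the following is a minimal sketch of this six-layer network, written in PyTorch. The patent fixes only the 5×5 kernels, the max pooling, and the ReLU/Sigmoid activation pattern; the channel widths, the pooling positions, and the CIFAR-10 input size of 3×32×32 are assumptions made here.

```python
import torch
import torch.nn as nn

class TrackingCNN(nn.Module):
    """Six-layer network of Step 1: five 5x5 convolutional layers and one
    fully connected layer. ReLU follows the first four convolutions and
    Sigmoid follows the fifth; channel widths and pooling positions are
    assumptions, not taken from the patent."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv2d(3, 16, kernel_size=5, padding=2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.Conv2d(32, 32, kernel_size=5, padding=2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2),
            nn.Conv2d(64, 64, kernel_size=5, padding=2),
        ])
        self.acts = nn.ModuleList([nn.ReLU()] * 4 + [nn.Sigmoid()])
        self.pool = nn.MaxPool2d(2)                   # max pooling
        self.fc = nn.Linear(64 * 8 * 8, num_classes)  # fully connected layer

    def forward(self, x, return_feature_maps=False):
        feature_maps = []
        for i, (conv, act) in enumerate(zip(self.convs, self.acts)):
            x = act(conv(x))
            feature_maps.append(x)   # kept for the M2DPCA + fusion stage
            if i in (1, 3):          # assumed pooling positions
                x = self.pool(x)
        logits = self.fc(x.flatten(1))
        return (logits, feature_maps) if return_feature_maps else logits
```

Training this sketch on CIFAR-10 with a standard cross-entropy loss and stochastic gradient descent corresponds to the forward-propagation and error back-propagation training described above.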
Further, in Step 2, extracting the feature map after each convolutional layer, obtaining multiple features, and carrying out multi-layer feature fusion comprises the following two steps:
(1) because the extracted feature maps are high-dimensional, they are reduced in dimension using M2DPCA;
(2) multi-feature fusion is carried out on the reduced data.
Using M2DPCA means reducing dimensionality while preserving features to the greatest extent; its specific steps are as follows (see the sketch after this list):
(1) each feature map after each convolutional layer is divided into m × n sub-images;
(2) the image covariance matrix of the sub-images is computed directly;
(3) the optimal projection direction set {X_1, X_2, …, X_d} is found from the perspective of maximum variance;
(4) the projection vectors W_k = (A − A_i)X_k, k = 1, 2, …, d, are obtained as the compressed vectors, where A is a sample and A_i is the sample mean;
(5) the compressed sub-image vectors of all modules are stitched together to complete the compression.
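The following minimal NumPy sketch may make steps (1)-(5) concrete. The edge handling when the map does not divide evenly into m × n blocks, and the function name itself, are assumptions made for illustration.

```python
import numpy as np

def m2dpca_compress(feature_map, m, n, d):
    """Compress one 2-D feature map by modular 2DPCA (M2DPCA).
    m, n: sub-image grid; d: number of projection directions kept."""
    rows, cols = feature_map.shape
    bh, bw = rows // m, cols // n
    # (1) divide the feature map into m x n sub-images
    subs = np.stack([feature_map[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
                     for i in range(m) for j in range(n)])
    A_mean = subs.mean(axis=0)                       # sample mean A_i
    # (2) image covariance (scatter) matrix computed directly on sub-images
    G = sum((B - A_mean).T @ (B - A_mean) for B in subs) / len(subs)
    # (3) maximum-variance projection directions {X_1, ..., X_d}
    eigvals, eigvecs = np.linalg.eigh(G)
    X = eigvecs[:, np.argsort(eigvals)[::-1][:d]]    # (bw, d)
    # (4) projection W_k = (A - A_mean) X_k for every sub-image
    W = (subs - A_mean) @ X                          # (m*n, bh, d)
    # (5) stitch the compressed sub-image vectors together
    return W.reshape(-1)
```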
Further, carrying out multi-feature fusion on the reduced data means fusing, across layers, the deep features obtained from each convolutional layer of the convolutional neural network. After the feature map from each convolutional layer has been reduced in dimension, one large multi-dimensional feature vector is obtained according to formula (1) and fed into an SVM classifier to classify target versus background:

$Y = \sum_{i=1}^{n} M^{(i)}$ (1)

where M^{(i)} is the reduced feature vector of the i-th layer.
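As an illustration, the sketch below applies formula (1) and the subsequent classification. It assumes scikit-learn's LinearSVC as the classifier (the patent names only an SVM classifier) and reads the fusion as concatenation of the per-layer vectors into one large vector, since the reduced vectors of different layers generally differ in length.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fuse_features(per_layer_vectors):
    # Formula (1): combine the reduced per-layer vectors M^(i) into one
    # large multi-dimensional feature vector Y (concatenation assumed).
    return np.concatenate(per_layer_vectors)

# Hypothetical usage: each row of X_train is a fused vector Y, and y_train
# is 1 for target patches and 0 for background patches.
# svm = LinearSVC().fit(X_train, y_train)
# confidence = svm.decision_function(X_test)  # later used by the particle filter
```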
Owing to the above technical solution, the visual tracking method based on convolutional neural networks provided by the present invention has the following beneficial effects compared with the prior art:
A convolutional neural network performs forward propagation layer by layer. The present invention extracts the multi-level feature maps after each convolutional layer, applies M2DPCA dimensionality reduction, and then fuses the features. Compared with the invention of Chinese patent application No. 201610371378.1, the present invention reduces the dimensions of only the feature maps extracted after the convolutional layers, which reduces the number of feature maps and their dimensionality at the same time, and hence the amount of computation. Compared with the invention of Chinese patent application No. 201610579388.4, the present invention extracts multi-level features, including low-level color and texture features as well as high-level structural features; when the tracked target undergoes translation, rotation, or scale change, or is disturbed by illumination, occlusion, or a complex background, this feature description works better than that of a single layer, thereby improving tracking precision.
The invention overcomes problems such as target occlusion and illumination variation during tracking; the feature description can accommodate the diverse and complex changes that occur during tracking without causing the tracker to lose the target, improving the accuracy of the features and hence the tracking precision.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present invention; a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow diagram of a visual tracking method based on convolutional neural networks according to an embodiment of the present invention;
Fig. 2 is the structure chart of convolutional neural network multi-feature fusion.
Embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the present invention, without creative effort, fall within the scope of protection of the present invention.
The flow chart of the visual tracking method based on convolutional neural networks proposed by the present invention is shown in Fig. 1. The implementation of each step is now introduced:
Step 1, offline training: the convolutional neural network is trained offline on the CIFAR-10 dataset, acquiring the ability to express deep features.
Step 2, multi-feature fusion: the feature map after each convolutional layer is extracted to obtain multiple features, and multi-layer feature fusion is carried out.
Step 3, tracking: on the basis of Steps 1 and 2, tracking is completed using the particle filter method.
1. Offline training
The CIFAR-10 dataset is fed into the convolutional neural network, which is trained by forward propagation and error back-propagation to obtain deep features; the network is then fine-tuned. The specific content includes the following parts:
(1) the dataset images are fed into a 6-layer convolutional neural network;
(2) the first 5 layers are convolutional layers and the last layer is a fully connected layer; every layer produces several feature maps, and the convolution kernel size is set to 5×5;
(3) max pooling is used;
(4) the ReLU activation function is chosen for the first four layers and the Sigmoid activation function for the fifth layer.
2. Online visual tracking
2.1 Obtaining training samples
Starting from the first frame of the video sequence to be tracked, the whole sample set is obtained: a number of negative samples are selected near the first-frame tracking region and, together with the target region, constitute the sample set that is fed into the convolutional neural network.
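A minimal sketch of this sampling step follows. The sample counts, shift radii, and clipping at the image border are assumptions, since the patent states only that negative samples are selected near the first-frame tracking region.

```python
import numpy as np

def collect_training_samples(first_frame, bbox, n_pos=50, n_neg=200,
                             pos_radius=4, neg_radius=30):
    """Build the first-frame sample set: slightly shifted patches are taken
    as positives, patches shifted further away as negatives."""
    x, y, w, h = bbox
    H, W = first_frame.shape[:2]

    def crop(dx, dy):
        cx = int(np.clip(x + dx, 0, W - w))
        cy = int(np.clip(y + dy, 0, H - h))
        return first_frame[cy:cy + h, cx:cx + w]

    pos = [crop(dx, dy) for dx, dy in
           np.random.uniform(-pos_radius, pos_radius, (n_pos, 2))]
    neg = [crop(dx, dy) for dx, dy in
           np.random.uniform(-neg_radius, neg_radius, (n_neg, 2))
           if max(abs(dx), abs(dy)) > pos_radius]   # keep only true negatives
    return pos + neg, [1] * len(pos) + [0] * len(neg)
```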
2.2 Online tracking
It is first determined whether the current image is the first frame. If it is, the whole sequence set is fed into the convolutional neural network for fine-tuning training, and the feature map after each convolutional layer is extracted; the structure is shown in Fig. 2. Multi-feature fusion is then carried out in the following two steps:
(1) because the extracted feature maps are high-dimensional, they are reduced in dimension using M2DPCA. M2DPCA is a new method that combines modular PCA and 2DPCA. For high-dimensional samples, each feature map after each convolutional layer is divided into m × n sub-images; the image covariance matrix of the sub-images is computed directly; the optimal projection direction set {X_1, X_2, …, X_d} is found from the perspective of maximum variance; the projection vectors W_k = (A − A_i)X_k, k = 1, 2, …, d, are obtained as the compressed vectors, where A is a sample and A_i is the sample mean; finally, the compressed sub-image vectors of all modules are stitched together to complete the compression.
(2) multi-feature fusion is carried out on the reduced data.
After dimensionality reduction, the feature maps are fused according to the linear fusion formula

$Y = \sum_{i=1}^{n} M^{(i)}$ (1)

where M^{(i)} is the reduced feature vector of the i-th layer, giving one large multi-dimensional feature vector that is fed into the SVM classifier to classify target versus background. Under the particle filter framework, particles are scattered around the target, and it is judged whether the maximum confidence is below the threshold α: if it is, a large deviation has occurred and the data must be re-entered into the convolutional neural network for processing; if it is not, the next frame is acquired and judged. If the input is determined not to be the first frame of the sequence, it is fed directly into the SVM classifier for classification learning, and the position of the tracked target is judged under the particle filter framework.
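The following sketch shows one such tracking step. The Gaussian motion model, the exponential weighting of SVM scores, and the resampling scheme are assumptions; the patent specifies only that particles are scattered around the target and that a maximum confidence below the threshold α sends the data back through the convolutional neural network.

```python
import numpy as np

def particle_filter_step(frame, particles, svm, fused_feature, alpha=0.0,
                         motion_std=4.0):
    """One tracking step under the particle filter framework.
    particles: (N, 2) candidate target centres; svm: trained classifier;
    fused_feature(frame, p): crops a patch at p and returns its fused
    feature vector Y."""
    # scatter particles around the previous target state (motion model)
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # confidence of each particle's patch under the SVM
    scores = np.array([svm.decision_function(
        fused_feature(frame, p).reshape(1, -1))[0] for p in particles])
    if scores.max() < alpha:
        return None, particles    # large deviation: re-enter the CNN
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    state = weights @ particles   # weighted estimate of the target position
    # resample particles in proportion to their weights
    idx = np.random.choice(len(particles), len(particles), p=weights)
    return state, particles[idx]
```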
The main innovation of the present invention lies in overcoming the influence of single features on visual tracking when the target undergoes translation, rotation, or scale change, or is disturbed by illumination, occlusion, or a complex background: multi-feature fusion of the features extracted after each convolutional layer adapts well to changes of the target and strengthens the accuracy of tracking.

Claims (5)

1. A visual tracking method based on convolutional neural networks, characterised in that the method comprises the following steps:
Step 1, offline training: the convolutional neural network is trained offline on the CIFAR-10 dataset, acquiring the ability to express deep features;
Step 2, multi-feature fusion: the feature map after each convolutional layer is extracted to obtain multiple features, and multi-layer feature fusion is carried out;
Step 3, tracking: on the basis of Steps 1 and 2, tracking is completed using the particle filter method.
2. The visual tracking method based on convolutional neural networks according to claim 1, characterised in that, in Step 1, carrying out offline training of the convolutional neural network using the CIFAR-10 dataset means feeding the CIFAR-10 dataset into the convolutional neural network, training the network by forward propagation and error back-propagation to obtain deep features, and fine-tuning the network; the specific content comprises the following steps:
(1) the dataset images are fed into a 6-layer convolutional neural network;
(2) in the 6-layer convolutional neural network, the first 5 layers are convolutional layers and the last layer is a fully connected layer; every layer produces several feature maps, and the convolution kernel size is set to 5×5;
(3) max pooling is used;
(4) the ReLU activation function is chosen for the first four layers and the Sigmoid activation function for the fifth layer.
3. The visual tracking method based on convolutional neural networks according to claim 1, characterised in that, in Step 2, extracting the feature map after each convolutional layer, obtaining multiple features, and carrying out multi-layer feature fusion comprises the following two steps:
(1) because the extracted feature maps are high-dimensional, they are reduced in dimension using M2DPCA;
(2) multi-feature fusion is carried out on the reduced data.
4. The visual tracking method based on convolutional neural networks according to claim 3, characterised in that using M2DPCA means reducing dimensionality while preserving features to the greatest extent; its specific steps are as follows:
(1) each feature map after each convolutional layer is divided into m × n sub-images;
(2) the image covariance matrix of the sub-images is computed directly;
(3) the optimal projection direction set {X_1, X_2, …, X_d} is found from the perspective of maximum variance;
(4) the projection vectors W_k = (A − A_i)X_k, k = 1, 2, …, d, are obtained as the compressed vectors, where A is a sample and A_i is the sample mean;
(5) the compressed sub-image vectors of all modules are stitched together to complete the compression.
5. The visual tracking method based on convolutional neural networks according to claim 3, characterised in that carrying out multi-feature fusion on the reduced data means fusing, across layers, the deep features obtained from each convolutional layer of the convolutional neural network; after the feature map from each convolutional layer has been reduced in dimension, one large multi-dimensional feature vector is obtained according to formula (1) and fed into an SVM classifier to classify target versus background:

$Y = \sum_{i=1}^{n} M^{(i)}$ (1)

where M^{(i)} is the reduced feature vector of the i-th layer.
CN201710488018.4A 2017-06-23 2017-06-23 A kind of visual tracking method based on convolutional neural networks Withdrawn CN107274437A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710488018.4A CN107274437A (en) 2017-06-23 2017-06-23 A kind of visual tracking method based on convolutional neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710488018.4A CN107274437A (en) 2017-06-23 2017-06-23 A kind of visual tracking method based on convolutional neural networks

Publications (1)

Publication Number Publication Date
CN107274437A true CN107274437A (en) 2017-10-20

Family

ID=60069421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710488018.4A Withdrawn CN107274437A (en) 2017-06-23 2017-06-23 A kind of visual tracking method based on convolutional neural networks

Country Status (1)

Country Link
CN (1) CN107274437A (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106648103A (en) * 2016-12-28 2017-05-10 歌尔科技有限公司 Gesture tracking method for a VR head-mounted device, and VR head-mounted device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
彭晏飞 et al.: "An improved palmprint recognition algorithm based on GLRAM", Computer Applications and Software *
胡丹 et al.: "Visual tracking based on fusion of deep features and LBP texture", Computer Engineering *
胡正平 et al.: "New progress in convolutional neural network classification models for pattern recognition", Journal of Yanshan University *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107944386A (en) * 2017-11-22 2018-04-20 天津大学 Visual scene recognition method based on convolutional neural networks
CN107944386B (en) * 2017-11-22 2019-11-22 天津大学 Visual scene recognition method based on convolutional neural networks
TWI675328B (en) * 2018-02-09 2019-10-21 美商耐能股份有限公司 Method of compressing convolution parameters, convolution operation chip and system
CN112040834A (en) * 2018-02-22 2020-12-04 因诺登神经科学公司 Eye tracking method and system
CN109325972A (en) * 2018-07-25 2019-02-12 深圳市商汤科技有限公司 Lidar sparse depth map processing method, apparatus, device and medium
CN109325972B (en) * 2018-07-25 2020-10-27 深圳市商汤科技有限公司 Lidar sparse depth map processing method, apparatus, device and medium
CN109522844A (en) * 2018-11-19 2019-03-26 燕山大学 Social affinity determination method and system
CN109522844B (en) * 2018-11-19 2020-07-24 燕山大学 Social affinity determination method and system
CN109947963A (en) * 2019-03-27 2019-06-28 山东大学 Multi-scale hash retrieval method based on deep learning

Similar Documents

Publication Publication Date Title
CN107274437A (en) Visual tracking method based on convolutional neural networks
CN110929578B (en) Anti-occlusion pedestrian detection method based on an attention mechanism
CN109800689B (en) Target tracking method based on spatio-temporal feature fusion learning
CN107168527B (en) First-person-view gesture recognition and interaction method based on region convolutional neural networks
CN104615983B (en) Behaviour recognition method based on recurrent neural networks and human skeleton motion sequences
CN109948526A (en) Image processing method and device, detection device and storage medium
CN107808132A (en) Scene image classification method fusing a topic model
CN110070074A (en) Method for building a pedestrian detection model
CN108052884A (en) Gesture recognition method based on an improved residual neural network
CN104463191A (en) Robot visual processing method based on an attention mechanism
CN101587591B (en) Accurate visual tracking technique based on two-parameter threshold segmentation
CN106373143A (en) Adaptive method and system
CN102184551A (en) Automatic target tracking method and system combining multi-feature matching and particle filtering
CN103440667B (en) Automatic device for stable tracking of a moving target under occlusion
CN108986075A (en) Method and device for judging preferred images
CN110827304B (en) Traditional Chinese medicine tongue image positioning method and system based on a deep convolutional network and the level set method
CN110110719A (en) Target detection method based on attention-layer region convolutional neural networks
Yang et al. MGC-VSLAM: A meshing-based and geometric constraint VSLAM for dynamic indoor environments
CN108182447A (en) Adaptive particle filter target tracking method based on deep learning
CN111209811B (en) Method and system for detecting eyeball attention position in real time
CN104050488A (en) Hand gesture recognition method based on a switching Kalman filter model
CN104835182A (en) Method for real-time tracking of a dynamic object with a camera
CN108681718A (en) Accurate detection and recognition method for low-altitude unmanned aerial vehicle targets
CN109087337B (en) Long-term target tracking method and system based on hierarchical convolutional features
Zhang et al. Multimodal spatiotemporal networks for sign language recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20171020