CN105654509A - Motion tracking method based on composite deep neural network - Google Patents

Motion tracking method based on composite deep neural network Download PDF

Info

Publication number
CN105654509A
CN105654509A (application CN201511000393.7A)
Authority
CN
China
Prior art keywords
network
training
target
sample
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201511000393.7A
Other languages
Chinese (zh)
Inventor
闻佳
卢海涛
赵纪炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yanshan University
Original Assignee
Yanshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yanshan University filed Critical Yanshan University
Priority to CN201511000393.7A priority Critical patent/CN105654509A/en
Publication of CN105654509A publication Critical patent/CN105654509A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a motion tracking method based on a composite deep neural network. The method comprises the steps of: training the network weights offline on a large number of samples, using a neural network whose node count decreases layer by layer; adding a logistic classifier on top of the offline-trained network so that the target and the background can be distinguished, and setting the parameters for adjusting the network; adjusting the observation model during tracking, where at the first frame the network is adaptively adjusted with target and background samples so that it can identify the target; and combining the observation model with a dynamic model that uses a particle filter algorithm. In each new frame, the particle filter gathers particles around the target of the previous frame and passes them to the observation model, which determines the confidence of each particle; the particle with the highest confidence is determined to be the target.

Description

Motion tracking method based on a composite deep neural network
Technical field
The present invention relates to a motion tracking method for objects, in particular to a motion tracking method based on a composite deep neural network, i.e., a tracking method that uses a deep neural network to detect the target.
Background technology
Motion tracking is considered a challenging task because it is affected by occlusion, illumination changes, in-plane and out-of-plane rotation, background clutter, and similar factors; it is also an important component of computer vision. Motion tracking is widely applied in many fields, such as video surveillance, intelligent transportation, product inspection, and abnormal behavior detection. Although a large number of models have been proposed, several key problems remain unsolved.
A motion tracking system generally consists of two models: an observation model and a dynamic model. The observation model describes the target object; the dynamic model determines the object's state and its state transitions. In a tracking system the observation model is critical, and its accuracy strongly affects tracking performance. An accurate observation model raises the tracking success rate and reduces the center-location error.
The deep learning tracker (DLT) uses a deep neural network as the observation model. Because the deep neural network uses an overcomplete vector basis, the amount of computation during adjustment is enormous, so the speed of the deep learning tracker is often unacceptable. During tracking, the deep learning tracker is adjusted with the BP (backpropagation) algorithm. Training an overly deep network with BP produces the diffusion problem (diffusion: when the network is adjusted with the BP algorithm, the partial derivatives of the loss function shrink markedly as they are propagated backward, so when the network is very deep the lower-layer weights are adjusted very slowly), and the weights of the lower layers often do not receive sufficient adjustment.
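The diffusion problem can be illustrated numerically (this example is not from the patent; the network depth, width, and sigmoid activations are assumptions): as a gradient is propagated backward through a deep sigmoid network, each step multiplies by a derivative no larger than 0.25, so its norm shrinks layer by layer.

```python
import numpy as np

# Small numeric illustration of gradient diffusion in a deep sigmoid network.
rng = np.random.default_rng(0)
n_layers, width = 8, 64
Ws = [rng.normal(0.0, 1.0 / np.sqrt(width), (width, width)) for _ in range(n_layers)]

# Forward pass with sigmoid activations, keeping each layer's output.
x = rng.normal(size=width)
acts = []
for W in Ws:
    x = 1.0 / (1.0 + np.exp(-(W @ x)))
    acts.append(x)

# Backward pass (simplified): delta <- (W^T delta) * sigma'(a), layer by layer.
delta = np.ones(width)
norms = []
for W, a in zip(reversed(Ws), reversed(acts)):
    delta = (W.T @ delta) * a * (1.0 - a)
    norms.append(np.linalg.norm(delta))

print(norms[0], norms[-1])  # gradient norm early vs. deep in the backward pass
```

The printed norms show the gradient arriving at the lowest layers orders of magnitude smaller than at the top, which is exactly why the patent trains the lowest layer separately with a denoising autoencoder.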
Summary of the invention
To address the deficiencies of the prior art, the present invention proposes a motion tracking method based on a composite deep neural network. The method reduces the amount of computation and accelerates tracking while maintaining the tracking success rate and reducing the center-location error, making motion tracking both more accurate and faster.
To solve the above technical problem, the present invention is realized by the following technical solution: a motion tracking method based on a composite deep neural network, comprising the following steps:
Step 1 off-line training:
Because different objects need to be tracked, the weights must be able to adapt quickly to different targets before the observation model is used; therefore a large number of samples is used to train the weights offline. Because an overcomplete basis vector consumes a large amount of computation during training and tracking, the overcomplete basis is removed to simplify the network structure, and a neural network whose node count decreases layer by layer is used;
Step 2 model initialization:
A logistic classifier is added on top of the offline-trained network so that it can distinguish the target from the background, and the parameters for adjusting the network are set;
Step 3 Adaptive adjustment:
Changes in the object's appearance during tracking often cause tracking drift, so the observation model must be adjusted during tracking. At the first frame, the network is adaptively adjusted with target and background samples so that it can identify the target;
Step 4 target tracking:
The observation model is combined with a dynamic model, and the dynamic model uses a particle filter algorithm. In each new frame, the particle filter gathers particles centered on the target of the previous frame and passes them to the observation model, which judges the confidence of each particle; the particle with the highest confidence is the target.
In step 3, the first frame is adaptively adjusted; the adjustment procedure is as follows:
1) Input: the collected positive and negative samples and their labels S = {(x_1, y_1), ..., (x_n, y_n)}, where y_i ∈ {0, 1} marks positive and negative samples respectively;
2) Parameter setting: set the minimum reconstruction-error thresholds α_D and α_B of the denoising autoencoder and of the BP algorithm, the maximum numbers of training iterations ε_D and ε_B, the learning rates η_D and η_B, the momenta m_D and m_B, the weight penalty coefficients w_D and w_B, and the noise figure ν of the denoising autoencoder;
3) Composite training: execute the DCTtrain(NN, X, Y, opts) procedure, i.e., train the lowest layer of the network with the denoising autoencoder and train the higher layers with the BP algorithm; training ends when the reconstruction error E < α or the number of training iterations T ≥ ε;
The adaptive adjustment during tracking performs only step 3).
The present invention compared with the prior art:
Owing to the above technical solution, the motion tracking method based on a composite deep neural network provided by the invention has the following advantages over the prior art:
(1) Combining the two kinds of network training eliminates, to a certain extent, the diffusion problem in network adaptation, so the lower layers of the network are trained more effectively.
(2) The network structure is simplified: a neural network whose node count decreases layer by layer replaces the overcomplete basis, so network training and tracking are greatly accelerated. The composite network can extract more effective features, compensating for the loss of precision caused by removing the overcomplete basis, so the present invention achieves higher real-time performance and robustness.
Brief description of the drawings
Fig. 1 is the flowchart of the motion tracking method based on a composite deep neural network of the present invention;
Fig. 2 is the flowchart of the adaptive adjustment;
Fig. 3 is the flowchart of the composite training in the adaptive adjustment;
Fig. 4 is the flowchart of the denoising-autoencoder training in the adaptive adjustment;
Fig. 5 is the flowchart of the BP algorithm in the adaptive adjustment.
Embodiment
The method of the present invention is further described below with reference to the motion tracking of the video sequence "woman":
The present invention uses the motion tracking method based on a composite deep neural network; as shown in Fig. 1, it comprises the following steps:
Step 1 off-line training:
Because different objects need to be tracked, the weights must be able to adapt quickly to different targets before the observation model is used; therefore a large number of samples is used to train the weights offline. Because an overcomplete basis vector consumes a large amount of computation during training and tracking, the overcomplete basis is removed to simplify the network structure, and a neural network whose node count decreases layer by layer is used;
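A minimal sketch of such a decreasing-node network follows; the layer widths 1024→512→256→128 are illustrative assumptions, since the patent does not specify the actual sizes:

```python
import numpy as np

# Illustrative layer sizes: each layer has fewer nodes than the one below it.
LAYER_SIZES = [1024, 512, 256, 128]

def init_network(sizes, seed=0):
    """Initialize weights/biases for a feed-forward net whose width shrinks layer by layer."""
    rng = np.random.default_rng(seed)
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.normal(0.0, 0.01, size=(n_in, n_out))
        b = np.zeros(n_out)
        params.append((W, b))
    return params

def forward(params, x):
    """Sigmoid forward pass; returns the activations of every layer."""
    acts = [x]
    for W, b in params:
        x = 1.0 / (1.0 + np.exp(-(x @ W + b)))
        acts.append(x)
    return acts

net = init_network(LAYER_SIZES)
acts = forward(net, np.zeros(1024))
print([a.shape[0] for a in acts])  # node count decreases: [1024, 512, 256, 128]
```

Dropping the overcomplete basis in favor of this shrinking pyramid is what reduces the per-frame computation during both training and tracking.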
Step 2 model initialization:
A logistic classifier is added on top of the offline-trained network so that it can distinguish the target from the background, and the parameters for adjusting the network are set;
Step 3 Adaptive adjustment:
Changes in the object's appearance during tracking often cause tracking drift, so the observation model must be adjusted during tracking. At the first frame, the network is adaptively adjusted with target and background samples so that it can identify the target;
Step 4 target tracking:
The observation model is combined with a dynamic model, and the dynamic model uses a particle filter algorithm. In each new frame, the particle filter gathers particles centered on the target of the previous frame and passes them to the observation model, which judges the confidence of each particle; the particle with the highest confidence is the target.
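The tracking step can be sketched as follows; the Gaussian motion model, the particle count, and the toy confidence function are stand-ins for the patent's learned observation model and are all assumptions:

```python
import numpy as np

def track_one_frame(prev_state, observation_confidence, n_particles=300,
                    motion_std=(4.0, 4.0), seed=0):
    """Draw particles around the previous target, score them, return the best."""
    rng = np.random.default_rng(seed)
    # Propagate: sample particles centered on the previous frame's target.
    particles = prev_state + rng.normal(0.0, motion_std, size=(n_particles, 2))
    # Observe: the observation model assigns each particle a confidence.
    conf = np.array([observation_confidence(p) for p in particles])
    # The particle with the highest confidence is taken as the new target.
    return particles[np.argmax(conf)]

# Toy observation model: confidence peaks at a hypothetical true target (60, 40).
true_target = np.array([60.0, 40.0])
conf_fn = lambda p: np.exp(-np.sum((p - true_target) ** 2) / 50.0)

new_state = track_one_frame(np.array([55.0, 38.0]), conf_fn)
print(new_state)
```

In the patented method the confidence function is the composite deep network with its logistic output, so the particle filter supplies candidate windows and the network ranks them.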
In step 3, the first frame is adaptively adjusted; the flowchart is shown in Fig. 2, and the adjustment procedure is as follows:
1) Input: the collected positive and negative samples and their labels S = {(x_1, y_1), ..., (x_n, y_n)}, where y_i ∈ {0, 1} marks positive and negative samples respectively;
2) Parameter setting: set the minimum reconstruction-error thresholds α_D and α_B of the denoising autoencoder and of the BP algorithm, the maximum numbers of training iterations ε_D and ε_B, the learning rates η_D and η_B, the momenta m_D and m_B, the weight penalty coefficients w_D and w_B, and the noise figure ν of the denoising autoencoder;
3) Composite training: execute the DCTtrain(NN, X, Y, opts) procedure, i.e., train the lowest layer of the network with the denoising autoencoder and train the higher layers with the BP algorithm; training ends when the reconstruction error E < α or the number of training iterations T ≥ ε;
The adaptive adjustment during tracking performs only step 3).
Steps 1) to 3) above constitute the whole adaptive-adjustment training process. In the composite training of step 3), the DCTtrain(NN, X, Y, opts) procedure is executed; its concrete implementation, shown in Fig. 3, is as follows:
(1) Input: the input variables are DCTtrain(NN, X, Y, opts), where NN denotes the neural network after offline training, X denotes the set of target and background samples, Y denotes the sample labels in one-to-one correspondence with the samples, and opts denotes the set parameters;
(2) Train the lowest layer with the denoising autoencoder, as shown in Fig. 4:
1. Parameter setting: the network parameters are set from the parameters in opts;
2. Network training:
A. Corrupt the sample according to the noise figure;
B. Obtain the reconstruction z by passing the corrupted sample forward through the network:
$y = f_\theta(\tilde{x}), \quad z = g_{\theta'}(y)$
and compute the reconstruction error:
$\min_{W, W', b, b'} \sum_{i=1}^{k} \|x_i - z_i\|_2^2 + \lambda\left(\|W\|_F^2 + \|W'\|_F^2\right);$
C. Update the weights by backpropagating the reconstruction error;
D. Repeat the above process until the reconstruction error falls below the threshold α_D or the maximum number of training iterations ε_D is reached;
E. Pass the samples through the network to obtain the output a of the first hidden layer, which serves as the input to the higher layers;
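Steps A–E above can be sketched as follows; the data, layer sizes, and hyperparameter values (noise figure, learning rate, weight penalty λ, threshold α_D, iteration cap ε_D) are illustrative assumptions, with untied encoder/decoder weights W and W′ as in the reconstruction-error formula:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_dae(X, n_hidden=4, noise=0.3, lr=0.5, lam=1e-4,
              max_iters=200, alpha_d=1e-3, seed=0):
    """One denoising-autoencoder layer: corrupt x to x~, reconstruct, descend."""
    rng = np.random.default_rng(seed)
    k, n_vis = X.shape
    W = rng.normal(0.0, 0.1, (n_vis, n_hidden))    # encoder weights W
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_vis))   # decoder weights W'
    b, b2 = np.zeros(n_hidden), np.zeros(n_vis)
    errs = []
    for _ in range(max_iters):
        # A. corrupt the samples according to the noise figure (masking noise)
        X_t = X * (rng.random(X.shape) > noise)
        # B. forward pass: y = f_theta(x~), z = g_theta'(y); reconstruction error
        Y = sigmoid(X_t @ W + b)
        Z = sigmoid(Y @ W2 + b2)
        errs.append(np.mean(np.sum((X - Z) ** 2, axis=1)))
        if errs[-1] < alpha_d:              # D. stop when E < alpha_D
            break
        # C. backpropagate the reconstruction error and update the weights
        dZ = (Z - X) * Z * (1 - Z)
        dY = (dZ @ W2.T) * Y * (1 - Y)
        W2 -= lr * ((Y.T @ dZ) / k + lam * W2)
        W -= lr * ((X_t.T @ dY) / k + lam * W)
        b2 -= lr * dZ.mean(axis=0)
        b -= lr * dY.mean(axis=0)
    # E. the hidden output a = f_theta(x) feeds the higher layers
    a = sigmoid(X @ W + b)
    return a, errs

X = np.random.default_rng(1).random((50, 8))   # toy samples in [0, 1]
a, errs = train_dae(X)
print(errs[0], errs[-1])
```

The momentum term m_D of the patent is omitted here for brevity; the printed error pair shows the reconstruction error decreasing over the iterations.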
(3) Train the higher layers with the BP algorithm, as shown in Fig. 5:
1. Parameter setting: the network parameters are set from the parameters in opts;
2. Network training: train with the BP algorithm;
3. Repeat the training process until the error falls below the threshold α_B or the maximum number of training iterations ε_B is reached.
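The higher-layer adjustment with the logistic classifier can be sketched as follows; plain gradient descent on a logistic output (the momentum term m_B is omitted) and the toy target/background features are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_finetune(A, y, lr=0.5, max_iters=500, alpha_b=1e-3, seed=0):
    """Train a logistic classifier on hidden features A to separate target (1) from background (0)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 0.1, size=A.shape[1])
    b = 0.0
    for _ in range(max_iters):
        p = sigmoid(A @ w + b)           # confidence that each sample is the target
        if np.mean((p - y) ** 2) < alpha_b:   # stop when E < alpha_B
            break
        grad = p - y                     # cross-entropy gradient w.r.t. the logits
        w -= lr * (A.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy hidden features: target samples cluster near +1, background near -1.
rng = np.random.default_rng(1)
A = np.vstack([rng.normal(+1.0, 0.5, (20, 4)), rng.normal(-1.0, 0.5, (20, 4))])
y = np.array([1.0] * 20 + [0.0] * 20)
w, b = bp_finetune(A, y)
pred = (sigmoid(A @ w + b) > 0.5).astype(float)
print((pred == y).mean())
```

After this supervised pass the logistic output is exactly the per-particle confidence that the observation model reports during tracking.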
Content not described in detail in the specification of the present invention belongs to the prior art known to those skilled in the art.
Although the preferred embodiments and drawings of the present invention are disclosed for the purpose of illustration, those skilled in the art will appreciate that various replacements, changes, and modifications are possible without departing from the spirit and scope of the invention and the appended claims. The present invention should therefore not be limited to the content disclosed in the preferred embodiments and drawings.

Claims (3)

1. A motion tracking method based on a composite deep neural network, characterized in that the method comprises the following steps:
Step 1 off-line training:
Because different objects need to be tracked, the weights must be able to adapt quickly to different targets before the observation model is used; therefore a large number of samples is used to train the weights offline. Because an overcomplete basis vector consumes a large amount of computation during training and tracking, the overcomplete basis is removed to simplify the network structure, and a neural network whose node count decreases layer by layer is used;
Step 2 model initialization:
A logistic classifier is added on top of the offline-trained network so that it can distinguish the target from the background, and the parameters for adjusting the network are set;
Step 3 Adaptive adjustment:
Changes in the object's appearance during tracking often cause tracking drift, so the observation model must be adjusted during tracking. At the first frame, the network is adaptively adjusted with target and background samples so that it can identify the target;
Step 4 target tracking:
The observation model is combined with a dynamic model, and the dynamic model uses a particle filter algorithm. In each new frame, the particle filter gathers particles centered on the target of the previous frame and passes them to the observation model, which judges the confidence of each particle; the particle with the highest confidence is the target.
2. The motion tracking method based on a composite deep neural network according to claim 1, characterized in that in step 3 the first frame is adaptively adjusted as follows:
1) Input: the collected positive and negative samples and their labels S = {(x_1, y_1), ..., (x_n, y_n)}, where y_i ∈ {0, 1} marks positive and negative samples respectively;
2) Parameter setting: set the minimum reconstruction-error thresholds α_D and α_B of the denoising autoencoder and of the BP algorithm, the maximum numbers of training iterations ε_D and ε_B, the learning rates η_D and η_B, the momenta m_D and m_B, the weight penalty coefficients w_D and w_B, and the noise figure ν of the denoising autoencoder;
3) Composite training: execute the DCTtrain(NN, X, Y, opts) procedure, i.e., train the lowest layer of the network with the denoising autoencoder and train the higher layers with the BP algorithm; training ends when the reconstruction error E < α or the number of training iterations T ≥ ε;
The adaptive adjustment during tracking performs only step 3).
3. The motion tracking method based on a composite deep neural network according to claim 1, characterized in that in the composite training of step 3), the DCTtrain(NN, X, Y, opts) procedure is executed, whose concrete implementation is as follows:
(1) Input: the input variables are DCTtrain(NN, X, Y, opts), where NN denotes the neural network after offline training, X denotes the set of target and background samples, Y denotes the sample labels in one-to-one correspondence with the samples, and opts denotes the set parameters;
(2) Train the lowest layer with the denoising autoencoder:
1. Parameter setting: the network parameters are set from the parameters in opts;
2. Network training:
A. Corrupt the sample x into x̃ according to the noise figure ν;
B. Obtain the reconstruction z by passing the corrupted sample forward through the network:
$y = f_\theta(\tilde{x}), \quad z = g_{\theta'}(y)$
and compute the reconstruction error:
$\min_{W, W', b, b'} \sum_{i=1}^{k} \|x_i - z_i\|_2^2 + \lambda\left(\|W\|_F^2 + \|W'\|_F^2\right);$
C. Update the weights by backpropagating the reconstruction error;
D. Repeat the above process until the reconstruction error falls below the threshold α_D or the maximum number of training iterations ε_D is reached;
E. Pass the samples through the network to obtain the output a of the first hidden layer, which serves as the input to the higher layers;
(3) Train the higher layers with the BP algorithm:
1. Parameter setting: the network parameters are set from the parameters in opts;
2. Network training: train with the BP algorithm;
3. Repeat the training process until the error falls below the threshold α_B or the maximum number of training iterations ε_B is reached.
CN201511000393.7A 2015-12-25 2015-12-25 Motion tracking method based on composite deep neural network Pending CN105654509A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511000393.7A CN105654509A (en) 2015-12-25 2015-12-25 Motion tracking method based on composite deep neural network

Publications (1)

Publication Number Publication Date
CN105654509A true CN105654509A (en) 2016-06-08

Family

ID=56478034

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511000393.7A Pending CN105654509A (en) 2015-12-25 2015-12-25 Motion tracking method based on composite deep neural network

Country Status (1)

Country Link
CN (1) CN105654509A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090324010A1 (en) * 2008-06-26 2009-12-31 Billy Hou Neural network-controlled automatic tracking and recognizing system and method
CN105184271A (en) * 2015-09-18 2015-12-23 苏州派瑞雷尔智能科技有限公司 Automatic vehicle detection method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NAIYAN WANG等: "Learning a Deep Compact Image Representation for Visual Tracking", 《PROCEEDINGS OF ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS》 *
李寰宇 等: "基于深度特征表达与学习的视觉跟踪算法研究", 《电子与信息学报》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203350A (en) * 2016-07-12 2016-12-07 北京邮电大学 Cross-scale tracking method and device for a moving target
CN106651917A (en) * 2016-12-30 2017-05-10 天津大学 Image target tracking algorithm based on a neural network
CN107463870A (en) * 2017-06-07 2017-12-12 西安工业大学 A motion recognition method
CN107403222A (en) * 2017-07-19 2017-11-28 燕山大学 A motion tracking method based on an auxiliary update model and validity checking
WO2019033541A1 * 2017-08-14 2019-02-21 Huawei Technologies Co., Ltd. Generating labeled data for deep object tracking
US10592786B2 2017-08-14 2020-03-17 Huawei Technologies Co., Ltd. Generating labeled data for deep object tracking
CN109559329A (en) * 2018-11-28 2019-04-02 陕西师范大学 A particle filter tracking method based on a deep denoising autoencoder
CN109684953A (en) * 2018-12-13 2019-04-26 北京小龙潜行科技有限公司 Method and device for pig tracking based on target detection and a particle filter algorithm
CN111931368A (en) * 2020-08-03 2020-11-13 哈尔滨工程大学 UUV target state estimation method based on GRU particle filter
CN112349150A (en) * 2020-11-19 2021-02-09 飞友科技有限公司 Video acquisition method and system for airport flight guarantee time node

Similar Documents

Publication Publication Date Title
CN105654509A (en) Motion tracking method based on composite deep neural network
CN103259962B A target tracking method and related apparatus
CN107284442B A longitudinal control method for curve negotiation of autonomous vehicles
CN103870845B Novel K-value optimization method in the point-cloud clustering denoising process
CN107290741B Indoor human posture recognition method based on weighted joint distance and time-frequency transformation
CN104281853A Behavior recognition method based on a 3D convolutional neural network
CN110210621A An object detection method based on an improved residual network
CN104299229A Infrared dim and small target detection method based on spatio-temporal background suppression
CN107403222A A motion tracking method based on an auxiliary update model and validity checking
CN104020466A Maneuvering target tracking method based on variable-structure multiple models
CN105488456A Face detection method based on subspace learning with adaptive rejection-threshold adjustment
CN105160310A 3D convolutional neural network based human behavior recognition method
CN103440495A Method for automatically identifying hydrophobicity grades of composite insulators
CN105045091B Intelligent decision analysis method for dredging processes based on a fuzzy neural control system
CN103902819A Particle-optimized probability hypothesis density multi-target tracking method based on variational filtering
CN101908213B SAR image change detection method based on quantum-inspired immune clone
CN102148921A Multi-target tracking method based on dynamic group division
CN104463359A Analysis method of a dredging operation yield prediction model based on a BP neural network
CN108197566A Surveillance video behavior detection method based on a multi-path neural network
CN107506794A Ground moving target classification algorithm based on decision trees
CN104299243A Target tracking method based on Hough forests
CN107945210A Target tracking algorithm based on deep learning and environmental adaptation
CN103955951B Fast-moving target tracking method based on regularized templates and reconstruction-error decomposition
CN106096246A Aerosol optical depth estimation method based on PM2.5 and PM10
CN105894008A Target motion tracking method combining feature-point matching and deep neural network detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160608