CN109559329A - Particle filter tracking method based on a deep denoising autoencoder - Google Patents

Particle filter tracking method based on a deep denoising autoencoder

Info

Publication number
CN109559329A
CN109559329A
Authority
CN
China
Prior art keywords
particle
sample
weight
training
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811433093.1A
Other languages
Chinese (zh)
Other versions
CN109559329B (en)
Inventor
李良福
宋睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Normal University
Original Assignee
Shaanxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Normal University filed Critical Shaanxi Normal University
Priority to CN201811433093.1A priority Critical patent/CN109559329B/en
Publication of CN109559329A publication Critical patent/CN109559329A/en
Application granted granted Critical
Publication of CN109559329B publication Critical patent/CN109559329B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention belongs to the technical field of computer vision analysis, and in particular relates to a particle filter tracking method based on a deep denoising autoencoder. In an initialization phase, the target position is marked manually in the first frame of the video; while processing the first frame, a certain number of positive and negative samples are selected around the target foreground and background and the trained network model is initialized. In a second step, importance sampling is performed; in a third step, observation probabilities are calculated; in a fourth step, the weights are updated; in a fifth step, the particle with the largest weight is selected from the weight information and taken as the target to be tracked next, and new tracking samples are updated for the next frame. The second to fifth steps are repeated until the video ends. The method can effectively distinguish target features from the background and improves the precision of the tracking algorithm.

Description

Particle filter tracking method based on a deep denoising autoencoder
Technical field
The invention belongs to the technical field of computer vision analysis, and in particular relates to a particle filter tracking method based on a deep denoising autoencoder.
Background art
Visual target tracking is an important research direction in computer vision and visual analysis. Typical visual analysis requires consistent and stable tracking of an object of interest. For monocular visual target tracking, numerous scholars have proposed theories and algorithms worth referencing. In practical applications, however, the problem still faces great challenges owing to factors such as complex backgrounds, target occlusion, fast target motion and illumination changes.
Deep neural networks have a strong learning ability for target detection and classification. However, deep learning frameworks are better suited to learning class-level features than features of a specific target. In addition, deep neural network algorithms usually require a long iterative training process to converge, which makes it difficult to satisfy the real-time requirements of online learning. It is therefore difficult to extend current deep learning network architectures directly to the target tracking domain.
Summary of the invention
In order to address interference problems in the tracking process such as complex backgrounds, illumination changes and target occlusion, as well as the poor anti-interference ability of existing tracking algorithms, the present invention provides a particle filter tracking method based on a deep denoising autoencoder. The technical problem to be solved by the present invention is achieved through the following technical solutions:
A particle filter tracking method based on a deep denoising autoencoder, comprising: Step 1: training a deep network model, in which each layer of the network is trained by unsupervised layer-wise greedy training and noise is added to the training data to obtain a more robust feature representation; these features are then learned in a supervised manner by a classification neural network to further optimize the network parameters;
Step 2: using the manually marked target position in the first frame of the video, selecting positive and negative training samples from the sequence and initializing the deep network model of Step 1;
Step 3: sampling a particle set by importance sampling, then propagating each particle forward through the trained network model and calculating the confidence of each particle during online tracking with the classification neural network;
Step 4: calculating the observation probability of each particle from the particle confidences of Step 3;
Step 5: updating the particle weights according to the observation probabilities of Step 4 to determine the target position, updating new tracking samples for the next frame, and repeating Steps 3 to 5 until the video ends.
Further, in Step 1 the deep network model is formed by stacking denoising autoencoders, with the output of each layer used as the input of the layer above. The denoising autoencoder consists of three parts: an encoder, a decoder and a hidden layer. The decoder must predict the original, uncorrupted data from the noisy features and finally output a reconstruction as close as possible to the original input. Gaussian noise is typically used as the corruption, with the following expression:
Wherein x is the original input free of noise interference, x̃ is the data after noise corruption, and σ indicates the regularization degree of the autoencoder.
Further, the training process in Step 1 is as follows. Assume a training sample set x ∈ R^d without class labels; the input x is mapped to the hidden layer through an activation function f to obtain z ∈ R^d:
z = f_θ(x) = σ(Wx + b) (1)
wherein θ = {W, b}, W is the weight matrix, b is the coding-layer bias vector, and σ is a nonlinear activation function. The decoder then maps the coded representation of the input back to form the reconstruction y:
y = f_θ′(h) = σ(W′h + b′) (2)
wherein θ′ = {W′, b′}, W′ is the transpose of the weight matrix W, and σ is the decoding activation function. Through the above process the denoising autoencoder makes y approximately equal to x.
Assume the training set {(x^(1), y^(1)), …, (x^(m), y^(m))} contains m training samples, where x denotes the features of a single sample and y denotes the corresponding target; the cost function is defined for a single sample (x, y) as follows:
wherein h_{W,b}(x) is the output of the network for sample x. The cost function over the training set of m samples is therefore:
λ is the weight-decay coefficient, which controls the relative importance of the two parts. Training the denoising autoencoder is the process of adjusting the parameters {θ, θ′} over the training sample set to minimize the reconstruction error J(W, b); J(W, b) is a convex function and is usually optimized by an iterative method.
Further, the classification neural network is composed of the encoder part of the denoising autoencoder connected to a classification layer with a k-sparse constraint.
Further, the learning method of the classification neural network in Step 1 is as follows. Let Z be the activation of the autoencoder hidden layer. In the forward propagation stage, the activation Z is:
Z = W^T x + b (6)
wherein x is the input vector, W is the weight matrix and b is the bias.
The K largest activations are kept and all the remaining ones are set to zero:
wherein Γ = supp_k(z) is the support of the k largest activations of z and (Γ)^c is its complement. The sparsified z is used to calculate the network reconstruction error:
wherein x is the training sample set, W denotes the weights and b′ denotes the bias of the transposed (decoding) layer; the weights are adjusted iteratively against the reconstruction error by back-propagating through the K largest activations output by the activation function.
Further, the confidence in Step 3 is computed as follows. Let o_i be the output of the neural network corresponding to class k_i; the expected value of the output is then the posterior probability:
E{o_i} = P(k_i | x) (9)
wherein x is the network input. In general, the class with the maximum output is taken as the decision, so a confidence can be obtained from the posterior probabilities of the neural network, and the maximum output of the classification neural network is used as the confidence:
C(x) = E{max o_i} (10)
Further, the importance sampling method in Step 3 is as follows:
When a new frame arrives, n particles for time t are obtained from the particle set of time t-1 according to the importance distribution q(s_t | s_{t-1}, y_{1:t}) and the motion model, where the importance weights corresponding to the particle set sum to 1. The target state s_t is represented by six affine parameters (horizontal translation, vertical translation, scale, aspect ratio, rotation and skew): s_t = (t_x, t_y, s_xy, ra, ar, sa). Each dimension of the state transition is modelled in the motion model as an independent zero-mean normal distribution.
Further, the observation probability in Step 4 is calculated as follows:
Each particle is propagated forward through the classification neural network to obtain its confidence, and the maximum confidence is compared with a preset threshold τ. If the maximum confidence is below τ, positive and negative training samples are re-selected and the classification neural network is re-initialized; otherwise the observation probability of each particle is calculated as follows:
wherein y_t refers to the input corresponding to the sample at time t, and s_t^i refers to the i-th particle at time t.
Further, the particle weights in Step 5 are updated as follows:
wherein p(s_t | s_{t-1}) represents the distribution of each dimension of the state transition in importance sampling, and p(y_t | s_t^i) is the particle observation probability calculated above. The importance distribution q(s_t | s_{t-1}, y_{1:t}) generally uses the first-order Markov process q(s_t | s_{t-1}), i.e. the state transition is independent of the observation model, so the weight update becomes:
wherein w_{t-1}^i denotes the weight from the previous moment and p(y_t | s_t^i) denotes the particle observation probability calculated in the previous step. For each frame, the particle with the maximum weight is the tracking result; a positive sample is updated for each tracked frame before tracking the next frame. The state corresponding to the particle with the maximum weight is determined as the target position in the current frame.
Beneficial effects of the present invention:
Through unsupervised layer-wise greedy training and parameter optimization of the multi-layer network structure, the denoising autoencoder obtains a distributed feature representation of high-dimensional, complex inputs, and only the network parameters need to be adjusted for different tasks. By means of the deep denoising autoencoder, the method can effectively distinguish target features from the background. A classification neural network is introduced, which improves the classification ability of the network and the precision of the tracking algorithm; finally, a particle filter is used to track the target.
Description of the drawings
Fig. 1 is a schematic diagram of the denoising autoencoder.
Fig. 2 is a schematic diagram of the classification neural network structure.
Fig. 3 shows tracking results under indoor occlusion.
Fig. 4 shows tracking results under outdoor occlusion.
Fig. 5 shows target tracking results under illumination variation.
Fig. 6 shows target tracking results under motion blur.
Specific embodiment
The present invention is described in further detail below with reference to specific embodiments, but the embodiments of the present invention are not limited thereto.
A particle filter tracking method based on a deep denoising autoencoder comprises the following steps:
Step 1: training a deep network model, in which each layer of the network is trained by unsupervised layer-wise greedy training and noise is added to the training data to obtain a more robust feature representation; these features are then learned in a supervised manner by a classification neural network to further optimize the network parameters;
Step 2: using the manually marked target position in the first frame of the video, selecting positive and negative training samples from the sequence and initializing the deep network model of Step 1;
Step 3: sampling a particle set by importance sampling, then propagating each particle forward through the trained network model and calculating the confidence of each particle during online tracking with the classification neural network;
Step 4: calculating the observation probability of each particle from the particle confidences of Step 3;
Step 5: updating the particle weights according to the observation probabilities of Step 4 to determine the target position, updating new tracking samples for the next frame, and repeating Steps 3 to 5 until the video ends. A toy end-to-end sketch of this loop is given below.
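For illustration only, the five steps above can be strung together as the following toy sketch in Python (NumPy); classify_confidence is a stand-in for the trained classification network, and all numeric values (particle count, threshold, transition standard deviations, "frame" centers) are assumptions, not values taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
N, TAU = 100, 0.6
SIGMAS = np.array([2.0, 2.0, 0.01, 0.01, 0.005, 0.001])   # assumed transition std-devs

def classify_confidence(center, particles):
    # Stand-in for the forward pass of the trained classification network:
    # particles are simply scored by distance to a known target center.
    return np.exp(-np.linalg.norm(particles[:, :2] - center, axis=1) / 20.0)

particles = np.tile([50.0, 50.0, 1.0, 1.0, 0.0, 0.0], (N, 1))  # first-frame state
weights = np.full(N, 1.0 / N)

for center in [np.array([52.0, 51.0]), np.array([55.0, 53.0])]:   # toy "frames"
    particles = particles + rng.normal(0.0, SIGMAS, (N, 6))       # Step 3: propagate
    conf = classify_confidence(center, particles)                 # Step 3: confidences
    if conf.max() < TAU:
        continue              # Step 4: would instead re-collect samples and retrain
    p_obs = conf / conf.sum()                                     # Step 4: observation prob.
    weights = weights * p_obs                                     # Step 5: weight update
    weights = weights / weights.sum()
    target_state = particles[np.argmax(weights)]                  # Step 5: tracked state
```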
As shown in Fig. 1, the deep network model in Step 1 is formed by stacking denoising autoencoders. The deep autoencoder is a typical unsupervised learning network: it is a deep network model built by stacking autoencoders, in which the output of each layer is used as the input of the layer above. The essence of an autoencoder is to learn the identity function, i.e. the input of the network equals the output after reconstruction, and the training and parameter optimization process aims to make the output reproduce the input. The denoising autoencoder consists of three parts: an encoder, a decoder and a hidden layer. It receives corrupted data as input and is trained to predict the original, uncorrupted data as output. The purpose of the denoising autoencoder is to allow a very large encoder to be used while preventing a useless identity mapping between the encoder and the decoder. Based on statistical theory, the core idea of the denoising autoencoder is to disturb the original input with noise according to a certain rule so that the original input is corrupted; the corrupted data is then fed into the network to obtain the hidden-layer representation. The decoder must predict the original, uncorrupted data from the noisy features and finally output a reconstruction as close as possible to the original input, which is precisely the effect of removing the interference. Gaussian noise is typically used as the corruption, with the following expression:
Wherein x is the original input free of noise interference, x̃ is the data after noise corruption, and σ indicates the regularization degree of the autoencoder. For generating the corrupted data, binomial random numbers are both simple and easy to compute: binomially distributed random numbers are generated with the same shape as the input and then multiplied with the input. The squared error function is used as the reconstruction error, and the network is trained in exactly the same way as other feedforward networks.
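The corruption step described above can be sketched as follows (a minimal NumPy sketch, not the patent's implementation; the noise level sigma and the masking probability keep_prob are assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_gaussian(x, sigma=0.1):
    """Add zero-mean Gaussian noise; sigma plays the role of the
    regularization degree of the autoencoder described above."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def corrupt_binomial(x, keep_prob=0.7):
    """Multiply the input by a binomial (Bernoulli) mask of the same
    shape, i.e. randomly zero out a fraction of the entries."""
    mask = rng.binomial(1, keep_prob, size=x.shape)
    return x * mask

x = rng.random((4, 32 * 32))            # e.g. four flattened 32x32 patches
x_noisy = corrupt_gaussian(x, 0.1)
x_masked = corrupt_binomial(x, 0.7)
# squared-error criterion; in training, the decoder output would replace x_noisy
reconstruction_error = np.mean((x - x_noisy) ** 2)
```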
The training process in Step 1 is as follows. Assume a training sample set x ∈ R^d without class labels; the input x is mapped to the hidden layer through an activation function f to obtain z ∈ R^d:
z = f_θ(x) = σ(Wx + b) (1)
wherein θ = {W, b}, W is the weight matrix, b is the coding-layer bias vector and σ is a nonlinear activation function. The decoder then maps the coded representation of the input back to form the reconstruction y:
y = f_θ′(h) = σ(W′h + b′) (2)
wherein θ′ = {W′, b′}, W′ is the transpose of the weight matrix W and σ is the decoding activation function. Through the above process the denoising autoencoder makes y approximately equal to x.
Assume the training set {(x^(1), y^(1)), …, (x^(m), y^(m))} contains m training samples, where x denotes the features of a single sample and y denotes the corresponding target; the cost function is defined for a single sample (x, y) as follows:
wherein h_{W,b}(x) is the output of the network for sample x. The cost function over the training set of m samples is therefore:
It can be seen that the first part of the equation is the average squared-error term of the cost function. The second part is a weight-decay term, which prevents the weights from varying too much and thus prevents overfitting; λ is the weight-decay coefficient, which controls the relative importance of the two parts. Training the denoising autoencoder is the process of adjusting the parameters {θ, θ′} over the training sample set to minimize the reconstruction error J(W, b); J(W, b) is a convex function and is usually optimized by an iterative method.
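The encoding, decoding and cost described above can be sketched as follows (a minimal NumPy sketch, assuming a sigmoid activation and tied weights W′ = Wᵀ; the layer sizes, noise level and λ are assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

d, h, m = 1024, 256, 8                 # input dim, hidden dim, batch size (assumed)
W  = rng.normal(0, 0.01, (h, d))       # encoder weights
b  = np.zeros(h)                       # coding-layer bias b
b2 = np.zeros(d)                       # decoding bias b'
lam = 1e-4                             # weight-decay coefficient lambda (assumed)

x = rng.random((m, d))                 # clean samples
x_noisy = x + rng.normal(0, 0.1, x.shape)

z = sigmoid(x_noisy @ W.T + b)         # z = f_theta(x) = sigma(Wx + b)
y = sigmoid(z @ W + b2)                # y = f_theta'(z) = sigma(W'z + b'), W' = W^T

# Cost J(W, b): average squared reconstruction error plus weight decay.
J = np.mean(0.5 * np.sum((y - x) ** 2, axis=1)) + 0.5 * lam * np.sum(W ** 2)
```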
The classification neural network is composed of the encoder part of the denoising autoencoder connected to a classification layer with a k-sparse constraint.
The purpose of constructing the classification neural network is to calculate the confidence of each particle during online tracking. It is composed of the encoder part of the denoising autoencoder connected to a classification layer with a k-sparse constraint, and its structure is shown schematically in Fig. 2. Introducing the k-sparse constraint allows invariant features of the target to be learned effectively, improves the linear discrimination ability of the classification neural network and, to a certain extent, alleviates overfitting. Neuroscience research shows that the responses of visual signals in the cerebral cortex are sparse, so introducing a sparsity constraint into a deep neural network makes the representation of the original signal more meaningful, especially for classification tasks; this idea has been verified in principal component analysis and sparse coding. The k-sparse constraint keeps the K largest activations of the hidden layer and sets all the others to zero; compared with other sparsity constraints, the k-sparse constraint guarantees that every representation of the input data is sparse.
The learning method of the classification neural network in Step 1 is as follows. Let Z be the activation of the autoencoder hidden layer. In the forward propagation stage, the activation Z is:
Z = W^T x + b (6)
wherein x is the input vector, W is the weight matrix and b is the bias.
The K largest activations are kept and all the remaining ones are set to zero:
wherein Γ = supp_k(z) is the support of the k largest activations of z and (Γ)^c is its complement. The sparsified z is used to calculate the network reconstruction error:
wherein x is the training sample set, W denotes the weights and b′ denotes the bias of the transposed (decoding) layer; the weights are adjusted iteratively against the reconstruction error by back-propagating through the K largest activations output by the activation function. The confidence output by the classification neural network is a confidence level that reflects its decision confidence at a given point in the feature vector space.
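The k-sparse step above (keep the K largest hidden activations, set the rest to zero) can be illustrated as follows; the value of k and the layer size are assumptions:

```python
import numpy as np

def k_sparse(z, k):
    """Keep the k largest activations per sample and set the rest to zero,
    as in the k-sparse constraint of the classification network."""
    z_sparse = np.zeros_like(z)
    # indices of the k largest entries in each row (the support supp_k(z))
    idx = np.argpartition(z, -k, axis=1)[:, -k:]
    rows = np.arange(z.shape[0])[:, None]
    z_sparse[rows, idx] = z[rows, idx]
    return z_sparse

z = np.random.default_rng(0).random((2, 10))
print(k_sparse(z, 3))        # only 3 non-zero activations remain per row
```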
The confidence in Step 3 is computed as follows. Let o_i be the output of the neural network corresponding to class k_i; the expected value of the output is then the posterior probability:
E{o_i} = P(k_i | x) (9)
wherein x is the network input. In general, the class with the maximum output is taken as the decision, so a confidence can be obtained from the posterior probabilities of the neural network, and the maximum output of the classification neural network is used as the confidence:
C(x) = E{max o_i} (10)
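Equations (9) and (10) take the maximum network output as the confidence. A minimal sketch, assuming a softmax output layer so that the outputs approximate posterior class probabilities:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def confidence(logits):
    """C(x) = max_i o_i, where o_i approximates P(k_i | x)."""
    probs = softmax(logits)
    return probs.max(axis=1)

logits = np.array([[2.0, 0.1], [0.3, 0.4]])   # two particles, two classes
print(confidence(logits))                     # approx. [0.87, 0.52]
```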
The importance sampling method in Step 3 is as follows:
When a new frame arrives, n particles for time t are obtained from the particle set of time t-1 according to the importance distribution q(s_t | s_{t-1}, y_{1:t}) and the motion model, where the importance weights corresponding to the particle set sum to 1. The target state s_t is represented by six affine parameters (horizontal translation, vertical translation, scale, aspect ratio, rotation and skew): s_t = (t_x, t_y, s_xy, ra, ar, sa). Each dimension of the state transition is modelled in the motion model as an independent zero-mean normal distribution.
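A minimal sketch of this propagation step: each of the six affine state dimensions is perturbed by an independent zero-mean Gaussian (the standard deviations and the initial state below are assumed values, not taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                           # number of particles (assumed)
# state s_t = (tx, ty, s_xy, ra, ar, sa): translation, scale, rotation,
# aspect ratio, skew
sigmas = np.array([4.0, 4.0, 0.02, 0.01, 0.005, 0.001])   # assumed std-devs

def propagate(particles):
    """Draw s_t^i ~ N(s_{t-1}^i, diag(sigmas^2)) for every particle."""
    return particles + rng.normal(0.0, sigmas, size=particles.shape)

particles = np.tile([120.0, 80.0, 1.0, 1.0, 0.0, 0.0], (n, 1))  # first-frame box
weights = np.full(n, 1.0 / n)                     # importance weights sum to 1
particles = propagate(particles)
```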
The observation probability in Step 4 is calculated as follows:
Each particle is propagated forward through the classification neural network to obtain its confidence, and the maximum confidence is compared with a preset threshold τ. If the maximum confidence is below τ, positive and negative training samples are re-selected and the classification neural network is re-initialized; otherwise the observation probability of each particle is calculated as follows:
wherein y_t refers to the input corresponding to the sample at time t, and s_t^i refers to the i-th particle at time t.
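A sketch of this step, assuming that the observation probability of equation (11) is obtained by normalizing the particle confidences (the normalization and the threshold value are assumptions for illustration):

```python
import numpy as np

def observation_probabilities(confidences, tau=0.6):
    """Return per-particle observation probabilities, or None if the
    tracker should be re-initialized with new positive/negative samples."""
    confidences = np.asarray(confidences, dtype=float)
    if confidences.max() < tau:
        return None                    # re-select samples, re-train the classifier
    return confidences / confidences.sum()   # assumed form of p(y_t | s_t^i)

conf = np.array([0.91, 0.40, 0.75])
p_obs = observation_probabilities(conf, tau=0.6)
```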
The particle weights in Step 5 are updated as follows:
wherein p(s_t | s_{t-1}) represents the distribution of each dimension of the state transition in importance sampling, and p(y_t | s_t^i) is the particle observation probability calculated above. The importance distribution q(s_t | s_{t-1}, y_{1:t}) generally uses the first-order Markov process q(s_t | s_{t-1}), i.e. the state transition is independent of the observation model, so the weight update becomes:
wherein w_{t-1}^i denotes the weight from the previous moment and p(y_t | s_t^i) denotes the particle observation probability calculated in the previous step. For each frame, the particle with the maximum weight is the tracking result; a positive sample is updated for each tracked frame before tracking the next frame. The state corresponding to the particle with the maximum weight is determined as the target position in the current frame.
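Under the first-order Markov simplification described above, the weight update multiplies the previous weight by the observation probability, and the particle with the largest weight gives the target state. A minimal sketch (the renormalization step is added for numerical convenience and is an assumption):

```python
import numpy as np

def update_weights(prev_weights, obs_probs):
    """w_t^i = w_{t-1}^i * p(y_t | s_t^i), then renormalize to sum to 1."""
    w = prev_weights * obs_probs
    return w / w.sum()

prev_w = np.full(3, 1.0 / 3.0)
p_obs = np.array([0.44, 0.20, 0.36])
w = update_weights(prev_w, p_obs)
best = int(np.argmax(w))          # index of the particle taken as the target
# particles[best] would give the tracked state and the crop used as the
# new positive sample for the next frame.
```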
Test environment: a 3.8 GHz quad-core AMD processor with 8 GB of memory. Video sequences captured under a variety of conditions, including illumination variation, target occlusion and fast target motion, are used for verification.
Fig. 3 and Fig. 4 show occlusion of the target. Occlusion refers to the phenomenon in which the target is at least partially hidden because of the complexity of the surrounding environment and interference from nearby objects; the tracker does not lose the target during the entire tracking process. Outdoor filming frequently produces strong illumination changes, and large changes in lighting affect tracking performance. As can be seen from Fig. 5, when the target enters a tunnel there is a large illumination change in the image; nevertheless, judging from the tracking results, the proposed algorithm accurately completes the tracking task.
The problem of target blur appears in Fig. 6. Blur is caused by excessive target speed during motion or by unstable filming; the blurred appearance of the target in the image affects the tracking result, yet the proposed tracking algorithm completes the tracking accurately without losing the target.
The above content is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific implementation of the invention is not limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, a number of simple deductions or substitutions may also be made without departing from the inventive concept, and all of these shall be regarded as falling within the protection scope of the present invention.

Claims (9)

1. A particle filter tracking method based on a deep denoising autoencoder, characterized in that:
Step 1: training a deep network model, in which each layer of the network is trained by unsupervised layer-wise greedy training and noise is added to the training data to obtain a more robust feature representation; these features are then learned in a supervised manner by a classification neural network to further optimize the network parameters;
Step 2: using the manually marked target position in the first frame of the video, selecting positive and negative training samples from the sequence and initializing the deep network model of Step 1;
Step 3: sampling a particle set by importance sampling, then propagating each particle forward through the trained network model and calculating the confidence of each particle during online tracking with the classification neural network;
Step 4: calculating the observation probability of each particle from the particle confidences of Step 3;
Step 5: updating the particle weights according to the observation probabilities of Step 4 to determine the target position, updating new tracking samples for the next frame, and repeating Steps 3 to 5 until the video ends.
2. The particle filter tracking method based on a deep denoising autoencoder according to claim 1, characterized in that: in Step 1 the deep network model is formed by stacking denoising autoencoders, with the output of each layer used as the input of the layer above; the denoising autoencoder consists of three parts, an encoder, a decoder and a hidden layer; the decoder must predict the original, uncorrupted data from the noisy features and finally output a reconstruction as close as possible to the original input; Gaussian noise is typically used as the corruption, with the following expression:
wherein x is the original input free of noise interference, x̃ is the data after noise corruption, and σ indicates the regularization degree of the autoencoder.
3. The particle filter tracking method based on a deep denoising autoencoder according to claim 1, characterized in that the training process in Step 1 is as follows: assume a training sample set x ∈ R^d without class labels; the input x is mapped to the hidden layer through an activation function f to obtain z ∈ R^d:
z = f_θ(x) = σ(Wx + b) (1)
wherein θ = {W, b}, W is the weight matrix, b is the coding-layer bias vector and σ is a nonlinear activation function; the decoder then maps the coded representation of the input back to form the reconstruction y:
y = f_θ′(h) = σ(W′h + b′) (2)
wherein θ′ = {W′, b′}, W′ is the transpose of the weight matrix W and σ is the decoding activation function; through the above process the denoising autoencoder makes y approximately equal to x;
assume the training set {(x^(1), y^(1)), …, (x^(m), y^(m))} contains m training samples, where x denotes the features of a single sample and y denotes the corresponding target, and the cost function is defined for a single sample (x, y);
wherein h_{W,b}(x) is the output of the network for sample x, so the cost function over the training set of m samples is:
λ is the weight-decay coefficient, which controls the relative importance of the two parts; training the denoising autoencoder is the process of adjusting the parameters {θ, θ′} over the training sample set to minimize the reconstruction error J(W, b); J(W, b) is a convex function and is usually optimized by an iterative method.
4. The particle filter tracking method based on a deep denoising autoencoder according to claim 1, characterized in that: the classification neural network is composed of the encoder part of the denoising autoencoder connected to a classification layer with a k-sparse constraint.
5. The particle filter tracking method based on a deep denoising autoencoder according to claim 1, characterized in that the learning method of the classification neural network in Step 1 is as follows: let Z be the activation of the autoencoder hidden layer; in the forward propagation stage, the activation Z is:
Z = W^T x + b (6)
wherein x is the input vector, W is the weight matrix and b is the bias;
the K largest activations are kept and all the remaining ones are set to zero:
wherein Γ = supp_k(z) is the support of the k largest activations of z and (Γ)^c is its complement; the sparsified z is used to calculate the network reconstruction error:
wherein x is the training sample set, W denotes the weights and b′ denotes the bias of the transposed (decoding) layer; the weights are adjusted iteratively against the reconstruction error by back-propagating through the K largest activations output by the activation function.
6. The particle filter tracking method based on a deep denoising autoencoder according to claim 1, characterized in that the confidence in Step 3 is computed as follows: let o_i be the output of the neural network corresponding to class k_i; the expected value of the output is then the posterior probability:
E{o_i} = P(k_i | x) (9)
wherein x is the network input; in general, the class with the maximum output is taken as the decision, so a confidence can be obtained from the posterior probabilities of the neural network, and the maximum output of the classification neural network is used as the confidence:
C(x) = E{max o_i} (10)
7. The particle filter tracking method based on a deep denoising autoencoder according to claim 1, characterized in that the importance sampling method in Step 3 is as follows:
when a new frame arrives, n particles for time t are obtained from the particle set of time t-1 according to the importance distribution q(s_t | s_{t-1}, y_{1:t}) and the motion model, where the importance weights corresponding to the particle set sum to 1; the target state s_t is represented by six affine parameters (horizontal translation, vertical translation, scale, aspect ratio, rotation and skew): s_t = (t_x, t_y, s_xy, ra, ar, sa); each dimension of the state transition is modelled in the motion model as an independent zero-mean normal distribution.
8. The particle filter tracking method based on a deep denoising autoencoder according to claim 1, characterized in that the observation probability in Step 4 is calculated as follows:
each particle is propagated forward through the classification neural network to obtain its confidence, and the maximum confidence is compared with a preset threshold τ; if the maximum confidence is below τ, positive and negative training samples are re-selected and the classification neural network is re-initialized; otherwise the observation probability of each particle is calculated as follows:
wherein y_t refers to the input corresponding to the sample at time t, and s_t^i refers to the i-th particle at time t.
9. The particle filter tracking method based on a deep denoising autoencoder according to claim 1, characterized in that the particle weights in Step 5 are updated as follows:
wherein p(s_t | s_{t-1}) represents the distribution of each dimension of the state transition in importance sampling, and p(y_t | s_t^i) is the particle observation probability calculated above; the importance distribution q(s_t | s_{t-1}, y_{1:t}) generally uses the first-order Markov process q(s_t | s_{t-1}), i.e. the state transition is independent of the observation model, so the weight update becomes:
wherein w_{t-1}^i denotes the weight from the previous moment and p(y_t | s_t^i) denotes the particle observation probability calculated in the previous step; for each frame, the particle with the maximum weight is the tracking result; a positive sample is updated for each tracked frame before tracking the next frame; the state corresponding to the particle with the maximum weight is determined as the target position in the current frame.
CN201811433093.1A 2018-11-28 2018-11-28 Particle filter tracking method based on depth denoising automatic encoder Active CN109559329B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811433093.1A CN109559329B (en) 2018-11-28 2018-11-28 Particle filter tracking method based on depth denoising automatic encoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811433093.1A CN109559329B (en) 2018-11-28 2018-11-28 Particle filter tracking method based on depth denoising automatic encoder

Publications (2)

Publication Number Publication Date
CN109559329A true CN109559329A (en) 2019-04-02
CN109559329B CN109559329B (en) 2023-04-07

Family

ID=65867657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811433093.1A Active CN109559329B (en) 2018-11-28 2018-11-28 Particle filter tracking method based on depth denoising automatic encoder

Country Status (1)

Country Link
CN (1) CN109559329B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110473557A (en) * 2019-08-22 2019-11-19 杭州派尼澳电子科技有限公司 A kind of voice signal decoding method based on depth self-encoding encoder
CN110825123A (en) * 2019-10-21 2020-02-21 哈尔滨理工大学 Control system and method for automatic following loading vehicle based on motion algorithm
CN110889459A (en) * 2019-12-06 2020-03-17 北京深境智能科技有限公司 Learning method based on edge and Fisher criterion
CN111552322A (en) * 2020-04-29 2020-08-18 东南大学 Unmanned aerial vehicle tracking method based on LSTM-particle filter coupling model
CN111563423A (en) * 2020-04-17 2020-08-21 西北工业大学 Unmanned aerial vehicle image target detection method and system based on depth denoising automatic encoder
CN111735458A (en) * 2020-08-04 2020-10-02 西南石油大学 Navigation and positioning method of petrochemical inspection robot based on GPS, 5G and vision
CN111931368A (en) * 2020-08-03 2020-11-13 哈尔滨工程大学 UUV target state estimation method based on GRU particle filter
CN111950503A (en) * 2020-06-16 2020-11-17 中国科学院地质与地球物理研究所 Aviation transient electromagnetic data processing method and device and computing equipment
CN112396635A (en) * 2020-11-30 2021-02-23 深圳职业技术学院 Multi-target detection method based on multiple devices in complex environment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002171436A (en) * 2000-12-04 2002-06-14 Sony Corp Image processing apparatus
CN103211563A (en) * 2005-05-16 2013-07-24 直观外科手术操作公司 Methods and system for performing 3-D tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery
CN105654509A (en) * 2015-12-25 2016-06-08 燕山大学 Motion tracking method based on composite deep neural network
CN105894008A (en) * 2015-01-16 2016-08-24 广西卡斯特动漫有限公司 Target motion track method through combination of feature point matching and deep nerve network detection
CN106127804A (en) * 2016-06-17 2016-11-16 淮阴工学院 The method for tracking target of RGB D data cross-module formula feature learning based on sparse depth denoising own coding device
CN106203350A (en) * 2016-07-12 2016-12-07 北京邮电大学 A kind of moving target is across yardstick tracking and device
CN107403222A (en) * 2017-07-19 2017-11-28 燕山大学 A kind of motion tracking method based on auxiliary more new model and validity check
CN108460760A (en) * 2018-03-06 2018-08-28 陕西师范大学 A kind of Bridge Crack image discriminating restorative procedure fighting network based on production

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002171436A (en) * 2000-12-04 2002-06-14 Sony Corp Image processing apparatus
CN103211563A (en) * 2005-05-16 2013-07-24 直观外科手术操作公司 Methods and system for performing 3-D tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery
CN105894008A (en) * 2015-01-16 2016-08-24 广西卡斯特动漫有限公司 Target motion track method through combination of feature point matching and deep nerve network detection
CN105654509A (en) * 2015-12-25 2016-06-08 燕山大学 Motion tracking method based on composite deep neural network
CN106127804A (en) * 2016-06-17 2016-11-16 淮阴工学院 The method for tracking target of RGB D data cross-module formula feature learning based on sparse depth denoising own coding device
CN106203350A (en) * 2016-07-12 2016-12-07 北京邮电大学 A kind of moving target is across yardstick tracking and device
CN107403222A (en) * 2017-07-19 2017-11-28 燕山大学 A kind of motion tracking method based on auxiliary more new model and validity check
CN108460760A (en) * 2018-03-06 2018-08-28 陕西师范大学 A kind of Bridge Crack image discriminating restorative procedure fighting network based on production

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
季顺平 et al.: "Pedestrian tracking based on HSV color features and contribution-based reconstruction", Laser & Optoelectronics Progress *
杨红红 et al.: "Traffic target tracking based on sparse-constrained deep learning", China Journal of Highway and Transport *
焦婷 et al.: "Research on image mosaicking methods in the presence of moving targets", Application Research of Computers *
程帅 et al.: "Multiple-instance deep learning for target tracking", Journal of Electronics & Information Technology *
邓俊锋: "Research on deep learning algorithms based on sparse autoencoders and marginalized denoising autoencoders", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110473557A (en) * 2019-08-22 2019-11-19 杭州派尼澳电子科技有限公司 A kind of voice signal decoding method based on depth self-encoding encoder
CN110473557B (en) * 2019-08-22 2021-05-28 浙江树人学院(浙江树人大学) Speech signal coding and decoding method based on depth self-encoder
CN110825123A (en) * 2019-10-21 2020-02-21 哈尔滨理工大学 Control system and method for automatic following loading vehicle based on motion algorithm
CN110889459A (en) * 2019-12-06 2020-03-17 北京深境智能科技有限公司 Learning method based on edge and Fisher criterion
CN110889459B (en) * 2019-12-06 2023-04-28 北京深境智能科技有限公司 Learning method based on edge and Fisher criteria
CN111563423A (en) * 2020-04-17 2020-08-21 西北工业大学 Unmanned aerial vehicle image target detection method and system based on depth denoising automatic encoder
CN111552322A (en) * 2020-04-29 2020-08-18 东南大学 Unmanned aerial vehicle tracking method based on LSTM-particle filter coupling model
CN111950503A (en) * 2020-06-16 2020-11-17 中国科学院地质与地球物理研究所 Aviation transient electromagnetic data processing method and device and computing equipment
CN111950503B (en) * 2020-06-16 2024-01-30 中国科学院地质与地球物理研究所 Aviation transient electromagnetic data processing method and device and computing equipment
CN111931368A (en) * 2020-08-03 2020-11-13 哈尔滨工程大学 UUV target state estimation method based on GRU particle filter
CN111735458A (en) * 2020-08-04 2020-10-02 西南石油大学 Navigation and positioning method of petrochemical inspection robot based on GPS, 5G and vision
CN112396635A (en) * 2020-11-30 2021-02-23 深圳职业技术学院 Multi-target detection method based on multiple devices in complex environment

Also Published As

Publication number Publication date
CN109559329B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN109559329A (en) A kind of particle filter tracking method based on depth denoising autocoder
CN111401132B (en) Pedestrian attribute identification method guided by high-level semantics under monitoring scene
CN107657204A (en) The construction method and facial expression recognizing method and system of deep layer network model
CN106651915A (en) Target tracking method of multi-scale expression based on convolutional neural network
Zhang et al. AIDEDNet: Anti-interference and detail enhancement dehazing network for real-world scenes
CN113642621A (en) Zero sample image classification method based on generation countermeasure network
Li et al. Optimizing convolutional neural network performance by mitigating underfitting and overfitting
Jeon et al. Artificial intelligence in deep learning algorithms for multimedia analysis
Shi et al. Faster detection method of driver smoking based on decomposed YOLOv5
CN108985382A (en) The confrontation sample testing method indicated based on critical data path
CN112232145A (en) Intelligent identification method for pain expression of old people on nursing bed
Zhu A face recognition system using ACO-BPNN model for optimizing the teaching management system
Xiang et al. Semi-supervised image classification via attention mechanism and generative adversarial network
CN112053386B (en) Target tracking method based on depth convolution characteristic self-adaptive integration
Wang et al. LSTM wastewater quality prediction based on attention mechanism
CN113887208A (en) Method and system for defending against text based on attention mechanism
Cheng et al. Infrared image denoising based on convolutional neural network
CN112966499A (en) Question and answer matching method based on self-adaptive fusion multi-attention network
Gong et al. Image denoising with GAN based model
Wang et al. Image denoising using an improved generative adversarial network with Wasserstein distance
Guo et al. Fine-grained image classification of red tide algae based on feature pyramid networks and computer aided technique
Liu et al. Anomaly detection based on semi-supervised generative adversarial networks
CN112085678B (en) Method and system suitable for raindrop removal of power equipment machine inspection image
Timchenko et al. A method of organization of a parallel-hierarchical network for image recognition
Kumar et al. A new image restoration approach by combining empirical wavelet transform and total variation using chaotic squirrel search optimization

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant