CN109410251A - Target tracking method based on a densely connected convolutional network - Google Patents

Target tracking method based on a densely connected convolutional network

Info

Publication number
CN109410251A
CN109410251A (application CN201811374073.1A)
Authority
CN
China
Prior art keywords
target
frame
network
input
convolution feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811374073.1A
Other languages
Chinese (zh)
Other versions
CN109410251B (en)
Inventor
范保杰
陈会志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201811374073.1A
Publication of CN109410251A
Application granted
Publication of CN109410251B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135 Feature extraction based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

The present invention discloses a target tracking method based on a densely connected convolutional network, comprising the following steps: S1, determining the size and location of a target of interest; S2, extracting the convolutional features of the input frame and judging whether the input frame is the initial frame: if so, computing a PCA projection matrix, reducing the dimensionality of the convolutional features, training a target tracking model based on the densely connected network with the resulting features, and proceeding to S7; otherwise, reducing the dimensionality of the input frame's convolutional features with the trained PCA projection matrix and proceeding to S3; S3, inputting the convolutional features into the tracking model to predict the location of the target of interest; S4, performing scale sampling at the predicted target location to estimate the target size; S5, updating the network weights of the target tracking model; S6, outputting the predicted location and scale of the target; S7, inputting the next frame, until predictions have been completed for all frames of the video. The present invention realizes end-to-end learning of the tracking model, effectively reduces training time, and improves efficiency of use.

Description

Target tracking method based on a densely connected convolutional network
Technical field
The present invention relates to a target tracking method, and in particular to a target tracking method based on a densely connected convolutional network, belonging to the technical field of target tracking.
Background technique
Target tracking is an important research field in computer vision, widely applied in security monitoring, autonomous driving, human-computer interaction, and so on. The main purpose of target tracking is to estimate the motion state of a given target of interest in a video. As a hot topic in recent years, target tracking has produced many outstanding research achievements. Nevertheless, illumination variation, changes in target appearance, and background occlusion during use all pose great challenges to target tracking algorithms, so research on target tracking algorithms still needs to be deepened.
In recent years, target tracking algorithms based on correlation filtering have attracted the attention of many researchers with their good tracking performance and high computational efficiency. Correlation filtering trackers convert the filter-solving problem into a regression that maps input features to a target Gaussian distribution. In the solving process, projecting the computation into the frequency domain with the Fast Fourier Transform (FFT) significantly improves computational efficiency. At the same time, as deep learning keeps making new breakthroughs in computer vision and other fields, target tracking algorithms based on deep learning have also become a new research hotspot. On the one hand, deep convolutional features with stronger representational power can be combined directly with the correlation filtering tracking framework to improve tracking accuracy and robustness; on the other hand, video sequences can be used to train deep neural network tracking models.
However, current target tracking algorithms based on deep learning are still not perfect in actual use. First, since the pre-trained model that extracts convolutional features and the correlation filter are mutually independent, the advantage of end-to-end learning of neural networks cannot be exploited. Meanwhile, the boundary effect brought by circular sampling also severely limits the performance of correlation filtering tracking algorithms. In addition, the training of the above algorithms requires a large amount of data and a high time cost, which is inconvenient for daily use.
In summary, in view of the above problems, how to propose a brand-new target tracking method on the basis of the prior art that retains the many advantages of the prior art while overcoming its shortcomings has become an urgent problem to be solved by those skilled in the art.
Summary of the invention
In view of the above drawbacks of the prior art, the purpose of the present invention is to propose a target tracking method based on a densely connected convolutional network, comprising the following steps:
S1, determining the size and location of a target of interest in the initial frame of a video, and inputting the initial frame into the tracking model;
S2, inputting a frame of the video, extracting the convolutional features of the input frame, and judging whether the input frame is the initial frame:
if the input frame is the initial frame, computing a PCA projection matrix, reducing the dimensionality of the convolutional features, training a target tracking model based on the densely connected network with the resulting convolutional features, and then proceeding to S7;
if the input frame is not the initial frame, reducing the dimensionality of the input frame's convolutional features with the trained PCA projection matrix, and then proceeding to S3;
S3, inputting the convolutional features into the tracking model to predict the location of the target of interest;
S4, performing scale sampling at the predicted target location to estimate the target size;
S5, updating the network weights of the target tracking model;
S6, outputting the predicted location and scale of the target;
S7, inputting the next frame of the video and returning to S2, until predictions have been completed for all frames of the video.
Preferably, determining the size and location of the target of interest in the initial frame of the video in S1 specifically includes: in the initial frame of the video, giving the location and size of the target of interest manually or by a target detection algorithm, thereby determining the target-of-interest information.
Preferably, extracting the convolutional features of the input frame in S2 specifically includes: inputting the frame into the pre-trained neural network model VGG-19 for forward propagation, pruning the fully connected layers and output layer at the end of the model, and after the computation extracting the pre-pooling features of the third, fourth, and fifth convolutional blocks of VGG-19.
Preferably, computing the PCA projection matrix and reducing the dimensionality of the convolutional features in S2 specifically includes:
Let the original m-channel convolutional feature be C = {x_1, x_2, …, x_m} and the target dimensionality be m′. All samples are centered:
x_i ← x_i − (1/m) Σ_{j=1..m} x_j.
Then the sample covariance matrix XX^T is computed and eigendecomposed; the eigenvectors ω_1, ω_2, …, ω_m′ corresponding to the m′ largest eigenvalues are taken, so the projection matrix is
W = (ω_1, ω_2, …, ω_m′), and the low-dimensional feature after reducing the original m-channel convolutional feature C to m′ dimensions is C′ = WC.
Preferably, in the target tracking model based on the densely connected network in S2, the correlation filtering computation is realized with a single convolutional layer.
Let the weight of the convolutional layer, i.e. the weight of the correlation filter, be W, the feature of the input sample be X, and the soft Gaussian-distribution label corresponding to the input sample be Y. The forward output of the convolutional layer is then W ∗ X, and the loss function of the convolutional layer is defined as
L(W) = ‖W ∗ X − Y‖².
Training samples are obtained by sampling around the initial target location, and the above loss function is minimized with back-propagation and gradient descent; the learning rule for the network weight W is
W ← W − η · ∂L(W)/∂W,
where η is the learning rate and ∂L(W)/∂W is the partial derivative of the loss function with respect to the weight W.
Preferably, in the target tracking model based on the densely connected network in S2, the dense connection layers are implemented as follows: the third, fourth, and fifth convolutional blocks of the pre-trained convolutional neural network VGG-19 are each connected to the output of the correlation-filtering convolutional layer via a mapping realized by three consecutive convolutional layers.
Preferably, inputting the convolutional features into the tracking model to predict the location of the target of interest in S3 specifically includes: for an input frame image X, the output response map H(X) of the tracking model is the sum of the correlation filtering response map fitted by the convolutional layer and the response maps of the dense connection layers.
The position with the maximum response in the response map H(X) is the predicted location of the target in frame t.
Preferably, performing scale sampling at the predicted target location and estimating the target size in S4 specifically includes: after the location of the target in frame t−1 has been obtained by the preceding steps, k candidate targets at different scales are chosen, centered on the predicted target location. The candidate targets of different scales are fed into the network for forward computation, and the scale s* of the candidate target with the maximum response is chosen as the optimal solution. The target scale of the current frame is then
(w_t, h_t) = (1 − β + β·s*)(w_{t−1}, h_{t−1}),
where the weight β is the scale update factor, and (w_t, h_t) and (w_{t−1}, h_{t−1}) are respectively the width and height of the target in frame t and frame t−1.
Preferably, updating the network weights of the target tracking model in S5 specifically includes:
S51, choosing N training samples at the target location in frame t;
S52, inputting the N training samples into the network, computing the L2 loss between the tracking model's output response maps and the Gaussian labels as described in S2, and then updating the network weights with back-propagation and gradient descent.
Compared with the prior art, the advantages of the present invention are mainly reflected in the following aspects:
The target tracking method based on a densely connected convolutional network of the present invention integrates feature extraction, model updating, correlation filtering computation, and scale prediction into a single neural network, realizing end-to-end learning of the tracking model, and uses the extracted features to train the model, which effectively reduces training time, improves the efficiency of use of the invention, and provides convenience for its practical use.
Meanwhile, the present invention replaces the circular sampling of conventional methods with convolutional computation, avoiding the boundary effect, and uses dense connections to realize residual learning, further improving the practicality of the invention.
In addition, the present invention also provides a reference for other related problems in the same field; extensions can be developed on this basis and applied to the technical solutions of other target tracking methods in the same field, so it has very broad application prospects.
The embodiments of the present invention are described in further detail below in conjunction with the accompanying drawings, so that the technical solutions of the present invention are easier to understand and grasp.
Detailed description of the invention
Fig. 1 is a flowchart of the invention;
Fig. 2 is a structural diagram of the densely connected convolutional network of the invention.
Specific embodiment
As shown in Fig. 1 and Fig. 2, the present invention discloses a target tracking method based on a densely connected convolutional network, comprising the following steps:
S1, determining the size and location of a target of interest in the initial frame of a video, and inputting the initial frame into the tracking model. Specifically, in the initial frame of the video, the location and size of the target of interest are given manually or by a target detection algorithm, thereby determining the target-of-interest information.
S2, inputting a frame of the video, extracting the convolutional features of the input frame, and judging whether the input frame is the initial frame. Extracting the convolutional features of the input frame specifically includes: inputting the frame into the pre-trained neural network model VGG-19 for forward propagation, pruning the fully connected layers and output layer at the end of the model, and after the computation extracting the pre-pooling features of the third, fourth, and fifth convolutional blocks of VGG-19.
If the input frame is the initial frame, a PCA projection matrix is computed, the dimensionality of the convolutional features is reduced, a target tracking model based on the densely connected network is trained with the resulting convolutional features, and the method then proceeds to S7.
If the input frame is not the initial frame, the dimensionality of the input frame's convolutional features is reduced with the trained PCA projection matrix, and the method then proceeds to S3.
Computing the PCA projection matrix and reducing the dimensionality of the convolutional features specifically includes:
Let the original m-channel convolutional feature be C = {x_1, x_2, …, x_m} and the target dimensionality be m′. All samples are centered:
x_i ← x_i − (1/m) Σ_{j=1..m} x_j.
Then the sample covariance matrix XX^T is computed and eigendecomposed; the eigenvectors ω_1, ω_2, …, ω_m′ corresponding to the m′ largest eigenvalues are taken, so the projection matrix is W = (ω_1, ω_2, …, ω_m′), and the low-dimensional feature after reducing the original m-channel convolutional feature C to m′ dimensions is C′ = WC.
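As a concrete illustration, the PCA reduction described above can be sketched in a few lines of NumPy. This is a minimal sketch, not the patent's implementation; the channel count, target dimensionality, and all variable names are illustrative assumptions.

```python
import numpy as np

def pca_projection(C, m_prime):
    """Project m-channel features C (shape m x n) down to m_prime channels.

    Follows the steps in the text: center the channel samples, form the
    covariance matrix, eigendecompose it, and keep the eigenvectors of the
    m_prime largest eigenvalues as the rows of the projection matrix W.
    """
    X = C - C.mean(axis=0, keepdims=True)    # center all samples
    cov = X @ X.T                            # covariance matrix X X^T
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    W = eigvecs[:, ::-1][:, :m_prime].T      # top-m' eigenvectors as rows of W
    return W, W @ C                          # projection matrix, C' = W C
```

Because the eigenvectors of a symmetric matrix are orthonormal, the resulting projection matrix satisfies W·Wᵀ = I, which is a quick sanity check on the decomposition.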
In the target tracking model based on the densely connected network, the correlation filtering computation is realized with a single convolutional layer.
Let the weight of the convolutional layer, i.e. the weight of the correlation filter, be W, the feature of the input sample be X, and the soft Gaussian-distribution label corresponding to the input sample be Y. The forward output of the convolutional layer is then W ∗ X, and the loss function of the convolutional layer is defined as
L(W) = ‖W ∗ X − Y‖².
Training samples are obtained by sampling around the initial target location, and the above loss function is minimized with back-propagation and gradient descent; the learning rule for the network weight W is
W ← W − η · ∂L(W)/∂W,
where η is the learning rate and ∂L(W)/∂W is the partial derivative of the loss function with respect to the weight W.
Because a convolutional layer is used to fit the correlation filter in this embodiment, the network weight W described here and the weight W of the correlation filter are one and the same.
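The fitting of the correlation-filtering convolutional layer can be sketched as plain gradient descent in NumPy. This is a minimal single-channel sketch under stated assumptions: a "valid" 2-D cross-correlation as the layer's forward pass, the quadratic loss L(W) = ‖W ∗ X − Y‖², and illustrative sizes and learning rate; the patent's actual network is multi-channel and trained by back-propagation.

```python
import numpy as np

def xcorr2(x, w):
    """'Valid' 2-D cross-correlation of image x with filter w (the layer's forward pass)."""
    H, Wd = x.shape
    h, wd = w.shape
    out = np.empty((H - h + 1, Wd - wd + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + wd] * w)
    return out

def fit_filter(x, y, eta=1e-3, iters=1000):
    """Learn filter weights W by minimizing L(W) = ||W * x - y||^2 with
    gradient descent, mirroring the update rule W <- W - eta * dL/dW."""
    oh, ow = y.shape
    w = np.zeros((x.shape[0] - oh + 1, x.shape[1] - ow + 1))
    for _ in range(iters):
        r = xcorr2(x, w) - y                  # residual response
        g = np.empty_like(w)
        for a in range(w.shape[0]):           # dL/dW[a,b] = 2 * sum(r * x-patch)
            for b in range(w.shape[1]):
                g[a, b] = 2.0 * np.sum(r * x[a:a + oh, b:b + ow])
        w -= eta * g                          # gradient-descent step
    return w
```

Fitting a filter to the response of a known filter and checking that the loss drops gives a quick sanity check of the gradient.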
In the target tracking model based on the densely connected network, the dense connection layers are implemented as follows:
As shown in Fig. 2, conv3, conv4, and conv5 in the figure denote the third, fourth, and fifth convolutional blocks of VGG-19; the dense connection layers realize the forward mapping of the shallow convolutional features; and a single convolutional layer realizes the fitted correlation filtering computation.
Specifically, the third, fourth, and fifth convolutional blocks of the pre-trained convolutional neural network VGG-19 are each connected to the output of the correlation-filtering convolutional layer via a mapping realized by three consecutive convolutional layers.
S3, inputting the convolutional features into the tracking model to predict the location of the target of interest.
For an input frame image X, the output response map H(X) of the tracking model is the sum of the correlation filtering response map fitted by the convolutional layer and the response maps of the dense connection layers.
The position with the maximum response in the response map H(X) is the predicted location of the target in frame t.
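The localization step reduces to summing the response maps and taking the argmax, which can be sketched as follows (a toy sketch; the helper name and map shapes are illustrative assumptions):

```python
import numpy as np

def predict_position(filter_response, dense_responses):
    """Sum the correlation-filtering response map and the dense-connection
    response maps into H(X), then return the (row, col) of its maximum."""
    H = filter_response + sum(dense_responses)
    return np.unravel_index(np.argmax(H), H.shape)
```

With one filter response and two dense-branch responses, the peak of the summed map, not of any single map, decides the predicted position.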
S4, performing scale sampling at the predicted target location to estimate the target size.
After the location of the target in frame t−1 has been obtained by the preceding steps, k candidate targets at different scales are chosen, centered on the predicted target location. The candidate targets of different scales are fed into the network for forward computation, and the scale s* of the candidate target with the maximum response is chosen as the optimal solution. The target scale of the current frame is then
(w_t, h_t) = (1 − β + β·s*)(w_{t−1}, h_{t−1}),
where the weight β is the scale update factor, and (w_t, h_t) and (w_{t−1}, h_{t−1}) are respectively the width and height of the target in frame t and frame t−1.
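The scale update above is simple arithmetic once the candidate responses are known; in this sketch the candidate scale set, the response values, and β are illustrative assumptions:

```python
def update_scale(size_prev, responses, scales, beta=0.1):
    """Pick the scale s* whose candidate gives the maximum response, then apply
    the smoothed update (w_t, h_t) = (1 - beta + beta * s*) * (w_{t-1}, h_{t-1})."""
    s_star = scales[max(range(len(scales)), key=lambda i: responses[i])]
    factor = 1.0 - beta + beta * s_star
    w_prev, h_prev = size_prev
    return (factor * w_prev, factor * h_prev)
```

Note that β damps the change: even when the best candidate scale is s* = 1.05, the size only grows by the factor 1 − β + 1.05β = 1.005 for β = 0.1.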
S5, updating the network weights of the target tracking model. Specifically:
S51, N training samples are chosen at the target location in frame t;
S52, the N training samples are input into the network, the L2 loss between the tracking model's output response maps and the Gaussian labels is computed as described in S2, and the network weights are then updated with back-propagation and gradient descent.
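The sample selection of S51 can be sketched as drawing N randomly translated patches around the target position, each paired with a Gaussian soft label. This is a hypothetical sketch: the translation range, patch size, and Gaussian width sigma are illustrative assumptions not specified at this level of detail in the patent.

```python
import numpy as np

def gaussian_label(shape, center, sigma=2.0):
    """Soft label: a 2-D Gaussian peaked at the target position."""
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    d2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def draw_samples(frame, pos, patch, n, rng):
    """Draw n training patches randomly translated around the target position,
    each paired with a Gaussian label centred on the shifted target."""
    ph, pw = patch
    samples = []
    for _ in range(n):
        dy, dx = rng.integers(-2, 3, size=2)   # small random translation
        top = np.clip(pos[0] + dy - ph // 2, 0, frame.shape[0] - ph)
        left = np.clip(pos[1] + dx - pw // 2, 0, frame.shape[1] - pw)
        x = frame[top:top + ph, left:left + pw]
        y = gaussian_label((ph, pw), (pos[0] - top, pos[1] - left))
        samples.append((x, y))
    return samples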
S6, outputting the predicted location and scale of the target.
S7, inputting the next frame of the video and returning to S2, until predictions have been completed for all frames of the video.
In conclusion the present invention has used pre-training model extraction multilayer convolution feature first, then it is fitted with convolutional layer Correlation filter generates response diagram, finally combines the state of dense articulamentum output prediction target.
The target tracking method based on a densely connected convolutional network of the present invention integrates feature extraction, model updating, correlation filtering computation, and scale prediction into a single neural network, realizing end-to-end learning of the tracking model, and uses the extracted features to train the model, which effectively reduces training time, improves the efficiency of use of the invention, and provides convenience for its practical use.
Meanwhile, the present invention replaces the circular sampling of conventional methods with convolutional computation, avoiding the boundary effect, and uses dense connections to realize residual learning, further improving the practicality of the invention.
In addition, the present invention also provides a reference for other related problems in the same field; extensions can be developed on this basis and applied to the technical solutions of other target tracking methods in the same field, so it has very broad application prospects.
It is obvious to those skilled in the art that the invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from its spirit or essential characteristics. Therefore, from whichever point of view, the embodiments are to be considered illustrative and not restrictive, and the scope of the present invention is defined by the appended claims rather than by the above description; it is therefore intended that all changes which fall within the meaning and scope of equivalents of the claims be embraced therein, and any reference signs in the claims shall not be construed as limiting the claims involved.
In addition, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution. This manner of description is adopted merely for clarity; those skilled in the art should take the specification as a whole, and the technical solutions in the various embodiments may also be suitably combined to form other embodiments understandable to those skilled in the art.

Claims (9)

1. A target tracking method based on a densely connected convolutional network, characterized by comprising the following steps:
S1, determining the size and location of a target of interest in the initial frame of a video, and inputting the initial frame into the tracking model;
S2, inputting a frame of the video, extracting the convolutional features of the input frame, and judging whether the input frame is the initial frame:
if the input frame is the initial frame, computing a PCA projection matrix, reducing the dimensionality of the convolutional features, training a target tracking model based on the densely connected network with the resulting convolutional features, and then proceeding to S7;
if the input frame is not the initial frame, reducing the dimensionality of the input frame's convolutional features with the trained PCA projection matrix, and then proceeding to S3;
S3, inputting the convolutional features into the tracking model to predict the location of the target of interest;
S4, performing scale sampling at the predicted target location to estimate the target size;
S5, updating the network weights of the target tracking model;
S6, outputting the predicted location and scale of the target;
S7, inputting the next frame of the video and returning to S2, until predictions have been completed for all frames of the video.
2. The target tracking method based on a densely connected convolutional network according to claim 1, characterized in that determining the size and location of the target of interest in the initial frame of the video in S1 specifically includes: in the initial frame of the video, giving the location and size of the target of interest manually or by a target detection algorithm, thereby determining the target-of-interest information.
3. The target tracking method based on a densely connected convolutional network according to claim 1, characterized in that extracting the convolutional features of the input frame in S2 specifically includes: inputting the frame into the pre-trained neural network model VGG-19 for forward propagation, pruning the fully connected layers and output layer at the end of the model, and after the computation extracting the pre-pooling features of the third, fourth, and fifth convolutional blocks of VGG-19.
4. The target tracking method based on a densely connected convolutional network according to claim 1, characterized in that computing the PCA projection matrix and reducing the dimensionality of the convolutional features in S2 specifically includes:
Let the original m-channel convolutional feature be C and the target dimensionality be m′. All samples are centered:
x_i ← x_i − (1/m) Σ_{j=1..m} x_j.
Then the sample covariance matrix XX^T is computed and eigendecomposed; the eigenvectors ω_1, ω_2, …, ω_m′ corresponding to the m′ largest eigenvalues are taken, so the projection matrix is W = (ω_1, ω_2, …, ω_m′), and the low-dimensional feature after reducing the original m-channel convolutional feature C to m′ dimensions is C′ = WC.
5. The target tracking method based on a densely connected convolutional network according to claim 1, characterized in that in the target tracking model based on the densely connected network in S2, the correlation filtering computation is realized with a single convolutional layer:
Let the weight of the convolutional layer, i.e. the weight of the correlation filter, be W, the feature of the input sample be X, and the soft Gaussian-distribution label corresponding to the input sample be Y. The forward output of the convolutional layer is then W ∗ X, and the loss function of the convolutional layer is defined as
L(W) = ‖W ∗ X − Y‖².
Training samples are obtained by sampling around the initial target location, and the above loss function is minimized with back-propagation and gradient descent; the learning rule for the network weight W is
W ← W − η · ∂L(W)/∂W,
where η is the learning rate and ∂L(W)/∂W is the partial derivative of the loss function with respect to the weight W.
6. The target tracking method based on a densely connected convolutional network according to claim 1, characterized in that in the target tracking model based on the densely connected network in S2, the dense connection layers are implemented as follows: the third, fourth, and fifth convolutional blocks of the pre-trained convolutional neural network VGG-19 are connected to the output of the correlation-filtering convolutional layer via respective mappings, indexed by i ∈ {1, 2, 3}, each realized by three consecutive convolutional layers.
7. The target tracking method based on a densely connected convolutional network according to claim 1, characterized in that inputting the convolutional features into the tracking model to predict the location of the target of interest in S3 specifically includes: for an input frame image X, the output response map H(X) of the tracking model is the sum of the correlation filtering response map fitted by the convolutional layer and the response maps of the dense connection layers;
the position with the maximum response in the response map H(X) is the predicted location of the target in frame t.
8. The target tracking method based on a densely connected convolutional network according to claim 1, characterized in that performing scale sampling at the predicted target location and estimating the target size in S4 specifically includes: after the location of the target in frame t−1 has been obtained by the preceding steps, choosing k candidate targets at different scales centered on the predicted target location, feeding the candidate targets of different scales into the network for forward computation, and choosing the scale s* of the candidate target with the maximum response as the optimal solution; the target scale of the current frame is then
(w_t, h_t) = (1 − β + β·s*)(w_{t−1}, h_{t−1}),
where the weight β is the scale update factor, and (w_t, h_t) and (w_{t−1}, h_{t−1}) are respectively the width and height of the target in frame t and frame t−1.
9. The target tracking method based on a densely connected convolutional network according to claim 1, characterized in that updating the network weights of the target tracking model in S5 specifically includes:
S51, choosing N training samples at the target location in frame t;
S52, inputting the N training samples into the network, computing the L2 loss between the tracking model's output response maps and the Gaussian labels as described in S2, and then updating the network weights with back-propagation and gradient descent.
CN201811374073.1A 2018-11-19 2018-11-19 Target tracking method based on dense connection convolution network Active CN109410251B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811374073.1A CN109410251B (en) 2018-11-19 2018-11-19 Target tracking method based on dense connection convolution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811374073.1A CN109410251B (en) 2018-11-19 2018-11-19 Target tracking method based on dense connection convolution network

Publications (2)

Publication Number Publication Date
CN109410251A true CN109410251A (en) 2019-03-01
CN109410251B CN109410251B (en) 2022-05-03

Family

ID=65473892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811374073.1A Active CN109410251B (en) 2018-11-19 2018-11-19 Target tracking method based on dense connection convolution network

Country Status (1)

Country Link
CN (1) CN109410251B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060274A * 2019-04-12 2019-07-26 北京影谱科技股份有限公司 Visual target tracking method and device based on a deep densely connected neural network
CN110188753A * 2019-05-21 2019-08-30 北京以萨技术股份有限公司 A target tracking algorithm based on densely connected convolutional neural networks
CN110955259A (en) * 2019-11-28 2020-04-03 上海歌尔泰克机器人有限公司 Unmanned aerial vehicle, tracking method thereof and computer-readable storage medium
CN111145216A (en) * 2019-12-26 2020-05-12 电子科技大学 Tracking method of video image target
CN111488907A (en) * 2020-03-05 2020-08-04 浙江工业大学 Robust image identification method based on dense PCANet
CN112634330A (en) * 2020-12-28 2021-04-09 南京邮电大学 Full convolution twin network target tracking algorithm based on RAFT optical flow

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956560A (en) * 2016-05-06 2016-09-21 电子科技大学 Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN107154024A * 2017-05-19 2017-09-12 南京理工大学 Scale-adaptive target tracking method based on deep-feature kernel correlation filter
CN107808122A (en) * 2017-09-30 2018-03-16 中国科学院长春光学精密机械与物理研究所 Method for tracking target and device

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060274A (en) * 2019-04-12 2019-07-26 北京影谱科技股份有限公司 Visual target tracking method and device based on densely connected deep neural network
CN110188753A (en) * 2019-05-21 2019-08-30 北京以萨技术股份有限公司 Target tracking algorithm based on densely connected convolutional neural network
CN110955259A (en) * 2019-11-28 2020-04-03 上海歌尔泰克机器人有限公司 Unmanned aerial vehicle, tracking method thereof and computer-readable storage medium
CN110955259B (en) * 2019-11-28 2023-08-29 上海歌尔泰克机器人有限公司 Unmanned aerial vehicle, tracking method thereof and computer readable storage medium
CN111145216A (en) * 2019-12-26 2020-05-12 电子科技大学 Tracking method of video image target
CN111145216B (en) * 2019-12-26 2023-08-18 电子科技大学 Tracking method of video image target
CN111488907A (en) * 2020-03-05 2020-08-04 浙江工业大学 Robust image recognition method based on dense PCANet
CN111488907B (en) * 2020-03-05 2023-07-14 浙江工业大学 Robust image recognition method based on dense PCANet
CN112634330A (en) * 2020-12-28 2021-04-09 南京邮电大学 Fully convolutional Siamese network target tracking algorithm based on RAFT optical flow
CN112634330B (en) * 2020-12-28 2022-09-13 南京邮电大学 Fully convolutional Siamese network target tracking algorithm based on RAFT optical flow

Also Published As

Publication number Publication date
CN109410251B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
CN109410251A (en) Target tracking method based on densely connected convolutional network
CN108665481B (en) Adaptive anti-occlusion infrared target tracking method based on multi-layer deep feature fusion
CN104615983B (en) Activity recognition method based on recurrent neural network and human skeleton motion sequences
CN105787439B (en) Human joint localization method for depth images based on convolutional neural network
CN107330357A (en) Visual SLAM loop closure detection method based on deep neural network
CN107480704A (en) Real-time visual target tracking method with occlusion-aware mechanism
CN109191491A (en) Target tracking method and system based on fully convolutional Siamese network with multi-layer feature fusion
CN103400386B (en) Interactive image processing method for video
CN105512680A (en) Multi-view SAR image target recognition method based on deep neural network
CN105760849B (en) Video-based method and device for acquiring target object behavior data
Nam et al. Online graph-based tracking
CN107689052A (en) Visual target tracking method based on multi-model fusion and structured deep features
CN106447696B (en) Large-displacement target sparse tracking method based on bidirectional SIFT flow motion estimation
CN104599286B (en) Optical-flow-based feature tracking method and device
CN110378208B (en) Behavior recognition method based on deep residual network
CN106991691A (en) Distributed object tracking method for camera networks
CN106846378B (en) Cross-camera object matching and tracking method combining spatio-temporal topology estimation
CN108734095A (en) Motion detection method based on 3D convolutional neural networks
CN109741366A (en) Correlation filter target tracking method fusing multi-layer convolutional features
CN109062962A (en) Gated recurrent neural network point-of-interest recommendation method fusing weather information
CN105228033A (en) Video processing method and electronic device
CN110348383A (en) Road centerline and double-line extraction method based on convolutional neural network regression
CN109146925A (en) Salient object detection method for dynamic scenes
CN107615272A (en) System and method for predicting crowd attributes
CN111027586A (en) Target tracking method based on novel response map fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant