CN108665485A - Target tracking method based on the fusion of correlation filtering and a Siamese convolutional network

Target tracking method based on the fusion of correlation filtering and a Siamese convolutional network

Info

Publication number
CN108665485A
CN108665485A (application CN201810342324.1A)
Authority
CN
China
Prior art keywords
target
frame image
convolutional network
video sequence
present
Prior art date
Legal status
Granted
Application number
CN201810342324.1A
Other languages
Chinese (zh)
Other versions
CN108665485B (en)
Inventor
邹腊梅
李鹏
罗鸣
金留嘉
杨卫东
李晓光
熊紫华
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN201810342324.1A (granted as CN108665485B)
Publication of CN108665485A
Application granted
Publication of CN108665485B
Legal status: Expired - Fee Related
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/262 Analysis of motion using transform domain methods, e.g. Fourier domain methods
    • G06T 7/269 Analysis of motion using gradient-based methods
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20056 Discrete and fast Fourier transform, [DFT, FFT]

Abstract

The invention discloses a target tracking method based on the fusion of correlation filtering and a Siamese convolutional network, including: extracting the target feature map of the (t-1)-th frame image, whose target position is known, with a first convolutional network, and extracting the search feature map of the t-th frame image with a second convolutional network; applying a fast Fourier transform (FFT) to the target feature map of the (t-1)-th frame image to obtain the target region of the (t-1)-th frame image; applying correlation filtering to the search feature map of the t-th frame image to obtain the search region of the t-th frame image; computing the cross-correlation between the search region of the t-th frame image and the target region of the (t-1)-th frame image to obtain the target score map of the t-th frame image, from which the target position of the t-th frame image is obtained; and, proceeding in this way, obtaining the target position of every frame image in the video sequence, thereby realizing target tracking of the video sequence. The present invention can overcome the influence of illumination, occlusion, pose, and scale, and performs real-time target tracking.

Description

Target tracking method based on the fusion of correlation filtering and a Siamese convolutional network
Technical field
The invention belongs to the intersection of computer vision, deep convolutional networks, and pattern recognition, and more particularly relates to a target tracking method based on the fusion of correlation filtering and a Siamese convolutional network.
Background art
Target following has very important status in computer vision, however due to the complexity of natural scene, mesh It marks to the sensibility of illumination variation, tracks the requirement to real-time and robustness, and block, the factors such as posture and dimensional variation Presence so that tracking problem is still highly difficult.Traditional method for tracking target, can not the feature abundant to Objective extraction make Target strict differences and background are susceptible to tracking drift phenomenon, therefore can not track target for a long time;With deep learning It rising, existing general convolutional neural networks can effectively extract the abundant feature of target, but network parameter is excessive, if It to track online, it is virtually impossible to meet the requirement of real-time performance, Practical Project utility value is limited.
With improving hardware and the spread of high-performance computing devices such as GPUs, the real-time performance of tracking is no longer an insurmountable problem; an effective target appearance model is what matters most during tracking. The essence of target tracking is a process of similarity measurement. Thanks to its special structure, the Siamese convolutional network has a natural advantage in similarity measurement, and its convolutional structure can extract rich features for target tracking. Methods based purely on Siamese convolutional networks use offline training and online tracking; although they can meet real-time requirements on high-performance computing equipment, they lack dynamic online updating of the target template and have difficulty overcoming problems such as illumination, occlusion, pose, and scale.
Summary of the invention
In view of the above defects or improvement needs of the prior art, the present invention provides a target tracking method based on the fusion of correlation filtering and a Siamese convolutional network, thereby solving the technical problems that the prior art lacks dynamic online updating of the target template and has difficulty overcoming illumination, occlusion, pose, and scale changes.
To achieve the above object, the present invention provides a target tracking method based on the fusion of correlation filtering and a Siamese convolutional network. The Siamese convolutional network consists of two identical networks, a first convolutional network and a second convolutional network. The target tracking method includes:
(1) extracting the target feature map of the (t-1)-th frame image, whose target position is known, with the first convolutional network, and extracting the search feature map of the t-th frame image with the second convolutional network;
(2) applying a fast Fourier transform to the target feature map of the (t-1)-th frame image to obtain the target region of the (t-1)-th frame image; applying correlation filtering to the search feature map of the t-th frame image to obtain the search region of the t-th frame image; computing the cross-correlation between the search region of the t-th frame image and the target region of the (t-1)-th frame image to obtain the target score map of the t-th frame image; and obtaining the target position of the t-th frame image from the target score map;
wherein t >= 2; when t is 2, the 1st frame image of the video sequence is calibrated and steps (1)-(2) are executed to obtain the target position of the 2nd frame image; when t is 3, steps (1)-(2) are executed to obtain the target position of the 3rd frame image; and so on, the target position of every frame image in the video sequence is obtained, realizing target tracking of the video sequence.
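For illustration only (the following sketch is not part of the patent text), steps (1)-(2) without the frequency-domain processing reduce to feature extraction followed by a cross-correlation, which in PyTorch can be written with conv2d using the template features as the kernel. All function and variable names here are hypothetical:

    import torch
    import torch.nn.functional as F

    def track_step(branch, template_crop, search_crop):
        # The first and second convolutional networks are identical, so a
        # single shared branch can extract both feature maps.
        z = branch(template_crop)   # target feature map of frame t-1, (1, C, Hz, Wz)
        x = branch(search_crop)     # search feature map of frame t, (1, C, Hx, Wx)
        score = F.conv2d(x, z)      # cross-correlation -> target score map
        # The peak of the score map gives the predicted target position.
        w = score.shape[-1]
        idx = int(torch.argmax(score))
        return divmod(idx, w)       # (row, col) in score-map coordinates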
Further, the correlation filtering in step (2) includes:
smoothing the search feature map of the t-th frame image with a cosine window function or a sine window function, and then transforming the smoothed search feature map from the spatial domain to the frequency domain with a fast Fourier transform to obtain the search region of the t-th frame image.
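A minimal numpy sketch of this step, assuming the cosine-window variant and a single (H, W) channel of the search feature map; the Hann window used here is one common cosine window, and the function name is hypothetical:

    import numpy as np

    def to_search_region(channel):
        # Taper the borders with a 2-D cosine (Hann) window; this suppresses
        # the boundary noise induced by the FFT's implicit periodicity.
        h, w = channel.shape
        window = np.outer(np.hanning(h), np.hanning(w))
        # Transform the smoothed map from the spatial to the frequency domain.
        return np.fft.fft2(channel * window)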
Further, the first convolutional network and the second convolutional network each include five convolutional layers, with one down-sampling pooling layer after each of the first two convolutional layers.
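A sketch of one such branch in PyTorch is given below. The patent fixes only the layer count and the pooling placement; the channel counts, kernel sizes, and strides here are AlexNet-like assumptions for illustration:

    import torch.nn as nn

    class SiameseBranch(nn.Module):
        # Five convolutional layers, with a down-sampling pooling layer
        # after each of the first two convolutions.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 96, kernel_size=11, stride=2), nn.ReLU(),
                nn.MaxPool2d(kernel_size=3, stride=2),   # pool after conv 1
                nn.Conv2d(96, 256, kernel_size=5), nn.ReLU(),
                nn.MaxPool2d(kernel_size=3, stride=2),   # pool after conv 2
                nn.Conv2d(256, 384, kernel_size=3), nn.ReLU(),
                nn.Conv2d(384, 384, kernel_size=3), nn.ReLU(),
                nn.Conv2d(384, 256, kernel_size=3),      # conv 5
            )

        def forward(self, x):
            return self.features(x)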
Further, the Siamese convolutional network is a trained convolutional network, and the training method of the Siamese convolutional network is:
collecting sample video sequences, labeling the target position of every sample frame image in the sample video sequences, and training the convolutional network with the labeled sample video sequences, the network parameters being optimized during training with the objective of minimizing the logistic loss function, to obtain the trained convolutional network.
Further, the logistic loss function is:
L(y, v) = log(1 + exp(-yv))
where v is the confidence score of the target position in a sample image, y is the label of the target position in the sample image, and L(y, v) is the error value.
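A one-line numpy sketch of this loss is given below; np.logaddexp computes log(exp(0) + exp(-yv)) = log(1 + exp(-yv)) without overflowing for large |v|. During training this value would be averaged over the positions of the score map; in PyTorch the same objective is available as torch.nn.SoftMarginLoss:

    import numpy as np

    def logistic_loss(y, v):
        # L(y, v) = log(1 + exp(-y * v)), with label y in {-1, +1} and
        # confidence score v; logaddexp keeps the computation stable.
        return np.logaddexp(0.0, -y * v)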
Further, the target tracking method further includes:
when t is 2, calibrating the 1st frame image of the video sequence, executing steps (1)-(2) to obtain the cross-correlation between the search region of the 2nd frame image and the target region of the 1st frame image, and using this cross-correlation to update the network parameters of the Siamese convolutional network by back-propagation that minimizes the logistic loss function.
In general, compared with the prior art, the above technical solutions conceived by the present invention can achieve the following beneficial effects:
(1) The method of the present invention combines correlation filtering with a Siamese convolutional network: correlation filtering improves the real-time performance of tracking, and the Siamese convolutional network extracts rich features and measures similarity accurately. The method can thus effectively extract rich target features while correlation filtering realizes smooth online template updating, achieving efficient real-time target tracking.
(2) The logistic loss function used by the present invention accelerates network training and effectively prevents gradient vanishing or gradient dispersion during training, so target tracking can be accurate, robust, and real-time. The present invention performs smoothing with a cosine window function or a sine window function, eliminating the noise that the Fourier transform generates on the image. By using the fast Fourier transform, a convolution in the spatial domain becomes a point-wise product in the frequency domain, which greatly reduces the amount of computation.
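A small numerical check of the identity referred to above, under the convention that the matching operation is a circular cross-correlation; this is illustrative only, not the patent's implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((32, 32))   # stand-in for a template channel
    b = rng.standard_normal((32, 32))   # stand-in for a search channel

    # Frequency domain: one conjugated point-wise product, O(N^2 log N) overall.
    via_fft = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))

    # Spatial domain: the same circular cross-correlation computed directly.
    direct = np.empty_like(a)
    for i in range(32):
        for j in range(32):
            shifted = np.roll(np.roll(b, -i, axis=0), -j, axis=1)
            direct[i, j] = np.sum(a * shifted)

    assert np.allclose(via_fft, direct)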
Description of the drawings
Fig. 1 is a flowchart of the target tracking method provided by an embodiment of the present invention;
Fig. 2 is a detailed flowchart of the target tracking method provided by an embodiment of the present invention;
Fig. 3 is a flowchart of the correlation filtering provided by an embodiment of the present invention;
Fig. 4(a) is the first frame image of the first video sequence provided by an embodiment of the present invention;
Fig. 4(b) is the first frame image of the second video sequence provided by an embodiment of the present invention;
Fig. 4(c) is the first frame image of the third video sequence provided by an embodiment of the present invention;
Fig. 4(d) is the first frame image of the fourth video sequence provided by an embodiment of the present invention;
Fig. 4(e) is the first frame image of the fifth video sequence provided by an embodiment of the present invention;
Fig. 4(f) is the first frame image of the sixth video sequence provided by an embodiment of the present invention;
Fig. 5(a1) is the 50th frame of target tracking performed on the first video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(a2) is the 100th frame of target tracking performed on the first video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(a3) is the 150th frame of target tracking performed on the first video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(b1) is the 50th frame of target tracking performed on the second video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(b2) is the 100th frame of target tracking performed on the second video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(b3) is the 150th frame of target tracking performed on the second video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(c1) is the 50th frame of target tracking performed on the third video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(c2) is the 100th frame of target tracking performed on the third video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(c3) is the 150th frame of target tracking performed on the third video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(d1) is the 50th frame of target tracking performed on the fourth video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(d2) is the 100th frame of target tracking performed on the fourth video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(d3) is the 150th frame of target tracking performed on the fourth video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(e1) is the 50th frame of target tracking performed on the fifth video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(e2) is the 100th frame of target tracking performed on the fifth video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(e3) is the 150th frame of target tracking performed on the fifth video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(f1) is the 50th frame of target tracking performed on the sixth video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(f2) is the 100th frame of target tracking performed on the sixth video sequence with the method of the present invention, provided by an embodiment of the present invention;
Fig. 5(f3) is the 150th frame of target tracking performed on the sixth video sequence with the method of the present invention, provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the present invention and are not intended to limit it. In addition, the technical features involved in the embodiments of the present invention described below can be combined with each other as long as they do not conflict.
As shown in Fig. 1, a target tracking method based on the fusion of correlation filtering and a Siamese convolutional network is provided. The Siamese convolutional network consists of two identical networks, a first convolutional network and a second convolutional network. The target tracking method includes:
(1) extracting the target feature map of the (t-1)-th frame image, whose target position is known, with the first convolutional network, and extracting the search feature map of the t-th frame image with the second convolutional network;
(2) applying a fast Fourier transform to the target feature map of the (t-1)-th frame image to obtain the target region of the (t-1)-th frame image; applying correlation filtering to the search feature map of the t-th frame image to obtain the search region of the t-th frame image; computing the cross-correlation between the search region of the t-th frame image and the target region of the (t-1)-th frame image to obtain the target score map of the t-th frame image; and obtaining the target position of the t-th frame image from the target score map;
wherein t >= 2; when t is 2, the 1st frame image of the video sequence is calibrated and steps (1)-(2) are executed to obtain the target position of the 2nd frame image; when t is 3, steps (1)-(2) are executed to obtain the target position of the 3rd frame image; and so on, the target position of every frame image in the video sequence is obtained, realizing target tracking of the video sequence.
In detail, as shown in Fig. 2, a target tracking method based on the fusion of correlation filtering and a Siamese convolutional network is provided, wherein the Siamese convolutional network consists of two identical networks, a first convolutional network and a second convolutional network, and the target tracking method includes:
The video database of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is used as the sample video sequences; the target position of every sample frame image in the sample video sequences is labeled; and the convolutional network is trained with the labeled sample video sequences, the network parameters being optimized during training with the objective of minimizing the logistic loss function, to obtain the trained convolutional network.
(1) The target feature map of the (t-1)-th frame image, whose target position is known, is extracted with the first convolutional network, and the search feature map of the t-th frame image is extracted with the second convolutional network;
(2) a fast Fourier transform is applied to the target feature map of the (t-1)-th frame image to obtain the target region of the (t-1)-th frame image; correlation filtering is applied to the search feature map of the t-th frame image to obtain the search region of the t-th frame image; the cross-correlation between the search region of the t-th frame image and the target region of the (t-1)-th frame image is computed to obtain the target score map of the t-th frame image; and the target position of the t-th frame image is obtained from the target score map;
wherein t >= 2; when t is 2, the 1st frame image of the video sequence is calibrated and steps (1)-(2) are executed to obtain the target position of the 2nd frame image; when t is 3, steps (1)-(2) are executed to obtain the target position of the 3rd frame image; and so on, the target position of every frame image in the video sequence is obtained, realizing target tracking of the video sequence.
As shown in Fig. 3, the search feature map of the t-th frame image is smoothed with a cosine window function or a sine window function, and the smoothed search feature map is then transformed from the spatial domain to the frequency domain with a fast Fourier transform to obtain the search region of the t-th frame image.
When t is 2, the 1st frame image of the video sequence is calibrated, and steps (1)-(2) are executed to obtain the cross-correlation between the target region of the 1st frame image and the search region of the 2nd frame image. Using this cross-correlation, the network parameters of the Siamese convolutional network are updated by back-propagation that minimizes the logistic loss function, and the target score map (the score map in the figure) is obtained. From the score map, the target box of the 2nd frame image is predicted; a fast Fourier transform is then applied, a moving-average model update is performed with the calibrated 1st frame image, and the resulting target position of the 2nd frame image is used as the template for computing the correlation for the 3rd frame.
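A minimal sketch of the moving-average template update mentioned here, in the form generally used by correlation-filter trackers; the update rate eta is an assumption, since the patent does not specify a value:

    import numpy as np

    def update_template(template_fft, new_target_fft, eta=0.01):
        # Blend the running frequency-domain template with the newly
        # predicted target region, so the model adapts smoothly from
        # frame to frame instead of being overwritten outright.
        return (1.0 - eta) * template_fft + eta * new_target_fft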
Fig. 4(a) to Fig. 4(f) are the first frame images of the first to sixth video sequences provided by embodiments of the present invention; in each first frame, the calibrated position and size of the target are used as the input to the convolutional network.
Fig. 5(a1), Fig. 5(a2), and Fig. 5(a3) are the 50th, 100th, and 150th frames of target tracking performed on the first video sequence with the method of the present invention. It can be seen that the proposed target tracking method can effectively follow a target whose appearance deforms.
Fig. 5(b1), Fig. 5(b2), and Fig. 5(b3) are the 50th, 100th, and 150th frames of target tracking performed on the second video sequence with the method of the present invention. It can be seen that the proposed target tracking method is effective against motion blur of the target.
Fig. 5(c1), Fig. 5(c2), and Fig. 5(c3) are the 50th, 100th, and 150th frames of target tracking performed on the third video sequence with the method of the present invention. It can be seen that the proposed target tracking method is effective against interference from a similar background.
Fig. 5(d1), Fig. 5(d2), and Fig. 5(d3) are the 50th, 100th, and 150th frames of target tracking performed on the fourth video sequence with the method of the present invention. It can be seen that the proposed target tracking method can effectively follow a fast-moving target.
Fig. 5(e1), Fig. 5(e2), and Fig. 5(e3) are the 50th, 100th, and 150th frames of target tracking performed on the fifth video sequence with the method of the present invention. It can be seen that the proposed target tracking method is effective against target scale changes and illumination changes.
Fig. 5(f1), Fig. 5(f2), and Fig. 5(f3) are the 50th, 100th, and 150th frames of target tracking performed on the sixth video sequence with the method of the present invention. It can be seen that the proposed target tracking method is effective against occlusion of the target.
Those skilled in the art will readily understand that the above is merely a description of preferred embodiments of the present invention and is not intended to limit it; any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (6)

1. A target tracking method based on the fusion of correlation filtering and a Siamese convolutional network, characterized in that the Siamese convolutional network consists of two identical networks, a first convolutional network and a second convolutional network, and the target tracking method includes:
(1) extracting the target feature map of the (t-1)-th frame image, whose target position is known, with the first convolutional network, and extracting the search feature map of the t-th frame image with the second convolutional network;
(2) applying a fast Fourier transform to the target feature map of the (t-1)-th frame image to obtain the target region of the (t-1)-th frame image; applying correlation filtering to the search feature map of the t-th frame image to obtain the search region of the t-th frame image; computing the cross-correlation between the search region of the t-th frame image and the target region of the (t-1)-th frame image to obtain the target score map of the t-th frame image; and obtaining the target position of the t-th frame image from the target score map;
wherein t >= 2; when t is 2, the 1st frame image of the video sequence is calibrated and steps (1)-(2) are executed to obtain the target position of the 2nd frame image; when t is 3, steps (1)-(2) are executed to obtain the target position of the 3rd frame image; and so on, the target position of every frame image in the video sequence is obtained, realizing target tracking of the video sequence.
2. The target tracking method based on the fusion of correlation filtering and a Siamese convolutional network according to claim 1, characterized in that the correlation filtering in step (2) includes:
smoothing the search feature map of the t-th frame image with a cosine window function or a sine window function, and then transforming the smoothed search feature map from the spatial domain to the frequency domain with a fast Fourier transform to obtain the search region of the t-th frame image.
3. The target tracking method based on the fusion of correlation filtering and a Siamese convolutional network according to claim 1 or 2, characterized in that the first convolutional network and the second convolutional network each include five convolutional layers, with one down-sampling pooling layer after each of the first two convolutional layers.
4. The target tracking method based on the fusion of correlation filtering and a Siamese convolutional network according to claim 1 or 2, characterized in that the Siamese convolutional network is a trained convolutional network whose training method is:
collecting sample video sequences, labeling the target position of every sample frame image in the sample video sequences, and training the convolutional network with the labeled sample video sequences, the network parameters being optimized during training with the objective of minimizing a logistic loss function, to obtain the trained convolutional network.
5. The target tracking method based on the fusion of correlation filtering and a Siamese convolutional network according to claim 4, characterized in that the logistic loss function is:
L(y, v) = log(1 + exp(-yv))
where v is the confidence score of the target position in a sample image, y is the label of the target position in the sample image, and L(y, v) is the error value.
6. The target tracking method based on the fusion of correlation filtering and a Siamese convolutional network according to claim 4, characterized in that the target tracking method further includes:
when t is 2, calibrating the 1st frame image of the video sequence, executing steps (1)-(2) to obtain the cross-correlation between the search region of the 2nd frame image and the target region of the 1st frame image, and using this cross-correlation to update the network parameters of the Siamese convolutional network by back-propagation that minimizes the logistic loss function.
CN201810342324.1A 2018-04-16 2018-04-16 Target tracking method based on the fusion of correlation filtering and a Siamese convolutional network Expired - Fee Related CN108665485B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810342324.1A CN108665485B (en) 2018-04-16 2018-04-16 Target tracking method based on the fusion of correlation filtering and a Siamese convolutional network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810342324.1A CN108665485B (en) 2018-04-16 2018-04-16 Target tracking method based on the fusion of correlation filtering and a Siamese convolutional network

Publications (2)

Publication Number Publication Date
CN108665485A (en) 2018-10-16
CN108665485B (en) 2021-07-02

Family

ID=63783613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810342324.1A Expired - Fee Related CN108665485B (en) 2018-04-16 2018-04-16 Target tracking method based on the fusion of correlation filtering and a Siamese convolutional network

Country Status (1)

Country Link
CN (1) CN108665485B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543559A * 2018-10-31 2019-03-29 东南大学 Target tracking method and system based on a Siamese network and an action selection mechanism
CN109598684A * 2018-11-21 2019-04-09 华南理工大学 Correlation filtering tracking method combined with a Siamese network
CN109712171A * 2018-12-28 2019-05-03 上海极链网络科技有限公司 Target tracking system and target tracking method based on correlation filters
CN110210551A * 2019-05-28 2019-09-06 北京工业大学 Visual target tracking method based on adaptive subject sensitivity
CN110309835A * 2019-06-27 2019-10-08 中国人民解放军战略支援部队信息工程大学 Image local feature extraction method and device
CN110415271A * 2019-06-28 2019-11-05 武汉大学 Generative-adversarial Siamese network target tracking method based on appearance diversity
CN110473231A * 2019-08-20 2019-11-19 南京航空航天大学 Target tracking method using a fully convolutional Siamese network with an anticipatory learning update strategy
CN110807793A * 2019-09-29 2020-02-18 南京大学 Target tracking method based on a Siamese network
CN111260688A * 2020-01-13 2020-06-09 深圳大学 Siamese dual-path target tracking method
CN111340850A * 2020-03-20 2020-06-26 军事科学院系统工程研究院系统总体研究所 Ground target tracking method for unmanned aerial vehicles based on a Siamese network and a central logistic loss
CN111415373A * 2020-03-20 2020-07-14 北京以萨技术股份有限公司 Target tracking and segmentation method, system, and medium based on a Siamese convolutional network
CN112686957A * 2019-10-18 2021-04-20 北京华航无线电测量研究所 Fast calibration method for image sequences
CN113592899A * 2021-05-28 2021-11-02 北京理工大学重庆创新中心 Method for extracting deep features for correlation-filter target tracking

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105184778A (en) * 2015-08-25 2015-12-23 广州视源电子科技股份有限公司 Detection method and apparatus
CN106650630A (en) * 2016-11-11 2017-05-10 纳恩博(北京)科技有限公司 Target tracking method and electronic equipment
US20170132472A1 (en) * 2015-11-05 2017-05-11 Qualcomm Incorporated Generic mapping for tracking target object in video sequence
CN107452025A * 2017-08-18 2017-12-08 成都通甲优博科技有限责任公司 Target tracking method, device, and electronic equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105184778A (en) * 2015-08-25 2015-12-23 广州视源电子科技股份有限公司 Detection method and apparatus
US20170132472A1 (en) * 2015-11-05 2017-05-11 Qualcomm Incorporated Generic mapping for tracking target object in video sequence
CN106650630A (en) * 2016-11-11 2017-05-10 纳恩博(北京)科技有限公司 Target tracking method and electronic equipment
CN107452025A * 2017-08-18 2017-12-08 成都通甲优博科技有限责任公司 Target tracking method, device, and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JANGHOON CHOI et al.: "Deep Meta Learning for Real-Time Visual Tracking based on Target-Specific Feature Space", arXiv *
QIANG WANG et al.: "DCFNet: discriminant correlation filters network for visual tracking", arXiv *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543559A * 2018-10-31 2019-03-29 东南大学 Target tracking method and system based on a Siamese network and an action selection mechanism
CN109543559B * 2018-10-31 2021-12-28 东南大学 Target tracking method and system based on a Siamese network and an action selection mechanism
CN109598684A * 2018-11-21 2019-04-09 华南理工大学 Correlation filtering tracking method combined with a Siamese network
CN109712171A * 2018-12-28 2019-05-03 上海极链网络科技有限公司 Target tracking system and target tracking method based on correlation filters
CN109712171B * 2018-12-28 2023-09-01 厦门瑞利特信息科技有限公司 Target tracking system and target tracking method based on correlation filters
CN110210551B * 2019-05-28 2021-07-30 北京工业大学 Visual target tracking method based on adaptive subject sensitivity
CN110210551A * 2019-05-28 2019-09-06 北京工业大学 Visual target tracking method based on adaptive subject sensitivity
CN110309835A * 2019-06-27 2019-10-08 中国人民解放军战略支援部队信息工程大学 Image local feature extraction method and device
CN110309835B * 2019-06-27 2021-10-15 中国人民解放军战略支援部队信息工程大学 Image local feature extraction method and device
CN110415271B * 2019-06-28 2022-06-07 武汉大学 Generative-adversarial Siamese network target tracking method based on appearance diversity
CN110415271A * 2019-06-28 2019-11-05 武汉大学 Generative-adversarial Siamese network target tracking method based on appearance diversity
CN110473231A * 2019-08-20 2019-11-19 南京航空航天大学 Target tracking method using a fully convolutional Siamese network with an anticipatory learning update strategy
CN110807793A * 2019-09-29 2020-02-18 南京大学 Target tracking method based on a Siamese network
CN110807793B * 2019-09-29 2022-04-22 南京大学 Target tracking method based on a Siamese network
CN112686957A * 2019-10-18 2021-04-20 北京华航无线电测量研究所 Fast calibration method for image sequences
CN111260688A * 2020-01-13 2020-06-09 深圳大学 Siamese dual-path target tracking method
CN111340850A * 2020-03-20 2020-06-26 军事科学院系统工程研究院系统总体研究所 Ground target tracking method for unmanned aerial vehicles based on a Siamese network and a central logistic loss
CN111415373A * 2020-03-20 2020-07-14 北京以萨技术股份有限公司 Target tracking and segmentation method, system, and medium based on a Siamese convolutional network
CN113592899A * 2021-05-28 2021-11-02 北京理工大学重庆创新中心 Method for extracting deep features for correlation-filter target tracking

Also Published As

Publication number Publication date
CN108665485B (en) 2021-07-02

Similar Documents

Publication Publication Date Title
CN108665485A (en) A kind of method for tracking target merged with twin convolutional network based on correlation filtering
CN108319972B (en) End-to-end difference network learning method for image semantic segmentation
CN104573731B (en) Fast target detection method based on convolutional neural networks
CN112184752A (en) Video target tracking method based on pyramid convolution
CN106355602B (en) A kind of Multi-target position tracking video frequency monitoring method
CN109191491A (en) The method for tracking target and system of the twin network of full convolution based on multilayer feature fusion
CN110276264B (en) Crowd density estimation method based on foreground segmentation graph
CN110135500A (en) Method for tracking target under a kind of more scenes based on adaptive depth characteristic filter
CN107423760A (en) Based on pre-segmentation and the deep learning object detection method returned
CN109949340A (en) Target scale adaptive tracking method based on OpenCV
CN108090918A (en) A kind of Real-time Human Face Tracking based on the twin network of the full convolution of depth
CN110210551A (en) A kind of visual target tracking method based on adaptive main body sensitivity
CN107103326A (en) The collaboration conspicuousness detection method clustered based on super-pixel
CN108665481A (en) Multilayer depth characteristic fusion it is adaptive resist block infrared object tracking method
CN103886619B (en) A kind of method for tracking target merging multiple dimensioned super-pixel
CN108346159A (en) A kind of visual target tracking method based on tracking-study-detection
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN106530340B (en) A kind of specified object tracking
CN110473231B (en) Target tracking method of twin full convolution network with prejudging type learning updating strategy
CN110097575B (en) Target tracking method based on local features and scale pool
CN109299701A (en) Expand the face age estimation method that more ethnic group features cooperate with selection based on GAN
CN106599994A (en) Sight line estimation method based on depth regression network
CN108288282A (en) A kind of adaptive features select method for tracking target based on convolutional neural networks
CN104392223A (en) Method for recognizing human postures in two-dimensional video images
CN110827262B (en) Weak and small target detection method based on continuous limited frame infrared image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210702