CN108665485B - Target tracking method based on correlation filtering and twin convolutional network fusion

Target tracking method based on correlation filtering and twin convolutional network fusion

Info

Publication number
CN108665485B
Authority
CN
China
Prior art keywords
frame image
target
video sequence
image
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810342324.1A
Other languages
Chinese (zh)
Other versions
CN108665485A (en)
Inventor
邹腊梅
李鹏
罗鸣
金留嘉
杨卫东
李晓光
熊紫华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN201810342324.1A
Publication of CN108665485A
Application granted
Publication of CN108665485B
Current legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • G06T7/262 - Image analysis; analysis of motion using transform domain methods, e.g. Fourier domain methods
    • G06T7/269 - Image analysis; analysis of motion using gradient-based methods
    • G06N3/045 - Neural network architectures; combinations of networks
    • G06T2207/10016 - Image acquisition modality: video; image sequence
    • G06T2207/20024 - Special algorithmic details: filtering details
    • G06T2207/20056 - Transform domain processing: discrete and fast Fourier transform [DFT, FFT]

Abstract

The invention discloses a target tracking method based on the fusion of correlation filtering and a twin convolutional network, comprising the following steps: extracting a target feature map of the (t-1)-th frame image, whose target position is known, with a first convolutional network, and extracting a search feature map of the t-th frame image with a second convolutional network; applying a fast Fourier transform to the target feature map of the (t-1)-th frame image to obtain the target area of the (t-1)-th frame image, applying correlation filtering to the search feature map of the t-th frame image to obtain the search area of the t-th frame image, computing the cross-correlation between the search area of the t-th frame image and the target area of the (t-1)-th frame image to obtain a target score map of the t-th frame image, and obtaining the target position in the t-th frame image from the target score map; the target position in every frame of the video sequence is obtained in this way, realizing target tracking of the video sequence. The invention can overcome the influence of illumination, occlusion, pose, and scale changes and track the target in real time.

Description

Target tracking method based on correlation filtering and twin convolutional network fusion
Technical Field
The invention lies at the intersection of computer vision, deep convolutional networks, and pattern recognition, and particularly relates to a target tracking method based on the fusion of correlation filtering and a twin convolutional network.
Background
Target tracking occupies a very important position in computer vision. However, because of the complexity of natural scenes, the sensitivity of targets to illumination changes, the real-time and robustness requirements placed on tracking, and factors such as occlusion, pose, and scale change, the tracking problem remains difficult. Traditional target tracking methods cannot extract rich features from a target and thus cannot strictly distinguish the target from the background; tracking drift occurs easily, and the target cannot be tracked over long periods. With the rise of deep learning, existing general-purpose convolutional neural networks can effectively extract rich target features, but they have too many parameters: when online tracking is required, they can hardly meet real-time requirements, which limits their practical engineering value.
Thanks to improvements in hardware performance and the popularization of high-performance computing devices such as GPUs (graphics processing units), real-time tracking is no longer an insurmountable problem, and an effective target appearance model becomes of paramount importance in the tracking process. The essence of target tracking is similarity measurement. Owing to its special structure, a twin convolutional network has a natural advantage in similarity measurement, and its convolutional structure can extract rich features for target tracking. A purely twin-network tracker is trained offline and tracks online; although it can meet real-time requirements on high-performance computing equipment, without dynamic updating of the online target template it is difficult for it to overcome problems such as illumination, occlusion, pose, and scale change.
Disclosure of Invention
Aiming at the above defects or improvement requirements of the prior art, the invention provides a target tracking method based on the fusion of correlation filtering and a twin convolutional network, thereby solving the technical problem that the prior art lacks dynamic updating of the online target template and therefore struggles to overcome illumination, occlusion, pose, and scale changes.
In order to achieve the above object, the invention provides a target tracking method based on the fusion of correlation filtering and a twin convolutional network, where the twin convolutional network consists of two identical networks, a first convolutional network and a second convolutional network, and the target tracking method includes:
(1) extracting a target feature map of the (t-1)-th frame image, whose target position is known, with the first convolutional network, and extracting a search feature map of the t-th frame image with the second convolutional network;
(2) applying a fast Fourier transform to the target feature map of the (t-1)-th frame image to obtain the target area of the (t-1)-th frame image, applying correlation filtering to the search feature map of the t-th frame image to obtain the search area of the t-th frame image, computing the cross-correlation between the search area of the t-th frame image and the target area of the (t-1)-th frame image to obtain a target score map of the t-th frame image, and obtaining the target position in the t-th frame image from the target score map;
When t = 2, the 1st frame image in the video sequence is calibrated and steps (1)-(2) are executed to obtain the target position in the 2nd frame image; when t = 3, steps (1)-(2) are executed to obtain the target position in the 3rd frame image; and so on, until the target position in every frame of the video sequence has been obtained, realizing target tracking of the video sequence.
Further, the correlation filtering in step (2) includes:
smoothing the search feature map of the t-th frame image with a cosine window function or a sine window function, and then transforming the smoothed search feature map of the t-th frame image from the spatial domain to the frequency domain with a fast Fourier transform to obtain the search area of the t-th frame image.
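As a concrete illustration of this windowing-and-transform step, the following NumPy sketch applies a two-dimensional cosine (Hann) window to a single-channel search feature map and moves it to the frequency domain; the single-channel shape and the choice of a Hann window as the cosine window function are assumptions for illustration, not requirements of the patent.

```python
import numpy as np

def to_frequency_domain(search_feat: np.ndarray) -> np.ndarray:
    """Smooth a 2-D search feature map with a cosine (Hann) window,
    then move it to the frequency domain with a 2-D FFT."""
    h, w = search_feat.shape
    # A separable 2-D cosine window suppresses the boundary
    # discontinuities that would otherwise leak spectral noise
    # into the FFT of the periodically extended feature map.
    window = np.outer(np.hanning(h), np.hanning(w))
    return np.fft.fft2(search_feat * window)
```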
Further, the first convolutional network and the second convolutional network each comprise five convolutional layers, and each of the first two of the five convolutional layers is followed by a downsampling pooling layer.
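The patent fixes only this layer count and pooling placement; in the PyTorch sketch below, the channel widths, kernel sizes, and strides follow the AlexNet-style backbones common in twin-network trackers and are assumptions for illustration.

```python
import torch.nn as nn

class BranchNet(nn.Module):
    """One branch of the twin network: five conv layers, with a
    downsampling pooling layer after each of the first two convs."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),   # pool after conv 1
            nn.Conv2d(96, 256, kernel_size=5), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),   # pool after conv 2
            nn.Conv2d(256, 384, kernel_size=3), nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3),      # conv 5, no pooling
        )

    def forward(self, x):
        return self.features(x)
```

At run time both branches would share this one module and its weights, which is what makes the first and second convolutional networks "identical".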
Further, the twin convolutional network is a trained convolutional network, and the training method of the twin convolutional network is as follows:
collecting a sample video sequence, marking the target position in each frame of sample image in the sample video sequence, training the convolutional network with the marked sample video sequence, and optimizing the network parameters during training with the objective of minimizing a logarithmic loss function to obtain the trained convolutional network.
Further, the logarithmic loss function is:
l(y, v) = log(1 + exp(-yv))
where v is the confidence score of the target position in a sample image, y is the label of the target position in the sample image, and l(y, v) is the error value.
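A minimal sketch of this loss in code, assuming (as is standard for this loss form, though not stated in the patent) labels y in {-1, +1} and averaging over score-map positions:

```python
import torch

def logistic_loss(v: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """l(y, v) = log(1 + exp(-y * v)), averaged over score-map positions.
    v: predicted confidence scores; y: labels in {-1, +1}."""
    # softplus(-y*v) equals log(1 + exp(-y*v)) and is numerically
    # stable for large |v|.
    return torch.nn.functional.softplus(-y * v).mean()
```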
Further, the target tracking method further includes:
when t = 2, calibrating the 1st frame image in the video sequence, executing steps (1)-(2) to obtain the cross-correlation between the search area of the 2nd frame image and the target area of the 1st frame image, and using this cross-correlation to update the network parameters of the twin convolutional network by back-propagating the minimized logarithmic loss function.
In general, compared with the prior art, the above technical solutions conceived by the present invention achieve the following beneficial effects:
(1) The method combines correlation filtering with a twin convolutional network: correlation filtering improves the real-time performance of target tracking, while the twin convolutional network extracts rich target features and measures similarity accurately; correlation filtering further realizes smooth updating of the online template, achieving efficient real-time target tracking.
(2) The logarithmic loss function used by the invention speeds up network training and effectively prevents gradients from vanishing or diverging during training, so that the target can be tracked accurately, robustly, and in real time. The cosine or sine window function used for smoothing eliminates the boundary noise that the Fourier transform would otherwise introduce into the image. Using the fast Fourier transform turns convolution in the spatial domain into elementwise products in the frequency domain, greatly reducing the amount of computation.
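That last saving is the cross-correlation theorem: circular cross-correlation in the spatial domain equals an elementwise conjugate product in the frequency domain. A small NumPy check of the identity, assuming circular boundary conditions and illustrative 64x64 feature maps:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))   # stand-in for target-area features
b = rng.standard_normal((64, 64))   # stand-in for search-area features

# Frequency domain: two FFTs plus an elementwise product, O(N log N).
score_fft = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real

# Spatial domain: direct circular cross-correlation, O(N^2) per output.
score_direct = np.zeros_like(a)
for dy in range(64):
    for dx in range(64):
        score_direct[dy, dx] = np.sum(a * np.roll(np.roll(b, -dy, 0), -dx, 1))

assert np.allclose(score_fft, score_direct)  # identical score maps
```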
Drawings
FIG. 1 is a flow chart of a target tracking method provided by an embodiment of the invention;
FIG. 2 is a flow chart of a detailed target tracking method provided by an embodiment of the present invention;
FIG. 3 is a flow chart of correlation filtering provided by an embodiment of the present invention;
FIG. 4(a) is the first frame image of a first video sequence provided by an embodiment of the present invention;
FIG. 4(b) is the first frame image of a second video sequence provided by an embodiment of the present invention;
FIG. 4(c) is the first frame image of a third video sequence provided by an embodiment of the present invention;
FIG. 4(d) is the first frame image of a fourth video sequence provided by an embodiment of the present invention;
FIG. 4(e) is the first frame image of a fifth video sequence provided by an embodiment of the present invention;
FIG. 4(f) is the first frame image of a sixth video sequence provided by an embodiment of the present invention;
FIG. 5(a1) is the 50th frame image of target tracking on the first video sequence using the method of the present invention;
FIG. 5(a2) is the 100th frame image of target tracking on the first video sequence using the method of the present invention;
FIG. 5(a3) is the 150th frame image of target tracking on the first video sequence using the method of the present invention;
FIG. 5(b1) is the 50th frame image of target tracking on the second video sequence using the method of the present invention;
FIG. 5(b2) is the 100th frame image of target tracking on the second video sequence using the method of the present invention;
FIG. 5(b3) is the 150th frame image of target tracking on the second video sequence using the method of the present invention;
FIG. 5(c1) is the 50th frame image of target tracking on the third video sequence using the method of the present invention;
FIG. 5(c2) is the 100th frame image of target tracking on the third video sequence using the method of the present invention;
FIG. 5(c3) is the 150th frame image of target tracking on the third video sequence using the method of the present invention;
FIG. 5(d1) is the 50th frame image of target tracking on the fourth video sequence using the method of the present invention;
FIG. 5(d2) is the 100th frame image of target tracking on the fourth video sequence using the method of the present invention;
FIG. 5(d3) is the 150th frame image of target tracking on the fourth video sequence using the method of the present invention;
FIG. 5(e1) is the 50th frame image of target tracking on the fifth video sequence using the method of the present invention;
FIG. 5(e2) is the 100th frame image of target tracking on the fifth video sequence using the method of the present invention;
FIG. 5(e3) is the 150th frame image of target tracking on the fifth video sequence using the method of the present invention;
FIG. 5(f1) is the 50th frame image of target tracking on the sixth video sequence using the method of the present invention;
FIG. 5(f2) is the 100th frame image of target tracking on the sixth video sequence using the method of the present invention;
FIG. 5(f3) is the 150th frame image of target tracking on the sixth video sequence using the method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in FIG. 1, in a target tracking method based on the fusion of correlation filtering and a twin convolutional network, the twin convolutional network consists of two identical networks, a first convolutional network and a second convolutional network, and the target tracking method includes:
(1) extracting a target feature map of the (t-1)-th frame image, whose target position is known, with the first convolutional network, and extracting a search feature map of the t-th frame image with the second convolutional network;
(2) applying a fast Fourier transform to the target feature map of the (t-1)-th frame image to obtain the target area of the (t-1)-th frame image, applying correlation filtering to the search feature map of the t-th frame image to obtain the search area of the t-th frame image, computing the cross-correlation between the search area of the t-th frame image and the target area of the (t-1)-th frame image to obtain a target score map of the t-th frame image, and obtaining the target position in the t-th frame image from the target score map;
When t = 2, the 1st frame image in the video sequence is calibrated and steps (1)-(2) are executed to obtain the target position in the 2nd frame image; when t = 3, steps (1)-(2) are executed to obtain the target position in the 3rd frame image; and so on, until the target position in every frame of the video sequence has been obtained, realizing target tracking of the video sequence.
In detail, as shown in FIG. 2, the target tracking method based on the fusion of correlation filtering and a twin convolutional network, where the twin convolutional network consists of two identical networks, a first convolutional network and a second convolutional network, includes:
The video database of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is used as the sample video sequence; the target position in each frame of sample image in the sample video sequence is marked; the marked sample video sequence is used to train the convolutional network, and the network parameters are optimized during training with the objective of minimizing the logarithmic loss function to obtain the trained convolutional network.
(1) Extracting a target feature map of the (t-1)-th frame image, whose target position is known, with the first convolutional network, and extracting a search feature map of the t-th frame image with the second convolutional network;
(2) applying a fast Fourier transform to the target feature map of the (t-1)-th frame image to obtain the target area of the (t-1)-th frame image, applying correlation filtering to the search feature map of the t-th frame image to obtain the search area of the t-th frame image, computing the cross-correlation between the search area of the t-th frame image and the target area of the (t-1)-th frame image to obtain a target score map of the t-th frame image, and obtaining the target position in the t-th frame image from the target score map;
When t = 2, the 1st frame image in the video sequence is calibrated and steps (1)-(2) are executed to obtain the target position in the 2nd frame image; when t = 3, steps (1)-(2) are executed to obtain the target position in the 3rd frame image; and so on, until the target position in every frame of the video sequence has been obtained, realizing target tracking of the video sequence.
As shown in FIG. 3, the search feature map of the t-th frame image is smoothed with a cosine window function or a sine window function, and the smoothed search feature map of the t-th frame image is then transformed from the spatial domain to the frequency domain by a fast Fourier transform to obtain the search area of the t-th frame image.
When t = 2, the 1st frame image in the video sequence is calibrated, and steps (1)-(2) are executed to obtain the cross-correlation between the search area of the 2nd frame image and the target area of the 1st frame image. Using this cross-correlation, the network parameters of the twin convolutional network are updated by back-propagating the minimized logarithmic loss function, the target score map (the score correspondence map in the figure) is obtained, and the target frame of the 2nd frame image is predicted; a fast Fourier transform is then performed, the moving-average model is updated with the calibrated 1st frame image, and the target position obtained for the 2nd frame image serves as the template for computing the correlation in the 3rd frame.
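Putting one frame of this loop together, the NumPy sketch below scores the t-th frame in the frequency domain and refreshes the template with a moving average; the single-channel features, the Hann window, and the update rate `rho` are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def track_step(target_feat, search_feat, template_fft=None, rho=0.1):
    """One tracking step: FFT the target features, window + FFT the
    search features, cross-correlate in the frequency domain, and take
    the peak of the score map as the new target position."""
    h, w = search_feat.shape
    win = np.outer(np.hanning(h), np.hanning(w))

    z_fft = np.fft.fft2(target_feat, s=(h, w))      # target area (zero-padded)
    if template_fft is None:
        template_fft = z_fft                        # bootstrap from frame 1
    else:
        # Smooth online template update (moving/sliding average).
        template_fft = (1.0 - rho) * template_fft + rho * z_fft

    x_fft = np.fft.fft2(search_feat * win)          # search area
    score = np.fft.ifft2(np.conj(template_fft) * x_fft).real
    dy, dx = np.unravel_index(np.argmax(score), score.shape)
    return (dy, dx), template_fft                   # peak gives target position
```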
FIGS. 4(a) to 4(f) are the first frame images of the first to sixth video sequences provided by embodiments of the present invention; in each, the position and size of the target are calibrated as the input to the convolutional network.
FIGS. 5(a1) to 5(a3) are the 50th, 100th, and 150th frame images of target tracking on the first video sequence using the method of the present invention; it can be seen that the method effectively tracks a target whose appearance deforms.
FIGS. 5(b1) to 5(b3) are the 50th, 100th, and 150th frame images of target tracking on the second video sequence using the method of the present invention; it can be seen that the method effectively resists motion blur of the target.
FIGS. 5(c1) to 5(c3) are the 50th, 100th, and 150th frame images of target tracking on the third video sequence using the method of the present invention; it can be seen that the method effectively resists interference from similar backgrounds.
FIGS. 5(d1) to 5(d3) are the 50th, 100th, and 150th frame images of target tracking on the fourth video sequence using the method of the present invention; it can be seen that the method effectively tracks a rapidly moving target.
FIGS. 5(e1) to 5(e3) are the 50th, 100th, and 150th frame images of target tracking on the fifth video sequence using the method of the present invention; it can be seen that the method effectively resists target scale change and illumination change.
FIGS. 5(f1) to 5(f3) are the 50th, 100th, and 150th frame images of target tracking on the sixth video sequence using the method of the present invention; it can be seen that the method effectively resists occlusion of the target.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (5)

1. A target tracking method based on the fusion of correlation filtering and a twin convolutional network, characterized in that the twin convolutional network comprises a first convolutional network and a second convolutional network that are identical to each other, the target tracking method comprising the following steps:
(1) extracting a target feature map of the (t-1)-th frame image, whose target position is known, with the first convolutional network, and extracting a search feature map of the t-th frame image with the second convolutional network;
(2) applying a fast Fourier transform to the target feature map of the (t-1)-th frame image to obtain the target area of the (t-1)-th frame image, applying correlation filtering to the search feature map of the t-th frame image to obtain the search area of the t-th frame image, computing the cross-correlation between the search area of the t-th frame image and the target area of the (t-1)-th frame image to obtain a target score map of the t-th frame image, and obtaining the target position in the t-th frame image from the target score map;
when t = 2, calibrating the 1st frame image in the video sequence and executing steps (1)-(2) to obtain the target position in the 2nd frame image; when t = 3, executing steps (1)-(2) to obtain the target position in the 3rd frame image; and so on, until the target position in every frame of the video sequence has been obtained, realizing target tracking of the video sequence;
wherein the correlation filtering in step (2) includes:
smoothing the search feature map of the t-th frame image with a cosine window function or a sine window function, and then transforming the smoothed search feature map of the t-th frame image from the spatial domain to the frequency domain with a fast Fourier transform to obtain the search area of the t-th frame image.
2. The target tracking method based on the fusion of correlation filtering and a twin convolutional network as claimed in claim 1, wherein the first convolutional network and the second convolutional network each comprise five convolutional layers, and each of the first two of the five convolutional layers is followed by a downsampling pooling layer.
3. The target tracking method based on the fusion of correlation filtering and a twin convolutional network as claimed in claim 1 or 2, wherein the twin convolutional network is a trained convolutional network, and the training method of the twin convolutional network is as follows:
collecting a sample video sequence, marking the target position in each frame of sample image in the sample video sequence, training the convolutional network with the marked sample video sequence, and optimizing the network parameters during training with the objective of minimizing a logarithmic loss function to obtain the trained convolutional network.
4. The target tracking method based on the fusion of correlation filtering and a twin convolutional network as claimed in claim 3, wherein the logarithmic loss function is:
l(y, v) = log(1 + exp(-yv))
where v is the confidence score of the target position in a sample image, y is the label of the target position in the sample image, and l(y, v) is the error value.
5. The target tracking method based on the fusion of correlation filtering and a twin convolutional network as claimed in claim 3, wherein the target tracking method further comprises:
when t = 2, calibrating the 1st frame image in the video sequence, executing steps (1)-(2) to obtain the cross-correlation between the search area of the 2nd frame image and the target area of the 1st frame image, and using this cross-correlation to update the network parameters of the twin convolutional network by back-propagating the minimized logarithmic loss function.
CN201810342324.1A 2018-04-16 2018-04-16 Target tracking method based on correlation filtering and twin convolutional network fusion Expired - Fee Related CN108665485B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810342324.1A CN108665485B (en) 2018-04-16 2018-04-16 Target tracking method based on correlation filtering and twin convolutional network fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810342324.1A CN108665485B (en) 2018-04-16 2018-04-16 Target tracking method based on correlation filtering and twin convolutional network fusion

Publications (2)

Publication Number Publication Date
CN108665485A CN108665485A (en) 2018-10-16
CN108665485B true CN108665485B (en) 2021-07-02

Family

Family ID: 63783613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810342324.1A Expired - Fee Related CN108665485B (en) Target tracking method based on correlation filtering and twin convolutional network fusion

Country Status (1)

Country Link
CN (1) CN108665485B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543559B (en) * 2018-10-31 2021-12-28 东南大学 Target tracking method and system based on twin network and action selection mechanism
CN109598684B (en) * 2018-11-21 2023-02-14 华南理工大学 Correlation filtering tracking method combined with twin network
CN109712171B (en) * 2018-12-28 2023-09-01 厦门瑞利特信息科技有限公司 Target tracking system and target tracking method based on correlation filter
CN110210551B (en) * 2019-05-28 2021-07-30 北京工业大学 Visual target tracking method based on adaptive subject sensitivity
CN110309835B (en) * 2019-06-27 2021-10-15 中国人民解放军战略支援部队信息工程大学 Image local feature extraction method and device
CN110415271B (en) * 2019-06-28 2022-06-07 武汉大学 Appearance diversity-based method for tracking generation twin-resisting network target
CN110473231B (en) * 2019-08-20 2024-02-06 南京航空航天大学 Target tracking method of twin full convolution network with prejudging type learning updating strategy
CN110807793B (en) * 2019-09-29 2022-04-22 南京大学 Target tracking method based on twin network
CN112686957A (en) * 2019-10-18 2021-04-20 北京华航无线电测量研究所 Quick calibration method for sequence image
CN111260688A (en) * 2020-01-13 2020-06-09 深圳大学 Twin double-path target tracking method
CN111415373A (en) * 2020-03-20 2020-07-14 北京以萨技术股份有限公司 Target tracking and segmenting method, system and medium based on twin convolutional network
CN111340850A (en) * 2020-03-20 2020-06-26 军事科学院系统工程研究院系统总体研究所 Ground target tracking method of unmanned aerial vehicle based on twin network and central logic loss
CN113592899A (en) * 2021-05-28 2021-11-02 北京理工大学重庆创新中心 Method for extracting correlated filtering target tracking depth features


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10019631B2 (en) * 2015-11-05 2018-07-10 Qualcomm Incorporated Adapting to appearance variations when tracking a target object in video sequence

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105184778A (en) * 2015-08-25 2015-12-23 广州视源电子科技股份有限公司 Detection method and apparatus
CN106650630A (en) * 2016-11-11 2017-05-10 纳恩博(北京)科技有限公司 Target tracking method and electronic equipment
CN107452025A (en) * 2017-08-18 2017-12-08 成都通甲优博科技有限责任公司 Method for tracking target, device and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DCFNet: Discriminant Correlation Filters Network for Visual Tracking; Qiang Wang et al.; arXiv; 2017-04-13; full text *
Deep Meta Learning for Real-Time Visual Tracking based on Target-Specific Feature Space; Janghoon Choi et al.; arXiv; 2017-12-26; full text *

Also Published As

Publication number Publication date
CN108665485A (en) 2018-10-16

Similar Documents

Publication Publication Date Title
CN108665485B (en) Target tracking method based on correlation filtering and twin convolutional network fusion
CN109816689B (en) Moving target tracking method based on adaptive fusion of multilayer convolution characteristics
CN107767405B (en) Nuclear correlation filtering target tracking method fusing convolutional neural network
CN112184752A (en) Video target tracking method based on pyramid convolution
Gao et al. Dynamic hand gesture recognition based on 3D hand pose estimation for human–robot interaction
CN108154118A (en) A kind of target detection system and method based on adaptive combined filter with multistage detection
CN110427839A (en) Video object detection method based on multilayer feature fusion
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN108288282B (en) Adaptive feature selection target tracking method based on convolutional neural network
CN110827262B (en) Weak and small target detection method based on continuous limited frame infrared image
CN103729854A (en) Tensor-model-based infrared dim target detecting method
CN112232134A (en) Human body posture estimation method based on hourglass network and attention mechanism
Thalhammer et al. SyDPose: Object detection and pose estimation in cluttered real-world depth images trained using only synthetic data
CN105608457A (en) Histogram gray moment threshold segmentation method
CN110991278A (en) Human body action recognition method and device in video of computer vision system
CN111027586A (en) Target tracking method based on novel response map fusion
CN110084834B (en) Target tracking method based on rapid tensor singular value decomposition feature dimension reduction
CN113379788B (en) Target tracking stability method based on triplet network
CN111681263B (en) Multi-scale antagonistic target tracking algorithm based on three-value quantization
CN111539985A (en) Self-adaptive moving target tracking method fusing multiple features
CN106570536A (en) High-precision tracking and filtering method for time-difference positioning system target
CN109360223A (en) A kind of method for tracking target of quick spatial regularization
CN105701840A (en) System for real-time tracking of multiple objects in video and implementation method
CN112507940B (en) Bone action recognition method based on differential guidance representation learning network
CN114022510A (en) Target long-time tracking method based on content retrieval

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 2021-07-02