CN110580713A - Satellite video target tracking method based on full convolution twin network and track prediction - Google Patents

Satellite video target tracking method based on full convolution twin network and track prediction

Info

Publication number
CN110580713A
CN110580713A (application CN201910813189.9A)
Authority
CN
China
Prior art keywords
target
full convolution
tracking
prediction
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910813189.9A
Other languages
Chinese (zh)
Inventor
杜博
邵佳
武辰
张乐飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201910813189.9A priority Critical patent/CN110580713A/en
Publication of CN110580713A publication Critical patent/CN110580713A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a satellite video target tracking method based on a full convolution twin network and trajectory prediction, which supports tracking small targets in a satellite video data set, and is characterized in that: shallow-layer information of the full convolution twin network is adopted to obtain a fine-grained representation of the small target, and similarity measurement and tracking are carried out, wherein the full convolution twin network comprises two weight-sharing full convolution modules and a cross-correlation layer, the outputs of the two full convolution modules being connected to the cross-correlation layer; to cope with occlusion and blurring of the small target during tracking, a Kalman filtering mechanism is introduced to adaptively predict the motion trajectory, finally realizing robust and accurate tracking of the target in the satellite video data set.

Description

Satellite video target tracking method based on full convolution twin network and track prediction
Technical Field
The invention belongs to the technical field of satellite video target tracking, and particularly relates to a satellite video target tracking method based on a full convolution twin network and trajectory prediction.
Background
Target tracking is an important branch of computer vision and is widely applied in fields such as video surveillance, intelligent transportation, human-computer interaction, military applications, and robot vision navigation. It aims to track and locate a target through some similarity measurement and matching-search method. To date, target tracking for traditional video sequences is relatively mature, and researchers have designed and developed various target tracking methods for different application scenarios and requirements. In the field of satellite video target tracking, however, owing to the particular characteristics of satellite video, a technical scheme better suited to its requirements is urgently needed.
With the development of space imaging technology in recent years, the latest satellite remote sensing technology can obtain high-resolution Earth observation video. In 2013, the International Space Station released Earth observation videos with a spatial resolution of up to 1 meter. In 2015, the Jilin-1 satellite launched by China could also provide ground observation video with a spatial resolution of 0.7 m. Satellite video is becoming an important big-data resource for space, widely applicable to military and civil fields such as resource surveying, disaster monitoring, ocean monitoring, continuous tracking of dynamic targets, and dynamic event observation.
Compared with traditional video tracking, the greatest characteristic of satellite video target tracking is that "staring" observation can be performed on a given area: in "video recording" mode, more dynamic information can be obtained than from a traditional satellite, which is especially suitable for observing dynamic targets. At the same time, new challenges arise:
(1) A single satellite video frame is large, imposing high real-time requirements on the tracking method.
(2) The tracked target is small, the spatial resolution is low, and characterization features are few.
(3) The target may be partially or fully occluded.
(4) Motion blur can make the target very similar to the background.
At present, a method that can robustly track small targets with few features while meeting the real-time requirements of satellite video tracking is urgently needed.
Disclosure of Invention
In order to overcome the shortcomings of the prior art, the invention provides a real-time and accurate satellite video target tracking method.
The technical scheme adopted by the invention provides a satellite video target tracking method based on a full convolution twin network and trajectory prediction, which supports tracking small targets in a satellite video data set, and is characterized in that: shallow-layer information of the full convolution twin network is adopted to obtain a fine-grained representation of the small target, and similarity measurement and tracking are carried out, wherein the full convolution twin network comprises two weight-sharing full convolution modules and a cross-correlation layer, the outputs of the two full convolution modules being connected to the cross-correlation layer; to cope with occlusion and blurring of the small target during tracking, a Kalman filtering mechanism is introduced to adaptively predict the motion trajectory, finally realizing robust and accurate tracking of the target in the satellite video data set.
Moreover, the implementation process includes the following steps,
step 1, recording the image block of the first frame centered on the target as template Z, inputting it into one full convolution module, and taking the shallow features as the fine-grained representation of template Z;
step 2, recording the image block of the search area of the current frame as search area X, inputting it into the other, weight-sharing full convolution module, and taking the shallow features as the fine-grained representation of search area X;
step 3, in the cross-correlation layer, sliding the fine-grained representation of template Z over the fine-grained representation of search area X to perform matching and obtain a response map, and taking the original-image position corresponding to the maximum of the response map as the tracked target position p;
step 4, inputting the search area X of the current frame into a Gaussian mixture model to obtain a detection image of the moving target, used as the moving foreground mask;
step 5, processing the foreground mask obtained in step 4 through blob analysis, and taking the area and centroid of the largest blob as the target area A and the detected position p*;
step 6, inputting the tracked position p obtained in step 3 and the detected position p* obtained in step 5 into a Kalman filtering mechanism, which adaptively predicts the target trajectory when the target is occluded or blurred, obtaining a more stable and accurate final tracking position P.
In step 6, the implementation of adaptive trajectory prediction for the target includes the following four cases:
a) In the initial N frames, the position p obtained by full convolution twin network tracking is directly taken as the final target position P, where N is a preset initial frame number;
b) When the detected area is greater than or equal to a preset threshold K, the target's appearance features are salient; the Kalman filtering mechanism is not activated, and the position p obtained by full convolution twin network tracking is directly taken as the final target position P;
c) When the detected area is smaller than the preset threshold K but larger than 0, the target is judged to be partially occluded or blurred, and the prediction and correction mechanisms of Kalman filtering are activated;
d) When the detected area equals 0, the target is judged to be completely occluded or lost, and only the prediction of Kalman filtering is activated.
Moreover, in case c), the prediction and correction mechanism of Kalman filtering is implemented as follows.
The prediction formula is established:
x_k = A_k x_{k-1} + B_k u_k + w_k
The correction formula is established:
z_k = H_k x_k + v_k
where x_k is the k-th frame state vector, representing the predicted position of the target in frame k; x_{k-1} is the final position of the target in frame k-1, representing the final target position in frame k-1; A_k is the state transition matrix, u_k is an external control vector, and B_k is an external control matrix; z_k is the k-th frame measurement vector, representing the position correction value; H_k is the observation matrix, determined by the detected position; the random variables w_k and v_k represent state noise and measurement noise, respectively.
The final position of the target in frame k-1 is input into the prediction formula, the detected position in frame k is used to update the observation matrix H_k, and x_k is substituted into the correction formula to obtain the correction value z_k, which gives the final target position P of frame k.
Moreover, in case d), the final position P_{k-1} of the target in frame k-1 is input into the prediction formula to obtain the predicted position x_k of frame k, which gives the final target position P of frame k.
Moreover, the small targets in the satellite video data set comprise trains, cars, and airplanes.
The method adopts a shallow full convolution twin network to obtain fine-grained appearance features of the small target, and performs similarity measurement and tracking; to address challenges such as occlusion and motion blur of the small target during tracking, a Kalman filtering mechanism is introduced to adaptively predict the small target's motion trajectory, finally realizing robust and accurate tracking of small targets in satellite video data sets. Compared with the top 12 methods in the current tracking field, the method scores highest on Precision plots and Success plots, and its average frame rate reaches 54.83 FPS.
Drawings
FIG. 1 is an overall flow chart of an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is explained below with reference to the drawings and an embodiment.
Different from the prior art, the method supports tracking small targets in satellite video data sets: shallow information of a full convolution twin network is adopted to obtain fine-grained appearance features, and similarity measurement and tracking are performed; to address challenges such as occlusion and motion blur of the small target during tracking, a Kalman filtering mechanism is introduced to adaptively predict the small target's motion trajectory, finally realizing robust and accurate tracking of small targets in the satellite video data set. The full convolution twin network comprises two weight-sharing full convolution modules and a cross-correlation layer, the outputs of the two full convolution modules being connected to the cross-correlation layer.
Referring to FIG. 1, the embodiment provides a satellite video target tracking method based on a full convolution twin network and trajectory prediction, comprising the following steps:
Step 1, record the image block of the first frame centered on the target as template Z and input it into a full convolution module. The full convolution module of the embodiment uses the first five layers of the fully convolutional network [1], i.e., AlexNet with the fully connected layers removed. The shallow features (i.e., the features of the first layer) are taken as the fine-grained representation of template Z.
Step 2, record the image block of the search area of the current frame as search area X and input it into the other full convolution module. The shallow features (i.e., the features of the first layer) are taken as the fine-grained representation of search area X.
Step 3, in the cross-correlation layer, the fine-grained representation of template Z is used as a kernel and slid over (i.e., convolved with) the fine-grained representation of search area X to obtain a Response Map [1]; the original-image position corresponding to the maximum of the response map is taken as the tracked position p.
In specific implementation, steps 1 to 3 can be implemented with reference to the prior art: [1] Bertinetto L, Valmadre J, Henriques J F, et al. Fully-Convolutional Siamese Networks for Object Tracking [J]. 2016.
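The sliding matching of step 3 can be sketched in plain Python as follows. This is an illustrative reduction, not the patent's implementation: `cross_correlate` and `argmax_2d` are hypothetical helper names, and a real tracker correlates multi-channel CNN feature maps rather than these toy 2-D arrays.

```python
def cross_correlate(template, search):
    """Slide the template over the search region and return the response map.

    template, search: 2-D lists of floats (feature maps); the template must
    be no larger than the search region in both dimensions.
    """
    th, tw = len(template), len(template[0])
    sh, sw = len(search), len(search[0])
    response = []
    for i in range(sh - th + 1):
        row = []
        for j in range(sw - tw + 1):
            # inner product of the template with the overlapped window
            score = sum(template[u][v] * search[i + u][j + v]
                        for u in range(th) for v in range(tw))
            row.append(score)
        response.append(row)
    return response

def argmax_2d(response):
    """Return the (row, col) of the maximum response, i.e. the tracked position p."""
    best = max((val, i, j)
               for i, r in enumerate(response) for j, val in enumerate(r))
    return best[1], best[2]
```

On a 2x2 all-ones template and a 4x4 search region with a bright central patch, the maximum of the 3x3 response map lands on the patch, mirroring how the tracked position p is read off the response map.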
Step 4, input the search area X of the current frame into a Gaussian Mixture Model (GMM) to obtain a detection image of the moving target, i.e., a binary moving foreground mask (Mask Map).
The Gaussian mixture model is prior art; in specific implementation, the moving foreground mask extraction can be realized with the Gaussian mixture model in MATLAB.
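As a rough illustration of the background-modeling idea behind step 4, the sketch below keeps a single running Gaussian per pixel; a true GMM keeps several weighted Gaussians per pixel, and the class name and parameters here are hypothetical, not taken from the patent or MATLAB.

```python
class GaussianBackground:
    """Per-pixel running Gaussian background model (a single-Gaussian
    simplification of the mixture model in step 4). A pixel deviating from
    the background mean by more than k standard deviations is marked as
    moving foreground (mask value 1)."""

    def __init__(self, first_frame, alpha=0.05, k=2.5, init_var=20.0):
        self.alpha = alpha  # learning rate of the running model
        self.k = k          # deviation threshold, in standard deviations
        self.mean = [[float(p) for p in row] for row in first_frame]
        self.var = [[init_var for _ in row] for row in first_frame]

    def apply(self, frame):
        """Return the binary foreground mask for one grayscale frame."""
        mask = []
        for i, row in enumerate(frame):
            mrow = []
            for j, p in enumerate(row):
                m, v = self.mean[i][j], self.var[i][j]
                d = p - m
                fg = 1 if d * d > (self.k ** 2) * v else 0
                if not fg:
                    # only background-matched pixels update the model
                    self.mean[i][j] = m + self.alpha * d
                    self.var[i][j] = (1 - self.alpha) * v + self.alpha * d * d
                mrow.append(fg)
            mask.append(mrow)
        return mask
```

The initial-frames caveat of case a) shows up directly here: until enough frames have updated the model, the mask is unreliable.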
Step 5, process the moving foreground mask obtained in step 4 through Blob Analysis, and take the area and centroid of the largest blob as the target area A and the detected position p*.
Blob Analysis is the analysis of connected regions of identical pixel values in an image, called blobs, and is implemented in the prior art.
Preferably, when processing the foreground mask through Blob Analysis in step 5, the embodiment sets a mask with value 1 within a rectangular area B of 5 times the target size, centered on the target tracked in frame k-1, and value 0 elsewhere. The main purpose is to suppress noise interference, so that only the area and centroid of the largest blob within rectangular area B are taken as the target area A and the detected position p*.
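A minimal sketch of the blob analysis in step 5 — finding the largest connected component's area and centroid. `largest_blob` is an illustrative helper assuming 4-connectivity; toolbox implementations offer more outputs and connectivity options.

```python
from collections import deque

def largest_blob(mask):
    """Find the largest 4-connected blob of 1-pixels in a binary mask.
    Returns (area, (centroid_row, centroid_col)), or (0, None) for an
    empty mask, mirroring step 5's area A and detected position p*."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best_area, best_centroid = 0, None
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # breadth-first flood fill over this blob
                q = deque([(i, j)])
                seen[i][j] = True
                pixels = []
                while q:
                    y, x = q.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(pixels) > best_area:
                    best_area = len(pixels)
                    best_centroid = (sum(p[0] for p in pixels) / len(pixels),
                                     sum(p[1] for p in pixels) / len(pixels))
    return best_area, best_centroid
```

Restricting the mask to rectangular area B, as the embodiment prefers, simply means zeroing the mask outside B before calling this helper.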
Step 6, input the tracked position p obtained in step 3 and the detected position p* obtained in step 5 into a Kalman filtering mechanism, which adaptively predicts the target trajectory when the target is occluded or blurred, obtaining a more stable and accurate final tracking position P.
Further, in step 6, the adaptive trajectory prediction is implemented considering the following four situations:
a) In the initial N frames, the Gaussian mixture model is still in its modeling stage and cannot yet reliably detect the moving target when step 4 is executed. Therefore, the Kalman filtering mechanism is not activated at this stage, and the position p obtained by full convolution twin network tracking is directly taken as the final target position P. In specific implementation, N can be preset to an empirical value. From frame N+1 onward, the Gaussian mixture model has finished building the background and foreground models of the current frame, and processing proceeds according to b), c), and d).
b) When the detected area is greater than or equal to the threshold K, i.e., A >= K, the target's appearance features are salient and no occlusion or motion blur has occurred. At this stage the Kalman filtering mechanism is not activated, and the position p obtained by full convolution twin network tracking is directly taken as the final target position P.
c) When the detected area is smaller than the threshold K but larger than 0, i.e., 0 < A < K, the target is judged to be partially occluded or blurred. This stage therefore activates the prediction and correction mechanisms of Kalman filtering. The prediction formula (1) and the correction formula (2) are as follows:
x_k = A_k x_{k-1} + B_k u_k + w_k   (1)
z_k = H_k x_k + v_k   (2)
where x_k is the k-th frame state vector, i.e., the predicted position of the target in frame k; x_{k-1} is the final position of the target in frame k-1, i.e., the final target position P of frame k-1, which can be denoted P_{k-1}; A_k is the state transition matrix; u_k is an external control vector and B_k is the external control matrix [2]; z_k is the k-th frame measurement vector, i.e., the position correction value; H_k is the observation matrix, determined by the detected position p* [2]. The random variables w_k and v_k represent state noise and measurement noise, respectively; they are mutually independent and obey the normal distributions p(w) ~ N(0, q) and p(v) ~ N(0, r), where q and r are the state noise variance and the measurement noise variance, respectively. In specific implementation, K can be preset as an empirical value according to the target type.
The final position P_{k-1} of the target in frame k-1 is input into the prediction mechanism, i.e., P_{k-1} is substituted as x_{k-1} into prediction formula (1); the detected position p* of frame k is used to update the observation matrix H_k; and x_k is substituted into formula (2) to obtain the correction value z_k, i.e., the final target position P of frame k.
In specific implementation, step c) can be implemented with reference to the prior art: [2] Kalman R E. A New Approach to Linear Filtering and Prediction Problems [J]. Journal of Basic Engineering, 1960, 82(1): 35. A_k, u_k, B_k, w_k, and v_k can be calculated through the Kalman formulas.
d) When the detected area equals 0, i.e., A = 0, the target is judged to be completely occluded or lost, so this stage only activates the prediction of Kalman filtering. The final position P_{k-1} of the target in frame k-1 is input into the prediction mechanism, i.e., P_{k-1} is substituted as x_{k-1} into prediction formula (1), and the predicted position x_k of frame k is then taken as the final target position P of frame k.
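The case logic of step 6 together with formulas (1) and (2) can be sketched per coordinate axis as follows. This is an illustrative constant-velocity reduction under stated assumptions — A = [[1, 1], [0, 1]], H = [1, 0], no external control term (B_k u_k = 0), scalar noise variances q and r — whereas the patent's A_k, B_k, u_k, and H_k are more general; `Kalman1D` and `fuse` are hypothetical names.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate axis.
    State x = (position, velocity); a scalar sketch of formulas (1)-(2)."""

    def __init__(self, pos, q=0.01, r=1.0):
        self.x = [float(pos), 0.0]             # state: position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
        self.q, self.r = q, r                  # process / measurement variance

    def predict(self):
        """Formula (1): x_k = A x_{k-1} + w_k with A = [[1,1],[0,1]]."""
        self.x = [self.x[0] + self.x[1], self.x[1]]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P <- A P A^T + Q, written out elementwise
        self.P = [[p00 + p01 + p10 + p11 + self.q, p01 + p11],
                  [p10 + p11, p11 + self.q]]
        return self.x[0]

    def correct(self, z):
        """Formula (2): fold in measurement z_k with H = [1, 0]."""
        s = self.P[0][0] + self.r                      # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s    # Kalman gain
        y = z - self.x[0]                              # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [self.P[1][0] - k1 * p00, self.P[1][1] - k1 * p01]]
        return self.x[0]

def fuse(kf, p_track, p_detect, area, K):
    """Step 6 case logic for one axis: area >= K -> trust the Siamese
    tracker (case b); 0 < area < K -> predict then correct with the
    detection (case c); area == 0 -> prediction only (case d)."""
    if area >= K:
        kf.x[0] = p_track       # keep the filter in sync with the tracker
        return p_track
    kf.predict()
    if area > 0:
        return kf.correct(p_detect)
    return kf.x[0]
```

Under full occlusion the filter extrapolates with its current velocity, which is exactly the behavior case d) relies on.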
Extensive experiments show that an initial frame number N in the range of 5-10 frames works well. In the embodiment, the parameters of the full convolution twin network directly adopt results trained on a conventional tracking data set; in implementation, they can be determined by those skilled in the art through experience or tuning.
After the current frame is processed, steps 1-6 are executed on the next frame.
The above describes the implementation steps of satellite video target tracking with a full convolution twin network and trajectory prediction: a fine-grained representation of the small target is obtained and tracked through a shallow full convolution twin network; the target's motion trajectory is adaptively predicted through Kalman filtering, addressing challenges such as occlusion and motion blur of the small target during tracking; together these constitute the satellite video tracking method. In specific implementation, computer software technology can be adopted to realize an automatic operation process, and an apparatus running the process of the invention also falls within the protection scope.
The invention is broadly applicable: the small targets in the satellite video data set may be trains, cars, airplanes, and the like.
Precision plots and Success plots of the method and other tracking methods were compared on 3 satellite videos (of a train, cars, and an airplane, respectively), together with the resulting tracking of the high-resolution satellite video sequences and the average precision and frame rate (FPS) of the results. They show that the invention tracks trains, airplanes, and vehicles moving in high-resolution satellite video robustly and in real time.
It should be emphasized that the described embodiments of the present invention are illustrative and not restrictive. Therefore, the present invention includes, but is not limited to, the examples described in the detailed description, and all other embodiments that can be derived from the technical solutions of the present invention by those skilled in the art also belong to the protection scope of the present invention.

Claims (6)

1. A satellite video target tracking method based on a full convolution twin network and trajectory prediction, which supports tracking small targets in a satellite video data set, characterized in that: shallow-layer information of the full convolution twin network is adopted to obtain a fine-grained representation of the small target, and similarity measurement and tracking are carried out, wherein the full convolution twin network comprises two weight-sharing full convolution modules and a cross-correlation layer, the outputs of the two full convolution modules being connected to the cross-correlation layer; to cope with occlusion and blurring of the small target during tracking, a Kalman filtering mechanism is introduced to adaptively predict the motion trajectory, finally realizing robust and accurate tracking of the target in the satellite video data set.
2. The satellite video target tracking method based on the trajectory prediction and the full convolution twin network as claimed in claim 1, wherein the implementation process comprises the following steps:
step 1, recording the image block of the first frame centered on the target as template Z, inputting it into one full convolution module, and taking the shallow features as the fine-grained representation of template Z;
step 2, recording the image block of the search area of the current frame as search area X, inputting it into the other, weight-sharing full convolution module, and taking the shallow features as the fine-grained representation of search area X;
step 3, in the cross-correlation layer, sliding the fine-grained representation of template Z over the fine-grained representation of search area X to perform matching and obtain a response map, and taking the original-image position corresponding to the maximum of the response map as the tracked target position p;
step 4, inputting the search area X of the current frame into a Gaussian mixture model to obtain a detection image of the moving target, used as the moving foreground mask;
step 5, processing the foreground mask obtained in step 4 through blob analysis, and taking the area and centroid of the largest blob as the target area A and the detected position p*;
step 6, inputting the tracked position p obtained in step 3 and the detected position p* obtained in step 5 into a Kalman filtering mechanism, which adaptively predicts the target trajectory when the target is occluded or blurred, obtaining a more stable and accurate final tracking position P.
3. The satellite video target tracking method based on the trajectory prediction and the full convolution twin network as claimed in claim 2, wherein: in step 6, the implementation of the adaptive trajectory prediction for the target includes the following four situations:
a) In the initial N frames, the position p obtained by full convolution twin network tracking is directly taken as the final target position P, where N is a preset initial frame number;
b) When the detected area is greater than or equal to a preset threshold K, the target's appearance features are salient; the Kalman filtering mechanism is not activated, and the position p obtained by full convolution twin network tracking is directly taken as the final target position P;
c) When the detected area is smaller than the preset threshold K but larger than 0, the target is judged to be partially occluded or blurred, and the prediction and correction mechanisms of Kalman filtering are activated;
d) When the detected area equals 0, the target is judged to be completely occluded or lost, and only the prediction of Kalman filtering is activated.
4. the satellite video target tracking method based on the trajectory prediction and the full convolution twin network as claimed in claim 3, wherein: in case c), the prediction and correction mechanism of Kalman filtering is implemented as follows,
The prediction formula is established:
x_k = A_k x_{k-1} + B_k u_k + w_k
The correction formula is established:
z_k = H_k x_k + v_k
where x_k is the k-th frame state vector, representing the predicted position of the target in frame k; x_{k-1} is the final position of the target in frame k-1, representing the final target position in frame k-1; A_k is the state transition matrix, u_k is an external control vector, and B_k is an external control matrix; z_k is the k-th frame measurement vector, representing the position correction value; H_k is the observation matrix, determined by the detected position; the random variables w_k and v_k represent state noise and measurement noise, respectively;
the final position of the target in frame k-1 is input into the prediction formula, the detected position in frame k is used to update the observation matrix H_k, and x_k is substituted into the correction formula to obtain the correction value z_k, which gives the final target position P of frame k.
5. The satellite video target tracking method based on the trajectory prediction and the full convolution twin network as claimed in claim 4, wherein: in case d), the final position P_{k-1} of the target in frame k-1 is input into the prediction formula to obtain the predicted position x_k of frame k, which gives the final target position P of frame k.
6. The satellite video target tracking method based on the trajectory prediction and the full convolution twin network as claimed in claim 1, 2, 3, 4 or 5, wherein: the small targets in the satellite video data set comprise trains, cars and airplanes.
CN201910813189.9A 2019-08-30 2019-08-30 Satellite video target tracking method based on full convolution twin network and track prediction Pending CN110580713A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910813189.9A CN110580713A (en) 2019-08-30 2019-08-30 Satellite video target tracking method based on full convolution twin network and track prediction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910813189.9A CN110580713A (en) 2019-08-30 2019-08-30 Satellite video target tracking method based on full convolution twin network and track prediction

Publications (1)

Publication Number Publication Date
CN110580713A true CN110580713A (en) 2019-12-17

Family

ID=68812539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910813189.9A Pending CN110580713A (en) 2019-08-30 2019-08-30 Satellite video target tracking method based on full convolution twin network and track prediction

Country Status (1)

Country Link
CN (1) CN110580713A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275736A (en) * 2020-01-07 2020-06-12 中国科学院大学 Unmanned aerial vehicle video multi-target tracking method based on target scene consistency
CN111275740A (en) * 2020-01-19 2020-06-12 武汉大学 Satellite video target tracking method based on high-resolution twin network
CN111696500A (en) * 2020-06-17 2020-09-22 不亦乐乎科技(杭州)有限责任公司 Method and device for identifying MIDI sequence chord
CN111931685A (en) * 2020-08-26 2020-11-13 北京建筑大学 Video satellite moving target detection method based on bidirectional tracking strategy
CN112132856A (en) * 2020-09-30 2020-12-25 北京工业大学 Twin network tracking method based on self-adaptive template updating
CN112307981A (en) * 2020-10-29 2021-02-02 西北工业大学 Feature information transmission and cooperative tracking method in space rolling non-cooperative target observation process
CN112597795A (en) * 2020-10-28 2021-04-02 丰颂教育科技(江苏)有限公司 Visual tracking and positioning method for motion-blurred object in real-time video stream
CN112614163A (en) * 2020-12-31 2021-04-06 华中光电技术研究所(中国船舶重工集团公司第七一七研究所) Target tracking method and system fusing Bayesian trajectory inference
WO2021142571A1 (en) * 2020-01-13 2021-07-22 深圳大学 Twin dual-path target tracking method
CN115222771A (en) * 2022-07-05 2022-10-21 北京建筑大学 Target tracking method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191491A (en) * 2018-08-03 2019-01-11 华中科技大学 The method for tracking target and system of the twin network of full convolution based on multilayer feature fusion
CN109448023A (en) * 2018-10-23 2019-03-08 武汉大学 A kind of satellite video Small object method for real time tracking of combination space confidence map and track estimation

Non-Patent Citations (3)

Title
JIA SHAO et al.: "PASiam: Predicting Attention Inspired Siamese Network for Space-Borne Satellite Video Tracking", 2019 IEEE International Conference on Multimedia and Expo (ICME) *
KIM, HI et al.: "Siamese adversarial network for object tracking", Image and Vision Processing and Display Technology *
HUO, KAI: "Design of an Intelligent Video Surveillance System for Traffic Intersections", China Masters' Theses Full-text Database, Information Science and Technology Series *

Cited By (15)

Publication number Priority date Publication date Assignee Title
CN111275736A (en) * 2020-01-07 2020-06-12 中国科学院大学 Unmanned aerial vehicle video multi-target tracking method based on target scene consistency
WO2021142571A1 (en) * 2020-01-13 2021-07-22 深圳大学 Twin dual-path target tracking method
CN111275740A (en) * 2020-01-19 2020-06-12 武汉大学 Satellite video target tracking method based on high-resolution twin network
CN111275740B (en) * 2020-01-19 2021-10-22 武汉大学 Satellite video target tracking method based on high-resolution twin network
CN111696500A (en) * 2020-06-17 2020-09-22 不亦乐乎科技(杭州)有限责任公司 Method and device for identifying MIDI sequence chord
CN111696500B (en) * 2020-06-17 2023-06-23 不亦乐乎科技(杭州)有限责任公司 MIDI sequence chord identification method and device
CN111931685B (en) * 2020-08-26 2021-08-24 北京建筑大学 Video satellite moving target detection method based on bidirectional tracking strategy
CN111931685A (en) * 2020-08-26 2020-11-13 北京建筑大学 Video satellite moving target detection method based on bidirectional tracking strategy
CN112132856A (en) * 2020-09-30 2020-12-25 北京工业大学 Twin network tracking method based on self-adaptive template updating
CN112132856B (en) * 2020-09-30 2024-05-24 北京工业大学 Twin network tracking method based on self-adaptive template updating
CN112597795A (en) * 2020-10-28 2021-04-02 丰颂教育科技(江苏)有限公司 Visual tracking and positioning method for motion-blurred object in real-time video stream
CN112307981A (en) * 2020-10-29 2021-02-02 西北工业大学 Feature information transmission and cooperative tracking method in space rolling non-cooperative target observation process
CN112614163A (en) * 2020-12-31 2021-04-06 华中光电技术研究所(中国船舶重工集团公司第七一七研究所) Target tracking method and system fusing Bayesian trajectory inference
CN112614163B (en) * 2020-12-31 2023-05-09 华中光电技术研究所(中国船舶重工集团公司第七一七研究所) Target tracking method and system fusing Bayesian trajectory inference
CN115222771A (en) * 2022-07-05 2022-10-21 北京建筑大学 Target tracking method and device

Similar Documents

Publication Publication Date Title
CN110580713A (en) Satellite video target tracking method based on full convolution twin network and track prediction
CN110490928B (en) Camera attitude estimation method based on deep neural network
CN107481270B (en) Table tennis target tracking and trajectory prediction method, device, storage medium and computer equipment
CN103077539A (en) Moving object tracking method under complicated background and sheltering condition
CN105374049B (en) Multi-corner point tracking method and device based on sparse optical flow method
CN111275740B (en) Satellite video target tracking method based on high-resolution twin network
CN111161309B (en) Search and localization method for dynamic targets in vehicle-mounted video
CN106780567B (en) Extended-target tracking method using an immune particle filter that fuses color and gradient histograms
CN110555868A (en) Method for detecting small moving targets against a complex ground background
CN110070565A (en) Ship trajectory prediction method based on image superposition
CN108900775B (en) Real-time electronic image stabilization method for underwater robot
CN116228817B (en) Real-time anti-occlusion anti-jitter single target tracking method based on correlation filtering
CN110827262A (en) Dim small target detection method based on consecutive limited frames of infrared images
CN107360377B (en) Vehicle-mounted video image stabilization method
CN110717934A (en) Anti-occlusion target tracking method based on STRCF
CN113763427A (en) Multi-target tracking method based on coarse-to-fine occlusion handling
Zhang et al. An optical flow based moving objects detection algorithm for the UAV
Min et al. COEB-SLAM: A Robust VSLAM in Dynamic Environments Combined Object Detection, Epipolar Geometry Constraint, and Blur Filtering
CN109410254B (en) Target tracking method based on target and camera motion modeling
CN108492308B (en) Method and system for determining variational optical flow based on mutual-structure guided filtering
CN108038872B (en) Dynamic and static target detection and real-time compressed-sensing tracking method
CN113592947B (en) Visual odometry method based on the semi-direct method
CN111008555B (en) Method for enhancing and extracting dim small targets in unmanned aerial vehicle images
Ryu et al. Video stabilization for robot eye using IMU-aided feature tracker
CN106920249A (en) Fast tracking method for space maneuvering targets

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191217