CN117893574A - Infrared unmanned aerial vehicle target tracking method based on correlation filtering convolutional neural network - Google Patents


Info

Publication number
CN117893574A
CN117893574A (application number CN202410291831.2A)
Authority
CN
China
Prior art keywords
target
response
filter
map
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410291831.2A
Other languages
Chinese (zh)
Inventor
刘晋源
刘勇
仲维
姜智颖
刘日升
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN202410291831.2A priority Critical patent/CN117893574A/en
Publication of CN117893574A publication Critical patent/CN117893574A/en
Pending legal-status Critical Current


Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 — Road transport of goods or passengers
    • Y02T10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T10/40 — Engine management systems

Landscapes

  • Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)

Abstract

The invention discloses an infrared unmanned aerial vehicle target tracking method based on a correlation filtering convolutional neural network, comprising the following steps: S1, constructing a weak tracker based on correlation filters and a convolutional neural network to obtain a filter on each convolutional layer, the correlation filters filtering the windowed depth feature maps to obtain target response maps; S2, integrating the multiple response maps generated in S1 into a stronger response map to construct an integrated tracker, and determining the current position of the target from the maximum value of the fused response map in the time domain; S3, determining the current size of the target with a scale estimation strategy; S4, updating the model. The invention provides an integrated tracker based on correlation filters over multi-layer convolutional features. Meanwhile, a response-map fusion method based on the Kullback-Leibler divergence and a simple scale estimation strategy are provided to adapt to changes in target appearance, improving the precision of the integrated tracker and the overall tracking performance.

Description

Infrared unmanned aerial vehicle target tracking method based on correlation filtering convolutional neural network
Technical Field
The invention belongs to the fields of computer vision and infrared target tracking, and relates to an infrared unmanned aerial vehicle target tracking method based on a correlation filtering convolutional neural network.
Background
In recent years, unmanned aerial vehicles have become increasingly widespread, bringing new threats to social security, airspace control, land protection and other fields. There is therefore an urgent need to perform high-accuracy, high-stability search and tracking of high-threat, hard-to-identify unmanned aerial vehicle targets. Infrared target tracking has several advantages over visible-light tracking. An infrared camera can image through a certain amount of smoke, and its imaging is little affected by illumination; especially in complex scenes such as rain, fog and low illumination, infrared target tracking offers all-weather operation and strong anti-interference capability, and is often applied in fields such as guided weapons and marine rescue. However, infrared images generally have low imaging resolution and lack descriptions of the target's texture, and in the face of changes in target appearance and scene complexities such as occlusion, thermal crossover and background clutter during tracking, reliable real-time infrared target tracking is still not achievable. How to address these challenges to improve the efficiency and accuracy of infrared target tracking is a popular research direction in this field.
Tracking methods based on Discriminative Correlation Filters (DCFs) perform well in color (RGB) target tracking, and some of them have been improved by scholars and applied to infrared target tracking. A DCF-based tracker learns the DCF as an online classifier from sample image patches and detects the target by classifying foreground and background. However, training a strong DCF from limited training samples remains challenging. In general, an appearance model built from more powerful features is more useful for discriminating between target and background, which can significantly improve tracking performance. In the prior art, some infrared target tracking methods attempt to distinguish interferents similar to the target by building feature models with greater discriminative capability. Although these methods achieve a certain effect, their performance is limited by the limited discriminative power of hand-crafted features and is difficult to improve further.
On the other hand, the strong ability of deep learning to extract target features has made it the mainstream of research in target tracking. Features from fully connected layers lack the spatial information of the target and are therefore unsuitable for tracking infrared targets. In contrast, convolutional features from convolutional layers perform well in thermal infrared tracking, mainly because they can accurately locate the position of an infrared target. However, the semantic information of features from a single convolutional layer is limited and not robust to challenges such as appearance changes and occlusion during tracking.
Disclosure of Invention
In view of these problems, the invention aims to provide an infrared unmanned aerial vehicle target tracking method based on a correlation filtering convolutional neural network, which addresses the weak visual intensity, blurred boundaries and high risk of target loss when tracking an infrared unmanned aerial vehicle, and effectively handles challenges such as deformation, occlusion and complex background interference.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
An infrared unmanned aerial vehicle target tracking method based on a correlation filtering convolutional neural network comprises the following steps:
s1: constructing a weak tracker based on a correlation filter and a convolutional neural network to obtain a filter on each convolutional layer, and filtering the windowed depth feature map by the correlation filter to obtain a target response map;
s2: integrating the multiple response graphs generated in the S1 to generate a stronger response graph, constructing an integrated tracker, and determining the current position of the target according to the maximum value of the fused response graph in the time domain;
s3: determining the current size of the target by adopting a scale estimation strategy;
s4: updating a model;
the invention has the beneficial effects that:
the invention provides an integrated tracker based on a correlation filter with a multi-layer convolution characteristic. Meanwhile, a response graph fusion method based on a Kullback-Leibler and a simple scale estimation strategy are provided to adapt to the change of the appearance of a target, the precision of an integrated tracker is improved, the tracking performance is improved, and the problems that the visual intensity of the target in tracking of an infrared unmanned aerial vehicle is weak, the boundary is fuzzy, the target is extremely easy to lose during tracking and the like are solved.
Drawings
Fig. 1 is a flowchart of an infrared unmanned aerial vehicle target tracking method based on a correlation filtering convolutional neural network provided by an embodiment of the invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and embodiments:
as shown in fig. 1, an infrared unmanned aerial vehicle target tracking method based on a correlation filtering convolutional neural network comprises the following steps:
s1: constructing a weak tracker based on a correlation filter and a convolutional neural network to obtain a filter on each convolutional layer, and filtering the windowed depth feature map by the correlation filter to obtain a target response map;
s1.1: extracting convolution characteristics of an infrared target on a predicted frame (t frame) by using a VGG-Net network trained in advance on an ImageNet data set to obtain an infrared target region extracted from a k convolution layer(m, n, D represent the width, height and total number of channels of the target region, respectively, R represents the whole image domain), and the corresponding Gaussian shaped tag matrix->. By discrete Fourier transform->Set->,/>. The filter corresponding to the kth convolution layer may be expressed in the fourier domain as:
wherein,for infrared target area corresponding to Gaussian shaped tag matrix, < >>Is the result of discrete Fourier transform of the infrared target area extracted by the kth convolution layer, F represents Fourier transform,>is a regularization parameter. In the formula (1), the components are as follows,
wherein,is the result of discrete Fourier transform of the infrared target region,/and>for the corresponding convolution layer filter, D is the total number of channels in the target area,/the total number of channels in the target area>,/>Is element-by-element product. />Is the discrete Fourier transform result of the d channel of the infrared target area extracted by the kth convolution layer,/L->Is a convolution layer filter on the d channel corresponding to the infrared target area.
A simple closed-form solution can be obtained in the fourier domain fast optimization problem (1), see equation (3):
wherein,is a convolution filter on the d channel of the infrared target area extracted by the kth convolution layer,/and/or>The result is obtained after discrete Fourier transform is carried out on the infrared target areas extracted by all the convolution layers;
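As an illustration, the per-layer filter training of step S1.1 (ridge regression with a closed-form solution in the Fourier domain) can be sketched in a few lines of NumPy. This is a hedged sketch, not the patented implementation: the function name `train_filter` and the default regularizer value are illustrative, and a real deployment would operate on VGG-Net feature maps rather than the synthetic array used below.

```python
import numpy as np

def train_filter(x, y, lam=1e-4):
    """Closed-form correlation-filter training in the Fourier domain.

    x   : (m, n, D) feature map of the target region from one conv layer
    y   : (m, n) Gaussian-shaped label map peaking at the target centre
    lam : regularization parameter (lambda in the text)
    Returns the (m, n, D) complex Fourier-domain filter, one slice per channel.
    """
    x_hat = np.fft.fft2(x, axes=(0, 1))     # per-channel 2-D DFT
    y_hat = np.fft.fft2(y)                  # DFT of the Gaussian label
    # Denominator shared by all channels: sum of per-channel power spectra.
    denom = np.sum(x_hat * np.conj(x_hat), axis=2).real + lam
    # Numerator: label spectrum times conjugate feature spectrum, per channel.
    return (y_hat[..., None] * np.conj(x_hat)) / denom[..., None]
```

Applying the trained filter back to its own training region should reproduce the Gaussian label almost exactly, which gives a cheap sanity check of the closed form.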
s1.2: in the current frame (t+1st frame), a search area having a size 1.5 times the size of the target bounding box is cropped. Then, it is resized to 224×224 pixels and sent into the VGG-Net network to extract the feature map of the search area. Is provided withA feature map of a kth convolution layer representing the search region. The feature map is first transformed into the fourier domain: />. Then, calculating a cross-correlation coefficient between the trained filter and the feature map of the current frame to obtain a target response map of the target in the current frame:
wherein,for the target response map on the kth convolution layer of the current frame,/>Inverse of discrete fourier transform, +.>For the feature map to correspond to the Fourier domain,>for the filter corresponding to the kth convolution layerAnd lining She Yuna.
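The detection pass of step S1.2 is the dual operation: transform the search-region features, multiply by the stored filter, sum over channels, and invert the transform. A minimal sketch follows (the function name is an illustrative assumption; the shift-recovery property exercised below holds for any correlation filter).

```python
import numpy as np

def response_map(w_hat, z):
    """Evaluate a Fourier-domain filter on search-region features.

    w_hat : (m, n, D) complex filter (e.g. from the closed-form training step)
    z     : (m, n, D) feature map of the search region from the same layer
    Returns the (m, n) real-valued response map; its argmax locates the target.
    """
    z_hat = np.fft.fft2(z, axes=(0, 1))
    # Product in the Fourier domain = circular correlation in the spatial
    # domain; the channel sum merges the D per-channel responses.
    return np.fft.ifft2(np.sum(w_hat * z_hat, axis=2)).real
```

Because the operation is a circular correlation, translating the scene translates the response peak by the same amount, which is exactly what makes the peak usable as a position estimate.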
S2: integrating the multiple response graphs generated in the S1 to generate a stronger response graph, constructing an integrated tracker, and determining the current position of the target according to the maximum value of the fused response graph in the time domain;
specifically, n response maps obtained in step S1Wherein each response mapAre all generated by a weak tracker, by fusing these response maps, a stronger response map is obtained>. In fact, every response plot +.>Can be regarded as a probability map, which is defined by a probability distribution +.>Composition, probability distribution represents position->Probability of being the center of the object, and +.>. Therefore, the probability map +.A. is measured using the Kullback-Leibler divergence>And the distance between the fusion probability map Q, and then optimizing the fusion probability map Q by minimizing this distance:
wherein,
representing probability map->And fusion probability map Q, +.>And->The (i, j) th element of the probability maps P and Q, respectively.
However, in practice the response maps obtained by formula (4) are noisy, because the feature maps from VGG-Net contain noise. Therefore, the noise of the response maps must be filtered out before the probability maps are fused. To achieve this, each probability map is filtered by another probability map through an element-wise product:

$$A^{uv}=P^u\odot P^v,\quad u,v\in\{1,\dots,n\},\ u\neq v \qquad (8)$$

Where two probability maps have similar probability distributions in the same region, the filtered probability map has a higher output in that region, while other regions return lower values. Formula (8) yields a group of filtered probability maps $\{A^{uv}\}$ whose noise is smaller than that of the original response maps.
These filtered response maps are then fused. Thus, the objective function (5) can be rewritten as:

$$\min_{Q}\ \sum_{A\in\mathcal{A}}D_{KL}(A\,\|\,Q)\quad\text{s.t.}\ \sum_{i,j}q_{ij}=1 \qquad (9)$$

where $\mathcal{A}$ is the set of filtered probability maps. The final fused response map $Q$ is obtained by the Lagrange multiplier method:

$$Q=\frac{1}{n'}\sum_{A\in\mathcal{A}}A \qquad (10)$$

where $n'$ is the number of filtered response maps. It can be seen that $Q$ is the average of all filtered response maps, i.e. the final result is enhanced by all filtered response maps through a weighted sum.
Finally, the position $(x,y)$ of the target in the current frame is obtained by finding the maximum response value of the fused response map $Q$:

$$(x,y)=\arg\max_{i,j}\ q_{ij} \qquad (11)$$
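The fusion rule of step S2 — normalize each weak tracker's response into a probability map, damp noise by pairwise products, and take the mean (which minimizes the summed KL divergences under the sum-to-one constraint) — can be sketched as follows. The function name and normalization details are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def fuse_response_maps(maps):
    """Fuse weak-tracker response maps into one probability map.

    maps : list of (m, n) response maps, one per convolutional layer
    Returns (Q, peak): the fused probability map and its argmax (row, col).
    """
    # Turn each response map into a probability map (non-negative, sums to 1).
    probs = []
    for r in maps:
        p = r - r.min()
        probs.append(p / p.sum())
    # Pairwise element-wise products: regions where two maps agree keep a
    # high value, while uncorrelated noise is suppressed.
    filtered = []
    for u in range(len(probs)):
        for v in range(u + 1, len(probs)):
            a = probs[u] * probs[v]
            filtered.append(a / a.sum())
    # The KL-divergence minimizer under the sum-to-one constraint is the mean.
    q = np.mean(filtered, axis=0)
    peak = np.unravel_index(q.argmax(), q.shape)
    return q, peak
```

Feeding in several noisy copies of the same Gaussian-peaked response map should recover the common peak, since the pairwise products reinforce the location on which all weak trackers agree.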
s3: determining the current size of the target by adopting a scale estimation strategy;
in particular, because the integrated tracker cannot accommodate changes in the appearance of the target, its tracking performance is limited. In order to improve the precision of an integrated tracker, the invention provides an effective scale estimation method. For a given three different scale targetAnd obtaining the corresponding scale change direction of the maximum response chart by the following formula.
Wherein,three scale change directions of the target at the t+1 frame are respectively reduced and unchangedAnd (5) chemical and amplification. />Represents the maximum value of the resulting response map, +.>A fixed weight representing the direction of these scale changes.
For each direction of change of dimension, a fixed step of change is given at each frameTo update the scale factor:
wherein,is the scale factor for the predicted frame t. The target size of the current frame t+1 can then also be obtained:
wherein,representing the target size of the first frame. The scale estimation strategy can effectively improve tracking accuracy.
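The scale step of S3 amounts to scoring three candidate scales by their weighted maximum response, picking a direction, and nudging a running scale factor by a fixed step. In this sketch the weights, step size and function names are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def estimate_scale(responses, s_t, weights=(1.0, 1.02, 1.0), step=0.02):
    """Pick the scale-change direction and update the scale factor.

    responses : three response maps at (reduced, unchanged, enlarged) scale
    s_t       : scale factor carried over from the predicted frame t
    weights   : fixed per-direction weights; a slightly larger middle weight
                favours "no change" to suppress scale jitter (assumed values)
    """
    scores = [w * r.max() for w, r in zip(weights, responses)]
    c = int(np.argmax(scores))          # 0 = shrink, 1 = keep, 2 = enlarge
    s_next = s_t + step * (c - 1)       # move by -step, 0 or +step
    return s_next, c

def target_size(s, size_first):
    """Size at frame t+1 = scale factor times the first-frame size."""
    return (s * size_first[0], s * size_first[1])
```

Keeping the scale factor anchored to the first-frame size (rather than compounding frame-to-frame estimates) limits drift, which matches the role of the first-frame size in the text.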
S4: updating a model;
in particular, since the appearance of object tracking may change dynamically, the model needs to be updated to accommodate the change in appearance. A simple linear updating method is used in the present invention to update the filter. The method is only using the current exampleTo update the filter, the expression is as follows:
wherein the method comprises the steps ofThe infrared target area corresponds to a Gaussian tag matrix, d is an area channel, and +.>Is the product of the elements,is the result of discrete Fourier transform of the d channel of the infrared target area extracted by the kth convolution layer,/L->Discrete fourier transform result for current tracking infrared target region,/and method for tracking infrared target region>For regularization parameters, t is the frame number of the video sequence,update coefficients on the d-channel of the infrared target area extracted for the kth convolution layer,/for the k-th convolution layer>Final update coefficients of the infrared target region extracted for the kth convolution layer, +.>Representation of the infrared target region corresponding filter extracted for the kth convolution layer of the t-th frame in the fourier domain,/for the filter>For learning the rate, the old filter is balanced (+)>) And a new filter (+)>) The ratio between them.
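The linear update of S4 keeps running numerator/denominator statistics per layer and re-forms the filter from them. A hedged NumPy sketch (function names and the learning-rate default are illustrative):

```python
import numpy as np

def update_filter_model(A_prev, B_prev, x_t, y_hat, eta=0.01):
    """Linearly blend old filter statistics with the current frame's.

    A_prev : (m, n, D) complex numerator statistics from frame t-1
    B_prev : (m, n) real denominator statistics from frame t-1
    x_t    : (m, n, D) features of the currently tracked target region
    y_hat  : (m, n) DFT of the Gaussian-shaped label
    eta    : learning rate balancing old and new filters
    """
    x_hat = np.fft.fft2(x_t, axes=(0, 1))
    A = (1 - eta) * A_prev + eta * y_hat[..., None] * np.conj(x_hat)
    B = (1 - eta) * B_prev + eta * np.sum(x_hat * np.conj(x_hat), axis=2).real
    return A, B

def filter_from_stats(A, B, lam=1e-4):
    """Recover the Fourier-domain filter from the running statistics."""
    return A / (B + lam)[..., None]
```

With `eta=1.0` and zero-initialized statistics this degenerates to the one-shot closed-form solution, so the response on the training region again peaks at the label centre, which provides a convenient consistency check against step S1.1.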
In order to verify the feasibility and effectiveness of the method, several groups of infrared unmanned aerial vehicle video sequences recorded in different environments were collected and target tracking tests were carried out. Good tracking results were obtained, indicating that the algorithm has high environmental adaptability and tracking stability.
The video recording platform of this embodiment is a miniature three-axis dual-spectrum gimbal camera with a high-precision three-axis stabilized gimbal, supporting 4-megapixel video recording and 640×512-resolution thermal imaging. The computing platform is an Nvidia Xavier NX heterogeneous processor platform; embedded software-hardware co-design achieves a high level of system integration and efficient real-time processing of the algorithm.

Claims (2)

1. An infrared unmanned aerial vehicle target tracking method based on a correlation filtering convolutional neural network, characterized by comprising the following steps:
s1: constructing a weak tracker based on a correlation filter and a convolutional neural network to obtain a filter on each convolutional layer, and filtering the windowed depth feature map by the correlation filter to obtain a target response map;
s2: integrating the multiple response graphs generated in the S1 to generate a stronger response graph, constructing an integrated tracker, and determining the current position of the target according to the maximum value of the fused response graph in the time domain;
s3: determining the current size of the target by adopting a scale estimation strategy;
s4: updating a model;
the step S1 specifically comprises the following steps:
s1.1: extracting convolution characteristics of an infrared target on a predicted frame by using a VGG-Net network trained in advance on an ImageNet data set to obtain an infrared target region extracted from a kth convolution layerM, n, D represent the width, height and total number of channels, respectively, of the target region, R represents the image overall domain, and the corresponding gaussian-shaped label matrixThe method comprises the steps of carrying out a first treatment on the surface of the By discrete Fourier transform->Set->,/>The method comprises the steps of carrying out a first treatment on the surface of the The filter corresponding to the kth convolution layer is represented in the fourier domain as:
wherein,is the result of discrete Fourier transform of the infrared target area extracted by the kth convolution layer,/>Representing the Fourier transform, +.>Is a regularization parameter;
in the formula (1), the components are as follows,
wherein,for the corresponding convolution layer filter, D is the total number of channels in the target area,/the total number of channels in the target area>,/>Is element-by-element product; />Is the discrete Fourier transform result of the d channel of the infrared target area extracted by the kth convolution layer,/L->Is a convolution layer filter on the d channel of the corresponding infrared target area;
obtaining a closed solution in the Fourier domain fast optimization problem (1), and the closed solution is shown in a formula (3):
wherein,is a convolution filter on the d channel of the infrared target area extracted by the kth convolution layer,/and/or>The result is obtained after discrete Fourier transform is carried out on the infrared target areas extracted by all the convolution layers;
s1.2: in the current frame, cutting a search area with the size 1.5 times that of the target boundary frame; then, the size thereof is adjusted to 224×224 pixels, and is transmitted into the VGG-Net network to extract a feature map of the search area; is provided withA feature map representing a kth convolutional layer of the search region; the feature map is first transformed into the fourier domain:the method comprises the steps of carrying out a first treatment on the surface of the Then, calculating a cross-correlation coefficient between the trained filter and the feature map of the current frame to obtain a target response map of the target in the current frame:
wherein,for the target response map on the kth convolution layer of the current frame,/>Inverse of discrete fourier transform, +.>For the feature map to correspond to the Fourier domain,>a fourier domain representation of a filter corresponding to the kth convolution layer;
the step S2 specifically comprises the following steps: n response graphs obtained by S1 stepWherein each response map,/>Are all generated by a weak tracker, and a stronger response diagram is obtained by fusing the response diagrams>The method comprises the steps of carrying out a first treatment on the surface of the Each response graph->Considered as a probability map, which is defined by a probability distribution +.>Composition, probability distribution represents position->Probability of being the center of the object, andthe method comprises the steps of carrying out a first treatment on the surface of the Therefore, the probability map +.A. is measured using the Kullback-Leibler divergence>And the distance between the fusion probability map Q, and then optimizing the fusion probability map Q by minimizing this distance:
wherein,
representing probability map->And fusion probability map Q, +.>And->(i, j) th elements of the probability maps P and Q, respectively;
the signature from VGG-Net contains noise, resulting in the presence of noise in the response obtained by equation (4); therefore, noise of the response graph needs to be filtered first before the probability map is fused, and in order to achieve this, the current probability map is filtered by using another probability map, expressed as:
wherein,,/>the method comprises the steps of carrying out a first treatment on the surface of the Calculation formula (8) obtains a group of filtered probability mapsComprises->The response diagram is smaller in noise; then fusing the filtered response maps; therefore, the objective function (5) is rewritten as the following formula:
wherein A is a filtered probability map, and a final fusion response map Q is obtained through a Lagrangian multiplier method:
wherein n is the number of response graphs, Q is the average of all filtered response graphs, i.e. the final result is enhanced by all filtered response graphs using a weighted sum;
finally, the position of the target on the current frame is obtained by finding the maximum response value of the fusion response diagram Q
The step S3 specifically comprises the following steps:
in order to improve the precision of the integrated tracker, a scale estimation method is provided; for three given targets at different scales $\{X^{s_1}_{t+1},X^{s_2}_{t+1},X^{s_3}_{t+1}\}$, the scale-change direction corresponding to the maximum response map is obtained by:

$$c^{*}=\arg\max_{c\in\{1,2,3\}}\ \omega_c\max(R_c) \qquad (12)$$

wherein $c\in\{1,2,3\}$ indexes the three scale-change directions of the target at frame t+1, respectively reduction, no change and enlargement; $\max(R_c)$ represents the maximum value of the corresponding response map, and $\omega_c$ represents a fixed weight for each scale-change direction;

for each scale-change direction, a fixed change step $\beta$ is given at each frame to update the scale factor:

$$s_{t+1}=s_t+\beta\,(c^{*}-2) \qquad (13)$$

wherein $s_t$ is the scale factor of the predicted frame t; the target size of the current frame t+1 is then obtained:

$$\mathrm{size}_{t+1}=s_{t+1}\cdot\mathrm{size}_1 \qquad (14)$$

wherein $\mathrm{size}_1$ represents the target size of the first frame.
2. The infrared unmanned aerial vehicle target tracking method based on the correlation filtering convolutional neural network according to claim 1, wherein in step S4 the model is updated, specifically: the filter is updated using a linear update method, using only the current example $X_t$, with the following expressions:

$$\hat{A}^k_{d,t}=(1-\eta)\,\hat{A}^k_{d,t-1}+\eta\,\hat{Y}\odot\overline{\hat{X}^k_{d,t}}$$

$$\hat{B}^k_{t}=(1-\eta)\,\hat{B}^k_{t-1}+\eta\sum_{d=1}^{D}\hat{X}^k_{d,t}\odot\overline{\hat{X}^k_{d,t}}$$

$$\hat{W}^k_{d,t}=\frac{\hat{A}^k_{d,t}}{\hat{B}^k_{t}+\lambda}$$

wherein $\hat{Y}$ is the DFT of the Gaussian-shaped label matrix of the infrared target region, d is the region channel, $\odot$ is the element-wise product, $\hat{X}^k_{d,t}$ is the DFT result of the d-th channel of the currently tracked infrared target region extracted by the k-th convolutional layer, $\lambda$ is the regularization parameter, t is the frame number of the video sequence, $\hat{A}^k_{d,t}$ are the update coefficients on the d-th channel of the infrared target region extracted by the k-th convolutional layer, $\hat{B}^k_t$ are the final update coefficients of the infrared target region extracted by the k-th convolutional layer, $\hat{W}^k_{d,t}$ is the Fourier-domain representation of the filter corresponding to the infrared target region extracted by the k-th convolutional layer at frame t, and $\eta$ is the learning rate, balancing the ratio between the old filter (frame t−1) and the new filter (frame t).
CN202410291831.2A 2024-03-14 2024-03-14 Infrared unmanned aerial vehicle target tracking method based on correlation filtering convolutional neural network Pending CN117893574A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410291831.2A CN117893574A (en) 2024-03-14 2024-03-14 Infrared unmanned aerial vehicle target tracking method based on correlation filtering convolutional neural network


Publications (1)

Publication Number Publication Date
CN117893574A true CN117893574A (en) 2024-04-16

Family

ID=90649175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410291831.2A Pending CN117893574A (en) 2024-03-14 2024-03-14 Infrared unmanned aerial vehicle target tracking method based on correlation filtering convolutional neural network

Country Status (1)

Country Link
CN (1) CN117893574A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108346159A * 2018-01-28 2018-07-31 Beijing University of Technology Visual target tracking method based on tracking-learning-detection
CN108665481A * 2018-03-27 2018-10-16 Xidian University Adaptive anti-occlusion infrared target tracking method with multi-layer deep feature fusion
CN109035304A * 2018-08-07 2018-12-18 Beijing Qingruiweihang Technology Development Co., Ltd. Target tracking method, medium, computing device and apparatus
CN109741366A * 2018-11-27 2019-05-10 Kunming University of Science and Technology Correlation filtering target tracking method fusing multi-layer convolutional features
CN111401178A * 2020-03-09 2020-07-10 Cai Xiaogang (蔡晓刚) Real-time video target tracking method and system based on deep feature fusion and adaptive correlation filtering
CN113838088A * 2021-08-30 2021-12-24 Harbin Institute of Technology Hyperspectral video target tracking method based on deep tensor
CN116820131A * 2023-07-05 2023-09-29 Guilin University of Technology Unmanned aerial vehicle tracking method based on target-aware ViT


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QIAO LIU ET AL.: "Deep convolutional neural networks for thermal infrared object tracking", Knowledge-Based Systems (Elsevier), 26 July 2017 (2017-07-26), pages 189-198, XP085201609, DOI: 10.1016/j.knosys.2017.07.032 *
LI ZHONGKE; WAN CHANGSHENG: "Fused-feature visual tracking algorithm with a re-detection mechanism", Journal of Graphics, no. 05, 15 October 2018 (2018-10-15) *
GE BAOYI; ZUO XIANZHANG; HU YONGJIANG: "A survey of visual object tracking methods", Journal of Image and Graphics, no. 08, 16 August 2018 (2018-08-16) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination