CN113808170B - Anti-unmanned aerial vehicle tracking method based on deep learning - Google Patents

Anti-unmanned aerial vehicle tracking method based on deep learning

Info

Publication number
CN113808170B
CN113808170B
Authority
CN
China
Prior art keywords
tracking
unmanned aerial
aerial vehicle
value
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111119157.2A
Other languages
Chinese (zh)
Other versions
CN113808170A (en)
Inventor
叶润
张�成
景晓康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangtze River Delta Research Institute of UESTC Huzhou
Original Assignee
Yangtze River Delta Research Institute of UESTC Huzhou
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangtze River Delta Research Institute of UESTC Huzhou filed Critical Yangtze River Delta Research Institute of UESTC Huzhou
Priority to CN202111119157.2A priority Critical patent/CN113808170B/en
Publication of CN113808170A publication Critical patent/CN113808170A/en
Application granted granted Critical
Publication of CN113808170B publication Critical patent/CN113808170B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Abstract

The invention belongs to the technical field of unmanned aerial vehicle identification, and particularly relates to an anti-unmanned aerial vehicle tracking method based on deep learning. Starting from the current state of detection technology and considering factors such as detection accuracy, range, and cost, the invention detects the unmanned aerial vehicle with vision-based techniques; the visual detection is divided into two stages, detection and tracking, and a deep learning method is used to obtain an approach better suited to tracking unmanned aerial vehicles. To guarantee the final tracking speed, a lightweight detection network is designed that keeps detection accuracy high while preserving speed, and the designed tracking network is a further improvement on the original network. Overall, the final tracking network can track the unmanned aerial vehicle effectively and quickly.

Description

Anti-unmanned aerial vehicle tracking method based on deep learning
Technical Field
The invention belongs to the technical field of unmanned aerial vehicle identification, and particularly relates to an anti-unmanned aerial vehicle tracking method based on deep learning.
Background
The rapid commercialization of civilian unmanned aerial vehicles has further increased their popularity, but users' weak safety awareness often causes a series of safety problems. In some public areas, such as airports, accidents occur frequently, so real-time monitoring of unmanned aerial vehicles is necessary.
An unmanned aerial vehicle made of plastic has a small radar reflection area and flies low and slowly, and in places such as cities and airports ground clutter is heavy, so radar has difficulty detecting unmanned aerial vehicles effectively. Audio detection is strongly disturbed by environmental noise and struggles in noisy environments such as cities. Radio-frequency detection places high sensitivity requirements on the transmitting and receiving equipment and has difficulty detecting an unmanned aerial vehicle in an electromagnetically silent state. The optical sensors on which visual detection relies are inexpensive and easy to deploy, and offer high accuracy, high speed, and a large monitoring range; these characteristics make visual detection one of the most important detection technologies in anti-unmanned aerial vehicle equipment. In recent years, the rapid development of deep learning has made convolutional neural networks the dominant method in computer vision, including the fields of detection and tracking.
Disclosure of Invention
Starting from the current state of detection technology and considering factors such as detection accuracy, range, and cost, the invention detects the unmanned aerial vehicle with vision-based techniques; the visual detection is divided into two stages, detection and tracking, and a deep learning method is used to obtain an approach better suited to tracking unmanned aerial vehicles.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
the anti-unmanned aerial vehicle tracking method based on deep learning is characterized by comprising the following steps of:
s1, collecting unmanned aerial vehicle data and preprocessing, wherein the preprocessing method is to calculate the mean value and variance of each unmanned aerial vehicle picture, then subtracting the mean value from the variance of each picture origin, and obtaining a training set after data enhancement;
s2, constructing a detection and tracking network model, wherein the detection network model is based on a central Net, a mobile Net V3-small is adopted as a feature extraction network to extract features of an input training set picture, and after the feature picture output by the feature extraction network is subjected to SPPnet and transpose convolution, a detection result is obtained, and the detection result comprises three branches, namely a central point branch of the unmanned aerial vehicle, a position offset branch relative to the central point and a wide and high branch of a target frame of the unmanned aerial vehicle; the loss function of the detection network is set as:
$$L_{total} = L_k + \lambda_{size} L_{size} + \lambda_{off} L_{off}$$
where $\lambda_{size} = 0.1$, $\lambda_{off} = 1$, and $L_k$ is the center-point branch loss function for the unmanned aerial vehicle:
$$L_k = \frac{-1}{N} \sum_{xyc} \begin{cases} (1 - \hat{Y}_{xyc})^{\alpha} \log(\hat{Y}_{xyc}) & \text{if } Y_{xyc} = 1 \\ (1 - Y_{xyc})^{\beta} (\hat{Y}_{xyc})^{\alpha} \log(1 - \hat{Y}_{xyc}) & \text{otherwise} \end{cases}$$
where $\hat{Y}_{xyc}$ is the value output at each point of the center branch, $N$ is the number of positive samples, $\sum_{xyc}$ runs over all points of the network output layer, and $\alpha$ and $\beta$ are hyperparameters preset in training;
$L_{off}$ is the loss function of the offset branch for the position relative to the center point:
$$L_{off} = \frac{1}{N} \sum_{p} \left| \hat{O}_{\tilde{p}} - \left( \frac{p}{R} - \tilde{p} \right) \right|$$
where $\hat{O}_{\tilde{p}}$ is the predicted value of the offset branch, $p$ is the integer coordinate before downsampling, $R$ is the downsampling rate of 4, and $\tilde{p}$ is the coordinate value rounded after downsampling;
$L_{size}$ is the loss function of the width-height branch for the unmanned aerial vehicle target frame:
$$L_{size} = \frac{1}{N} \sum_{k=1}^{N} \left| \hat{S}_k - S_k \right|$$
where $\hat{S}_k$ and $S_k$ are the predicted and actual width-height values, respectively;
the tracking network model adopts a structure of a twin neural network: the input is two branches, namely a detected/re-detected unmanned aerial vehicle result and a video frame to be tracked of the next frame; the middle layer is that the two inputs pass through the same characteristic extraction network structure; the output is that the two results after feature extraction are subjected to cross-correlation operation and converted into three branches, and the three branches of the output are respectively: the overlapping rate score of the central point comprehensive classification and quality score of the unmanned aerial vehicle target position, the distance value between the central point and the four sides of the target frame and the probability distribution value corresponding to two integer values near the distance value; the output of the final tracking network model is a position value corresponding to the maximum comprehensive score as a tracking result of the unmanned aerial vehicle; the loss function of the tracking network model is:
$$L_{track} = \frac{1}{N_{pos}} \sum_{x,y} \Big( L_{cls+quality} + \mathbb{1}_{\{c^{*}_{x,y} > 0\}} \big( L_{distribute} + L_{reg} \big) \Big)$$
wherein:
$$L_{cls+quality} = -\left| Z - \sigma \right|^{\beta} \big( (1 - Z)\log(\sigma) + Z\log(1 - \sigma) \big)$$
$$L_{distribute} = -\big( (Z_{i+1} - Z)\log(S_i) + (Z - Z_i)\log(S_{i+1}) \big)$$
where $x, y$ denotes a point on the final feature map; $\mathbb{1}_{\{c^{*}_{x,y} > 0\}}$ equals 1 if the point is a positive sample and 0 otherwise; $N_{pos}$ is the number of points on the feature map; $Z$ is the known overlap-ratio value; $\sigma$ is the predicted value; $S_i$ and $S_{i+1}$ are the probability values predicted at the boundary values $Z_i$ and $Z_{i+1}$; and $L_{reg}$ is an IoU loss.
S3, training the detection network model and the tracking network model constructed in step S2 with the training set: the detection network model is trained with stochastic gradient descent, where the first 1000 iterations serve as a warm-up stage during which the learning rate increases from 0 to the initial learning rate of 0.005, training then proceeds with a three-stage learning rate, and each of the two later stages decays the learning rate to 0.1 times that of the preceding stage; the tracking network model is trained with stochastic gradient descent, the initial learning rate after warm-up is set to 0.08, and a cosine-decay strategy then changes the learning rate during training; the trained detection network model and tracking network model are thereby obtained;
s4, inputting the target unmanned aerial vehicle picture into a detection network model, taking the obtained unmanned aerial vehicle target position as a tracking template, taking the next frame as a search area, and inputting the target unmanned aerial vehicle picture and the next frame into the tracking network model for tracking.
The beneficial effects of the invention are as follows: a re-detection process is added to the tracking flow; to guarantee the final tracking speed, a lightweight detection network is designed that keeps detection accuracy high while preserving speed; and the designed tracking network is a further improvement on the original network. Overall, the final tracking network can track the unmanned aerial vehicle effectively and quickly.
Drawings
FIG. 1 is a graph of the mean and variance of different channels of a training set picture;
FIG. 2 is a schematic diagram of affine transformation relationship in data enhancement;
FIG. 3 is a detector network model;
FIG. 4 is a tracker network model;
FIG. 5 is an overall tracking flow;
FIG. 6 is a diagram of the detection effect of the detector;
FIG. 7 is a precision graph of tracking effects;
fig. 8 is a graph of success rate of tracking effect.
Detailed Description
The technical scheme of the invention is described in detail below with reference to the accompanying drawings:
the training set manufacturing method comprises the following steps: the unmanned aerial vehicle image data are collected, the unmanned aerial vehicle data set adopted in the experiment contains 10763 images, and labelimg software is used for marking one by one. The mean and variance are calculated during the data preprocessing process, and the mean and variance of the three channels are obtained as shown in fig. 1. In the data enhancement process, affine transformation is adopted for translation, rotation and scaling, the transformation relation of affine transformation can be determined by three groups of corresponding points on an original image and a transformed image, the three groups of points and the transformation effect are respectively a central point a point of a graph, a left key point b point and a c point set according to the positions of the two points, and the data enhancement is carried out by adopting random contrast, random saturation and random optical noise.
The detection network model designed by the invention is an improved architecture based on CenterNet and follows the anchor-free idea: three branches respectively output the predicted center point D of the unmanned aerial vehicle, the position offset d of the final center point relative to D, and the final predicted width and height S of the unmanned aerial vehicle target frame, and the final result takes (D+d) as the center point and selects a frame of size S as the prediction result. Because the whole tracking pipeline is a deep learning model with a large computation requirement, the detection model is made lightweight for the sake of speed: the feature extraction network adopts MobileNetV3-small with the final 1280-dimensional layer and the final fully connected layer removed, the SPP module is adopted in the decoding part of the network, and the parameter count of the whole detection network model is only 2.024 M. The designed tracking network, shown in fig. 4, is built on the twin (Siamese) neural network structure SiamCPP and is likewise anchor-free; its two branches are the tracking template and the search frame, the tracking result is decoded from the position of the maximum response, and the feature extraction network splits AlexNet into two parts, one used for the subsequent dimension transformation before the cross-correlation operation and the other for feature processing.
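As a rough structural sketch rather than the patent's exact layer configuration, the detector described above could be assembled in PyTorch as follows; the channel widths, the number of transposed-convolution stages, and the use of a recent torchvision MobileNetV3-small backbone are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torchvision

class SPP(nn.Module):
    """Spatial pyramid pooling: concatenate max-pooled features at several kernel sizes."""
    def __init__(self, in_ch, out_ch, kernels=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernels)
        self.fuse = nn.Conv2d(in_ch * (len(kernels) + 1), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([x] + [p(x) for p in self.pools], dim=1))

class DroneCenterNet(nn.Module):
    """CenterNet-style lightweight detector: MobileNetV3-small backbone, SPP,
    transposed-convolution upsampling, and three heads (center heatmap, offset, width-height)."""
    def __init__(self):
        super().__init__()
        # torchvision's mobilenet_v3_small(...).features ends at 576 channels, stride 32
        self.backbone = torchvision.models.mobilenet_v3_small(weights=None).features
        self.spp = SPP(576, 256)
        self.up = nn.Sequential(                      # stride 32 -> stride 4
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        def head(out_ch):
            return nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                                 nn.Conv2d(64, out_ch, 1))
        self.heatmap = head(1)    # one class: the unmanned aerial vehicle center point
        self.offset = head(2)     # sub-pixel offset relative to the center point
        self.size = head(2)       # width and height of the target frame

    def forward(self, x):
        f = self.up(self.spp(self.backbone(x)))
        return {"heatmap": self.heatmap(f), "offset": self.offset(f), "size": self.size(f)}
```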
The losses of the three detector branches (center, offset, and width-height) are $L_k$, $L_{off}$, and $L_{size}$, respectively, and the total loss is

$$L_{total} = L_k + \lambda_{size} L_{size} + \lambda_{off} L_{off} \qquad (\lambda_{size} = 0.1,\ \lambda_{off} = 1)$$

where

$$L_k = \frac{-1}{N} \sum_{xyc} \begin{cases} (1 - \hat{Y}_{xyc})^{\alpha} \log(\hat{Y}_{xyc}) & \text{if } Y_{xyc} = 1 \\ (1 - Y_{xyc})^{\beta} (\hat{Y}_{xyc})^{\alpha} \log(1 - \hat{Y}_{xyc}) & \text{otherwise} \end{cases}$$

($\hat{Y}_{xyc}$ is the value output at each point of the center branch),

$$L_{off} = \frac{1}{N} \sum_{p} \left| \hat{O}_{\tilde{p}} - \left( \frac{p}{R} - \tilde{p} \right) \right|$$

($\hat{O}_{\tilde{p}}$ is the predicted value of the offset branch, $p$ is the integer coordinate before downsampling, $R$ is the downsampling rate of 4, and $\tilde{p}$ is the coordinate value rounded after downsampling), and

$$L_{size} = \frac{1}{N} \sum_{k=1}^{N} \left| \hat{S}_k - S_k \right|$$

($\hat{S}_k$ and $S_k$ are the predicted and actual width-height values, respectively).
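A minimal PyTorch sketch of these three loss terms follows, assuming a sigmoid-normalized heatmap and a positive-location mask prepared by the data loader; the tensor layouts and clamping constants are illustrative assumptions.

```python
import torch

def centernet_focal_loss(pred_hm, gt_hm, alpha=2.0, beta=4.0):
    # pred_hm: heatmap values in (0, 1); gt_hm: Gaussian-splatted ground truth with 1 at object centers
    pos = gt_hm.eq(1).float()
    neg = 1.0 - pos
    num_pos = pos.sum().clamp(min=1.0)
    pos_loss = ((1 - pred_hm) ** alpha) * torch.log(pred_hm.clamp(min=1e-6)) * pos
    neg_loss = ((1 - gt_hm) ** beta) * (pred_hm ** alpha) * torch.log((1 - pred_hm).clamp(min=1e-6)) * neg
    return -(pos_loss.sum() + neg_loss.sum()) / num_pos

def masked_l1_loss(pred, target, mask):
    # pred/target: (B, 2, H, W); mask: (B, 1, H, W) with 1 at positive center locations
    num_pos = mask.sum().clamp(min=1.0)
    return torch.abs(pred * mask - target * mask).sum() / num_pos

def detector_total_loss(outs, targets, lambda_size=0.1, lambda_off=1.0):
    l_k = centernet_focal_loss(torch.sigmoid(outs["heatmap"]), targets["heatmap"])
    l_off = masked_l1_loss(outs["offset"], targets["offset"], targets["pos_mask"])
    l_size = masked_l1_loss(outs["size"], targets["size"], targets["pos_mask"])
    return l_k + lambda_size * l_size + lambda_off * l_off
```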
With this as the loss setting, the detector was trained with SGD (stochastic gradient descent): the first 1000 iterations serve as the warm-up stage, during which the learning rate increases from 0 to the initial learning rate of 0.005; training then proceeds with a three-stage learning rate, and each of the two later stages decays the learning rate to 0.1 times that of the preceding stage.
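The two learning-rate schedules (the detector's warm-up plus three-stage decay described here, and the tracker's warm-up plus cosine decay described below) could be expressed as iteration-to-learning-rate functions like the following sketch; the stage lengths, total iteration counts, and the tracker's warm-up length are assumptions, since the text only fixes the 1000-iteration warm-up, the initial rates of 0.005 and 0.08, and the 0.1 decay factor.

```python
import math

def detector_lr(it, total_iters, warmup_iters=1000, base_lr=0.005):
    # warm-up: the rate rises linearly from 0 to 0.005 over the first 1000 iterations
    if it < warmup_iters:
        return base_lr * it / warmup_iters
    # three stages; each later stage decays the rate to 0.1x that of the previous stage
    stage_len = max((total_iters - warmup_iters) // 3, 1)
    stage = min((it - warmup_iters) // stage_len, 2)
    return base_lr * (0.1 ** stage)

def tracker_lr(it, total_iters, warmup_iters=1000, base_lr=0.08):
    # warm-up to 0.08, then cosine decay for the remainder of training
    if it < warmup_iters:
        return base_lr * it / warmup_iters
    t = (it - warmup_iters) / max(total_iters - warmup_iters, 1)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))

# Either function can drive a torch.optim.lr_scheduler.LambdaLR wrapped around an SGD
# optimizer created with lr=1.0, so that the lambda's return value is the effective rate.
```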
The loss setting of the tracker is improved relative to the original SiamCPP method, and the overall loss is:

$$L_{track} = \frac{1}{N_{pos}} \sum_{x,y} \Big( L_{cls+quality} + \mathbb{1}_{\{c^{*}_{x,y} > 0\}} \big( L_{distribute} + L_{reg} \big) \Big)$$

where

$$L_{cls+quality} = -\left| Z - \sigma \right|^{\beta} \big( (1 - Z)\log(\sigma) + Z\log(1 - \sigma) \big)$$

($Z$ is the known overlap-ratio value and $\sigma$ is the predicted value),

$$L_{distribute} = -\big( (Z_{i+1} - Z)\log(S_i) + (Z - Z_i)\log(S_{i+1}) \big)$$

($S_i$ and $S_{i+1}$ are the probability values predicted at the boundary values $Z_i$ and $Z_{i+1}$), and $L_{reg}$ is an IoU loss.
The loss setting of the tracker is therefore divided into three parts: a joint loss of the classification and quality-assessment scores, a loss over the boundary distribution, and a size-regression loss. Training again uses SGD, with the initial learning rate after warm-up set to 0.08 and a cosine-decay strategy then changing the learning rate during training. The detector and tracker models obtained after their respective training are used from step S2 onward. The flow of steps S2 to S4 is shown in fig. 5.
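The three tracker loss terms could be sketched in PyTorch as below; the sign pairing in the classification-quality term follows the formula as stated in this document, the meaning of the continuous label and the number of distance bins are assumptions, and a plain 1 - IoU term stands in for the unspecified IoU loss.

```python
import torch

def quality_focal_loss(sigma, z, beta=2.0):
    # sigma: predicted center-point score in (0, 1); z: known overlap-ratio label in [0, 1]
    # argument pairing kept exactly as written in the loss above
    return -(torch.abs(z - sigma) ** beta) * ((1 - z) * torch.log(sigma.clamp(min=1e-6))
                                              + z * torch.log((1 - sigma).clamp(min=1e-6)))

def distribution_focal_loss(probs, z):
    # probs: (N, n_bins) softmax over integer bins; z: continuous label with floor(z) + 1 < n_bins
    z_i = z.floor().long()                                   # left integer bin Z_i
    z_ip1 = z_i + 1                                          # right integer bin Z_{i+1}
    w_left = z_ip1.float() - z                               # (Z_{i+1} - Z)
    w_right = z - z_i.float()                                # (Z - Z_i)
    s_i = probs.gather(1, z_i.unsqueeze(1)).squeeze(1)
    s_ip1 = probs.gather(1, z_ip1.unsqueeze(1)).squeeze(1)
    return -(w_left * torch.log(s_i.clamp(min=1e-6)) + w_right * torch.log(s_ip1.clamp(min=1e-6)))

def iou_loss(pred, gt):
    # pred, gt: (N, 4) boxes as (x1, y1, x2, y2); a simple 1 - IoU regression term
    lt = torch.max(pred[:, :2], gt[:, :2])
    rb = torch.min(pred[:, 2:], gt[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]).clamp(min=0) * (pred[:, 3] - pred[:, 1]).clamp(min=0)
    area_g = (gt[:, 2] - gt[:, 0]).clamp(min=0) * (gt[:, 3] - gt[:, 1]).clamp(min=0)
    iou = inter / (area_p + area_g - inter + 1e-6)
    return 1.0 - iou
```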
Step S2: the unmanned aerial vehicle target position obtained with the detector is used as the tracking template. The detector works as follows: after the picture is input, the values on the feature map (heat map) of the final classification branch are first normalized to values between 0 and 1; an operation similar to max pooling is then performed over every nine points of the heat map, keeping only the point holding the maximum value, and the values of all retained points are sorted; the retained positions are mapped back to the corresponding positions in the original image, these positions plus the offsets predicted by the offset branch become the center points of the final prediction results, and the size of each predicted target is determined by the width-height branch, yielding a series of prediction frames; to reduce low-quality prediction frames, the heat-map value corresponding to each prediction frame is compared with the preset confidence threshold of 0.3, and only prediction frames whose heat-map value exceeds the threshold are kept as the final detection result. The detection effect of the invention is shown in fig. 6. After the tracking template is obtained, the next frame is used as the search area, and the two serve as the two inputs of the tracker for tracking.
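A simplified decoding routine matching this description might look as follows; whether the width-height branch is predicted in input-image pixels or in feature-map units is an assumption here, as is the top-k cap.

```python
import torch
import torch.nn.functional as F

def decode_detections(heatmap, offset, size, down_ratio=4, topk=100, conf_thresh=0.3):
    # heatmap: (1, 1, H, W) raw logits; offset, size: (1, 2, H, W)
    scores = torch.sigmoid(heatmap)                               # normalize heat-map values to (0, 1)
    keep = (F.max_pool2d(scores, 3, stride=1, padding=1) == scores).float()
    scores = scores * keep                                        # keep only 3x3 local maxima ("every nine points")
    _, _, h, w = scores.shape
    flat_scores, flat_idx = scores.view(-1).topk(min(topk, h * w))
    off = offset.view(2, -1)
    wh = size.view(2, -1)
    boxes = []
    for s, i in zip(flat_scores, flat_idx):
        if s.item() < conf_thresh:                                # drop low-quality prediction frames
            continue
        y = torch.div(i, w, rounding_mode="floor").item()
        x = (i % w).item()
        cx = (x + off[0, i].item()) * down_ratio                  # map back to original-image coordinates
        cy = (y + off[1, i].item()) * down_ratio
        bw, bh = wh[0, i].item(), wh[1, i].item()                 # width-height assumed in image pixels
        boxes.append((cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2, s.item()))
    return boxes
```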
Step S3: the three branches of the tracker respectively output the overlap-rate score combining the classification and quality score of the center point of the unmanned aerial vehicle target position, the distance j from the center point to each of the four sides of the target frame, and the probability distribution values $p_{j_l}$ and $p_{j_r}$ of the two integer values $j_l$ and $j_r$ near the value j. All overlap-rate scores are sorted to find the maximum value. If this value is greater than the preset tracking-confidence threshold of 0.25, the tracking confidence is considered high and tracking is considered successful, and step S4 is executed; if the value is smaller than the threshold of 0.25, the tracking result is considered poor and step S5 is executed to run the re-detection flow.
Step S4: the exact distance from the highest-scoring center point to each boundary of the target frame is calculated as $j_f = j_r p_{j_r} + j_l p_{j_l}$, and the final tracking frame is obtained by combining the center point with the distances from the center point to the prediction frame. If this frame is the last frame, tracking ends; if not, the tracking result is used as the tracking input template of the next frame and tracking continues.
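Step S4's center-plus-distance decoding could be sketched as follows; the feature-map stride and the (left, top, right, bottom) ordering of the four sides are assumptions for illustration.

```python
import numpy as np

def decode_track_box(score_map, left_probs, right_probs, left_bins, right_bins, stride=8):
    # score_map: (H, W) overlap-rate scores; *_probs / *_bins: (H, W, 4) per-side values
    y, x = np.unravel_index(np.argmax(score_map), score_map.shape)
    # j_f = j_r * p_jr + j_l * p_jl for each of the four sides (left, top, right, bottom)
    dists = right_bins[y, x] * right_probs[y, x] + left_bins[y, x] * left_probs[y, x]
    cx, cy = x * stride, y * stride
    l, t, r, b = dists
    box = (cx - l, cy - t, cx + r, cy + b)
    return box, score_map[y, x]
```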
Step S5: because the tracker's confidence for this frame is low, the frame is considered a tracking failure and is sent to the detector for re-detection. The detection result of the detector likewise depends on the built-in classification-confidence threshold: if the classification confidence of every prediction frame is below the threshold, the frame is considered to contain no target (unmanned aerial vehicle) and the detector starts detecting the next frame; if the last frame contains no target, the whole tracking process stops; if a target appears, the detected frame is used as the tracking template and the tracking flow for the next frame continues from step S2.
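Putting steps S2 to S5 together, the overall detect/track/re-detect loop can be summarized by the following sketch; `detector` and `tracker` are hypothetical callables standing in for the trained models, not the patent's actual interfaces.

```python
def track_video(frames, detector, tracker, track_thresh=0.25, det_thresh=0.3):
    # detector(frame) -> list of (box, score); tracker(template_box, frame) -> (box, overlap_score)
    template = None
    results = []
    for frame in frames:
        if template is None:
            dets = [d for d in detector(frame) if d[1] >= det_thresh]
            if not dets:
                results.append(None)            # no drone found; keep scanning with the detector
                continue
            template = max(dets, key=lambda d: d[1])[0]
            results.append(template)
            continue
        box, score = tracker(template, frame)
        if score >= track_thresh:               # confident track: reuse the result as the next template
            template = box
            results.append(box)
        else:                                   # low confidence: fall back to re-detection
            template = None
            dets = [d for d in detector(frame) if d[1] >= det_thresh]
            if dets:
                template = max(dets, key=lambda d: d[1])[0]
            results.append(template)
    return results
```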
The precision plot and success-rate plot of the actual tracking effect of the invention are shown in fig. 7 and fig. 8, respectively; the SiamCPP curve is the effect of the baseline tracker, the G-SiamCPP curve is the tracking effect of the tracker improved on the basis of SiamCPP, and LG-SiamCPP is the tracking effect with the detector's re-detection added after the improvement.

Claims (2)

1. The anti-unmanned aerial vehicle tracking method based on deep learning is characterized by comprising the following steps of:
s1, collecting unmanned aerial vehicle data and preprocessing it, wherein the preprocessing calculates the mean value and variance of each unmanned aerial vehicle picture and then subtracts the mean value from each original picture and divides by the variance, and the training set is obtained after data enhancement;
s2, constructing a detection network model and a tracking network model, wherein the detection network model is based on CenterNet: MobileNetV3-small is adopted as the feature extraction network to extract features from an input training-set picture, and the feature map output by the feature extraction network is passed through SPPnet and transposed convolution to obtain the detection result, which comprises three branches, namely a center-point branch for the unmanned aerial vehicle, a position-offset branch relative to the center point, and a width-height branch for the unmanned aerial vehicle target frame; the loss function of the detection network is set as:
$$L_{total} = L_k + \lambda_{size} L_{size} + \lambda_{off} L_{off}$$
where $\lambda_{size} = 0.1$, $\lambda_{off} = 1$, and $L_k$ is the center-point branch loss function for the unmanned aerial vehicle:
$$L_k = \frac{-1}{N} \sum_{xyc} \begin{cases} (1 - \hat{Y}_{xyc})^{\alpha} \log(\hat{Y}_{xyc}) & \text{if } Y_{xyc} = 1 \\ (1 - Y_{xyc})^{\beta} (\hat{Y}_{xyc})^{\alpha} \log(1 - \hat{Y}_{xyc}) & \text{otherwise} \end{cases}$$
where $\hat{Y}_{xyc}$ is the value output at each point of the center branch, $N$ is the number of positive samples, $\sum_{xyc}$ runs over all points of the network output layer, and $\alpha$ and $\beta$ are hyperparameters preset in training;
$L_{off}$ is the loss function of the offset branch for the position relative to the center point:
$$L_{off} = \frac{1}{N} \sum_{p} \left| \hat{O}_{\tilde{p}} - \left( \frac{p}{R} - \tilde{p} \right) \right|$$
where $\hat{O}_{\tilde{p}}$ is the predicted value of the offset branch, $p$ is the integer coordinate before downsampling, $R$ is the downsampling rate of 4, and $\tilde{p}$ is the coordinate value rounded after downsampling;
$L_{size}$ is the loss function of the width-height branch for the unmanned aerial vehicle target frame:
$$L_{size} = \frac{1}{N} \sum_{k=1}^{N} \left| \hat{S}_k - S_k \right|$$
where $\hat{S}_k$ and $S_k$ are the predicted and actual width-height values, respectively;
the tracking network model adopts a structure of a twin neural network: the input is two branches, namely a detected/re-detected unmanned aerial vehicle result and a video frame to be tracked of the next frame; the middle layer is that the two inputs pass through the same characteristic extraction network structure; the output is that the two results after feature extraction are subjected to cross-correlation operation and converted into three branches, and the three branches of the output are respectively: the overlapping rate score of the central point comprehensive classification and quality score of the unmanned aerial vehicle target position, the distance value between the central point and the four sides of the target frame and the probability distribution value corresponding to two integer values near the distance value; the output of the final tracking network model is a position value corresponding to the maximum comprehensive score as a tracking result of the unmanned aerial vehicle;
the loss function of the tracking network model is:
$$L_{track} = \frac{1}{N_{pos}} \sum_{x,y} \Big( L_{cls+quality} + \mathbb{1}_{\{c^{*}_{x,y} > 0\}} \big( L_{distribute} + L_{reg} \big) \Big)$$
wherein:
$$L_{cls+quality} = -\left| Z - \sigma \right|^{\beta} \big( (1 - Z)\log(\sigma) + Z\log(1 - \sigma) \big)$$
$$L_{distribute} = -\big( (Z_{i+1} - Z)\log(S_i) + (Z - Z_i)\log(S_{i+1}) \big)$$
where $x, y$ denotes a point on the final feature map; $\mathbb{1}_{\{c^{*}_{x,y} > 0\}}$ equals 1 if the point is a positive sample and 0 otherwise; $N_{pos}$ is the number of points on the feature map; $Z$ is the known overlap-ratio value; $\sigma$ is the predicted value; $S_i$ and $S_{i+1}$ are the probability values predicted at the boundary values $Z_i$ and $Z_{i+1}$; and $L_{reg}$ is an IoU loss;
s3, training the detection network model and the tracking network model constructed in step S2 with the training set: the detection network model is trained with stochastic gradient descent, where the first 1000 iterations serve as a warm-up stage during which the learning rate increases from 0 to the initial learning rate of 0.005, training then proceeds with a three-stage learning rate, and each of the two later stages decays the learning rate to 0.1 times that of the preceding stage; the tracking network model is trained with stochastic gradient descent, the initial learning rate after warm-up is set to 0.08, and a cosine-decay strategy then changes the learning rate during training; the trained detection network model and tracking network model are thereby obtained;
s4, inputting the target unmanned aerial vehicle picture into a detection network model, taking the obtained unmanned aerial vehicle target position as a tracking template, taking the next frame as a search area, and inputting the target unmanned aerial vehicle picture and the next frame into the tracking network model for tracking.
2. The deep learning-based anti-drone tracking method of claim 1, further comprising:
s5, the three branches of the tracker respectively output the overlap-rate score combining the classification and quality score of the center point of the unmanned aerial vehicle target position, the distance j from the center point to each of the four sides of the target frame, and the probability distribution values $p_{j_l}$ and $p_{j_r}$ of the two integer values $j_l$ and $j_r$ near the value j; all overlap-rate scores are sorted to find the maximum value; if the maximum value is greater than the preset tracking-confidence threshold of 0.25, tracking is considered successful and step S6 is executed; if the maximum value is smaller than the threshold of 0.25, step S7 is executed;
s6, calculating the corresponding exact distance from the maximum-value center point to each boundary of the target frame as $j_f = j_r p_{j_r} + j_l p_{j_l}$, and combining the center point with the distances from the center point to the prediction frame to obtain the final tracking frame; if the frame is the last frame, tracking ends; if not, the tracking result is used as the tracking input template of the next frame and tracking continues;
s7, sending the frame to the detection network for re-detection, wherein the detection result of the detection network depends on the built-in classification-confidence threshold: if the classification confidence of all prediction frames is smaller than the threshold, the frame is considered to contain no target and the detection network starts detecting the next frame, and if the last frame contains no target the whole tracking flow stops; if a target appears, the detected frame is used as the tracking template and tracking of the next frame proceeds according to step S4.
CN202111119157.2A 2021-09-24 2021-09-24 Anti-unmanned aerial vehicle tracking method based on deep learning Active CN113808170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111119157.2A CN113808170B (en) 2021-09-24 2021-09-24 Anti-unmanned aerial vehicle tracking method based on deep learning

Publications (2)

Publication Number Publication Date
CN113808170A CN113808170A (en) 2021-12-17
CN113808170B (en) 2023-06-27

Family

ID=78896518

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111119157.2A Active CN113808170B (en) 2021-09-24 2021-09-24 Anti-unmanned aerial vehicle tracking method based on deep learning

Country Status (1)

Country Link
CN (1) CN113808170B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106846926A (en) * 2017-04-13 2017-06-13 电子科技大学 A kind of no-fly zone unmanned plane method for early warning
WO2020155873A1 (en) * 2019-02-02 2020-08-06 福州大学 Deep apparent features and adaptive aggregation network-based multi-face tracking method
WO2020187095A1 (en) * 2019-03-20 2020-09-24 深圳市道通智能航空技术有限公司 Target tracking method and apparatus, and unmanned aerial vehicle
CN112329776A (en) * 2020-12-03 2021-02-05 北京智芯原动科技有限公司 License plate detection method and device based on improved CenterNet network
CN112784756A (en) * 2021-01-25 2021-05-11 南京邮电大学 Human body identification tracking method
CN113313706A (en) * 2021-06-28 2021-08-27 安徽南瑞继远电网技术有限公司 Power equipment defect image detection method based on detection reference point offset analysis

Also Published As

Publication number Publication date
CN113808170A (en) 2021-12-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant