CN115424072A - Unmanned aerial vehicle defense method based on detection technology - Google Patents

Unmanned aerial vehicle defense method based on detection technology

Info

Publication number
CN115424072A
CN115424072A (application CN202211082837.6A)
Authority
CN
China
Prior art keywords
unmanned aerial vehicle target
sequence
defense
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211082837.6A
Other languages
Chinese (zh)
Other versions
CN115424072B (en)
Inventor
折永刚
李春艳
折云岗
王宇
李伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ordos Shida Technology Co ltd
Original Assignee
Ordos Shida Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ordos Shida Technology Co ltd
Priority to CN202211082837.6A
Publication of CN115424072A
Application granted
Publication of CN115424072B
Legal status: Active
Anticipated expiration

Classifications

    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06N3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06T5/70: Image enhancement or restoration; denoising; smoothing
    • G06T7/246: Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V10/774: Processing image or video features in feature spaces; generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning; using neural networks
    • G06T2207/10032: Image acquisition modality; satellite or aerial image; remote sensing
    • G06T2207/10044: Image acquisition modality; radar image
    • G06T2207/20081: Special algorithmic details; training; learning
    • G06T2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06T2207/30241: Subject of image; trajectory
    • G06V2201/07: Indexing scheme relating to image or video recognition or understanding; target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an unmanned aerial vehicle defense method based on a detection technology, comprising the following steps: acquiring an unmanned aerial vehicle target image data set as a training set; inputting the training set into an unmanned aerial vehicle target detection model for training to obtain a trained unmanned aerial vehicle target detection model; acquiring an image containing a suspected unmanned aerial vehicle target; inputting the image containing the suspected unmanned aerial vehicle target into the trained unmanned aerial vehicle target detection model, performing unmanned aerial vehicle target detection and acquiring an unmanned aerial vehicle trajectory sequence; inputting the acquired unmanned aerial vehicle trajectory sequence into an LSTM structure to obtain an unmanned aerial vehicle trajectory prediction output sequence; and obtaining the longitude and latitude of the trajectory point of the unmanned aerial vehicle at time t from the trajectory prediction output sequence and sending a defense speed vector instruction at that point to realize unmanned aerial vehicle defense. The method acquires and detects unmanned aerial vehicle target images, predicts the future trajectory, and realizes defense according to the predicted trajectory, accurately and efficiently.

Description

Unmanned aerial vehicle defense method based on detection technology
Technical Field
The invention relates to the field of unmanned aerial vehicles, in particular to an unmanned aerial vehicle defense method based on a detection technology.
Background
In recent years, unmanned aerial vehicle technology has been continuously improved and perfected, and unmanned aerial vehicles are widely applied in both military and civil fields. However, the frequent occurrence and wide-ranging impact of unauthorized ("black") flights and other abuses have made countering and interfering with low-altitude unmanned aerial vehicles a topic of intensive research.
Currently, anti-drone technologies fall mainly into three categories: interference blocking, realized mainly by technologies such as signal interference and acoustic interference; direct destruction, including the use of lasers and countering drones with drones; and monitoring control, realized mainly by means such as hijacking the radio control link.
However, a complete and effective unmanned aerial vehicle defense method is still not available in the prior art.
Disclosure of Invention
The invention mainly aims to overcome the defects of the prior art and provides an unmanned aerial vehicle defense method based on a detection technology, which acquires and detects unmanned aerial vehicle target images, obtains a flight trajectory sequence, predicts the future trajectory, and finally realizes defense accurately according to the predicted trajectory, forming a complete, accurate and efficient unmanned aerial vehicle defense method.
An unmanned aerial vehicle defense method based on a detection technology comprises the following steps:
acquiring an unmanned aerial vehicle target image data set, and enhancing the data set through background pixel replacement to obtain an enhanced unmanned aerial vehicle target image data set serving as a training set;
inputting the training set into an unmanned aerial vehicle target detection model for training to obtain a trained unmanned aerial vehicle target detection model; the unmanned aerial vehicle target detection model takes the VGG16 structure as its basic backbone network and obtains feature maps of different scales by selecting layers within VGG16 and adding several convolution layers behind the network;
capturing information of a suspected unmanned aerial vehicle target by radar to obtain a target radar map, and obtaining an image containing the suspected unmanned aerial vehicle target through an RGB camera according to the target radar map;
inputting the image containing the suspected unmanned aerial vehicle target into the trained unmanned aerial vehicle target detection model, performing unmanned aerial vehicle target detection to obtain the unmanned aerial vehicle target position, and acquiring an unmanned aerial vehicle trajectory sequence from a plurality of unmanned aerial vehicle target positions;
inputting the obtained unmanned aerial vehicle trajectory sequence into an LSTM structure to obtain an unmanned aerial vehicle trajectory prediction output sequence;
and obtaining the longitude and latitude of the trajectory point of the unmanned aerial vehicle at time t according to the unmanned aerial vehicle trajectory prediction output sequence, sending a defense speed vector instruction at that point, and synthesizing the defense speed vector instruction with the original speed vector after the unmanned aerial vehicle receives it, to realize unmanned aerial vehicle defense.
Specifically, an unmanned aerial vehicle target image data set is obtained, and the data set is enhanced through background pixel replacement, where the background pixel replacement specifically is:
firstly, extracting the unmanned aerial vehicle target from an image containing the unmanned aerial vehicle target, then performing background pixel replacement using sky background pixels without the unmanned aerial vehicle target, randomly rotating by a certain angle, and smoothing the image by a Gaussian filtering method to obtain a new data set.
Specifically, the training set is input into the unmanned aerial vehicle target detection model for training, where the loss function comprises a positioning loss function and a classification loss function, specifically:
(The positioning loss and classification loss formulas appear as images in the original publication and are not reproduced here; the symbols below follow the standard SSD notation.)
where x_ij^k indicates whether the i-th prediction box is matched to the j-th real box with respect to category k, taking the value 0 or 1; l denotes the predicted box position and size; g denotes the real box position and size; x_ij^p is the match indicator of the prediction box and the real box with respect to category p; c_i^p is the predicted value (confidence) for category p; N is the number of prediction boxes; cx is the abscissa of the starting point of the prediction box, cy the ordinate of the starting point, w the width and h the height of the prediction box; and m ranges over cx, cy, w and h.
Specifically, inputting the obtained unmanned aerial vehicle trajectory sequence into the LSTM structure to obtain an unmanned aerial vehicle trajectory prediction output sequence includes:
inputting the trajectory sequence of the unmanned aerial vehicle at the moment t into an LSTM structure, processing the trajectory sequence through a sigmoid function, combining the output of an LSTM unit in a previous time node and the input value of a current time node, passing through a set weight parameter matrix of an input gate, and adding a threshold value of the input gate;
the forgetting gate takes the unmanned aerial vehicle track sequence at the moment t as input, and adds the output of the hidden layer at the last moment;
and the output layer receives data generated by hidden layer training to obtain an unmanned aerial vehicle trajectory prediction output sequence.
In another aspect, an embodiment of the present invention provides a detection technology-based unmanned aerial vehicle defense system, including the following:
a training set acquisition unit: a training set acquisition unit acquires an unmanned aerial vehicle target image data set, and enhances the data set through background pixel replacement to obtain an enhanced unmanned aerial vehicle target image data set as a training set;
a model training unit: inputting the training set into an unmanned aerial vehicle target detection model for training to obtain a trained unmanned aerial vehicle target detection model; the unmanned aerial vehicle target detection model takes the VGG16 structure as its basic backbone network and obtains feature maps of different scales by selecting layers within VGG16 and adding several convolution layers behind the network;
a detection image acquisition unit: capturing information of a suspected unmanned aerial vehicle target by a radar, obtaining a target radar map, and obtaining an image containing the suspected unmanned aerial vehicle target through an RGB camera according to the target radar map;
a trajectory sequence acquisition unit: inputting an image containing a suspected unmanned aerial vehicle target into the trained unmanned aerial vehicle target detection model, performing unmanned aerial vehicle target detection to obtain the unmanned aerial vehicle target position, and acquiring an unmanned aerial vehicle trajectory sequence from a plurality of unmanned aerial vehicle target positions;
a trajectory prediction unit: inputting the obtained unmanned aerial vehicle track sequence into an LSTM structure to obtain an unmanned aerial vehicle track prediction output sequence;
a defense unit: obtaining the longitude and latitude of the trajectory point of the unmanned aerial vehicle at time t according to the unmanned aerial vehicle trajectory prediction output sequence, sending a defense speed vector instruction at that point, and synthesizing the defense speed vector instruction with the original speed vector after the unmanned aerial vehicle receives it, to realize unmanned aerial vehicle defense.
Specifically, in the training set obtaining unit, an unmanned aerial vehicle target image data set is obtained, and the data set is enhanced by background pixel replacement, where the background pixel replacement specifically is:
firstly, extracting the unmanned aerial vehicle target from an image containing the unmanned aerial vehicle target, then performing background pixel replacement using sky background pixels without the unmanned aerial vehicle target, randomly rotating by a certain angle, and smoothing the image by a Gaussian filtering method to obtain a new data set.
Specifically, in the model training unit, the training set is input into the unmanned aerial vehicle target detection model for training, where the loss function comprises a positioning loss function and a classification loss function, specifically:
(The positioning loss and classification loss formulas appear as images in the original publication and are not reproduced here; the symbols below follow the standard SSD notation.)
where x_ij^k indicates whether the i-th prediction box is matched to the j-th real box with respect to category k, taking the value 0 or 1; l denotes the predicted box position and size; g denotes the real box position and size; x_ij^p is the match indicator of the prediction box and the real box with respect to category p; c_i^p is the predicted value (confidence) for category p; N is the number of prediction boxes; cx is the abscissa of the starting point of the prediction box, cy the ordinate of the starting point, w the width and h the height of the prediction box; and m ranges over cx, cy, w and h.
Specifically, in the trajectory prediction unit, the obtained unmanned aerial vehicle trajectory sequence is input into an LSTM structure to obtain an unmanned aerial vehicle trajectory prediction output sequence, which specifically includes:
inputting the trajectory sequence of the unmanned aerial vehicle at the moment t into an LSTM structure, processing the trajectory sequence through a sigmoid function, combining the output of an LSTM unit in a previous time node and the input value of a current time node, passing through a set weight parameter matrix of an input gate, and adding a threshold value of the input gate;
the forgetting gate takes the unmanned aerial vehicle track sequence at the moment t as input and adds the output of the hidden layer at the previous moment;
and the output layer receives data generated by hidden layer training to obtain an unmanned aerial vehicle trajectory prediction output sequence.
An embodiment of the present invention provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the above unmanned aerial vehicle defense method based on the detection technology when executing the computer program.
In another aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above unmanned aerial vehicle defense method based on the detection technology.
As can be seen from the above description of the present invention, compared with the prior art, the present invention has the following advantages:
the invention provides an unmanned aerial vehicle defense method based on a detection technology, which comprises the following steps: acquiring an unmanned aerial vehicle target image data set, and enhancing the data set through background pixel replacement to obtain an enhanced unmanned aerial vehicle target image data set serving as a training set; outputting the training set to an unmanned aerial vehicle target detection model for training to obtain trained unmanned aerial vehicle target detection; the unmanned aerial vehicle target detection model takes a VGG16 structure as a basic backbone network, and obtains characteristic graphs of different scales by selecting in the VGG16 and adding a plurality of convolution layers behind the network; capturing information of a suspected unmanned aerial vehicle target by a radar to obtain a target radar map, and obtaining an image containing the suspected unmanned aerial vehicle target through an RGB (red, green and blue) camera according to the target radar map; inputting an image containing a suspected unmanned aerial vehicle target into trained unmanned aerial vehicle target detection, performing unmanned aerial vehicle target detection to obtain an unmanned aerial vehicle target position, and acquiring an unmanned aerial vehicle track sequence according to a plurality of unmanned aerial vehicle target positions; inputting the obtained unmanned aerial vehicle track sequence into an LSTM structure to obtain an unmanned aerial vehicle track prediction output sequence; according to the unmanned aerial vehicle track prediction output sequence, track point longitude and latitude of the unmanned aerial vehicle at the time t are obtained, a defense speed vector instruction is sent at the point, and the unmanned aerial vehicle receives the defense speed vector instruction and synthesizes the defense speed vector instruction with an original speed vector to realize unmanned aerial vehicle defense; the method provided by the invention realizes the acquisition and detection of the target image of the unmanned aerial vehicle, obtains the flight track sequence and further predicts the future track, and finally realizes defense accurately according to the predicted track.
Drawings
Fig. 1 is a flowchart of a method for defending an unmanned aerial vehicle based on a detection technology according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a background pixel replacement method according to an embodiment of the present invention;
fig. 3 is a flow chart of an implementation of detection of an unmanned aerial vehicle target detection model according to an embodiment of the present invention;
FIG. 4 is a structural diagram of an LSTM structure provided in an embodiment of the present invention;
fig. 5 is an architecture diagram of a defense system of an unmanned aerial vehicle based on a detection technology according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an electronic device according to an embodiment of the present invention;
fig. 7 is a schematic diagram of an embodiment of a computer-readable storage medium according to an embodiment of the present invention.
The invention is described in further detail below with reference to the figures and specific examples.
Detailed Description
The invention adopts the following technical scheme:
as shown in fig. 1, a flow chart of a method for defending an unmanned aerial vehicle based on a detection technology includes the following steps:
s101: acquiring an unmanned aerial vehicle target image data set, and enhancing the data set through background pixel replacement to obtain an enhanced unmanned aerial vehicle target image data set serving as a training set;
specifically, an unmanned aerial vehicle target image data set is obtained, and the data set is enhanced through background pixel replacement, where the background pixel replacement specifically is:
firstly, extracting the unmanned aerial vehicle target from an image containing the unmanned aerial vehicle target, then performing background pixel replacement using sky background pixels without the unmanned aerial vehicle target, randomly rotating by a certain angle, and smoothing the image by a Gaussian filtering method to obtain a new data set. FIG. 2 is a schematic diagram of the background pixel replacement method according to an embodiment of the present invention. Compared with a real unmanned aerial vehicle target image, the unmanned aerial vehicle target image generated after background pixel replacement has high visual similarity and shares similar characteristics with the real image; this target migration method increases the diversity of the data set and improves the generalization capability of the model.
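A minimal sketch of this augmentation step is given below (Python with OpenCV/NumPy; the function name, bounding-box format and parameter values are illustrative assumptions, not taken from the patent):

```python
import cv2
import numpy as np

def replace_background(drone_img, drone_box, sky_img, max_angle=15):
    """Paste an extracted drone target onto a drone-free sky background,
    rotate the result by a random angle and smooth it with a Gaussian filter.
    drone_box is (x, y, w, h) in pixels -- an assumed annotation format."""
    x, y, w, h = drone_box
    patch = drone_img[y:y + h, x:x + w]

    # Paste the drone patch at a random position inside the sky background
    H, W = sky_img.shape[:2]
    px = np.random.randint(0, W - w)
    py = np.random.randint(0, H - h)
    out = sky_img.copy()
    out[py:py + h, px:px + w] = patch

    # Random rotation by a certain angle about the image centre
    angle = np.random.uniform(-max_angle, max_angle)
    M = cv2.getRotationMatrix2D((W / 2, H / 2), angle, 1.0)
    out = cv2.warpAffine(out, M, (W, H), borderMode=cv2.BORDER_REPLICATE)

    # Gaussian filtering to smooth the pasted edges
    return cv2.GaussianBlur(out, (5, 5), 1.0)
```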
S102: inputting the training set into an unmanned aerial vehicle target detection model for training to obtain a trained unmanned aerial vehicle target detection model; the unmanned aerial vehicle target detection model takes the VGG16 structure as its basic backbone network and obtains feature maps of different scales by selecting layers within VGG16 and adding several convolution layers behind the network;
the target detection SSD model takes a VGG16 structure as a basic backbone network, and predicts a target by selecting in the VGG16 and adding more convolution layers behind the network to obtain feature maps with different scales. The method comprises the steps that a 300 x 300 image is input by a model, 6 feature graphs with different sizes are selected through a convolutional neural network, the feature graphs have the sizes of (38,38), (19,19), (10,10), (5,5), (3,3) and (1,1), the number of corresponding prior frames of an anchor point in each layer of feature graph is 4,6,6,6,4,4, and fig. 3 is a flow chart for realizing detection of the unmanned aerial vehicle target detection model provided by the embodiment of the invention; therefore, 8732 predicted values are finally output for each type of target, and a final result is obtained through non-maximum suppression.
Specifically, the training set is input into the unmanned aerial vehicle target detection model for training, where the loss function comprises a positioning loss function and a classification loss function, specifically:
(The positioning loss and classification loss formulas appear as images in the original publication and are not reproduced here; the symbols below follow the standard SSD notation.)
where x_ij^k indicates whether the i-th prediction box is matched to the j-th real box with respect to category k, taking the value 0 or 1; l denotes the predicted box position and size; g denotes the real box position and size; x_ij^p is the match indicator of the prediction box and the real box with respect to category p; c_i^p is the predicted value (confidence) for category p; N is the number of prediction boxes; cx is the abscissa of the starting point of the prediction box, cy the ordinate of the starting point, w the width and h the height of the prediction box; and m ranges over cx, cy, w and h.
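For reference, the definitions above correspond to the standard SSD objective; the following is a sketch of that objective as published for SSD, offered as an assumption about the content of the unreproduced formula images rather than a transcription of them:

```latex
L(x, c, l, g) = \frac{1}{N}\bigl(L_{\mathrm{conf}}(x, c) + \alpha\, L_{\mathrm{loc}}(x, l, g)\bigr)

L_{\mathrm{loc}}(x, l, g) = \sum_{i \in \mathrm{Pos}}^{N} \sum_{m \in \{cx,\, cy,\, w,\, h\}}
    x_{ij}^{k}\, \mathrm{smooth}_{L1}\!\bigl(l_i^{m} - \hat{g}_j^{m}\bigr)

L_{\mathrm{conf}}(x, c) = -\sum_{i \in \mathrm{Pos}}^{N} x_{ij}^{p} \log\bigl(\hat{c}_i^{p}\bigr)
    - \sum_{i \in \mathrm{Neg}} \log\bigl(\hat{c}_i^{0}\bigr),
\qquad
\hat{c}_i^{p} = \frac{\exp\bigl(c_i^{p}\bigr)}{\sum_{p} \exp\bigl(c_i^{p}\bigr)}
```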
S103: capturing information of a suspected unmanned aerial vehicle target by a radar to obtain a target radar map, and obtaining an image containing the suspected unmanned aerial vehicle target through an RGB (red, green and blue) camera according to the target radar map;
firstly, the radar monitors in real time any suspicious target intruding within a 2 km low-altitude range; when the suspicious target enters the 1 km range, the target communication signal of the unmanned aerial vehicle is detected and a target radar map is obtained, containing information such as the target position, height and speed. The pan-tilt head of the RGB camera equipment is then rotated and the equipment is zoomed and focused to obtain a picture containing the suspected unmanned aerial vehicle target.
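How the radar cue could drive the camera pan-tilt head is illustrated by the following sketch (Python; the east/north/up coordinate convention and the function name are assumptions for illustration, not specified by the patent):

```python
import math

def radar_to_pan_tilt(east_m, north_m, up_m):
    """Convert a radar target position, given in metres east/north/up relative
    to the camera (an assumed convention), into pan and tilt angles in degrees."""
    pan = math.degrees(math.atan2(east_m, north_m))       # azimuth measured from north
    ground_range = math.hypot(east_m, north_m)
    tilt = math.degrees(math.atan2(up_m, ground_range))   # elevation above the horizon
    return pan, tilt

# Example: a target 600 m north, 150 m east and 120 m above the camera
print(radar_to_pan_tilt(150.0, 600.0, 120.0))  # roughly (14.0, 11.0) degrees
```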
S104: inputting an image containing a suspected unmanned aerial vehicle target into the trained unmanned aerial vehicle target detection model, performing unmanned aerial vehicle target detection to obtain the unmanned aerial vehicle target position, and acquiring an unmanned aerial vehicle trajectory sequence from a plurality of unmanned aerial vehicle target positions;
S105: inputting the obtained unmanned aerial vehicle trajectory sequence into an LSTM structure to obtain an unmanned aerial vehicle trajectory prediction output sequence;
Specifically, inputting the obtained unmanned aerial vehicle trajectory sequence into the LSTM structure to obtain an unmanned aerial vehicle trajectory prediction output sequence includes:
inputting the unmanned aerial vehicle trajectory sequence at time t into the LSTM structure (FIG. 4 is a structure diagram of the LSTM structure provided by the embodiment of the invention); the input gate processes it through a sigmoid function, combining the output of the LSTM unit at the previous time node with the input value of the current time node, passing through the set weight parameter matrix of the input gate and adding the threshold value of the input gate;
here the unmanned aerial vehicle trajectory sequence at time t is e_t = [e_1, e_2, e_3, ..., e_n];
i_t = σ(W_xi e_t + W_hi h_{t-1} + b_i);
where b_i is the threshold of the input gate, W_xi is the weight parameter matrix of the input gate, h_{t-1} is the output of the hidden-layer cell at the previous time step, W_hi is the weight parameter matrix applied to that previous output in the input gate, σ is the sigmoid activation function, and i_t is the input gate;
the forgetting gate takes the unmanned aerial vehicle trajectory sequence at time t as input and adds the output of the hidden layer at the previous moment;
f_t = σ(W_xf e_t + W_hf h_{t-1} + b_f);
where b_f is the threshold of the forgetting gate, W_xf is the weight parameter matrix of the forgetting gate, W_hf is the weight parameter matrix applied to the previous hidden-layer output in the forgetting gate, and f_t is the forgetting gate;
the output layer receives the data generated by hidden-layer training to obtain the unmanned aerial vehicle trajectory prediction output sequence:
O_t = σ(W_xo e_t + W_ho h_{t-1} + b_o);
where b_o is the threshold of the output gate, W_xo is the weight parameter matrix of the output gate, W_ho is the weight parameter matrix applied to the previous hidden-layer output in the output gate, and O_t is the output gate;
y_t = σ(W_hy h_t + b_y); h_t = O_t · tanh(f_t)
where y_t is the trajectory prediction output sequence, W_hy is the prediction weight matrix, b_y is the prediction threshold, · denotes element-wise multiplication, and tanh is the hyperbolic tangent activation function.
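A compact NumPy sketch of one prediction step, following the gate equations exactly as written above (the parameter dictionary and dimensions are illustrative; note that, as in the text, the hidden output is formed from the output and forgetting gates rather than from a separate cell state):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(e_t, h_prev, p):
    """One step of the trajectory predictor, using the gate equations of the text.
    p holds the weight matrices W_* and thresholds b_* named in the description."""
    i_t = sigmoid(p["W_xi"] @ e_t + p["W_hi"] @ h_prev + p["b_i"])  # input gate (defined in the text, not used further in the written equations)
    f_t = sigmoid(p["W_xf"] @ e_t + p["W_hf"] @ h_prev + p["b_f"])  # forgetting gate
    o_t = sigmoid(p["W_xo"] @ e_t + p["W_ho"] @ h_prev + p["b_o"])  # output gate
    h_t = o_t * np.tanh(f_t)                                        # hidden output, as written
    y_t = sigmoid(p["W_hy"] @ h_t + p["b_y"])                       # trajectory prediction output
    return y_t, h_t
```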
S106: obtaining the longitude and latitude of the trajectory point of the unmanned aerial vehicle at time t according to the unmanned aerial vehicle trajectory prediction output sequence, sending a defense speed vector instruction at that point, and synthesizing the defense speed vector instruction with the original speed vector after the unmanned aerial vehicle receives it, to realize unmanned aerial vehicle defense.
The defense speed vector instruction is sent at the fixed point; after the unmanned aerial vehicle receives the defense speed vector instruction, it is synthesized with the original speed vector, i.e. the original flight speed vector becomes the synthesized speed vector, so that the heading and speed of the unmanned aerial vehicle are deflected away from the original course, and unmanned aerial vehicle defense is realized.
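The vector synthesis itself is simple addition of the two velocity vectors; a small illustrative sketch follows (Python; the 3-D east/north/up representation is an assumption):

```python
import numpy as np

def apply_defense_command(original_velocity, defense_velocity):
    """Synthesise the defence speed vector instruction with the drone's original
    speed vector; the resultant vector becomes the new flight velocity,
    deflecting heading and speed away from the original course."""
    return np.asarray(original_velocity, float) + np.asarray(defense_velocity, float)

# Example: a drone flying north at 10 m/s receives an eastward 6 m/s defence command
print(apply_defense_command([0.0, 10.0, 0.0], [6.0, 0.0, 0.0]))  # [ 6. 10.  0.]
```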
Fig. 5 is an architecture diagram of an unmanned aerial vehicle defense system based on a detection technology according to an embodiment of the present invention; the system comprises the following units:
training set acquisition unit 501: a training set acquisition unit acquires an unmanned aerial vehicle target image data set, and enhances the data set through background pixel replacement to obtain an enhanced unmanned aerial vehicle target image data set as a training set;
specifically, an unmanned aerial vehicle target image data set is obtained, and the data set is enhanced through background pixel replacement, where the background pixel replacement specifically is:
firstly, extracting the unmanned aerial vehicle target from an image containing the unmanned aerial vehicle target, then performing background pixel replacement using sky background pixels without the unmanned aerial vehicle target, randomly rotating by a certain angle, and smoothing the image by a Gaussian filtering method to obtain a new data set. Fig. 2 is a schematic diagram of the background pixel replacement method according to an embodiment of the invention. Compared with a real unmanned aerial vehicle target image, the unmanned aerial vehicle target image generated after background pixel replacement has high visual similarity and shares similar characteristics with the real image; this target migration method increases the diversity of the data set and improves the generalization capability of the model.
Model training unit 502: inputting the training set into an unmanned aerial vehicle target detection model for training to obtain a trained unmanned aerial vehicle target detection model; the unmanned aerial vehicle target detection model takes the VGG16 structure as its basic backbone network and obtains feature maps of different scales by selecting layers within VGG16 and adding several convolution layers behind the network;
the target detection SSD model takes a VGG16 structure as a basic backbone network, and predicts a target by selecting in the VGG16 and adding more convolution layers behind the network to obtain feature maps with different scales. The method comprises the steps that a 300 x 300 image is input by a model, 6 feature graphs with different sizes are selected through a convolutional neural network, the feature graphs have the sizes of (38,38), (19,19), (10,10), (5,5), (3,3) and (1,1), the number of corresponding prior frames of an anchor point in each layer of feature graph is 4,6,6,6,4,4, and fig. 3 is a flow chart for realizing detection of the unmanned aerial vehicle target detection model provided by the embodiment of the invention; 8732 predicted values are finally output for each type of targets, and a final result is obtained through non-maximum suppression.
Specifically, the training set is input into the unmanned aerial vehicle target detection model for training, where the loss function comprises a positioning loss function and a classification loss function, specifically:
(The positioning loss and classification loss formulas appear as images in the original publication and are not reproduced here; the symbols below follow the standard SSD notation.)
where x_ij^k indicates whether the i-th prediction box is matched to the j-th real box with respect to category k, taking the value 0 or 1; l denotes the predicted box position and size; g denotes the real box position and size; x_ij^p is the match indicator of the prediction box and the real box with respect to category p; c_i^p is the predicted value (confidence) for category p; N is the number of prediction boxes; cx is the abscissa of the starting point of the prediction box, cy the ordinate of the starting point, w the width and h the height of the prediction box; and m ranges over cx, cy, w and h.
The detection image acquisition unit 503: capturing information of a suspected unmanned aerial vehicle target by a radar to obtain a target radar map, and obtaining an image containing the suspected unmanned aerial vehicle target through an RGB (red, green and blue) camera according to the target radar map;
firstly, the radar monitors in real time any suspicious target intruding within a 2 km low-altitude range; when the suspicious target enters the 1 km range, the target communication signal of the unmanned aerial vehicle is detected and a target radar map is obtained, containing information such as the target position, height and speed. The pan-tilt head of the RGB camera equipment is then rotated and the equipment is zoomed and focused to obtain a picture containing the suspected unmanned aerial vehicle target.
Trajectory sequence acquisition unit 504: inputting an image containing a suspected unmanned aerial vehicle target into the trained unmanned aerial vehicle target detection model, performing unmanned aerial vehicle target detection to obtain the unmanned aerial vehicle target position, and acquiring an unmanned aerial vehicle trajectory sequence from a plurality of unmanned aerial vehicle target positions;
the trajectory prediction unit 505: inputting the obtained unmanned aerial vehicle track sequence into an LSTM structure to obtain an unmanned aerial vehicle track prediction output sequence;
Specifically, inputting the acquired unmanned aerial vehicle trajectory sequence into the LSTM structure to obtain an unmanned aerial vehicle trajectory prediction output sequence includes:
inputting the unmanned aerial vehicle trajectory sequence at time t into the LSTM structure (FIG. 4 is a structure diagram of the LSTM structure provided by the embodiment of the invention); the input gate processes it through a sigmoid function, combining the output of the LSTM unit at the previous time node with the input value of the current time node, passing through the set weight parameter matrix of the input gate and adding the threshold value of the input gate;
here the unmanned aerial vehicle trajectory sequence at time t is e_t = [e_1, e_2, e_3, ..., e_n];
i_t = σ(W_xi e_t + W_hi h_{t-1} + b_i);
where b_i is the threshold of the input gate, W_xi is the weight parameter matrix of the input gate, h_{t-1} is the output of the hidden-layer cell at the previous time step, W_hi is the weight parameter matrix applied to that previous output in the input gate, σ is the sigmoid activation function, and i_t is the input gate;
the forgetting gate takes the unmanned aerial vehicle trajectory sequence at time t as input and adds the output of the hidden layer at the previous moment;
f_t = σ(W_xf e_t + W_hf h_{t-1} + b_f);
where b_f is the threshold of the forgetting gate, W_xf is the weight parameter matrix of the forgetting gate, W_hf is the weight parameter matrix applied to the previous hidden-layer output in the forgetting gate, and f_t is the forgetting gate;
the output layer receives the data generated by hidden-layer training to obtain the unmanned aerial vehicle trajectory prediction output sequence:
O_t = σ(W_xo e_t + W_ho h_{t-1} + b_o);
where b_o is the threshold of the output gate, W_xo is the weight parameter matrix of the output gate, W_ho is the weight parameter matrix applied to the previous hidden-layer output in the output gate, and O_t is the output gate;
y_t = σ(W_hy h_t + b_y); h_t = O_t · tanh(f_t)
where y_t is the trajectory prediction output sequence, W_hy is the prediction weight matrix, b_y is the prediction threshold, · denotes element-wise multiplication, and tanh is the hyperbolic tangent activation function.
The defense unit 506: obtaining the longitude and latitude of the trajectory point of the unmanned aerial vehicle at time t according to the unmanned aerial vehicle trajectory prediction output sequence, sending a defense speed vector instruction at that point, and synthesizing the defense speed vector instruction with the original speed vector after the unmanned aerial vehicle receives it, to realize unmanned aerial vehicle defense.
The defense speed vector instruction is sent at the fixed point; after the unmanned aerial vehicle receives the defense speed vector instruction, it is synthesized with the original speed vector, i.e. the original flight speed vector becomes the synthesized speed vector, so that the heading and speed of the unmanned aerial vehicle are deflected away from the original course, and unmanned aerial vehicle defense is realized.
As shown in fig. 6, an electronic device 600 according to an embodiment of the present invention includes a memory 610, a processor 620, and a computer program 611 stored in the memory 610 and operable on the processor 620; the processor 620 executes the computer program 611 to implement the unmanned aerial vehicle defense method based on the detection technology provided by the embodiment of the present invention.
Since the electronic device described in this embodiment is a device used to implement the method of the embodiment of the present invention, a person skilled in the art can, based on the method described herein, understand the specific implementation of the electronic device of this embodiment and its various variations; how the electronic device implements the method of the embodiment of the present invention is therefore not described in detail here. Any device used by a person skilled in the art to implement the method of the embodiment of the present invention falls within the intended protection scope of the present invention.
Referring to fig. 7, fig. 7 is a schematic diagram illustrating an embodiment of a computer-readable storage medium according to the present invention.
As shown in fig. 7, the present embodiment provides a computer-readable storage medium 700, on which a computer program 711 is stored, and when executed by a processor, the computer program 711 implements a method for defending a drone based on a detection technology, according to the embodiment of the present invention;
it should be noted that, in the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The invention provides an unmanned aerial vehicle defense method based on a detection technology, comprising: acquiring an unmanned aerial vehicle target image data set and enhancing the data set through background pixel replacement to obtain an enhanced unmanned aerial vehicle target image data set serving as a training set; inputting the training set into an unmanned aerial vehicle target detection model for training to obtain a trained unmanned aerial vehicle target detection model, the model taking the VGG16 structure as its basic backbone network and obtaining feature maps of different scales by selecting layers within VGG16 and adding several convolution layers behind the network; capturing information of a suspected unmanned aerial vehicle target by radar to obtain a target radar map, and obtaining an image containing the suspected unmanned aerial vehicle target through an RGB camera according to the target radar map; inputting the image containing the suspected unmanned aerial vehicle target into the trained unmanned aerial vehicle target detection model, performing unmanned aerial vehicle target detection to obtain the unmanned aerial vehicle target position, and acquiring an unmanned aerial vehicle trajectory sequence from a plurality of unmanned aerial vehicle target positions; inputting the obtained unmanned aerial vehicle trajectory sequence into an LSTM structure to obtain an unmanned aerial vehicle trajectory prediction output sequence; and obtaining the longitude and latitude of the trajectory point of the unmanned aerial vehicle at time t according to the trajectory prediction output sequence, sending a defense speed vector instruction at that point, and synthesizing it with the original speed vector after the unmanned aerial vehicle receives it, to realize unmanned aerial vehicle defense. The method acquires and detects unmanned aerial vehicle target images, obtains the flight trajectory sequence, predicts the future trajectory, and finally realizes defense accurately according to the predicted trajectory.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element. The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description is only an embodiment of the present invention, but the design concept of the present invention is not limited thereto, and any insubstantial modification made by using this design concept constitutes an act infringing the protection scope of the present invention.

Claims (10)

1. An unmanned aerial vehicle defense method based on a detection technology is characterized by comprising the following steps:
acquiring an unmanned aerial vehicle target image data set, and enhancing the data set through background pixel replacement to obtain an enhanced unmanned aerial vehicle target image data set serving as a training set;
inputting the training set into an unmanned aerial vehicle target detection model for training to obtain a trained unmanned aerial vehicle target detection model; the unmanned aerial vehicle target detection model takes the VGG16 structure as its basic backbone network and obtains feature maps of different scales by selecting layers within VGG16 and adding several convolution layers behind the network;
capturing information of a suspected unmanned aerial vehicle target by a radar to obtain a target radar map, and obtaining an image containing the suspected unmanned aerial vehicle target through an RGB (red, green and blue) camera according to the target radar map;
inputting the image containing the suspected unmanned aerial vehicle target into the trained unmanned aerial vehicle target detection model, performing unmanned aerial vehicle target detection to obtain the unmanned aerial vehicle target position, and acquiring an unmanned aerial vehicle trajectory sequence from a plurality of unmanned aerial vehicle target positions;
inputting the obtained unmanned aerial vehicle track sequence into an LSTM structure to obtain an unmanned aerial vehicle track prediction output sequence;
and obtaining the longitude and latitude of the trajectory point of the unmanned aerial vehicle at time t according to the unmanned aerial vehicle trajectory prediction output sequence, sending a defense speed vector instruction at that point, and synthesizing the defense speed vector instruction with the original speed vector after the unmanned aerial vehicle receives it, to realize unmanned aerial vehicle defense.
2. The unmanned aerial vehicle defense method based on detection technology as claimed in claim 1, wherein the unmanned aerial vehicle target image data set is obtained, and the data set is enhanced by background pixel replacement, and the background pixel replacement is specifically:
firstly, extracting the unmanned aerial vehicle target from an image containing the unmanned aerial vehicle target, then performing background pixel replacement using sky background pixels without the unmanned aerial vehicle target, randomly rotating by a certain angle, and smoothing the image by a Gaussian filtering method to obtain a new data set.
3. The method as claimed in claim 1, wherein the training set is input into the unmanned aerial vehicle target detection model for training, and the loss function comprises a positioning loss function and a classification loss function, specifically:
(The positioning loss and classification loss formulas appear as images in the original publication and are not reproduced here; the symbols below follow the standard SSD notation.)
where x_ij^k indicates whether the i-th prediction box is matched to the j-th real box with respect to category k, taking the value 0 or 1; l denotes the predicted box position and size; g denotes the real box position and size; x_ij^p is the match indicator of the prediction box and the real box with respect to category p; c_i^p is the predicted value (confidence) for category p; N is the number of prediction boxes; cx is the abscissa of the starting point of the prediction box, cy the ordinate of the starting point, w the width and h the height of the prediction box; and m ranges over cx, cy, w and h.
4. The method as claimed in claim 1, wherein the step of inputting the acquired trajectory sequence of the drone into an LSTM structure to obtain a predicted output sequence of the trajectory of the drone includes:
inputting the trajectory sequence of the unmanned aerial vehicle at the moment t into an LSTM structure, processing the trajectory sequence through a sigmoid function, combining the output of an LSTM unit in a previous time node and the input value of a current time node, passing through a set weight parameter matrix of an input gate, and adding a threshold value of the input gate;
the forgetting gate takes the unmanned aerial vehicle track sequence at the moment t as input and adds the output of the hidden layer at the previous moment;
and the output layer receives data generated by hidden layer training to obtain an unmanned aerial vehicle track prediction output sequence.
5. An unmanned aerial vehicle defense system based on a detection technology, characterized by comprising:
a training set acquisition unit: a training set acquisition unit acquires an unmanned aerial vehicle target image data set, and enhances the data set through background pixel replacement to obtain an enhanced unmanned aerial vehicle target image data set as a training set;
a model training unit: inputting the training set into an unmanned aerial vehicle target detection model for training to obtain a trained unmanned aerial vehicle target detection model; the unmanned aerial vehicle target detection model takes the VGG16 structure as its basic backbone network and obtains feature maps of different scales by selecting layers within VGG16 and adding several convolution layers behind the network;
a detection image acquisition unit: capturing information of a suspected unmanned aerial vehicle target by a radar, obtaining a target radar map, and obtaining an image containing the suspected unmanned aerial vehicle target through an RGB camera according to the target radar map;
a trajectory sequence acquisition unit: inputting an image containing a suspected unmanned aerial vehicle target into the trained unmanned aerial vehicle target detection model, performing unmanned aerial vehicle target detection to obtain the unmanned aerial vehicle target position, and acquiring an unmanned aerial vehicle trajectory sequence from a plurality of unmanned aerial vehicle target positions;
a trajectory prediction unit: inputting the obtained unmanned aerial vehicle track sequence into an LSTM structure to obtain an unmanned aerial vehicle track prediction output sequence;
a defense unit: obtaining the longitude and latitude of the trajectory point of the unmanned aerial vehicle at time t according to the unmanned aerial vehicle trajectory prediction output sequence, sending a defense speed vector instruction at that point, and synthesizing the defense speed vector instruction with the original speed vector after the unmanned aerial vehicle receives it, to realize unmanned aerial vehicle defense.
6. The unmanned aerial vehicle defense system based on detection technology as claimed in claim 1, wherein in the training set acquisition unit, an unmanned aerial vehicle target image data set is acquired, and the data set is enhanced by background pixel replacement, and the background pixel replacement is specifically:
firstly, extracting the unmanned aerial vehicle target from an image containing the unmanned aerial vehicle target, then performing background pixel replacement using sky background pixels without the unmanned aerial vehicle target, randomly rotating by a certain angle, and smoothing the image by a Gaussian filtering method to obtain a new data set.
7. The system as claimed in claim 1, wherein in the model training unit the training set is input into the unmanned aerial vehicle target detection model for training, and the loss function comprises a positioning loss function and a classification loss function, specifically:
(The positioning loss and classification loss formulas appear as images in the original publication and are not reproduced here; the symbols below follow the standard SSD notation.)
where x_ij^k indicates whether the i-th prediction box is matched to the j-th real box with respect to category k, taking the value 0 or 1; l denotes the predicted box position and size; g denotes the real box position and size; x_ij^p is the match indicator of the prediction box and the real box with respect to category p; c_i^p is the predicted value (confidence) for category p; N is the number of prediction boxes; cx is the abscissa of the starting point of the prediction box, cy the ordinate of the starting point, w the width and h the height of the prediction box; and m ranges over cx, cy, w and h.
8. The system of claim 1, wherein the trajectory prediction unit is configured to input the acquired trajectory sequence of the drone into an LSTM structure to obtain a predicted output sequence of the drone trajectory, and specifically includes:
inputting the trajectory sequence of the unmanned aerial vehicle at the moment t into an LSTM structure, processing the trajectory sequence through a sigmoid function, combining the output of an LSTM unit in a previous time node and the input value of a current time node, passing through a set weight parameter matrix of an input gate, and adding a threshold value of the input gate;
the forgetting gate takes the unmanned aerial vehicle track sequence at the moment t as input, and adds the output of the hidden layer at the last moment;
and the output layer receives data generated by hidden layer training to obtain an unmanned aerial vehicle trajectory prediction output sequence.
9. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, wherein the processor implements the method steps of any of claims 1 to 4 when executing the computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 4.
CN202211082837.6A 2022-09-06 2022-09-06 Unmanned aerial vehicle defense method based on detection technology Active CN115424072B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211082837.6A CN115424072B (en) 2022-09-06 2022-09-06 Unmanned aerial vehicle defense method based on detection technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211082837.6A CN115424072B (en) 2022-09-06 2022-09-06 Unmanned aerial vehicle defense method based on detection technology

Publications (2)

Publication Number Publication Date
CN115424072A true CN115424072A (en) 2022-12-02
CN115424072B CN115424072B (en) 2024-02-27

Family

ID=84202700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211082837.6A Active CN115424072B (en) 2022-09-06 2022-09-06 Unmanned aerial vehicle defense method based on detection technology

Country Status (1)

Country Link
CN (1) CN115424072B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635793A (en) * 2019-01-31 2019-04-16 南京邮电大学 A kind of unmanned pedestrian track prediction technique based on convolutional neural networks
CN111832509A (en) * 2020-07-21 2020-10-27 中国人民解放军国防科技大学 Unmanned aerial vehicle weak and small target detection method based on space-time attention mechanism
CN112068111A (en) * 2020-08-13 2020-12-11 中国人民解放军海军工程大学 Unmanned aerial vehicle target detection method based on multi-sensor information fusion
CN113569650A (en) * 2021-06-29 2021-10-29 上海红檀智能科技有限公司 Unmanned aerial vehicle autonomous inspection positioning method based on electric power tower label identification

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘佳铭: "基于深度卷积神经网络的无人机识别方法研究", 《舰船电子工程》, pages 22 - 26 *
杨星鑫: "基于LSTM 的无人机轨迹识别技术研究", 《研究与开发》, pages 210 - 195 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115953704A (en) * 2023-01-18 2023-04-11 北京理工大学 Unmanned aerial vehicle detection method
CN115953704B (en) * 2023-01-18 2023-10-03 北京理工大学 Unmanned aerial vehicle detection method
CN115953727A (en) * 2023-03-15 2023-04-11 浙江天行健水务有限公司 Floc settling rate detection method and system, electronic equipment and medium
CN115953727B (en) * 2023-03-15 2023-06-09 浙江天行健水务有限公司 Method, system, electronic equipment and medium for detecting floc sedimentation rate

Also Published As

Publication number Publication date
CN115424072B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
CN115424072B (en) Unmanned aerial vehicle defense method based on detection technology
CN109871763B (en) Specific target tracking method based on YOLO
CN108885699A (en) Character identifying method, device, storage medium and electronic equipment
CN111598182B (en) Method, device, equipment and medium for training neural network and image recognition
CN111723693B (en) Crowd counting method based on small sample learning
CN111783551B (en) Countermeasure sample defense method based on Bayesian convolutional neural network
CN112068111A (en) Unmanned aerial vehicle target detection method based on multi-sensor information fusion
KR102301631B1 (en) Method for integrating driving images acquired from vehicles performing cooperative driving and driving image integrating device using same
CN110136162B (en) Unmanned aerial vehicle visual angle remote sensing target tracking method and device
CN109543647A (en) A kind of road abnormality recognition method, device, equipment and medium
Xie et al. Adaptive switching spatial-temporal fusion detection for remote flying drones
Khan et al. Safespace mfnet: Precise and efficient multifeature drone detection network
CN115641507A (en) Remote sensing image small-scale surface target detection method based on self-adaptive multi-level fusion
Xu et al. COCO-Net: A dual-supervised network with unified ROI-loss for low-resolution ship detection from optical satellite image sequences
CN111260687A (en) Aerial video target tracking method based on semantic perception network and related filtering
Hashemi et al. Improving transferability of generated universal adversarial perturbations for image classification and segmentation
Balachandran et al. A novel approach to detect unmanned aerial vehicle using Pix2Pix generative adversarial network
Misbah et al. Tf-net: Deep learning empowered tiny feature network for night-time uav detection
CN115630361A (en) Attention distillation-based federal learning backdoor defense method
CN115758337A (en) Back door real-time monitoring method based on timing diagram convolutional network, electronic equipment and medium
Zhao et al. Deep learning-based laser and infrared composite imaging for armor target identification and segmentation in complex battlefield environments
CN115393655A (en) Method for detecting industrial carrier loader based on YOLOv5s network model
CN110705334A (en) Target tracking method, device, equipment and medium
Wu et al. Research on asphalt pavement disease detection based on improved YOLOv5s
CN109669180B (en) Continuous wave radar unmanned aerial vehicle detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant