CN110490155B - Method for detecting unmanned aerial vehicle in no-fly airspace - Google Patents

Method for detecting unmanned aerial vehicle in no-fly airspace

Info

Publication number
CN110490155B
CN110490155B
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
target
prediction
loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910782216.0A
Other languages
Chinese (zh)
Other versions
CN110490155A (en)
Inventor
叶润
闫斌
甘雨涛
青辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Sichuan Agricultural University
Original Assignee
University of Electronic Science and Technology of China
Sichuan Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China and Sichuan Agricultural University
Priority to CN201910782216.0A
Publication of CN110490155A
Application granted
Publication of CN110490155B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention belongs to the technical field of unmanned aerial vehicle detection in no-fly airspace, and particularly relates to a method for detecting unmanned aerial vehicles in no-fly airspace. The invention performs real-time and accurate detection of unmanned aerial vehicles in no-fly airspace, with the aim of effectively reducing unauthorized 'black flights' by detecting drone flight. At the same time, drones flying in the airspace can be found more quickly and accurately, countermeasures can be deployed sooner, the losses caused by 'black flights' are reduced as much as possible, and the probability of safety accidents caused by drones is lowered. The detection results of the method locate the drone target, accurately identify the target and its approximate position, and the algorithm runs in real time, leaving considerable reaction time for dealing with unauthorized drone flights promptly.

Description

Method for detecting unmanned aerial vehicle in no-fly airspace
Technical Field
The invention belongs to the technical field of unmanned aerial vehicle detection in no-fly airspace, and particularly relates to a method for detecting unmanned aerial vehicles in no-fly airspace.
Background
In recent years, unmanned aerial vehicles have been widely applied in many industries and have brought great convenience. At the same time, negative phenomena have appeared: safety accidents caused by unauthorized 'black flights' of drones have occurred repeatedly across the country and around the world, for example drones disturbing air traffic and trespassing into sensitive areas, seriously threatening national defense and public safety. This directly exposes defects and loopholes in drone supervision technology. In order to monitor drone flight effectively and reduce safety accidents caused by 'black flights', the invention provides a new method for detecting unmanned aerial vehicles in no-fly airspace. At present, small drone targets at long range are difficult to detect and are easily confused with birds or other similar objects, while detection must still run in real time. The method therefore needs to accurately distinguish distant small targets from interfering objects while meeting real-time requirements.
Disclosure of Invention
The invention provides a brand-new method for detecting unmanned aerial vehicles in no-fly airspace; the whole method realizes drone detection with deep learning. The method is built on the YOLOv3 target detection model, and accurate, real-time drone detection is achieved by effectively improving the detection framework of this base model. The method mainly comprises four parts: sample acquisition and preprocessing, the overall network structure, prediction-result processing, and loss-function calculation.
1. Sample acquisition and preprocessing
Samples can be pictures downloaded from the internet or pictures of drones collected directly; the acquired pictures differ in background, size and style. The acquired pictures are then annotated: the target is labeled with the rectangular box commonly used in target detection, and the label data are stored. Before an image is fed into the network it is preprocessed; preprocessing mainly includes image cropping, scaling, flipping, shifting, brightness adjustment and noise addition, so that the input image is fixed in size, for example 416 × 416, and the label data are adjusted accordingly. This greatly increases the number and diversity of training samples, improves the robustness of the model, and gives the network better generalization on more complex images.
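As a concrete illustration of this step (not part of the claimed method), the minimal Python sketch below augments one image and its annotation boxes and fixes the input size at 416 × 416; the flip probability, brightness range and noise level are illustrative assumptions rather than values specified by the invention.

```python
import cv2
import numpy as np

def preprocess_train(image, boxes, out_size=416):
    """Augment a training image and its boxes, then resize to the fixed network input size.

    image: HxWx3 uint8 array; boxes: Nx4 float array of (x1, y1, x2, y2) pixel coordinates.
    """
    h, w = image.shape[:2]
    boxes = boxes.astype(np.float64).copy()

    # Random horizontal flip; mirror the x-coordinates of the boxes as well.
    if np.random.rand() < 0.5:
        image = image[:, ::-1, :].copy()
        boxes[:, [0, 2]] = w - boxes[:, [2, 0]]

    # Random brightness adjustment followed by additive Gaussian noise (assumed ranges).
    image = image.astype(np.float32) * np.random.uniform(0.7, 1.3)
    image = np.clip(image + np.random.normal(0.0, 5.0, image.shape), 0, 255).astype(np.uint8)

    # Scale to the fixed input size and rescale the boxes accordingly.
    image = cv2.resize(image, (out_size, out_size))
    boxes *= np.array([out_size / w, out_size / h, out_size / w, out_size / h])
    return image, boxes
```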
2. Overall network structure
After the preprocessed input image is obtained, the input image is input into a network for processing. The whole network mainly comprises three parts, namely a backbone network, a feature fusion network and a prediction network.
2.1 Backbone network
The backbone network extracts features from the input image to obtain more complex features of the target. Its structure is shown in the backbone-network module of FIG. 1; the network uses darknet53 as a template and mainly comprises 52 convolutional layers. The input of the network is the preprocessed image, which passes through the 52 convolutional layers in sequence; the feature maps of the 26th, 43rd and 52nd layers are selected as the three feature maps for feature fusion in the next step. The sizes of these three feature maps are 52 × 52, 26 × 26 and 13 × 13, respectively.
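The full backbone follows darknet53 with 52 convolutional layers; the sketch below is only a heavily reduced stand-in showing the basic conv + BN + LeakyReLU unit and the three tap points at 52 × 52, 26 × 26 and 13 × 13. The layer counts and channel widths here are assumptions for illustration, not the layout of the invention's backbone.

```python
import torch
import torch.nn as nn

def conv_bn_leaky(c_in, c_out, k, s):
    """Conv + BatchNorm + LeakyReLU, the basic unit used in darknet-style backbones."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, s, padding=k // 2, bias=False),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.1),
    )

class TinyBackbone(nn.Module):
    """Downsampling skeleton only: a 416x416 input yields 52x52, 26x26 and 13x13 feature maps."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(            # 416 -> 52
            conv_bn_leaky(3, 32, 3, 1),
            conv_bn_leaky(32, 64, 3, 2),        # 208
            conv_bn_leaky(64, 128, 3, 2),       # 104
            conv_bn_leaky(128, 256, 3, 2),      # 52
        )
        self.stage2 = conv_bn_leaky(256, 512, 3, 2)    # 26
        self.stage3 = conv_bn_leaky(512, 1024, 3, 2)   # 13

    def forward(self, x):
        f52 = self.stage1(x)
        f26 = self.stage2(f52)
        f13 = self.stage3(f26)
        return f52, f26, f13
```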
2.2 Feature fusion network
The feature fusion part fuses the three feature maps obtained from the backbone network so that the network can adapt to targets of different sizes. Its block diagram is shown in the feature-fusion module of FIG. 1. The network first processes the 13 × 13 feature map and outputs a 13 × 13 fused feature map after 5 convolutions. The 13 × 13 fused map is then upsampled to 26 × 26, concatenated (matrix connection) with the 26 × 26 feature map from the backbone network, and passed through 5 convolutional layers to obtain a 26 × 26 fused feature map. This map is in turn upsampled to 52 × 52, concatenated with the 52 × 52 feature map from the backbone network, and passed through 5 convolutional layers to obtain a 52 × 52 fused feature map. Through these operations, three fused feature maps of sizes 52 × 52, 26 × 26 and 13 × 13 are obtained.
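A minimal sketch of this top-down fusion, reusing the (assumed) channel widths of the backbone sketch above and collapsing the '5 convolutions' into a single 1 × 1 convolution for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionNeck(nn.Module):
    """Fuse the 13x13, 26x26 and 52x52 backbone maps top-down (channel widths are assumptions)."""
    def __init__(self, c52=256, c26=512, c13=1024, out=256):
        super().__init__()
        self.proc13 = nn.Conv2d(c13, out, 1)
        self.proc26 = nn.Conv2d(c26 + out, out, 1)
        self.proc52 = nn.Conv2d(c52 + out, out, 1)

    def forward(self, f52, f26, f13):
        p13 = self.proc13(f13)                               # fused 13x13 map
        up26 = F.interpolate(p13, scale_factor=2)            # upsample to 26x26
        p26 = self.proc26(torch.cat([up26, f26], dim=1))     # concatenate with backbone 26x26, then conv
        up52 = F.interpolate(p26, scale_factor=2)            # upsample to 52x52
        p52 = self.proc52(torch.cat([up52, f52], dim=1))     # concatenate with backbone 52x52, then conv
        return p52, p26, p13
```

Called as FusionNeck()(f52, f26, f13) on the three backbone maps, it returns fused maps of sizes 52 × 52, 26 × 26 and 13 × 13.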
2.3 Prediction network
The structure of the prediction network is shown in the prediction module of FIG. 1. The network consists of only three groups of two convolutions. The input of each group is a fused feature map, and the prediction comprises three parts: the position regression values of the target, the confidence of the target, and the category of the target. The prediction tensors of the three branches are 13 × 13 × 3 × (4+1+1), 26 × 26 × 3 × (4+1+1) and 52 × 52 × 3 × (4+1+1), where 13, 26 and 52 are the feature-map sizes, 3 is the number of prior boxes, 4 is the number of position regression values, the middle 1 is the confidence of the target, and the last 1 is the category, which is 1 because there is only one class of unmanned aerial vehicle.
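A sketch of one prediction branch and the tensor shape it produces; only the output channel count 3 × (4 + 1 + 1) = 18 follows from the text, while the hidden width is an assumption.

```python
import torch
import torch.nn as nn

def make_head(c_in, num_priors=3, num_attrs=4 + 1 + 1):
    """One prediction branch: two convolutions ending in num_priors * num_attrs channels."""
    return nn.Sequential(
        nn.Conv2d(c_in, 2 * c_in, 3, padding=1),
        nn.LeakyReLU(0.1),
        nn.Conv2d(2 * c_in, num_priors * num_attrs, 1),
    )

# With 256-channel fused maps (the assumption used in the fusion sketch above):
head13 = make_head(256)
out13 = head13(torch.zeros(1, 256, 13, 13))
print(out13.shape)   # torch.Size([1, 18, 13, 13]) -> later viewed as 13 x 13 x 3 x (4+1+1)
```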
3. Prediction result processing
The result produced by the network cannot directly give the position and category of the target; the desired result is obtained after further processing. This processing mainly comprises position regression and preferred selection. The inputs of this part are the three groups of predicted values from the prediction network, which are processed in turn in the same way.
3.1 Position regression
Because training the network directly on the label data works poorly, an indirect method is adopted. First, rectangular boxes (prior boxes) are predefined: each grid cell of a feature map holds 3 prior boxes of the same sizes, and the prior boxes on different feature maps have different sizes.
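A minimal sketch of such a prior-box layout: each cell of a feature map carries 3 boxes whose sizes depend only on the scale of the map. The concrete widths and heights below are illustrative values borrowed from common YOLOv3 configurations, not the sizes used by the invention.

```python
import numpy as np

# Illustrative per-scale prior sizes (width, height) in input-image pixels; assumptions only.
PRIOR_SIZES = {13: [(116, 90), (156, 198), (373, 326)],   # coarse map -> large priors
               26: [(30, 61), (62, 45), (59, 119)],
               52: [(10, 13), (16, 30), (33, 23)]}         # fine map -> small priors

def grid_priors(grid_size, input_size=416):
    """Return an array of (cx, cy, w, h) priors on the input-image scale for one feature map."""
    stride = input_size / grid_size
    priors = []
    for gy in range(grid_size):
        for gx in range(grid_size):
            for w, h in PRIOR_SIZES[grid_size]:
                priors.append(((gx + 0.5) * stride, (gy + 0.5) * stride, w, h))
    return np.array(priors)

print(grid_priors(13).shape)   # (13 * 13 * 3, 4) = (507, 4)
```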
As shown in FIG. 2, among the 4 position regression values, (tx, ty) represent the translation predictions and (tw, th) represent the scaling predictions; they fine-tune the coordinate position and the width and height of the prior box so that the adjusted rectangular box finally overlaps the annotation box as well as possible. In FIG. 2 the dashed box (Pw, Ph) is the prior box and the solid box (bw, bh) is the predicted rectangular box.
In order to further improve the prediction accuracy, the prediction method is optimized on the basis of YOLOv3: the number of prediction reference points is increased fourfold and the fine-tuning proceeds in four directions, which helps achieve better overlap with the annotation box. As shown in FIG. 3, the dashed box (Pw, Ph) is the prior box and the solid box (bw, bh) is the predicted box; the left diagram illustrates the original algorithm and the right diagram the improved algorithm. Four corner points (α, β, γ, δ) are added to each grid cell. The hyperbolic tangent function tanh x is selected to map the translation predictions (tx, ty) into the range (-1, 1). Adding the four corner points predicts four sets of rectangular boxes (bx, by, bw, bh), given by the formulas below, where (tx, ty, tw, th) are the predicted values, (cx, cy) is the offset between the upper-left corner of the grid cell and the upper-left corner of the feature map, (pw, ph) are the width and height of the prior box, and $(b_w^i, b_h^i)$ are the width and height of the prediction box, i ∈ (α, β, γ, δ):
the alpha corner point:
$$b_x^{\alpha} = \tanh(t_x) + c_x,\qquad b_y^{\alpha} = \tanh(t_y) + c_y,\qquad b_w^{\alpha} = p_w e^{t_w},\qquad b_h^{\alpha} = p_h e^{t_h}$$
the beta corner point:
$$b_x^{\beta} = \tanh(t_x) + c_x + 1,\qquad b_y^{\beta} = \tanh(t_y) + c_y,\qquad b_w^{\beta} = p_w e^{t_w},\qquad b_h^{\beta} = p_h e^{t_h}$$
gamma corner point:
$$b_x^{\gamma} = \tanh(t_x) + c_x,\qquad b_y^{\gamma} = \tanh(t_y) + c_y + 1,\qquad b_w^{\gamma} = p_w e^{t_w},\qquad b_h^{\gamma} = p_h e^{t_h}$$
delta corner point:
$$b_x^{\delta} = \tanh(t_x) + c_x + 1,\qquad b_y^{\delta} = \tanh(t_y) + c_y + 1,\qquad b_w^{\delta} = p_w e^{t_w},\qquad b_h^{\delta} = p_h e^{t_h}$$

Taking the 13 × 13 feature map as an example, 13 × 13 × 3 rectangular boxes can be obtained through the above regression formulas. The sizes of these boxes are expressed on the scale of the feature map, so they must first be mapped to the 416 × 416 size of the network input and finally regressed back to the size of the original image.
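The sketch below decodes one raw prediction of one prior box into the four corner-referenced boxes and maps them back to the 416 × 416 input, following the reconstructed formulas above. Which (+0/+1) offset belongs to which Greek-letter corner, and the assumption that prior sizes are expressed in feature-map cells, are illustrative choices rather than details fixed by the text.

```python
import numpy as np

def decode_cell(t, cx, cy, pw, ph, stride):
    """Decode raw values t = (tx, ty, tw, th) for one prior box of one grid cell into four
    candidate boxes (bx, by, bw, bh), one per corner point, on the network-input scale."""
    tx, ty, tw, th = t
    bw = pw * np.exp(tw)                       # width/height scaling of the prior box
    bh = ph * np.exp(th)
    corners = {"alpha": (0, 0), "beta": (1, 0), "gamma": (0, 1), "delta": (1, 1)}  # assumed mapping
    boxes = {}
    for name, (ox, oy) in corners.items():
        bx = np.tanh(tx) + cx + ox             # translation limited to (-1, 1) around the corner
        by = np.tanh(ty) + cy + oy
        boxes[name] = (bx * stride, by * stride, bw * stride, bh * stride)
    return boxes

# Example: one cell of the 13x13 map (stride 416 / 13 = 32) with a prior of 3x3 cells.
print(decode_cell((0.2, -0.1, 0.05, 0.1), cx=6, cy=6, pw=3.0, ph=3.0, stride=32))
```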
3.2 Preferred selection
The position regression has been handled above; here the confidence and the category are processed. Taking the 13 × 13 × 3 × 6 prediction as an example, a 13 × 13 × 3 × 1 confidence prediction is obtained; from this large number of predictions, only those larger than a given threshold (0.5) are selected. This greatly reduces the number of predictions and makes the result more accurate. No processing is needed for the category, since there is only one class. After this selection there are still many interfering rectangular boxes that can predict the target, differing only in their degree of overlap, so a prediction-box filtering operation is required to remove the repeated prediction boxes.
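A minimal sketch of this confidence-threshold selection; the 0.5 threshold comes from the text, while the example boxes and scores are illustrative.

```python
import numpy as np

def select_confident(boxes, scores, threshold=0.5):
    """Keep only the predictions whose confidence exceeds the given threshold."""
    keep = scores > threshold
    return boxes[keep], scores[keep]

boxes = np.array([[50, 60, 80, 90], [200, 210, 240, 245], [52, 61, 83, 90]], dtype=float)
scores = np.array([0.91, 0.30, 0.85])
print(select_confident(boxes, scores))   # the 0.30 prediction is discarded
```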
3.3 Prediction box filtering
Several predicted bounding boxes may correspond to the same drone target, so the redundant prediction boxes must be filtered out to obtain the best prediction box. This is commonly implemented with the NMS (non-maximum suppression) algorithm. However, several drones may overlap during detection, and plain NMS cannot detect overlapping drone targets simultaneously, so Soft-NMS is chosen as the prediction-box filtering algorithm for drone prediction.
In traditional NMS, when two prediction boxes overlap, the lower-scoring box is simply removed. Soft-NMS does not remove it; instead it replaces the original score with a slightly lower one. The formula is as follows:
$$s_i=\begin{cases}s_i, & \mathrm{IoU}(M,b_i)<N_t\\ s_i\,\big(1-\mathrm{IoU}(M,b_i)\big), & \mathrm{IoU}(M,b_i)\ge N_t\end{cases}$$

When the overlap ratio IoU between an adjacent prediction box bi and the prediction box M exceeds the set threshold Nt, the score of bi is reduced linearly: the closer bi is to M, the more its score is reduced, while boxes with little overlap with M are barely affected.
4. Computation of loss function
Since the convolutional network must be trained, a loss function is needed to drive it toward better predictions. The loss calculation is mainly divided into three aspects: the loss of the position regression values, the loss of the target confidence, and the loss of the target category. If the center point of a ground-truth box falls into a certain grid cell of a feature map, the three prior boxes corresponding to that cell are responsible for predicting the image rectangular box. The overlap of each of the three prior boxes with the ground truth is calculated, and the prior box with the largest overlap is taken as the predicted regression box. When calculating the position-regression loss and the target-category loss, only the prior box with the largest overlap and its category are used, i.e. the constraint $\mathbb{1}_{ija}^{obj}$ in the formula below.
For the confidence loss of the target, besides the confidence loss of the prior box with the largest overlap, the loss of the prior boxes with smaller overlap is also considered, i.e. those prior boxes are treated as not predicting the target, corresponding to the constraint $\mathbb{1}_{ija}^{noobj}$ in the formula below.
The total target-confidence loss is the sum of these two components. The inputs of the loss-calculation part are the three groups of prediction values from the prediction network and the label data (ground truth) of the corresponding image. The total loss formula is as follows:
$$\begin{aligned}
Loss ={}& \lambda_{coord}\sum_{i=0}^{s^{2}}\sum_{j=0}^{B}\sum_{a\in\{\alpha,\beta,\gamma,\delta\}}\mathbb{1}_{ija}^{obj}\Big[(t_{x}-\hat{t}_{x})^{2}+(t_{y}-\hat{t}_{y})^{2}+(t_{w}-\hat{t}_{w})^{2}+(t_{h}-\hat{t}_{h})^{2}\Big]\\
&+\lambda_{obj}\sum_{i=0}^{s^{2}}\sum_{j=0}^{B}\sum_{a}\mathbb{1}_{ija}^{obj}\,\mathrm{BCE}\big(\sigma(t_{0}),\hat{t}_{0}\big)+\lambda_{noobj}\sum_{i=0}^{s^{2}}\sum_{j=0}^{B}\sum_{a}\mathbb{1}_{ija}^{noobj}\,\mathrm{BCE}\big(\sigma(t_{0}),\hat{t}_{0}\big)\\
&+\sum_{i=0}^{s^{2}}\sum_{j=0}^{B}\sum_{a}\mathbb{1}_{ija}^{obj}\,\mathrm{BCE}\big(s,\hat{s}\big)
\end{aligned}$$

In the above formula, $\mathbb{1}_{ija}^{obj}$ indicates that the a-th offset corner point of the j-th prior box of the i-th grid cell is used to predict the drone, and $\mathbb{1}_{ija}^{noobj}$ indicates that it is not used to predict the drone; $s^2$ is the size of the feature map, B is the number of prior boxes, B = 3, and 4 indicates that regression is performed to each of the 4 corner points; (tx, ty, tw, th, t0, s) are the predicted values, $(\hat{t}_x, \hat{t}_y, \hat{t}_w, \hat{t}_h, \hat{t}_0, \hat{s})$ are the corresponding true values, $(\hat{t}_x, \hat{t}_y, \hat{t}_w, \hat{t}_h)$ being the true coordinate values and $\hat{s}$ the true class value; c ∈ {0,1} is the total number of classes, $\sigma(t_0)$ represents the confidence score of the predicted bounding box, and BCE is the binary cross-entropy function. $\lambda_{coord}$ = 1 denotes the weight of the coordinate loss, and $\lambda_{obj}$ = 5 and $\lambda_{noobj}$ = 0.5 denote the loss weights with and without a target, respectively.
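A sketch of this three-part loss with the weights λcoord = 1, λobj = 5 and λnoobj = 0.5 from the text; the tensor layout (..., 6) holding (tx, ty, tw, th, t0, s), the plain squared error for the coordinate term, and the sum reduction are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def drone_detection_loss(pred, target, obj_mask, noobj_mask,
                         lambda_coord=1.0, lambda_obj=5.0, lambda_noobj=0.5):
    """pred/target: (..., 6) tensors of (tx, ty, tw, th, t0, s); obj_mask marks the prior box and
    corner responsible for a ground-truth drone, noobj_mask the remaining ones (both float 0/1)."""
    # Coordinate regression loss, counted only where an object is assigned.
    coord = ((pred[..., :4] - target[..., :4]) ** 2).sum(-1)
    coord_loss = lambda_coord * (coord * obj_mask).sum()

    # Confidence loss: both the responsible and the non-responsible prior boxes contribute.
    conf = F.binary_cross_entropy(torch.sigmoid(pred[..., 4]), obj_mask, reduction="none")
    conf_loss = (lambda_obj * conf * obj_mask + lambda_noobj * conf * noobj_mask).sum()

    # Class loss for the single drone class, counted only where an object is assigned.
    cls = F.binary_cross_entropy(torch.sigmoid(pred[..., 5]), target[..., 5], reduction="none")
    cls_loss = (cls * obj_mask).sum()

    return coord_loss + conf_loss + cls_loss

# Illustrative shapes: batch x 13 x 13 grid x 3 priors x 4 corners x 6 attributes.
pred = torch.randn(2, 13, 13, 3, 4, 6)
target = torch.rand(2, 13, 13, 3, 4, 6)
obj_mask = (torch.rand(2, 13, 13, 3, 4) > 0.98).float()
print(drone_detection_loss(pred, target, obj_mask, 1.0 - obj_mask))
```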
The beneficial effects of the invention are as follows: the method performs real-time and accurate detection of unmanned aerial vehicles in no-fly airspace, with the aim of effectively reducing unauthorized 'black flights' by detecting drone flight. At the same time, drones flying in the airspace can be found more quickly and accurately, countermeasures can be deployed sooner, the losses caused by 'black flights' are reduced as much as possible, and the probability of safety accidents caused by drones is lowered. The detection results of the method locate the drone target, accurately identify the target and its approximate position, and the algorithm runs in real time, leaving considerable reaction time for dealing with unauthorized drone flights promptly.
Drawings
FIG. 1: integrated network architecture
FIG. 2: position and shape regression schematic
FIG. 3: optimized position regression schematic
FIG. 4: overall system flow diagram
FIG. 5: schematic diagram of feature fusion network
FIG. 6: Unmanned aerial vehicle detection effect diagram
Detailed Description
The invention is described in further detail below with reference to the attached drawing figures:
fig. 4 is an overall flowchart, and the technical solution of the present invention is specifically described by the flowchart.
1) Split the original data into two parts, a training set and a test set, at a ratio of 7:3. The training set is used to train the network and the test set is used to test the trained model.
2) Preprocess the training set. Preprocessing operations include image cropping, scaling, flipping, shifting, brightness adjustment, noise addition and standardization, and produce input images of the fixed size 416 × 416. The label data of the images are processed correspondingly. The images are then combined into batches and fed into the network.
3) The feature extraction network in the figure comprises the backbone network and the feature fusion network; it extracts features from the input data for the subsequent prediction network. The feature extraction network produces three feature maps of different sizes: 13 × 13, 26 × 26 and 52 × 52.
4) The prediction network makes predictions on the three feature maps from the feature extraction network, and the prediction results are used differently at different stages. In the training phase the loss is computed between the predicted values and the processed label data; it comprises three parts: rectangular-box (coordinate) loss, confidence loss and category loss. In the inference (prediction) phase the predicted values are mapped to rectangular boxes on the image and processed to obtain the rectangular boxes finally predicted on the original image. Since the prediction is split over three scales, the three results are fused to obtain the final result.
Finally, the model is tested. The input is the test set rather than the training set, and test-set preprocessing only requires standardization and resizing to the fixed size 416 × 416. The data then pass through the feature extraction network, the prediction network and the prediction-result processing in turn to obtain the final result. The final result is compared with the actual label data to compute the performance indices of the model, chiefly the average precision and the detection speed. Since the invention targets the detection of small objects, the accuracy on small targets alone is also evaluated.
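A minimal sketch of the 7:3 split from step 1) and the test-time preprocessing described above (resizing to 416 × 416 plus standardization); the file naming and the division by 255 used as standardization are assumptions.

```python
import random
import cv2
import numpy as np

def split_dataset(sample_paths, train_ratio=0.7, seed=0):
    """Shuffle the annotated samples and split them 7:3 into training and test sets."""
    paths = list(sample_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_ratio)
    return paths[:cut], paths[cut:]

def preprocess_for_test(image, out_size=416):
    """Test-time preprocessing: only resize to the fixed 416x416 input and standardize to [0, 1]."""
    return cv2.resize(image, (out_size, out_size)).astype(np.float32) / 255.0

train_set, test_set = split_dataset([f"uav_{i:04d}.jpg" for i in range(1000)])
print(len(train_set), len(test_set))   # 700 300
```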
The invention provides a new method for detecting unmanned aerial vehicles in no-fly airspace, comprising the overall network architecture, the drone sample-enhancement design, the bounding-box prediction method, the overall loss function, the multi-scale detection design, the prediction-box filtering design and the drone classification prediction; the drone detection model is finally obtained through sample training. The sample-enhancement design enriches the drone sample set and strengthens the robustness of the trained model. The bounding-box prediction method adds four reference corner points, increasing the number of predicted drone bounding boxes and finally yielding more accurate bounding-box positions and aspect ratios. The multi-scale detection design covers both the training stage and the prediction stage and improves multi-scale detection performance and the detection of small drone targets. Finally, the Soft-NMS algorithm is chosen for bounding-box filtering, avoiding the situation where overlapping drones cannot be predicted. Experimental evaluation of the trained drone detection model gives AP50, AP75 and APS values of 1.00, 0.85 and 0.83 respectively, with an inference time of 0.030 s. The proposed drone detection method therefore meets real-time requirements while achieving a high-precision detection effect, and has broad application prospects in the field of drone supervision.

Claims (1)

1. A method for detecting a no-fly airspace unmanned aerial vehicle is characterized by comprising the following steps:
s1, sample acquisition and pretreatment: acquiring and marking a flight image of the unmanned aerial vehicle, marking an unmanned aerial vehicle target by using a horizontal rectangular frame in target detection, and storing tag data; preprocessing an image, including image cutting, zooming, overturning, shifting, brightness adjustment and noise addition, to obtain a sample image with a fixed size;
s2, training an unmanned aerial vehicle recognition network through a deep learning method, wherein the unmanned aerial vehicle recognition network comprises a backbone network, a feature fusion network and a prediction network, and the specific method comprises the following steps:
s21, extracting features of the input sample image through a backbone network, wherein the backbone network takes a darknet53 as a template and comprises 52 convolutional layers, the input sample image is obtained by preprocessing, the sample image passes through the 52 convolutional layers at a time, feature maps of a 26 th layer, a 43 th layer and a 52 th layer are selected from the 52 convolutional layers to serve as feature fusion feature maps, and the three layers of feature maps are 52 × 52,26 × 26 and 13 × 13 respectively;
s22, the feature fusion network is used for fusing the three feature maps obtained by the backbone network, processing the 13 x 13 feature maps firstly, and outputting the 13 x 13 fused feature maps after 5 convolutions; then, the fused feature map with the size of 13 × 13 is subjected to up-sampling to obtain a 26 × 26 sampled feature map, the 26 × 26 sampled feature map is subjected to matrix connection operation with the 26 × 26 feature map obtained by the backbone network and then subjected to 5 convolutional layers to obtain a 26 × 26 fused feature map, the 26 × 26 fused feature map is subjected to up-sampling to obtain a 52 × 52 sampled feature map, the 52 × 52 feature map obtained by the backbone network is subjected to matrix connection operation and then subjected to 5 convolutional layers to obtain a 52 × 52 fused feature map, and three feature-fused feature maps with the sizes of 52 × 52,26 × 26 and 13 × 13 are obtained through the operation;
s23, the prediction network comprises three groups of two convolutions, the input of each group of convolutions is a fused feature map which is output after fusion, and the prediction result comprises three parts which are coordinate regression values (t) of the target respectivelyx,ty,tw,th) Confidence t of the target0And a category of the target; the predicted values of the three groups of networks are respectively 13 × 13 × 3 × (4+1+1),26 × 26 × 3 × (4+1+1),52 × 52 × 3 × (4+1+1), wherein 13,26, 52 represent the sizes of the feature maps, 3 represents the number of rectangular boxes, 4 represents the position regression value, the middle 1 represents the confidence of the target, and the latter 1 represents the category, i.e. only one type of unmanned aerial vehicle;
loss function: the loss function calculation is mainly divided into three aspects, namely loss of a position regression value, loss of a target confidence coefficient and loss of a target category, the central point of a true value is defined to fall into a certain grid of a feature map, three priori frames corresponding to the grid are responsible for predicting a rectangular image frame, overlap degrees of the three priori frames and the true value are calculated respectively at the same time, the priori frame with the largest overlap degree is taken as a predicted regression frame, only the priori frame with the largest overlap degree and the category of the priori frame are used for calculation when the position regression value loss and the target category loss are calculated, for the target confidence coefficient loss, the confidence coefficient loss of the priori frame with the largest overlap degree is considered, the loss of the priori frame with the smaller overlap degree is also considered, namely the priori frame cannot be used for predicting a target, namely a limiting condition is considered, and the total target confidence coefficient loss is the sum of two parts; inputting three groups of predicted values obtained by a prediction network and label data of corresponding images, wherein a loss formula is as follows:
$$\begin{aligned}
Loss ={}& \lambda_{coord}\sum_{i=0}^{s^{2}}\sum_{j=0}^{B}\sum_{a\in\{\alpha,\beta,\gamma,\delta\}}\mathbb{1}_{ija}^{obj}\Big[(t_{x}-\hat{t}_{x})^{2}+(t_{y}-\hat{t}_{y})^{2}+(t_{w}-\hat{t}_{w})^{2}+(t_{h}-\hat{t}_{h})^{2}\Big]\\
&+\lambda_{obj}\sum_{i=0}^{s^{2}}\sum_{j=0}^{B}\sum_{a}\mathbb{1}_{ija}^{obj}\,\mathrm{BCE}\big(\sigma(t_{0}),\hat{t}_{0}\big)+\lambda_{noobj}\sum_{i=0}^{s^{2}}\sum_{j=0}^{B}\sum_{a}\mathbb{1}_{ija}^{noobj}\,\mathrm{BCE}\big(\sigma(t_{0}),\hat{t}_{0}\big)\\
&+\sum_{i=0}^{s^{2}}\sum_{j=0}^{B}\sum_{a}\mathbb{1}_{ija}^{obj}\,\mathrm{BCE}\big(s,\hat{s}\big)
\end{aligned}$$

wherein $\mathbb{1}_{ija}^{obj}$ indicates that the a-th offset corner point of the j-th prior box of the i-th grid cell is used to predict the drone bounding box, and $\mathbb{1}_{ija}^{noobj}$ indicates that the a-th offset corner point of the j-th prior box of the i-th grid cell is not used to predict the drone bounding box; $s^2$ indicates the size of the feature map and B indicates the number of rectangular boxes; (tx, ty, tw, th, t0, s) are the predicted values, s representing the predicted class probability value; $(\hat{t}_x, \hat{t}_y, \hat{t}_w, \hat{t}_h, \hat{t}_0, \hat{s})$ are the corresponding true values, $(\hat{t}_x, \hat{t}_y, \hat{t}_w, \hat{t}_h)$ being the true coordinate values and $\hat{s}$ the true class value; c ∈ {0,1} is the total number of classes, $\sigma(t_0)$ represents the confidence score of the predicted bounding box, and BCE is the binary cross-entropy function; $\lambda_{coord}$ = 1 denotes the weight of the coordinate loss, and $\lambda_{obj}$ = 5 and $\lambda_{noobj}$ = 0.5 denote the loss weights with and without a target, respectively;
s3, predicting the unmanned aerial vehicle images by using the trained unmanned aerial vehicle recognition network to obtain three groups of predicted values;
s4, processing the predicted value to obtain the unmanned aerial vehicle detection result, specifically:
s41, position regression: among 4 position regression values (t)x,ty) Represents the translation prediction value sum (t)w,th) And expressing a scaling predicted value, and performing fine adjustment on the coordinate position and the width and height of the prior frame, wherein the fine adjustment method comprises the following steps: defining four corner points (alpha, beta, lambda and delta) in the grid, and predicting four groups of predicted rectangular frames (b) by adding the four corner pointsx,by,bw,bh):
The alpha corner point:
$$b_x^{\alpha} = \tanh(t_x) + c_x$$
$$b_y^{\alpha} = \tanh(t_y) + c_y$$
$$b_w^{\alpha} = p_w e^{t_w}$$
$$b_h^{\alpha} = p_h e^{t_h}$$
the beta corner point:
$$b_x^{\beta} = \tanh(t_x) + c_x + 1$$
$$b_y^{\beta} = \tanh(t_y) + c_y$$
$$b_w^{\beta} = p_w e^{t_w}$$
$$b_h^{\beta} = p_h e^{t_h}$$
gamma corner point:
$$b_x^{\gamma} = \tanh(t_x) + c_x$$
$$b_y^{\gamma} = \tanh(t_y) + c_y + 1$$
$$b_w^{\gamma} = p_w e^{t_w}$$
$$b_h^{\gamma} = p_h e^{t_h}$$
delta corner point:
$$b_x^{\delta} = \tanh(t_x) + c_x + 1$$
$$b_y^{\delta} = \tanh(t_y) + c_y + 1$$
$$b_w^{\delta} = p_w e^{t_w}$$
$$b_h^{\delta} = p_h e^{t_h}$$
wherein (cx, cy) is the offset between the upper-left corner of the grid cell and the upper-left corner of the feature map, the hyperbolic tangent function tanh x maps the translation predictions (tx, ty) into the range (-1, 1), (pw, ph) are the width and height of the prior box, and $(b_w^i, b_h^i)$ are the width and height of the prediction box, i ∈ (α, β, γ, δ);
s42, selecting preferentially: selecting a predicted value greater than a given threshold;
s43, prediction box filtering: there are still many prediction rectangular frames selected by preference, and there may exist multiple prediction frames corresponding to the same target in the predicted unmanned aerial vehicle prediction frame, and at this time, the redundant prediction frames need to be filtered to obtain a final prediction frame: and removing redundant prediction frames by adopting a Soft-NMS method, and finally obtaining only one prediction frame for each target on the image.
CN201910782216.0A 2019-08-23 2019-08-23 Method for detecting unmanned aerial vehicle in no-fly airspace Active CN110490155B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910782216.0A CN110490155B (en) 2019-08-23 2019-08-23 Method for detecting unmanned aerial vehicle in no-fly airspace

Publications (2)

Publication Number Publication Date
CN110490155A CN110490155A (en) 2019-11-22
CN110490155B true CN110490155B (en) 2022-05-17

Family

ID=68553079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910782216.0A Active CN110490155B (en) 2019-08-23 2019-08-23 Method for detecting unmanned aerial vehicle in no-fly airspace

Country Status (1)

Country Link
CN (1) CN110490155B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111274894A (en) * 2020-01-15 2020-06-12 太原科技大学 Improved YOLOv 3-based method for detecting on-duty state of personnel
CN111832508B (en) * 2020-07-21 2022-04-05 桂林电子科技大学 DIE _ GA-based low-illumination target detection method
CN112597905A (en) * 2020-12-25 2021-04-02 北京环境特性研究所 Unmanned aerial vehicle detection method based on skyline segmentation
CN116389783B (en) * 2023-06-05 2023-08-11 四川农业大学 Live broadcast linkage control method, system, terminal and medium based on unmanned aerial vehicle

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015096806A1 (en) * 2013-12-29 2015-07-02 刘进 Attitude determination, panoramic image generation and target recognition methods for intelligent machine
WO2017167282A1 (en) * 2016-03-31 2017-10-05 纳恩博(北京)科技有限公司 Target tracking method, electronic device, and computer storage medium
CN106846926A (en) * 2017-04-13 2017-06-13 电子科技大学 A kind of no-fly zone unmanned plane method for early warning
CN109002777A (en) * 2018-06-29 2018-12-14 电子科技大学 A kind of infrared small target detection method towards complex scene
CN109255286A (en) * 2018-07-21 2019-01-22 哈尔滨工业大学 A kind of quick detection recognition method of unmanned plane optics based on YOLO deep learning network frame
CN109389086A (en) * 2018-10-09 2019-02-26 北京科技大学 Detect the method and system of unmanned plane silhouette target
CN109598290A (en) * 2018-11-22 2019-04-09 上海交通大学 A kind of image small target detecting method combined based on hierarchical detection
CN109740662A (en) * 2018-12-28 2019-05-10 成都思晗科技股份有限公司 Image object detection method based on YOLO frame
CN109919058A (en) * 2019-02-26 2019-06-21 武汉大学 A kind of multisource video image highest priority rapid detection method based on Yolo V3
CN109753903A (en) * 2019-02-27 2019-05-14 北航(四川)西部国际创新港科技有限公司 A kind of unmanned plane detection method based on deep learning
CN110033050A (en) * 2019-04-18 2019-07-19 杭州电子科技大学 A kind of water surface unmanned boat real-time target detection calculation method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Deep Learning Approach for Car Detection in UAV Imagery; Nassim Ammour et al.; Remote Sensing; 2017-03-27; 1-15 *
Research on target recognition methods for UAV remote sensing images based on deep learning; 祝思君; China Master's Theses Full-text Database, Information Science and Technology; 2019-02-15; I140-191 *
Insulator tracking and ranging algorithm based on correlation filtering; 刘永姣 et al.; Science Technology and Engineering; 2017-09-30; 262-267 *
Research on the legal regulation of civil unmanned aerial vehicles in China; 任一可; China Master's Theses Full-text Database, Social Sciences I; 2019-01-15; G119-158 *
Research on unmanned aerial vehicle early-warning algorithms for no-fly zones; 闫斌 et al.; Application Research of Computers; 2017-08-28; 2651-2658 *

Also Published As

Publication number Publication date
CN110490155A (en) 2019-11-22

Similar Documents

Publication Publication Date Title
CN112818903B (en) Small sample remote sensing image target detection method based on meta-learning and cooperative attention
CN110490155B (en) Method for detecting unmanned aerial vehicle in no-fly airspace
CN107871119B (en) Target detection method based on target space knowledge and two-stage prediction learning
CN111723748B (en) Infrared remote sensing image ship detection method
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN106599773B (en) Deep learning image identification method and system for intelligent driving and terminal equipment
CN110796009A (en) Method and system for detecting marine vessel based on multi-scale convolution neural network model
CN104517103A (en) Traffic sign classification method based on deep neural network
CN113421269A (en) Real-time semantic segmentation method based on double-branch deep convolutional neural network
CN114627052A (en) Infrared image air leakage and liquid leakage detection method and system based on deep learning
CN110969171A (en) Image classification model, method and application based on improved convolutional neural network
CN110807384A (en) Small target detection method and system under low visibility
CN113269133A (en) Unmanned aerial vehicle visual angle video semantic segmentation method based on deep learning
CN113159215A (en) Small target detection and identification method based on fast Rcnn
CN115147745A (en) Small target detection method based on urban unmanned aerial vehicle image
CN116469020A (en) Unmanned aerial vehicle image target detection method based on multiscale and Gaussian Wasserstein distance
CN115439766A (en) Unmanned aerial vehicle target detection method based on improved yolov5
CN115272876A (en) Remote sensing image ship target detection method based on deep learning
CN106971402B (en) SAR image change detection method based on optical assistance
CN116740516A (en) Target detection method and system based on multi-scale fusion feature extraction
CN111160372A (en) Large target identification method based on high-speed convolutional neural network
CN115690410A (en) Semantic segmentation method and system based on feature clustering
Zheng et al. Multiscale Fusion Network for Rural Newly Constructed Building Detection in Unmanned Aerial Vehicle Imagery
CN114565764A (en) Port panorama sensing system based on ship instance segmentation
CN113850783A (en) Sea surface ship detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant