CN113822372A - Unmanned aerial vehicle detection method based on YOLOv5 neural network

Info

Publication number: CN113822372A
Application number: CN202111220550.0A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: unmanned aerial vehicle, feature, neural network, processor
Inventors: Qu Jingyi (屈景怡), Bi Xinjie (毕新杰), Liu Shanliang (刘闪亮), Li Yunlong (李云龙)
Applicant and current assignee: Civil Aviation University of China
Filing and priority date: 2021-10-20
Publication date: 2021-12-21
Legal status: Pending


Classifications

    • G06F18/253: Pattern recognition; analysing; fusion techniques of extracted features
    • G06N3/045: Computing arrangements based on biological models; neural networks; combinations of networks
    • G06N3/08: Computing arrangements based on biological models; neural networks; learning methods


Abstract

The invention provides an unmanned aerial vehicle detection method based on the YOLOv5 neural network, which comprises the following steps: 1) acquiring pictures of unmanned aerial vehicles and preprocessing the images to obtain an unmanned aerial vehicle data set; 2) inputting the data set into a BottleneckCSP backbone network with a Focus layer to obtain several feature maps of different sizes; 3) pooling the feature maps of different sizes, and fusing their feature information using FPN and PAN structures; 4) inputting the fused feature maps into a prediction network, which outputs the target class and position information. Through multi-scale feature fusion, the method improves the accuracy of small-target detection; it not only effectively increases the speed of unmanned aerial vehicle detection but also greatly improves detection accuracy.

Description

Unmanned aerial vehicle detection method based on YOLOv5 neural network
Technical Field
The invention belongs to the technical field of unmanned aerial vehicle detection, and particularly relates to an unmanned aerial vehicle detection method and system based on a YOLOv5 neural network.
Background
Existing unmanned aerial vehicle detection technologies mainly include radio detection, acoustic detection, radar detection, and photoelectric detection, each with its own advantages and disadvantages:
Radio detection equipment identifies and locates an unmanned aerial vehicle by detecting the radio signals it uses to transmit data. It is a passive detection means with a long detection range for remotely controlled unmanned aerial vehicles. However, radio detection equipment is easily disturbed by other external radio signals, and in the complex electromagnetic environment of a civil aviation airport it cannot identify non-cooperative unmanned aerial vehicles or unmanned aerial vehicles remotely controlled over 5G signals.
Acoustic detection exploits the fact that unmanned aerial vehicles of different models sound different, so acoustic detection equipment can identify the type of unmanned aerial vehicle. A multi-directional microphone array captures the distinctive high-frequency motor sound an unmanned aerial vehicle emits while operating; the sound waves are processed and filtered, and specific frequencies are identified to confirm the presence of a nearby unmanned aerial vehicle. Acoustic detection is susceptible to interference from ambient noise and is therefore suited to quieter environments; it tends to fail in urban or otherwise noisy areas, and its effective detection range is very short, for example only 200 m for the product of the US company DroneShield and only 150 m for the product of the Japanese company Alsok.
Target detection based on radar equipment is only slightly affected by weather factors such as cloud, fog, rain, and snow. It has strong penetrating power, works in all weather around the clock, and is suited to long-range detection, reaching 5-7 km for small targets. However, radar may interfere with an airport's existing electronic equipment at close range; in the complex airport environment, echoes from large targets can mix with echoes from small targets such as unmanned aerial vehicles or birds, degrading small-target detection performance. In addition, after radar detects an unmanned aerial vehicle or a bird, no clear picture of the target is available, so a supervisor cannot easily tell from the acquired data whether the target is an unmanned aerial vehicle. Moreover, when several closely spaced targets appear simultaneously, radar cannot correctly determine their number, which undoubtedly leaves a safety hazard for the airport.
The detection range of photoelectric technology can reach 2 km, and its greatest advantage is that it yields a visible-light image of the target. It is strongly affected by adverse conditions such as rain, fog, and snow, since poor visible-light image quality degrades detection and identification of unmanned aerial vehicles to some extent; in good weather, however, photoelectric equipment obtains high-quality visible-light images.
The advantages of radar and photoelectric equipment can be combined: the radar equipment detects the position of the target and sends the position information to the photoelectric equipment, steering it toward the target so that it can acquire a visible-light image. In this way, image information of an unmanned aerial vehicle can be obtained quickly and accurately. Based on the acquired image information, the present application detects and identifies unmanned aerial vehicles in the image, improving both the precision of and the response speed to unmanned aerial vehicle detection.
Disclosure of Invention
In view of this, the present invention aims to provide an unmanned aerial vehicle detection method and system based on the YOLOv5 neural network, enabling fast identification of unmanned aerial vehicles in acquired image information.
To achieve the above purpose, the technical solution of the invention is realized as follows:
in a first aspect, the invention provides an unmanned aerial vehicle detection method based on the YOLOv5 neural network, which comprises the following steps:
1) acquiring pictures of unmanned aerial vehicles and preprocessing the images to obtain an unmanned aerial vehicle data set;
2) inputting the unmanned aerial vehicle data set into a BottleneckCSP backbone network with a Focus layer to obtain several feature maps of different sizes;
3) pooling the feature maps of different sizes, and fusing their feature information using FPN and PAN structures;
4) inputting the fused feature maps into a prediction network and outputting the target class and position information.
Further, in step 1, the image preprocessing is performed as follows:
the number of acquired unmanned aerial vehicle pictures is increased by data augmentation, and the pictures are labeled to obtain the unmanned aerial vehicle data set.
Further, the data augmentation is performed as follows:
the number of picture samples is expanded by image splicing, rotation, and noise addition.
Further, in step 2, the unmanned aerial vehicle data set is processed in the Focus layer as follows:
the input is copied four times and cut by a slicing operation into four 3 × 320 × 320 slices; the four slices are concatenated in depth by tensor concatenation, a convolutional layer with 32 convolution kernels then produces a 32 × 320 × 320 output, and the result is passed to the next convolutional layer.
Further, in step 2, the BottleneckCSP backbone network includes several 1 × 1 and 3 × 3 convolutional layers, each followed by a BN layer and a Mish activation layer.
Further, in step 3, the feature maps of different sizes are pooled using the SPP structure, specifically: pooling and stacking with pooling kernels of sizes 5, 9, and 13.
Further, in step 2, three feature maps of different sizes are obtained;
in step 3, the feature information of the feature maps of different sizes is fused with the FPN and PAN structures as follows:
in the FPN structure, the smallest-scale 20 × 20 feature map is upsampled once and concatenated with the 40 × 40 feature map output by the preceding network stage; the result is upsampled again and concatenated with the 80 × 80 feature map of the preceding stage, yielding the largest-scale 80 × 80 prediction feature map. Then, in the PAN structure, the 80 × 80 feature map is downsampled once and concatenated with the 40 × 40 feature map of the preceding stage, yielding the medium-scale 40 × 40 prediction feature map. Finally, the 40 × 40 feature map is downsampled again and concatenated with the 20 × 20 feature map of the preceding stage, yielding the smallest-scale 20 × 20 prediction feature map.
Further, in step 4, the regression loss function of the prediction network is:
$$L_{\mathrm{GIoU}} = 1 - \mathrm{IoU} + \frac{\left|C \setminus (A \cup B)\right|}{|C|}, \qquad \mathrm{IoU} = \frac{|A \cap B|}{|A \cup B|}$$
where A and B are two arbitrary rectangular boxes and C is the minimum enclosing rectangle of A and B.
In a second aspect, the present invention provides an electronic device, including a processor, and a memory communicatively coupled to the processor and configured to store instructions executable by the processor, wherein: the processor, when executing the instructions, implements the steps of the method for detecting a drone based on the YOLOv5 neural network according to the first aspect.
In a third aspect, the present invention provides a server comprising at least one processor, and a memory communicatively coupled to the processor, the memory storing instructions executable by the at least one processor, wherein: the instructions are executable by the processor to cause the at least one processor to perform the steps of the YOLOv5 neural network-based drone detection method according to the first aspect above.
Compared with the prior art, the unmanned aerial vehicle detection method and system based on the YOLOv5 neural network have the following beneficial effects:
(1) The unmanned aerial vehicle detection method based on the YOLOv5 neural network improves the accuracy of small-target detection through multi-scale feature fusion; it not only effectively increases the speed of unmanned aerial vehicle detection but also greatly improves detection accuracy.
(2) The unmanned aerial vehicle detection method based on the YOLOv5 neural network is simple to train and easy to operate, avoids complex and tedious procedures, and offers high usability.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a network structure diagram of an unmanned aerial vehicle detection method based on the YOLOv5 neural network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a CSP structure according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an SPP module according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of FPN and PAN structures according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the computation of the GIoU loss function according to an embodiment of the present invention;
fig. 6 is a detection result diagram of the method for detecting an unmanned aerial vehicle based on the YOLOv5 neural network according to the embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The present invention will be described in detail below with reference to the accompanying drawings and embodiments.
The first embodiment is as follows:
The embodiment provides an unmanned aerial vehicle detection method based on the YOLOv5 neural network, as shown in fig. 1, comprising the following steps:
1) acquiring pictures of unmanned aerial vehicles and preprocessing the images to obtain an unmanned aerial vehicle data set;
2) inputting the unmanned aerial vehicle data set into a BottleneckCSP backbone network with a Focus layer to obtain several feature maps of different sizes, where the Focus layer is a neural network structure;
3) pooling the feature maps of different sizes, and fusing their feature information using FPN and PAN structures;
4) inputting the fused feature maps into a prediction network and outputting the target class and position information.
In step 1, the image preprocessing is performed as follows:
the number of acquired unmanned aerial vehicle pictures is increased by data augmentation, and the pictures are labeled to obtain the unmanned aerial vehicle data set.
The data augmentation is performed as follows:
the number of picture samples is expanded by image splicing, rotation, and noise addition, as sketched below.
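For illustration, the three augmentation operations can be written in a few lines of NumPy. This is a minimal sketch, not the patent's exact pipeline; the function names and the noise standard deviation are assumptions, and label-box handling is omitted.

```python
import numpy as np

def augment(img, rng=np.random.default_rng()):
    """Return a rotated copy and a noisy copy of an H x W x C image."""
    rotated = np.rot90(img, k=int(rng.integers(1, 4)))  # random multiple of 90 degrees
    noise = rng.normal(0.0, 8.0, img.shape)             # additive Gaussian noise (sigma assumed)
    noisy = np.clip(img.astype(np.float32) + noise, 0, 255).astype(img.dtype)
    return rotated, noisy

def splice(imgs):
    """Stitch four equally sized images into one 2 x 2 mosaic (image splicing)."""
    top = np.concatenate(imgs[:2], axis=1)
    bottom = np.concatenate(imgs[2:], axis=1)
    return np.concatenate([top, bottom], axis=0)
```

Each augmented copy keeps its original annotation, with bounding boxes transformed accordingly.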
In step 2, the unmanned aerial vehicle data set is processed in the Focus layer as follows:
the input is copied four times and cut by a slicing operation into four 3 × 320 × 320 slices; the four slices are concatenated in depth by tensor concatenation, a convolutional layer with 32 convolution kernels then produces a 32 × 320 × 320 output, and the result is passed to the next convolutional layer.
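A minimal PyTorch sketch of this slicing operation follows, assuming a 3 × 640 × 640 input (as implied by the 3 × 320 × 320 slices); the class and argument names are illustrative:

```python
import torch
import torch.nn as nn

class Focus(nn.Module):
    """Slice the input into four half-resolution copies, concatenate in depth, then convolve."""
    def __init__(self, in_ch=3, out_ch=32):
        super().__init__()
        self.conv = nn.Conv2d(in_ch * 4, out_ch, kernel_size=3, padding=1, bias=False)

    def forward(self, x):                       # x: (B, 3, 640, 640)
        slices = [x[..., ::2, ::2],             # four (B, 3, 320, 320) slices
                  x[..., 1::2, ::2],
                  x[..., ::2, 1::2],
                  x[..., 1::2, 1::2]]
        return self.conv(torch.cat(slices, 1))  # (B, 32, 320, 320)
```

Every pixel of the input survives in exactly one slice, so spatial information is rearranged into the channel dimension rather than discarded.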
In step 2, the structure of the BottleneckCSP backbone network is as shown in fig. 2, a schematic diagram of the CSP structure. The CSP structure enhances the learning ability of the neural network, maintains accuracy while keeping the network lightweight, alleviates computation bottlenecks, and reduces memory cost. BottleneckCSP is based on Darknet53 with a CSP structure added to each large residual block; the network consists of a series of 1 × 1 and 3 × 3 convolutional layers, each followed by a BN layer and a Mish activation layer.
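The Conv-BN-Mish unit and the CSP split described above might look as follows in PyTorch. This is a simplified sketch of the pattern, not the exact YOLOv5 implementation; the channel counts and block depth are assumptions.

```python
import torch
import torch.nn as nn

class ConvBnMish(nn.Module):
    def __init__(self, c_in, c_out, k=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.Mish()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class Bottleneck(nn.Module):
    """Residual block built from a 1x1 and a 3x3 Conv-BN-Mish layer."""
    def __init__(self, c):
        super().__init__()
        self.cv1 = ConvBnMish(c, c, 1)
        self.cv2 = ConvBnMish(c, c, 3)

    def forward(self, x):
        return x + self.cv2(self.cv1(x))

class BottleneckCSP(nn.Module):
    """CSP pattern: one branch runs the bottlenecks, the other bypasses them."""
    def __init__(self, c_in, c_out, n=1):
        super().__init__()
        c_mid = c_out // 2
        self.cv1 = ConvBnMish(c_in, c_mid, 1)
        self.cv2 = nn.Conv2d(c_in, c_mid, 1, bias=False)  # bypass branch
        self.blocks = nn.Sequential(*(Bottleneck(c_mid) for _ in range(n)))
        self.cv3 = ConvBnMish(2 * c_mid, c_out, 1)

    def forward(self, x):
        return self.cv3(torch.cat((self.blocks(self.cv1(x)), self.cv2(x)), 1))
```

Splitting the feature map into two branches and merging them afterwards is what reduces the computation bottleneck while keeping gradient paths short.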
In step 3, the feature maps of different sizes are pooled using the SPP structure, namely pooling and stacking with pooling kernels of sizes 5, 9, and 13, as follows:
As shown in fig. 3, the SPP structure solves the problem of non-uniform input image sizes, and the fusion of features of different scales inside the SPP helps when the sizes of the targets in the image to be detected vary greatly, especially for complex multi-target images. In the figure, the whole region is first pooled directly, each channel yielding one value, giving 256 values in total that form a 1 × 256 vector; next, the region is divided into a 2 × 2 grid of 4 cells, each pooled to give four 1 × 256 vectors; finally, the region is divided into a 4 × 4 grid of 16 cells, each pooled to give sixteen 1 × 256 vectors, and the results of the three divisions are concatenated. As the figure shows, the whole process is independent of the input size, so candidate boxes of any size can be processed. After a feature map of this embodiment enters the SPP structure, it is pooled and stacked with pooling kernels of sizes 5, 9, and 13.
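The YOLOv5-style SPP variant described here keeps the spatial size constant by using stride-1 max pooling with the three kernel sizes and concatenating the results with the input. A sketch under that assumption, reusing the ConvBnMish block from the sketch above (channel counts illustrative):

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    def __init__(self, c_in, c_out, kernels=(5, 9, 13)):
        super().__init__()
        c_mid = c_in // 2
        self.cv1 = ConvBnMish(c_in, c_mid, 1)  # reduce channels first
        self.pools = nn.ModuleList(
            nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernels
        )
        self.cv2 = ConvBnMish(c_mid * (len(kernels) + 1), c_out, 1)

    def forward(self, x):
        x = self.cv1(x)
        # stride-1, odd-kernel pooling preserves H and W, so the four tensors stack cleanly
        return self.cv2(torch.cat([x] + [pool(x) for pool in self.pools], 1))
```

Because the padding is k // 2 and the stride is 1, each pooled tensor has the same height and width as the input, which is what makes the channel-wise stacking possible.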
In step 2, three feature maps of different sizes are obtained;
in step 3, as shown in fig. 4, a schematic diagram of the FPN and PAN structures, the feature maps of different scales are fused through the FPN and PAN structures and three feature maps of different scales are output; the FPN and PAN structures are both neural networks.
The feature information of the feature maps of different sizes is fused with the FPN and PAN structures as follows:
in the FPN structure, the smallest-scale 20 × 20 feature map is upsampled once and concatenated with the 40 × 40 feature map output by the preceding network stage; the result is upsampled again and concatenated with the 80 × 80 feature map of the preceding stage, yielding the largest-scale 80 × 80 prediction feature map. Then, in the PAN structure, the 80 × 80 feature map is downsampled once and concatenated with the 40 × 40 feature map of the preceding stage, yielding the medium-scale 40 × 40 prediction feature map. Finally, the 40 × 40 feature map is downsampled again and concatenated with the 20 × 20 feature map of the preceding stage, yielding the smallest-scale 20 × 20 prediction feature map.
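The splice pattern alone can be sketched as below. In YOLOv5 proper, 1 × 1 convolutions and CSP blocks sit between these steps and downsampling is a strided convolution; max pooling stands in here, and the function name is illustrative.

```python
import torch
import torch.nn.functional as F

def fpn_pan_fuse(p20, p40, p80):
    """p20, p40, p80: backbone maps at 20x20, 40x40 and 80x80 resolution."""
    # FPN, top-down path: upsample and concatenate along channels
    t40 = torch.cat((F.interpolate(p20, scale_factor=2, mode="nearest"), p40), 1)
    out80 = torch.cat((F.interpolate(t40, scale_factor=2, mode="nearest"), p80), 1)
    # PAN, bottom-up path: downsample and concatenate along channels
    out40 = torch.cat((F.max_pool2d(out80, 2), t40), 1)
    out20 = torch.cat((F.max_pool2d(out40, 2), p20), 1)
    return out80, out40, out20  # large, medium, small prediction scales
```

The top-down path carries strong semantics to the high-resolution map, which helps small targets such as distant drones, while the bottom-up path carries precise localization back to the low-resolution map.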
In step 4, as shown in fig. 5, the regression loss function of the prediction network is:
$$L_{\mathrm{GIoU}} = 1 - \mathrm{IoU} + \frac{\left|C \setminus (A \cup B)\right|}{|C|}, \qquad \mathrm{IoU} = \frac{|A \cap B|}{|A \cup B|}$$
where A and B are two arbitrary rectangular boxes and C is the minimum enclosing rectangle of A and B.
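In scalar form the loss can be computed directly from two corner-format boxes; a minimal sketch, assuming the (x1, y1, x2, y2) box format:

```python
def giou_loss(a, b):
    """1 - GIoU for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # intersection of A and B
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    iou = inter / union
    # C: minimum enclosing rectangle of A and B
    area_c = ((max(a[2], b[2]) - min(a[0], b[0]))
              * (max(a[3], b[3]) - min(a[1], b[1])))
    giou = iou - (area_c - union) / area_c  # subtract the share of C outside A union B
    return 1.0 - giou
```

Unlike plain IoU, the enclosing-box term still provides a gradient when the predicted and ground-truth boxes do not overlap, which speeds up regression for small targets.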
Fig. 6 is a detection result diagram of the method for detecting an unmanned aerial vehicle based on the YOLOv5 neural network according to the embodiment of the present invention;
table 1 compares the performance of the inventive examples with the YOLOv4 neural network.
TABLE 1
[Table 1 is reproduced as an image in the original publication; its numerical values are not recoverable from the text.]
Precision in Table 1 is the proportion of samples predicted as positive that are actually positive; Recall is the proportion of actual positive samples that are correctly predicted. The formulas are:
$$\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN}$$
where TP (true positive) is the number of positive samples correctly predicted as positive; FP (false positive) is the number of negative samples incorrectly predicted as positive; and FN (false negative) is the number of positive samples incorrectly predicted as negative.
AP (average precision) is the average precision, computed as the area enclosed by the Precision-Recall curve.
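For reference, the metrics can be evaluated from raw counts and from a sampled Precision-Recall curve. The trapezoidal integration below is a common approximation; benchmark protocols such as Pascal VOC use interpolated variants.

```python
import numpy as np

def precision_recall(tp, fp, fn):
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(precisions, recalls):
    """Area under a sampled Precision-Recall curve (trapezoidal rule)."""
    order = np.argsort(recalls)
    p = np.asarray(precisions, dtype=float)[order]
    r = np.asarray(recalls, dtype=float)[order]
    return float(np.sum((p[1:] + p[:-1]) / 2.0 * np.diff(r)))
```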
In a second aspect, the present invention provides an electronic device, including a processor, and a memory communicatively coupled to the processor and configured to store instructions executable by the processor, wherein the processor, when executing the instructions, implements the steps of the unmanned aerial vehicle detection method based on the YOLOv5 neural network according to the above embodiment; the hardware structure of the electronic device can be realized with the prior art and is not described in detail here.
In a third aspect, the present invention provides a server comprising at least one processor, and a memory communicatively coupled to the processor, the memory storing instructions executable by the at least one processor, wherein the instructions are executed by the processor to cause the at least one processor to perform the steps of the unmanned aerial vehicle detection method based on the YOLOv5 neural network according to the above embodiment; the hardware structure of the server can be realized with the prior art and is not described in detail here.
Those of ordinary skill in the art will appreciate that the various illustrative systems and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed method and system may be implemented in other ways. For example, the above described division of elements is merely a logical division, and other divisions may be realized, for example, multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not executed. The units may or may not be physically separate, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should be construed as being included in the following claims and description.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. An unmanned aerial vehicle detection method based on the YOLOv5 neural network, characterized by comprising the following steps:
1) acquiring pictures of unmanned aerial vehicles and preprocessing the images to obtain an unmanned aerial vehicle data set;
2) inputting the unmanned aerial vehicle data set into a BottleneckCSP backbone network with a Focus layer to obtain several feature maps of different sizes;
3) pooling the feature maps of different sizes, and fusing their feature information using FPN and PAN structures;
4) inputting the fused feature maps into a prediction network and outputting the target class and position information.
2. The unmanned aerial vehicle detection method based on the YOLOv5 neural network according to claim 1, wherein in step 1 the image preprocessing is performed as follows:
the number of acquired unmanned aerial vehicle pictures is increased by data augmentation, and the pictures are labeled to obtain the unmanned aerial vehicle data set.
3. The unmanned aerial vehicle detection method based on the YOLOv5 neural network according to claim 2, wherein the data augmentation is performed as follows:
the number of picture samples is expanded by image splicing, rotation, and noise addition.
4. The unmanned aerial vehicle detection method based on the YOLOv5 neural network according to claim 1, wherein in step 2 the unmanned aerial vehicle data set is processed in the Focus layer as follows:
the input is copied four times and cut by a slicing operation into four 3 × 320 × 320 slices; the four slices are concatenated in depth by tensor concatenation, a convolutional layer with 32 convolution kernels then produces a 32 × 320 × 320 output, and the result is passed to the next convolutional layer.
5. The unmanned aerial vehicle detection method based on the YOLOv5 neural network according to claim 1, wherein in step 2 the BottleneckCSP backbone network comprises several 1 × 1 and 3 × 3 convolutional layers, each followed by a BN layer and a Mish activation layer.
6. The unmanned aerial vehicle detection method based on the YOLOv5 neural network according to claim 1, wherein in step 3 the feature maps of different sizes are pooled using an SPP structure, specifically: pooling and stacking with pooling kernels of sizes 5, 9, and 13.
7. The unmanned aerial vehicle detection method based on the YOLOv5 neural network according to claim 1, wherein in step 2 the number of feature maps of different sizes is 3;
in step 3, the feature information of the feature maps of different sizes is fused with the FPN and PAN structures as follows:
in the FPN structure, the smallest-scale 20 × 20 feature map is upsampled once and concatenated with the 40 × 40 feature map output by the preceding network stage; the result is upsampled again and concatenated with the 80 × 80 feature map of the preceding stage, yielding the largest-scale 80 × 80 prediction feature map; then, in the PAN structure, the 80 × 80 feature map is downsampled once and concatenated with the 40 × 40 feature map of the preceding stage, yielding the medium-scale 40 × 40 prediction feature map; finally, the 40 × 40 feature map is downsampled again and concatenated with the 20 × 20 feature map of the preceding stage, yielding the smallest-scale 20 × 20 prediction feature map.
8. The unmanned aerial vehicle detection method based on the YOLOv5 neural network according to claim 1, wherein in step 4 the regression loss function of the prediction network is:
$$L_{\mathrm{GIoU}} = 1 - \mathrm{IoU} + \frac{\left|C \setminus (A \cup B)\right|}{|C|}, \qquad \mathrm{IoU} = \frac{|A \cap B|}{|A \cup B|}$$
where A and B are two arbitrary rectangular boxes and C is the minimum enclosing rectangle of A and B.
9. An electronic device comprising a processor and a memory communicatively coupled to the processor and configured to store processor-executable instructions, wherein the processor, when executing the instructions, implements the steps of the unmanned aerial vehicle detection method based on the YOLOv5 neural network according to any one of claims 1-8.
10. A server comprising at least one processor, and a memory communicatively coupled to the processor, the memory storing instructions executable by the at least one processor, wherein the instructions are executed by the processor to cause the at least one processor to perform the steps of the unmanned aerial vehicle detection method based on the YOLOv5 neural network according to any one of claims 1-8.
Priority Applications (1)

Application number: CN202111220550.0A
Priority and filing date: 2021-10-20
Title: Unmanned aerial vehicle detection method based on YOLOv5 neural network

Publications (1)

Publication number: CN113822372A
Publication date: 2021-12-21
Family ID: 78920559
Country: CN (China)
Legal status: Pending

Cited By (1)

* Cited by examiner, † Cited by third party

• CN114596431A * (Beijing Baidu Netcom Science and Technology Co., Ltd.; priority 2022-03-10, published 2022-06-07): Information determination method and device, and electronic equipment

Patent Citations (4)

• CN109255286A * (Harbin Institute of Technology; priority 2018-07-21, published 2019-01-22): Rapid optical detection and recognition method for unmanned aerial vehicles based on the YOLO deep learning network framework
• CN111401410A * (Jiangsu University; priority 2020-02-27, published 2020-07-10): Traffic sign detection method based on improved cascade neural network
• CN112200161A * (北京电信易通信息技术股份有限公司; priority 2020-12-03, published 2021-01-08): Face recognition detection method based on mixed attention mechanism
• CN113139594A * (Beijing Institute of Technology; priority 2021-04-19, published 2021-07-20): Adaptive detection method for unmanned aerial vehicle targets in airborne images

Non-Patent Citations (2)

• Yang Xiaoling (杨晓玲) et al., "Traffic sign recognition and detection based on yolov5" (基于yolov5的交通标志识别检测), Information Technology and Informatization (《信息技术与信息化》), no. 4, pp. 28-30. *
• Yan Yousan (言有三), Deep Learning for Face Image Processing: Core Algorithms and Practical Cases (《深度学习之人脸图像处理 核心算法与案例实战》), China Machine Press, July 2020, pp. 106-107. *


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination