CN113435303A - Non-cooperative unmanned aerial vehicle visual detection and identification method


Info

Publication number
CN113435303A
Authority
CN
China
Prior art keywords
feature
unmanned aerial vehicle
feature fusion
cooperative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110700406.0A
Other languages
Chinese (zh)
Inventor
陈彦桥
柴兴华
张小龙
张泽勇
李晨阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 54 Research Institute
Original Assignee
CETC 54 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 54 Research Institute filed Critical CETC 54 Research Institute
Priority to CN202110700406.0A priority Critical patent/CN113435303A/en
Publication of CN113435303A publication Critical patent/CN113435303A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction

Abstract

The invention belongs to the technical field of image processing and discloses a visual detection and identification method for non-cooperative unmanned aerial vehicles, mainly addressing the problem of detecting and identifying UAV targets at long range and over a large field of view. The method is implemented in the following steps: collect and label image data in a non-cooperative scene; extract image features from the acquired data with a bottleneck feature extraction module; fuse the extracted features with a feature fusion network; and obtain the target recognition result from the fusion result. The bottleneck feature extraction module adopted by the invention effectively reduces feature dimensionality and the number of parameters; the feature fusion module enhances the model's feature expression capability, particularly for small targets and multi-scale targets; and the use of dilated convolution effectively reduces the amount of computation and improves the detection rate.

Description

Non-cooperative unmanned aerial vehicle visual detection and identification method
Technical Field
The invention belongs to the technical field of image processing and relates to a visual detection and identification method for non-cooperative unmanned aerial vehicles, which can be used for UAV target detection and identification at long range and over a large field of view.
Background
In recent years, with continuous advances in automation technology, computer technology, and electronic devices, unmanned aerial vehicles (UAVs) have shown wide application value in meteorological detection, surveying and mapping, border control, forest-fire rescue, disaster monitoring, pesticide spraying, emergency communication relay, traffic control, geological survey, and other fields, owing to their small size, freedom from onboard-pilot limitations, flexible movement, and relatively low energy consumption.
While the barrier to using UAVs keeps falling, unified industry standards and supervision technologies are still lacking, so UAVs frequently stray into restricted areas, seriously disturbing airspace traffic safety, and the risk of UAVs being misused has risen markedly. Depending on whether the target communicates with the detection equipment, UAVs can generally be classified as "cooperative" or "non-cooperative". Non-cooperative UAV intrusion flights in low-altitude airspace occur frequently both at home and abroad; they not only endanger citizens' privacy and the safety of lives and property but also seriously restrict the industrialization of UAVs, posing a great threat to public and national security. There is therefore an urgent need to use video image information to effectively detect "low, slow, small" targets such as low-altitude non-cooperative UAVs, so that subsequent protective countermeasures can be taken.
Low-altitude airspace refers to flight areas below 1000 m, which have great application value in agriculture, medical services, and transportation; as the national economy continues to grow, this value becomes ever harder to ignore. Opening low-altitude airspace benefits economic development, but its safety environment remains far from optimistic worldwide. "Low, slow, small" targets are low-altitude flyers with a flight height below 1000 m, a flight speed under 200 km/h, and a radar reflection area below 2 square meters; common examples include multi-rotor UAVs, tethered balloons, aerial-photography balloons, powered delta wings, birds, and kites.
Currently, typical non-cooperative UAV target detection technologies include photoelectric, radio, acoustic, and radar detection. Each type of detection equipment has its own advantages and disadvantages. Radar suffers from a low-altitude blind zone and is easily affected by low-altitude clutter, so its detection performance there is poor; radio detection requires some degree of cooperation from the detected target and performs poorly against a target that deliberately hides and maintains radio silence; photoelectric detection, by contrast, offers strong target visibility and intuitive, clear imagery, and has the longest history of application. Photoelectric detection and identification of non-cooperative UAVs means applying ground-based photoelectric reconnaissance equipment to provide timely early warning, detection, and tracking of non-cooperative UAVs and to supply accurate information for subsequent counter-UAV measures. The photoelectric approach acquires UAV intrusion information through computer vision: the photoelectric module runs a deep-learning target detection algorithm with a visible-light camera as the basic scene-sensing device, covers a large scene to be monitored, studies multiple dynamic targets against complex backgrounds and the corresponding multi-target detection and tracking techniques, preliminarily screens the detected targets, and, once a suspected UAV target is found, locks onto and tracks it.
Meanwhile, with visible-light images as the carrier, candidate-target analysis and discrimination techniques are studied for non-cooperative UAV targets against the complex background of a near scene, achieving accurate identification of low-altitude targets such as UAVs, birds, and kites. However, compared with an ordinary target, a small target carries less image information, has an indistinct feature profile, and appears in diverse detection tasks against complicated backgrounds, so small-target detection algorithms face many difficulties in semantic analysis, sample mining, and related aspects. As a result, small targets are hard to identify accurately with traditional target detection methods, leading to false identifications and missed identifications. In the image, this appears as detection and identification at long range with a large field of view: the image has a large field of view, a low signal-to-noise ratio, and a small target imaging area. The accuracy and real-time performance of small-target detection algorithms therefore remain both the direction of future improvement and a standing challenge.
Disclosure of Invention
In view of these problems, the invention provides a visual detection and identification method for non-cooperative unmanned aerial vehicles that achieves better UAV target detection and identification results.
The technical scheme adopted by the invention is as follows:
step 1, collecting image data in a non-cooperative scene and marking;
step 2, extracting image features from the image data acquired in the step 1 by using a bottleneck feature extraction module;
step 3, performing feature fusion on the extracted image features by adopting a feature fusion network;
step 4, obtaining a target recognition result based on the feature fusion result.
The bottleneck feature extraction module in step 2 uses Darknet53 as the base network. Darknet53 contains 5 residual modules, and a 1 × 1 convolutional layer is additionally inserted between each residual module's feature map and its convolutional layer, realizing feature dimension reduction.
Step 3 is specifically as follows: apply dilated convolutions of different dilation rates to the image features extracted in step 2 through three branches, so that three feature maps of different scales are generated for objects of different sizes, and perform feature fusion based on the three generated feature maps.
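The benefit of dilated convolution for multi-scale targets can be seen from its receptive-field arithmetic: the effective kernel size grows with the dilation rate while the parameter count stays fixed. A minimal sketch (the specific dilation rates used per branch are not stated in the patent):

```python
def effective_kernel_size(k: int, d: int) -> int:
    """Effective spatial extent of a k x k convolution with dilation rate d."""
    return k + (k - 1) * (d - 1)

# A 3 x 3 kernel covers a progressively larger area as the dilation rate grows,
# giving each branch a different receptive field at the same parameter cost.
for d in (1, 2, 3):
    print(d, effective_kernel_size(3, d))
```

With dilation rates 1, 2, and 3, the same 3 × 3 kernel spans 3, 5, and 7 pixels respectively.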
In step 4, the mean average precision (mAP) is used as the evaluation index for target identification.
Compared with the prior art, the invention has the following advantages:
1. The bottleneck feature extraction module adopted by the invention can effectively reduce feature dimensionality and the number of parameters.
2. The feature fusion network adopted by the invention enhances the model's feature expression capability, particularly for small targets and multi-scale targets.
3. By adopting dilated convolution, the invention can effectively reduce the amount of computation and improve the detection rate.
Drawings
FIG. 1 is a general flow chart of the present invention;
FIG. 2 is a diagram of a method framework of the present invention;
FIG. 3 is a schematic diagram of a bottleneck feature extraction module of the method of the present invention;
FIG. 4 is a schematic diagram of a feature fusion network of the method of the present invention;
FIG. 5 is a diagram of the target detection and identification effect of the method of the present invention.
Detailed Description
The following detailed description of the implementation steps and experimental results of the present invention is made with reference to the accompanying drawings:
referring to fig. 1 and 2, a non-cooperative unmanned aerial vehicle visual detection and identification method includes the following steps:
step 1, collecting image data in a non-cooperative scene: 7638 images were collected in three categories (kites, birds, and unmanned aerial vehicles) and labeled;
step 2, extracting image features from the image data acquired in the step 1 by using a bottleneck feature extraction module;
the bottleneck feature extraction module is shown in fig. 3, and the specific operation mode is as follows:
the method comprises the steps of additionally inserting a 1 x 1 convolutional layer between an input feature diagram and a convolutional layer to achieve the purpose of dimension reduction of the input feature diagram, thereby reducing the calculation time of a model, controlling the number of convolutional cores in the 1 x 1 convolutional layer to dynamically adjust the number of channels of an output feature diagram, achieving the effect of dimension reduction, reducing the dimension of the current convolutional layer by using a bottleneck feature structure in addition to the dimension reduction of the input feature diagram, and if the width and the height of the input feature diagram are both WiThe number of channels is CiIf the 1 × 1 convolutional layer is not introduced, the width and height of the convolutional kernel in the 3 × 3 convolutional layer is 3 × 3, and the number is NkThe total parameter of the convolution layer is Ci×NkX 3; if a bottleneck characteristic structure is introduced, a 1 × 1 convolution layer is introduced between the input characteristic diagram and a 3 × 3 convolution layer, and the number of convolutions is NaThe total parameter of the bottleneck characteristic structure is Ci×Na×1×1+Nk×NaX 3 × 3, the compression ratio of the parameter is calculated in the following manner:
r = (C_i × N_a × 1 × 1 + N_k × N_a × 3 × 3) / (C_i × N_k × 3 × 3)

γ = C_i / N_a
where r is the parameter compression ratio and γ is the channel compression ratio: since the 1 × 1 convolutional layer reduces the channel dimension of the input feature map, γ > 1. By controlling the value of γ, the parameter compression ratio of the bottleneck structure relative to the original 3 × 3 convolutional layer can be adjusted as required.
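The parameter counts and compression ratio described above can be checked with a few lines of arithmetic. The channel numbers below (256 input channels reduced to 64) are illustrative choices, not values from the patent:

```python
def plain_params(c_in: int, n_k: int) -> int:
    """Parameter count of a direct 3 x 3 convolution with n_k kernels."""
    return c_in * n_k * 3 * 3

def bottleneck_params(c_in: int, n_a: int, n_k: int) -> int:
    """Parameter count of a 1 x 1 reduction to n_a channels followed by a 3 x 3 convolution."""
    return c_in * n_a * 1 * 1 + n_a * n_k * 3 * 3

# Illustrative sizes: 256 input channels reduced to 64 (gamma = 256 / 64 = 4)
# before a 3 x 3 layer with 256 kernels.
c_i, n_a, n_k = 256, 64, 256
r = bottleneck_params(c_i, n_a, n_k) / plain_params(c_i, n_k)
print(f"{r:.3f}")  # 0.278: the bottleneck keeps about 28% of the parameters
```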
Step 3, performing feature fusion on the extracted image features by adopting a feature fusion network;
The features extracted in step 2 are fed into the feature fusion network for fusion; a schematic diagram of the network is shown in fig. 4. The network creates feature maps at multiple scales and applies dilated convolutions of different dilation rates in different branches, so that objects of different scales obtain different receptive fields. Feature fusion is then performed on the three generated feature maps of different scales, as follows: the feature maps of each scale are rescaled with up-sampling and down-sampling layers, and the size-aligned feature maps are then fused by a concatenation operation.
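A minimal sketch of such a three-branch fusion block is given below, written for PyTorch (the software platform named in the experiments); the dilation rates (1, 2, 3), branch strides, channel count, and nearest-neighbor upsampling are illustrative assumptions rather than the patent's exact design:

```python
import torch
import torch.nn as nn

class FusionSketch(nn.Module):
    """Sketch of a three-branch dilated-convolution feature fusion block.

    Each branch applies a 3 x 3 convolution with a different dilation rate
    (and a different stride, producing three scales); all branch outputs are
    rescaled to the input resolution and fused by channel concatenation.
    """
    def __init__(self, c: int = 32):
        super().__init__()
        # Three branches: growing dilation rate and decreasing resolution.
        self.b1 = nn.Conv2d(c, c, 3, stride=1, padding=1, dilation=1)
        self.b2 = nn.Conv2d(c, c, 3, stride=2, padding=2, dilation=2)
        self.b3 = nn.Conv2d(c, c, 3, stride=4, padding=3, dilation=3)
        # Up-sampling layers bring every branch back to the input resolution.
        self.up2 = nn.Upsample(scale_factor=2, mode="nearest")
        self.up4 = nn.Upsample(scale_factor=4, mode="nearest")

    def forward(self, x):
        f1 = self.b1(x)                 # full resolution
        f2 = self.up2(self.b2(x))       # half resolution, rescaled
        f3 = self.up4(self.b3(x))       # quarter resolution, rescaled
        return torch.cat([f1, f2, f3], dim=1)  # fuse by concatenation

x = torch.randn(1, 32, 64, 64)
print(FusionSketch()(x).shape)  # torch.Size([1, 96, 64, 64])
```

Concatenation triples the channel count here; a real detection head would typically follow with a 1 × 1 convolution to mix the fused channels.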
Step 4, obtaining the target recognition result based on the feature fusion result: the relevance and importance among feature channels are learned automatically, the detection results at every scale are output, and the mean Average Precision (mAP) is used as the evaluation index for target recognition.
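For reference, mAP is the mean over classes of each class's average precision (AP), the area under its precision-recall curve. A minimal sketch using all-point interpolation (the patent does not state which AP variant is used; the ranked hits below are made-up data):

```python
def average_precision(hits, num_gt):
    """AP from confidence-ranked detections of one class.

    hits   -- list of 1 (true positive) / 0 (false positive), sorted by
              descending detection confidence
    num_gt -- number of ground-truth objects for this class
    """
    precisions, recalls, tp = [], [], 0
    for rank, h in enumerate(hits, start=1):
        tp += h
        precisions.append(tp / rank)
        recalls.append(tp / num_gt)
    # All-point interpolation: precision at a recall level is the maximum
    # precision achieved at that recall or any higher recall.
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

# Four ranked detections for one class, four ground-truth objects.
ap = average_precision([1, 0, 1, 1], num_gt=4)
print(ap)  # 0.625
```

mAP would then be the mean of the per-class AP values over the three classes here (kite, bird, UAV).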
The effect of the invention can be further illustrated by the following simulation experiment:
1. experimental conditions and methods
The hardware platform is as follows: NVIDIA Jetson TX2 high-performance calculation board card;
The software platform is as follows: PyTorch 1.4;
The compared methods are as follows: SSD, YOLOv3, YOLOv4-tiny, and the method of the invention.
2. Simulation content and results
70% of the image data in the non-cooperative scene were randomly selected for training, and the remaining 30% were used for testing. Table 1 gives the detection and identification test results of each method, and fig. 5 shows a target recognition result of the invention, where (a) shows the ground-truth target boxes of the image data and (b) shows the target prediction boxes obtained by the method of the invention. The results show that the proposed non-cooperative UAV detection and identification method achieves good detection precision on the three typical targets (kites, birds, and UAVs) and can well meet the real-time detection requirements for non-cooperative UAVs.
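The 70/30 random split described above can be sketched as follows; the file-name pattern and the fixed seed are illustrative assumptions:

```python
import random

def split_dataset(items, train_fraction=0.7, seed=0):
    """Shuffle and partition a list of samples into train and test subsets."""
    rng = random.Random(seed)      # fixed seed for a reproducible split
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# 7638 annotated images, as collected in step 1 (hypothetical file names).
images = [f"img_{i:05d}.jpg" for i in range(7638)]
train, test = split_dataset(images)
print(len(train), len(test))  # 5346 2292
```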
TABLE 1 test results of detection and identification of various methods

Claims (4)

1. A non-cooperative unmanned aerial vehicle visual detection and identification method is characterized by comprising the following steps:
step 1, collecting image data in a non-cooperative scene and marking;
step 2, extracting image features from the image data acquired in the step 1 by using a bottleneck feature extraction module;
step 3, performing feature fusion on the extracted image features by adopting a feature fusion network;
step 4, obtaining a target recognition result based on the feature fusion result.
2. The method of claim 1, wherein the bottleneck feature extraction module uses Darknet53 as the base network, Darknet53 contains 5 residual modules, and a 1 × 1 convolutional layer is additionally inserted between each residual module's feature map and its convolutional layer to achieve feature dimension reduction.
3. The method according to claim 1, wherein step 3 is specifically as follows: applying dilated convolutions of different dilation rates to the image features extracted in step 2 through three branches, so that three feature maps of different scales are generated for objects of different sizes, and performing feature fusion based on the three generated feature maps.
4. The method according to claim 1, wherein the target recognition output of step 4 uses the mean average precision (mAP) as the evaluation index of target recognition.
CN202110700406.0A 2021-06-23 2021-06-23 Non-cooperative unmanned aerial vehicle visual detection and identification method Pending CN113435303A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110700406.0A CN113435303A (en) 2021-06-23 2021-06-23 Non-cooperative unmanned aerial vehicle visual detection and identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110700406.0A CN113435303A (en) 2021-06-23 2021-06-23 Non-cooperative unmanned aerial vehicle visual detection and identification method

Publications (1)

Publication Number Publication Date
CN113435303A true CN113435303A (en) 2021-09-24

Family

ID=77753684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110700406.0A Pending CN113435303A (en) 2021-06-23 2021-06-23 Non-cooperative unmanned aerial vehicle visual detection and identification method

Country Status (1)

Country Link
CN (1) CN113435303A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111126202A (en) * 2019-12-12 2020-05-08 天津大学 Optical remote sensing image target detection method based on void feature pyramid network
CN112418117A (en) * 2020-11-27 2021-02-26 北京工商大学 Small target detection method based on unmanned aerial vehicle image
CN112508099A (en) * 2020-12-07 2021-03-16 国网河南省电力公司电力科学研究院 Method and device for detecting target in real time
CN113012150A (en) * 2021-04-14 2021-06-22 南京农业大学 Feature-fused high-density rice field unmanned aerial vehicle image rice ear counting method


Non-Patent Citations (1)

Title
卢毅 (Lu Yi): "Research and Development of a Face Detection and Recognition Algorithm Based on Lightweight Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology Series *

Similar Documents

Publication Publication Date Title
CN107808139B (en) Real-time monitoring threat analysis method and system based on deep learning
CN109087510B (en) Traffic monitoring method and device
CN109981192B (en) Frequency spectrum monitoring method of airspace blackout flying unmanned aerial vehicle
CN111913156B (en) Radar radiation source individual identification method based on deep learning model and feature combination
CN109255286B (en) Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework
CN109816695A (en) Target detection and tracking method for infrared small unmanned aerial vehicle under complex background
CN107818326A (en) A kind of ship detection method and system based on scene multidimensional characteristic
CN114419825B (en) High-speed rail perimeter intrusion monitoring device and method based on millimeter wave radar and camera
CN105809954B (en) Traffic incidents detection method and system
CN109815863A (en) Firework detecting method and system based on deep learning and image recognition
CN113156417B (en) Anti-unmanned aerial vehicle detection system, method and radar equipment
CN110147714A (en) Coal mine gob crack identification method and detection system based on unmanned plane
CN107507417A (en) A kind of smartway partitioning method and device based on microwave radar echo-signal
CN102867183A (en) Method and device for detecting littered objects of vehicle and intelligent traffic monitoring system
CN110427878A (en) A kind of sudden and violent signal recognition method of Rapid Radio and system
CN108711172A (en) Unmanned plane identification based on fine grit classification and localization method
CN111913177A (en) Method and device for detecting target object and storage medium
CN113947188A (en) Training method of target detection network and vehicle detection method
CN108596952A (en) Fast deep based on candidate region screening learns Remote Sensing Target detection method
CN115690564A (en) Outdoor fire smoke image detection method based on Recursive BIFPN network
CN113569921A (en) Ship classification and identification method and device based on GNN
CN116824335A (en) YOLOv5 improved algorithm-based fire disaster early warning method and system
Arul et al. Machine learning based automated identification of thunderstorms from anemometric records using shapelet transform
CN109815773A (en) A kind of low slow small aircraft detection method of view-based access control model
EP4025977A1 (en) Improved 3d mapping by distinguishing between different environmental regions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination