CN111881982A - Unmanned aerial vehicle target identification method - Google Patents


Info

Publication number
CN111881982A
CN111881982A
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
target
image
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010751602.6A
Other languages
Chinese (zh)
Inventor
赵文超
张樯
李斌
张蛟淏
侯棋文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Environmental Features
Original Assignee
Beijing Institute of Environmental Features
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Environmental Features filed Critical Beijing Institute of Environmental Features
Priority to CN202010751602.6A priority Critical patent/CN111881982A/en
Publication of CN111881982A publication Critical patent/CN111881982A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The invention discloses an unmanned aerial vehicle target identification method, which comprises the following steps: a data enhancement step, in which data enhancement processing is performed on an acquired original image of the unmanned aerial vehicle target to be identified to obtain enhanced data; a feature extraction step, in which convolution operations are performed on the enhanced data to extract image features for classifying and locating the unmanned aerial vehicle target to be identified; and a target detection step, in which the category and position information of the unmanned aerial vehicle target to be identified in the image is encoded by a neural network, and the category and position information is decoded to determine the detection result of the unmanned aerial vehicle target to be identified in the original image. The invention realizes automatic detection and identification of unmanned aerial vehicles in the field of unmanned aerial vehicle countermeasures.

Description

Unmanned aerial vehicle target identification method
Technical Field
The invention relates to the technical field of imaging, in particular to an unmanned aerial vehicle target identification method.
Background
At present, with the continuous maturation of unmanned aerial vehicle technology and the substantial decline in the price of related products, various types of unmanned aerial vehicles have been applied in different fields. While bringing convenience to people, unmanned aerial vehicles have also become tools for crime. Owing to the lack of supervision and control measures, the abuse and illegal flight of unmanned aerial vehicles have become increasingly serious, and incidents of covert photography and invasion of privacy by unmanned aerial vehicles occur frequently. In the face of such threats, there is currently no effective means of detection and discovery, and a complete countermeasure system is also lacking.
Because small unmanned aerial vehicles have a small target size, variable flight speed and a complex flight environment, an unmanned aerial vehicle countermeasure system places high demands on the precision and speed of the target localization and identification method. In the prior art, identification and detection of unmanned aerial vehicles mainly rely on radar, radio and infrared means. Detection methods based on radar and radio are easily affected by terrain, and can neither judge whether a detected target is an unmanned aerial vehicle nor determine its model. The infrared methods are mainly high-brightness small-target detection algorithms based on connected-domain analysis; the selected image features adapt poorly and the judgment criterion is single, so that in use, point-like targets such as flying birds and ground objects may be judged to be unmanned aerial vehicles, resulting in a high false-alarm rate. Finally, whether the target is an unmanned aerial vehicle, and its model, must be judged manually.
Disclosure of Invention
The invention aims to solve the technical problem of providing an unmanned aerial vehicle target identification method that realizes automatic detection and identification of unmanned aerial vehicles in the field of unmanned aerial vehicle countermeasures.
The invention discloses an unmanned aerial vehicle target identification method, which comprises the following steps: a data enhancement step, a feature extraction step and a target detection step;
the data enhancement step includes: performing data enhancement processing on the acquired original image of the target of the unmanned aerial vehicle to be identified to obtain enhanced data;
the feature extraction step includes: performing convolution operation on the enhanced data to extract image features for classifying and positioning the target of the unmanned aerial vehicle to be identified;
the target detection step includes: and coding the category and position information of the unmanned aerial vehicle target to be identified in the image through a neural network, and decoding the category and position information to determine the detection result of the unmanned aerial vehicle target to be identified in the original image.
Preferably, the method beforehand comprises: establishing an unmanned aerial vehicle data set, adopting a distance-based K-means clustering algorithm, and using distance as the evaluation index of similarity to calculate the prior frames.
Preferably, the data enhancement processing of the acquired original image of the target of the unmanned aerial vehicle to be identified includes:
and carrying out random blurring and/or motion blurring processing on the original image of the unmanned aerial vehicle target to be identified.
Preferably, performing convolution operation on the enhanced data to extract image features for classifying and positioning the target of the unmanned aerial vehicle to be identified includes:
sequentially performing first convolution processing, second convolution processing and third convolution processing on the enhanced data, respectively extracting a first detection characteristic diagram, a second detection characteristic diagram and a third detection characteristic diagram, and fusing the obtained first detection characteristic diagram, second detection characteristic diagram and third detection characteristic diagram; and taking the fused features as image features for classifying and positioning the target of the unmanned aerial vehicle to be identified.
Preferably, the establishing of the unmanned aerial vehicle data set, the adoption of a distance-based K-means clustering algorithm, and the adoption of the distance as an evaluation index calculation prior frame of the similarity comprise:
acquiring image data of unmanned aerial vehicles of different models, and establishing unmanned aerial vehicle data sets classified under each flight scene and each flight attitude;
adopting a distance-based K-means clustering algorithm, using distance as the evaluation index of similarity, and making the ratio IOU of the intersection and the union of the prior frame and the real frame as large as possible, wherein the distance d for the K-means clustering is expressed as:
d=1-IOU。
Preferably, the method further comprises: calculating the loss function of the neural network:

L_total = λ1·L_loc + λ2·L_conf + λ3·L_cla

wherein L_loc is the target localization offset loss, L_conf is the target confidence loss, and L_cla is the target classification loss; the offset loss adopts the sum of squared errors, the classification loss adopts binary cross-entropy, and λ1, λ2, λ3 are balance coefficients.
Preferably, when the unmanned aerial vehicle data set is established, random blurring and/or motion blurring processing is performed on the unmanned aerial vehicle data set.
Preferably, the randomly blurring the drone data set comprises:
carrying out blurring processing on each picture in the unmanned aerial vehicle data set by adopting a Gaussian point spread function; the imaging formula of the optical system is as follows:

g(x,y) = ∫∫ f(ξ,η) h(x−ξ, y−η) dξ dη = f(x,y) * h(x,y)

wherein f(x,y) is the original (unblurred) image, h(x,y) is the point spread function, and ξ, η are the offsets of the image in the x and y directions;

the Gaussian defocus blur model formula is as follows:

h(x,y) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²))

in the formula, σ is the parameter of the Gaussian defocus model.
Preferably, the motion blur processing of the drone data set comprises:
the imaging formula of the motion-blurred image is as follows:

g(x,y) = (1/T) ∫₀ᵀ f(x − v_x·t, y − v_y·t) dt + n(x,y)

wherein v_x is the translation speed of the image in the x direction, v_y is the translation speed in the y direction, T is the shutter-open time, i.e. the time over which the blurred image is generated, and n(x,y) is additive noise.
Preferably, by varying σ, different degrees of defocus blur are simulated.
Compared with the prior art, the invention has the following advantages:
the method firstly carries out targeted image data acquisition and labeling on various unmanned aerial vehicles. And then, processing the label of the unmanned aerial vehicle data by using a clustering algorithm, and selecting a prior frame suitable for the unmanned aerial vehicle target. And then carrying out data enhancement on the unmanned aerial vehicle data. And inputting the enhanced data into a neural network for training, wherein the trained model can complete the target detection of the unmanned aerial vehicle.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention without limiting it.
Fig. 1 is a flowchart of an unmanned aerial vehicle target identification method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an unmanned aerial vehicle target recognition device according to an embodiment of the present invention;
FIG. 3 is a block diagram of a computing device according to another embodiment of the present application;
FIG. 4 is a diagram of a computer readable storage medium structure according to another embodiment of the present application;
FIG. 5 is a flow chart of deep learning based unmanned aerial vehicle detection according to an embodiment of the present invention;
fig. 6 is a schematic diagram of the acquired unmanned aerial vehicle types according to an embodiment of the present invention;
fig. 7 is a schematic view of a data acquisition scenario for an unmanned aerial vehicle according to an embodiment of the present invention;
fig. 8 is a schematic diagram of raw data of a training drone according to an embodiment of the present invention;
fig. 9 is a schematic diagram of fuzzy data of a training drone according to an embodiment of the present invention;
fig. 10 is a schematic diagram of the unmanned aerial vehicle identification result according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
The steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
Example one
As shown in fig. 1, an embodiment of the present invention provides an unmanned aerial vehicle target identification method, including: a data enhancement step, a feature extraction step and a target detection step;
s101, the data enhancement step comprises: performing data enhancement processing on the acquired original image of the target of the unmanned aerial vehicle to be identified to obtain enhanced data;
s102, the characteristic extraction step comprises the following steps: performing convolution operation on the enhanced data to extract image features for classifying and positioning the target of the unmanned aerial vehicle to be identified;
s103, the target detection step comprises: and coding the category and position information of the unmanned aerial vehicle target to be identified in the image through a neural network, and decoding the category and position information to determine the detection result of the unmanned aerial vehicle target to be identified in the original image.
The embodiment of the invention adopts a regression-based method for deep-learning target identification, integrating unmanned aerial vehicle target classification and localization to realize end-to-end detection. The detection speed reaches 40 frames per second, enabling real-time video target identification.
In the embodiment of the invention, the method beforehand comprises: establishing an unmanned aerial vehicle data set, adopting a distance-based K-means clustering algorithm, and using distance as the evaluation index of similarity to calculate the prior frames.
In the embodiment of the present invention, the data enhancement processing on the acquired original image of the target of the unmanned aerial vehicle to be identified includes:
and carrying out random blurring and/or motion blurring processing on the original image of the unmanned aerial vehicle target to be identified.
In the embodiment of the present invention, performing convolution operation on the enhanced data to extract image features for classifying and positioning the target of the unmanned aerial vehicle to be identified includes:
sequentially performing first convolution processing, second convolution processing and third convolution processing on the enhanced data, respectively extracting a first detection characteristic diagram, a second detection characteristic diagram and a third detection characteristic diagram, and fusing the obtained first detection characteristic diagram, second detection characteristic diagram and third detection characteristic diagram; and taking the fused features as image features for classifying and positioning the target of the unmanned aerial vehicle to be identified.
The neural network structure of the embodiment of the invention consists of an input layer, convolutional layers and an output layer. The input layer receives the data obtained by performing data enhancement processing on the original image. This is followed by 53 convolutional layers, whose main function is to perform convolution operations on the input data to extract image features for subsequent classification and localization. Each convolution module (Conv) consists of a convolutional layer followed by a Batch Normalization (BN) layer and a Leaky ReLU (Rectified Linear Unit) layer, which together perform feature extraction, normalization and activation. Residual modules (Residual) are also inserted between convolution modules to alleviate the vanishing-gradient problem. The network body is divided into 3 stages: convolutional layers 1-26 form stage 1, layers 27-43 form stage 2, and layers 44-52 form stage 3. The shallow layers (1-26) have a small receptive field and are responsible for detecting small targets; the deep layers (44-52) have a large receptive field and are responsible for detecting large targets; the middle layers (27-43) are responsible for detecting medium-sized targets. The convolutional layers of the three stages output three different feature maps, and the deep feature map is up-sampled and fused with the shallower feature maps, so that targets of different scales can be detected and the detection capability for targets of different sizes is improved. The information finally output by the neural network comprises the encoding of the category and position information of the targets in the image; by decoding this information uniformly, the detection result can be drawn in the original image.
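As a purely illustrative sketch (not code from the patent), the Conv module described above, a convolution followed by Batch Normalization and Leaky ReLU, plus the residual skip connection, can be outlined as follows; the NumPy helpers, the channels-last layout and the restriction to 1 × 1 (pointwise) convolution are simplifications chosen for clarity:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # normalize per channel over batch and spatial axes (channels-last layout)
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def leaky_relu(x, alpha=0.1):
    # Leaky ReLU: pass positives through, scale negatives by alpha
    return np.where(x > 0, x, alpha * x)

def conv1x1(x, w):
    # pointwise (1x1) convolution: x is (N, H, W, C_in), w is (C_in, C_out)
    return x @ w

def conv_module(x, w):
    # Conv -> BN -> Leaky ReLU, as in each Conv block of the network
    return leaky_relu(batch_norm(conv1x1(x, w)))

def residual_block(x, w1, w2):
    # two Conv modules plus an identity skip connection
    return x + conv_module(conv_module(x, w1), w2)
```

In a full network these modules would be stacked into the 53-layer backbone; the sketch only shows how one block transforms a feature tensor while preserving its spatial shape.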
In the embodiment of the invention, the unmanned aerial vehicle data set is established, a distance-based K-means clustering algorithm is adopted, and the distance is used as an evaluation index calculation prior frame of similarity, and the method comprises the following steps:
acquiring image data of unmanned aerial vehicles of different models, and establishing unmanned aerial vehicle data sets classified under each flight scene and each flight attitude;
adopting a distance-based K-means clustering algorithm, using distance as the evaluation index of similarity, and making the ratio IOU of the intersection and the union of the prior frame and the real frame as large as possible, wherein the distance d for the K-means clustering is expressed as:
d=1-IOU。
In the embodiment of the invention, when an image is detected, a large number of detection frames with different aspect ratios are randomly generated with each image pixel as a center; the category and offset are then predicted for each detection frame, the position of the detection frame is adjusted according to the predicted offset to obtain a predicted bounding box, and finally the predicted bounding boxes to be output are screened. However, the computational cost of randomly generated detection frames is huge; generating prior frames in advance from the data set therefore both accelerates training and improves the accuracy of position prediction.
The embodiment of the invention adopts a distance-based K-means clustering algorithm and adopts the distance as an evaluation index of similarity, namely, the closer the distance between two objects is, the greater the similarity of the two objects is. The purpose of clustering in the embodiment of the present invention is to make the prior frame and the adjacent real frame have a larger IOU (the ratio of the intersection and the union of the prior frame and the real frame is larger, and the larger the ratio is, the closer the prior frame and the real frame are), so the distance d formula of this clustering is as follows:
d=1-IOU
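By way of illustration only, this clustering with distance d = 1 − IOU can be sketched in Python as below; boxes are represented by their widths and heights, and the IOU is computed as if all boxes shared a common center (a standard simplification for anchor clustering, assumed here rather than taken from the patent):

```python
import numpy as np

def iou_wh(box, clusters):
    # IOU between one (w, h) box and k cluster boxes, all centered at the origin
    inter = np.minimum(box[0], clusters[:, 0]) * np.minimum(box[1], clusters[:, 1])
    union = box[0] * box[1] + clusters[:, 0] * clusters[:, 1] - inter
    return inter / union

def kmeans_iou(boxes, k, iters=100, seed=0):
    # K-means over (w, h) pairs with distance d = 1 - IOU
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # assign each box to the cluster with the smallest distance
        assign = np.array([np.argmin(1 - iou_wh(b, clusters)) for b in boxes])
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else clusters[i] for i in range(k)])
        if np.allclose(new, clusters):
            break
        clusters = new
    return clusters
```

Run on the widths and heights of all labeled boxes with k = 9, such a routine yields the prior frame sizes used by the detector.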
In the embodiment of the present invention, the identification method further includes calculating the loss function of the neural network, which consists of three parts: the target localization offset loss L_loc, the target confidence loss L_conf, and the target classification loss L_cla. The offset loss adopts the sum of squared errors, the classification loss adopts binary cross-entropy, and λ1, λ2, λ3 are balance coefficients:

L_total = λ1·L_loc + λ2·L_conf + λ3·L_cla
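A minimal numerical sketch of this composite loss follows (illustrative only; the argument layout, the per-branch targets and the default balance coefficients are assumptions, not taken from the patent):

```python
import numpy as np

def yolo_style_loss(pred_xywh, true_xywh, pred_conf, true_conf,
                    pred_cls, true_cls, lam=(1.0, 1.0, 1.0)):
    # L_total = lam1*L_loc + lam2*L_conf + lam3*L_cla
    eps = 1e-7

    # localization offset loss: sum of squared errors
    l_loc = np.sum((pred_xywh - true_xywh) ** 2)

    # confidence and classification losses: binary cross-entropy
    def bce(p, t):
        p = np.clip(p, eps, 1 - eps)
        return -np.sum(t * np.log(p) + (1 - t) * np.log(1 - p))

    l_conf = bce(pred_conf, true_conf)
    l_cla = bce(pred_cls, true_cls)
    return lam[0] * l_loc + lam[1] * l_conf + lam[2] * l_cla
```

A perfect prediction drives all three terms toward zero, while localization, confidence or class errors each raise the total in proportion to their balance coefficient.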
In the embodiment of the invention, when the unmanned aerial vehicle data set is established, random fuzzy and/or motion fuzzy processing is carried out on the unmanned aerial vehicle data set.
In the embodiment of the present invention, the performing random fuzzy processing on the data set of the unmanned aerial vehicle includes:
carrying out fuzzy processing on each picture in the unmanned aerial vehicle data set by adopting a Gaussian point spread function: the imaging formula of the optical system is as follows:
g(x,y)=∫∫f(ξ,η)h(x-ξ,y-η)dξdη=f(x,y)*h(x,y)
wherein f(x,y) is the original (unblurred) image, h(x,y) is the point spread function, and ξ, η are the offsets of the image in the x and y directions;

the Gaussian defocus blur model formula is as follows:

h(x,y) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²))

in the formula, σ is the parameter of the Gaussian defocus model; defocus blurs of different degrees are simulated by changing σ.
In the embodiment of the present invention, the motion blur processing on the data set of the unmanned aerial vehicle includes:
the imaging formula of the motion-blurred image is as follows:

g(x,y) = (1/T) ∫₀ᵀ f(x − v_x·t, y − v_y·t) dt + n(x,y)

wherein v_x is the translation speed of the image in the x direction, v_y is the translation speed in the y direction, T is the shutter-open time, i.e. the time over which the blurred image is generated, and n(x,y) is additive noise.
Because the actual unmanned aerial vehicle countermeasure scene is relatively complex, the actual image is often blurred by factors such as weather conditions (rain and fog), defocusing of the optical system, rapid movement of the target, or excessive target distance, so a model trained only on clear images cannot be applied to the actual scene. Therefore, in the embodiment of the invention, data enhancement is performed on the data set during training, and the target image in the labeling frame is randomly blurred.
In the embodiment of the invention, a Gaussian point spread function model is used; this model appears widely in fields such as microscopy and optical cameras. In unmanned aerial vehicle imaging, many factors of the point spread function influence the imaging quality, and the combined result of these factors tends toward a Gaussian distribution. The imaging formula is as follows:
g(x,y)=∫∫f(ξ,η)h(x-ξ,y-η)dξdη=f(x,y)*h(x,y)
wherein f(x,y) is the original (unblurred) image, h(x,y) is the point spread function, and ξ, η are the offsets of the image in the x and y directions. The Gaussian defocus blur model formula is as follows:

h(x,y) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²))

in the formula, σ is the parameter of the Gaussian defocus model; defocus blur of different degrees can be simulated by changing σ.
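To make the Gaussian defocus model concrete, here is a small sketch (illustrative only; the kernel radius of 3σ and the edge padding are implementation choices of ours, not specified by the patent):

```python
import numpy as np

def gaussian_psf(size, sigma):
    # sampled Gaussian point spread function h(x, y)
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return h / h.sum()  # normalize so overall brightness is preserved

def defocus_blur(img, sigma):
    # g = f * h : 2-D convolution with the Gaussian PSF, same-size output
    k = gaussian_psf(2 * int(3 * sigma) + 1, sigma)
    pad = k.shape[0] // 2
    f = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(f[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out
```

Varying `sigma` reproduces the different degrees of defocus blur described above; a production pipeline would use a library convolution instead of the explicit loops.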
Since the projection of the object on the imaging plane is translated or rotated while the shutter of the photographing apparatus is open, the received images overlap with each other. The imaging formula of the motion-blurred image in the embodiment of the present invention is thus as follows:
g(x,y) = (1/T) ∫₀ᵀ f(x − v_x·t, y − v_y·t) dt + n(x,y)

wherein v_x is the translation speed of the image in the x direction, v_y is the translation speed in the y direction, T is the shutter-open time, i.e. the time over which the blurred image is generated, and n(x,y) is additive noise.
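A discrete approximation of this motion-blur model can be sketched as follows (illustrative only; the use of `np.roll` for the shift, the number of integration steps and the noise handling are our assumptions):

```python
import numpy as np

def motion_blur(img, vx, vy, T, steps=10, noise_sigma=0.0, seed=0):
    # approximates g(x,y) = (1/T) * integral_0^T f(x - vx*t, y - vy*t) dt + n(x,y)
    # by averaging `steps` shifted copies of the frame over the shutter time T
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(img, dtype=float)
    for t in np.linspace(0, T, steps):
        dx, dy = int(round(vx * t)), int(round(vy * t))
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    g = acc / steps
    if noise_sigma > 0:
        g += rng.normal(0.0, noise_sigma, img.shape)  # additive noise n(x, y)
    return g
```

With zero velocities the frame is returned unchanged; increasing `vx`, `vy` or `T` smears the target along the motion direction, which is the effect the data enhancement step simulates.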
Example two
As shown in fig. 2, an embodiment of the present invention further provides an unmanned aerial vehicle target identification apparatus, including: the system comprises a data enhancement module, a feature extraction module and a target detection module;
the data enhancement module: performing data enhancement processing on the acquired original image of the target of the unmanned aerial vehicle to be identified to obtain enhanced data;
the feature extraction module: performing convolution operation on the enhanced data to extract image features for classifying and positioning the target of the unmanned aerial vehicle to be identified;
the target detection module: the method comprises the steps of coding the category and position information of the unmanned aerial vehicle target to be identified in an image through a neural network, and decoding the category and position information to determine the unmanned aerial vehicle target detection result to be identified in the original image.
EXAMPLE III
Embodiments also provide a computing device, referring to fig. 3, comprising a memory 1120, a processor 1110 and a computer program stored in said memory 1120 and executable by said processor 1110, the computer program being stored in a space 1130 for program code in the memory 1120, the computer program, when executed by the processor 1110, implementing the method steps 1131 for performing any of the methods according to the invention.
The embodiment of the application also provides a computer readable storage medium. Referring to fig. 4, the computer readable storage medium comprises a storage unit for program code provided with a program 1131' for performing the steps of the method according to the invention, which program is executed by a processor.
Example four
The specific application object of the embodiment of the invention is unmanned aerial vehicle detection, whose software code is implemented in python and VC++.
Aiming at the demands of unmanned aerial vehicle countermeasures, this embodiment realizes unmanned aerial vehicle detection based on deep learning, and can achieve real-time, automatic unmanned aerial vehicle detection and identification. As shown in fig. 5, the present embodiment illustrates the flow of unmanned aerial vehicle detection based on deep learning:
1) establishing unmanned aerial vehicle data set
In order to ensure the accuracy of the unmanned aerial vehicle detection result, this embodiment establishes a finely classified unmanned aerial vehicle data set with multiple backgrounds and multiple flight attitudes for training the deep learning network. First, image data of a plurality of different unmanned aerial vehicles are collected, as shown in fig. 6. The collected scenes include 30 different scenes such as residential buildings, factories, business districts, mountains, forests, rivers and seacoasts. The collected weather includes sunny days and rainy days, as shown in fig. 7. The resolution of the unmanned aerial vehicle visible-light image data is 1920 × 1080.
When collecting unmanned aerial vehicle data: 1. images of the unmanned aerial vehicle are acquired at 4 different pitch angles and 8 different distances within the 30-degree upward viewing range of the camera. 2. The image data must include images of the unmanned aerial vehicle in different orientations (front, back, up, down, side). 3. The position of the unmanned aerial vehicle in the image must not be within 50 pixels of the frame border, and the unmanned aerial vehicle must occupy no less than 128 × 128 pixels and no more than 1/4 of the frame. 4. A small number of slightly occluded unmanned aerial vehicle images must also be collected. 5. The acquired data are then screened and labeled, finally yielding 100,000 visible-light images, of which 20,000 form the training set and 80,000 the verification set.
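The pixel constraints in item 3 above can be expressed as a simple validity check. The helper below is hypothetical (the function name, the (x, y, w, h) top-left box convention and the default frame size are our own), shown only to make the constraints concrete:

```python
def annotation_ok(box, frame_w=1920, frame_h=1080, margin=50,
                  min_side=128, max_area_frac=0.25):
    # box = (x, y, w, h): top-left corner plus size, in pixels
    x, y, w, h = box
    # not within 50 pixels of the frame border
    inside = (x >= margin and y >= margin and
              x + w <= frame_w - margin and y + h <= frame_h - margin)
    # no less than 128 x 128 pixels
    big_enough = w >= min_side and h >= min_side
    # no more than 1/4 of the frame area
    small_enough = w * h <= max_area_frac * frame_w * frame_h
    return inside and big_enough and small_enough
```

Such a check could be run during screening to reject annotations that violate the collection rules.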
2) Calculate a priori Block
As the number and scale of the output feature maps change, the sizes of the prior frames need to be adjusted correspondingly. The prior frame sizes are obtained with the K-means clustering method described above; there are three feature maps and 3 prior frames are set for each feature map, so prior frames of 9 sizes are clustered. The 9 prior frames calculated on the unmanned aerial vehicle data set are: (14 × 10), (27 × 16), (30 × 23), (58 × 27), (66 × 40), (129 × 69), (126 × 88), (202 × 178) and (393 × 331).
In this distribution, the larger prior frames (126 × 88), (202 × 178) and (393 × 331) are applied to the smallest 13 × 13 feature map (with the largest receptive field) and are suitable for detecting larger objects. The medium prior frames (58 × 27), (66 × 40) and (129 × 69) are applied to the medium 26 × 26 feature map (medium receptive field) and are suitable for detecting medium-sized objects. The smaller prior frames (14 × 10), (27 × 16) and (30 × 23) are applied to the largest 52 × 52 feature map (smallest receptive field) and are suitable for detecting smaller objects.
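This distribution follows from sorting the clustered prior frames by area; the sketch below illustrates it (the dict keyed by grid size is our own convention, not part of the patent):

```python
def assign_anchors(anchors):
    # sort the 9 clustered prior frames by area and split them across the
    # three detection scales: smallest boxes -> 52x52 map, largest -> 13x13 map
    ranked = sorted(anchors, key=lambda wh: wh[0] * wh[1])
    return {52: ranked[0:3], 26: ranked[3:6], 13: ranked[6:9]}
```

Feeding in the 9 prior frames above reproduces the per-scale grouping described in this section.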
3) Data enhancement
Because images acquired in practical application scenarios are often blurred, random blurring processing needs to be performed on the training data set to simulate the practical situation. As shown in fig. 8 and fig. 9, fig. 8 is a schematic diagram of the raw training data of the unmanned aerial vehicle, and fig. 9 is a schematic diagram of the blurred training data.
4) Training neural networks
The data are input into the neural network for training. During training, the training batch is set to 100, the learning rate to 0.001 and the decay to 0.0005. After training, the obtained model can be used for unmanned aerial vehicle target detection; in use, a target is judged to be an unmanned aerial vehicle when the confidence is greater than 0.5. Fig. 10 shows the unmanned aerial vehicle recognition result; the recognition accuracy is above 90% and the recognition speed is 40 FPS, so the method can be applied to the field of unmanned aerial vehicle detection.
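Applying the 0.5 confidence threshold at inference time amounts to a simple filter over the decoded detections. The sketch below is illustrative; the detection dict fields are assumed, not taken from the patent:

```python
def filter_detections(dets, conf_thresh=0.5):
    # keep only candidate boxes whose confidence exceeds the threshold
    return [d for d in dets if d["confidence"] > conf_thresh]
```

In a full pipeline this filtering would precede drawing the surviving boxes back onto the original image.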
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.

Claims (10)

1. An unmanned aerial vehicle target identification method is characterized by comprising the following steps: a data enhancement step, a feature extraction step and a target detection step;
the data enhancement step includes: performing data enhancement processing on the acquired original image of the target of the unmanned aerial vehicle to be identified to obtain enhanced data;
the feature extraction step includes: performing convolution operation on the enhanced data to extract image features for classifying and positioning the target of the unmanned aerial vehicle to be identified;
the target detection step includes: and coding the category and position information of the unmanned aerial vehicle target to be identified in the image through a neural network, and decoding the category and position information to determine the detection result of the unmanned aerial vehicle target to be identified in the original image.
2. The identification method according to claim 1, characterized in that it further comprises, beforehand: establishing an unmanned aerial vehicle data set, and calculating the prior frames with a distance-based K-means clustering algorithm that adopts distance as the evaluation index of similarity.
3. The identification method according to claim 1 or 2, wherein the data enhancement processing of the acquired original image of the drone target to be identified comprises:
and carrying out random blurring and/or motion blurring processing on the original image of the unmanned aerial vehicle target to be identified.
4. The identification method according to claim 1 or 2, wherein the convolution operation of the enhancement data to extract the image features for the target classification and positioning of the unmanned aerial vehicle to be identified comprises:
sequentially performing first convolution processing, second convolution processing and third convolution processing on the enhanced data, extracting a first detection feature map, a second detection feature map and a third detection feature map respectively, and fusing the obtained first, second and third detection feature maps; and taking the fused features as the image features for classifying and positioning the target of the unmanned aerial vehicle to be identified.
5. The identification method according to claim 2, wherein the establishing of the unmanned aerial vehicle data set and the employing of a distance-based K-means clustering algorithm, and the employing of the distance as an evaluation index of similarity to calculate the prior box comprises:
acquiring image data of unmanned aerial vehicles of different models, and establishing unmanned aerial vehicle data sets classified under each flight scene and each flight attitude;
adopting a distance-based K-means clustering algorithm with distance as the evaluation index of similarity, so that the ratio IOU of the intersection to the union of the prior frame and the real frame is maximized, wherein the distance d of the K-means clustering is expressed as:
d=1-IOU。
6. The identification method according to claim 1 or 2, further comprising: calculating the loss function of the neural network: L_total = λ1·L_loc + λ2·L_conf + λ3·L_cla,
wherein L_loc is the target localization offset loss;
L_conf is the target confidence loss;
L_cla is the target classification loss;
the offset loss adopts the sum of squared errors, the classification loss adopts binary cross-entropy, and λ1, λ2, λ3 are balance coefficients.
7. The identification method according to claim 2 or 5, characterized in that, when the unmanned aerial vehicle data set is established, the unmanned aerial vehicle data set is subjected to random blurring and/or motion blurring processing.
8. The identification method of claim 7, wherein randomly blurring the drone dataset comprises:
blurring each picture in the unmanned aerial vehicle data set with a Gaussian point spread function, wherein the imaging formula of the optical system is as follows:
g(x,y)=∫∫f(ξ,η)h(x-ξ,y-η)dξdη=f(x,y)*h(x,y)
wherein f(x, y) represents the image before blurring, h(x, y) represents the point spread function, and ξ, η are the offsets of the image in x and y;
the gaussian defocus blur model formula is as follows:
h(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
in the formula, σ is a parameter of the gaussian defocus model.
9. The identification method of claim 7, wherein motion blur processing the drone dataset comprises:
the imaging formula of the motion-blurred image is as follows:
g(x, y) = ∫_0^T f(x − v_x·t, y − v_y·t) dt + n(x, y)
wherein v_x is the translation speed of the image in the x direction, v_y is the translation speed in the y direction, T is the shutter-open time, i.e. the time over which the blurred image is generated, and n(x, y) is additive noise.
10. The identification method according to claim 8, characterized in that varying degrees of defocus blur are simulated by varying σ.
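The two blur models in claims 8 and 9 can be sketched numerically as follows. This is an illustrative discretization, not the patent's implementation: the function names, the grid size, and the number of integration steps are assumptions, the motion integral is normalized by the number of steps so intensities stay in range, and the additive noise term n(x, y) is omitted.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Sampled Gaussian defocus PSF
    h(x, y) = exp(-(x^2 + y^2) / (2 sigma^2)) / (2 pi sigma^2),
    renormalized so the discrete kernel sums to 1."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    h = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return h / h.sum()

def motion_blur(img, vx, vy, T, steps=20):
    """Discretize g(x, y) = integral_0^T f(x - vx t, y - vy t) dt by
    averaging `steps` shifted copies of the frame (periodic boundary)."""
    acc = np.zeros_like(img, dtype=float)
    for t in np.linspace(0.0, T, steps):
        acc += np.roll(img, (int(round(vy * t)), int(round(vx * t))),
                       axis=(0, 1))
    return acc / steps
```

Convolving a sharp frame with `gaussian_psf` for several values of σ reproduces the varying degrees of defocus blur described in claim 10.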
CN202010751602.6A 2020-07-30 2020-07-30 Unmanned aerial vehicle target identification method Pending CN111881982A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010751602.6A CN111881982A (en) 2020-07-30 2020-07-30 Unmanned aerial vehicle target identification method


Publications (1)

Publication Number Publication Date
CN111881982A true CN111881982A (en) 2020-11-03

Family

ID=73204591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010751602.6A Pending CN111881982A (en) 2020-07-30 2020-07-30 Unmanned aerial vehicle target identification method

Country Status (1)

Country Link
CN (1) CN111881982A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529815A (en) * 2022-02-10 2022-05-24 中山大学 Deep learning-based traffic detection method, device, medium and terminal
CN117527137A (en) * 2024-01-06 2024-02-06 北京领云时代科技有限公司 System and method for interfering unmanned aerial vehicle communication based on artificial intelligence

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102216941A (en) * 2008-08-19 2011-10-12 数字标记公司 Methods and systems for content processing
CN103236037A (en) * 2013-04-03 2013-08-07 北京航空航天大学 Unmanned aerial vehicle real-time image simulation method based on hierarchical degradation model
CN108490473A (en) * 2018-02-10 2018-09-04 深圳大学 A kind of the unmanned plane enhancing localization method and system of fusion GNSS and UWB
CN108765325A (en) * 2018-05-17 2018-11-06 中国人民解放军陆军工程大学 A kind of small drone Restoration method of blurred image
CN109410130A (en) * 2018-09-28 2019-03-01 华为技术有限公司 Image processing method and image processing apparatus
CN109753903A (en) * 2019-02-27 2019-05-14 北航(四川)西部国际创新港科技有限公司 A kind of unmanned plane detection method based on deep learning
CN110109482A (en) * 2019-06-14 2019-08-09 上海应用技术大学 Target Tracking System based on SSD neural network
CN110751106A (en) * 2019-10-23 2020-02-04 南京航空航天大学 Unmanned aerial vehicle target detection method and system
CN111126359A (en) * 2019-11-15 2020-05-08 西安电子科技大学 High-definition image small target detection method based on self-encoder and YOLO algorithm
CN111161305A (en) * 2019-12-18 2020-05-15 任子行网络技术股份有限公司 Intelligent unmanned aerial vehicle identification tracking method and system
CN111460995A (en) * 2020-03-31 2020-07-28 普宙飞行器科技(深圳)有限公司 Unmanned aerial vehicle-based power line inspection method and inspection system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GAN Yutao: "Research on Convolutional Neural Networks for UAV Detection in Low-Altitude Airspace", China Master's Theses Full-text Database (Engineering Science and Technology II), no. 2020, pages 031 - 241 *


Similar Documents

Publication Publication Date Title
CN110956094B (en) RGB-D multi-mode fusion personnel detection method based on asymmetric double-flow network
CN108615226B (en) Image defogging method based on generation type countermeasure network
CN111178183B (en) Face detection method and related device
CN110580428A (en) image processing method, image processing device, computer-readable storage medium and electronic equipment
CN107273832B (en) License plate recognition method and system based on integral channel characteristics and convolutional neural network
CN105930822A (en) Human face snapshot method and system
CN110751630B (en) Power transmission line foreign matter detection method and device based on deep learning and medium
CN109635634B (en) Pedestrian re-identification data enhancement method based on random linear interpolation
CN113591968A (en) Infrared weak and small target detection method based on asymmetric attention feature fusion
CN112347933A (en) Traffic scene understanding method and device based on video stream
CN112801158A (en) Deep learning small target detection method and device based on cascade fusion and attention mechanism
CN110751206A (en) Multi-target intelligent imaging and identifying device and method
Patil et al. Motion saliency based generative adversarial network for underwater moving object segmentation
CN113435407B (en) Small target identification method and device for power transmission system
CN111881982A (en) Unmanned aerial vehicle target identification method
CN111881984A (en) Target detection method and device based on deep learning
CN108509826B (en) Road identification method and system for remote sensing image
Malav et al. DHSGAN: An end to end dehazing network for fog and smoke
CN115861756A (en) Earth background small target identification method based on cascade combination network
CN116402852A (en) Dynamic high-speed target tracking method and device based on event camera
CN115880260A (en) Method, device and equipment for detecting base station construction and computer readable storage medium
CN111738071A (en) Inverse perspective transformation method based on movement change of monocular camera
CN111260687A (en) Aerial video target tracking method based on semantic perception network and related filtering
Li et al. Weak moving object detection in optical remote sensing video with motion-drive fusion network
CN111695373A (en) Zebra crossing positioning method, system, medium and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination