CN114943251B - Unmanned aerial vehicle target recognition method based on fusion attention mechanism - Google Patents

Unmanned aerial vehicle target recognition method based on fusion attention mechanism

Info

Publication number
CN114943251B
CN114943251B (application CN202210548931.XA)
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
layer
module
attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210548931.XA
Other languages
Chinese (zh)
Other versions
CN114943251A (en)
Inventor
周代英
何彬宇
易传莉雯
王特起
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202210548931.XA priority Critical patent/CN114943251B/en
Publication of CN114943251A publication Critical patent/CN114943251A/en
Application granted granted Critical
Publication of CN114943251B publication Critical patent/CN114943251B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of unmanned aerial vehicle recognition, and in particular relates to an unmanned aerial vehicle target recognition method based on a fused attention mechanism. The recognition model comprises a data input layer, a preprocessing layer, convolution modules, an attention module, a fully connected layer and a softmax classification layer. The data input layer and the preprocessing layer process the input micro-Doppler spectrogram of the unmanned aerial vehicle; the convolution modules and the attention module extract depth features from the spectrogram; and the fully connected layer and the classification layer classify and identify the unmanned aerial vehicle target. Because the attention module is designed for the characteristics of the micro-Doppler spectrogram, it extracts the local feature information that is useful for classification, and fusing the attention module with the convolution modules yields features with better class separability, so the target recognition performance is further improved. Simulation experiments verify the effectiveness of the method.

Description

Unmanned aerial vehicle target recognition method based on fusion attention mechanism
Technical Field
The invention belongs to the technical field of unmanned aerial vehicle recognition, and particularly relates to an unmanned aerial vehicle target recognition method based on a fused attention mechanism.
Background
With the development of science and technology, unmanned aerial vehicles are widely used in both military and civil fields, but they also bring safety problems such as collisions, terrorist attacks and unauthorized flights. It is therefore particularly important to identify the type of an unmanned aerial vehicle quickly and accurately so that a timely response can be made.
At present, the main approach to unmanned aerial vehicle target recognition is to generate micro-Doppler spectrograms from the micro-motions of the rotors and then perform classification based on these spectrograms. Traditional recognition methods require hand-designed features, which is time-consuming and yields low accuracy. In recent years, recognition methods based on convolutional neural networks have achieved great success in the image field, and many researchers have introduced them into unmanned aerial vehicle target recognition. However, a convolutional neural network attends to the feature information of all regions and therefore often overlooks the local regions that are most discriminative for classification, whereas an attention mechanism focuses on such distinguishing local regions. A convolutional neural network fused with an attention mechanism can therefore further improve the accuracy of unmanned aerial vehicle target recognition.
Disclosure of Invention
The invention combines an attention mechanism with a convolutional neural network and provides an unmanned aerial vehicle target recognition method based on the fused attention mechanism.
The technical scheme of the invention is as follows:
an unmanned aerial vehicle target recognition method based on a fusion attention mechanism comprises the following steps:
s1, acquiring a micro Doppler spectrogram of an unmanned aerial vehicle to form a training data set;
s2, constructing an unmanned aerial vehicle target recognition model based on a fusion attention mechanism, wherein the unmanned aerial vehicle target recognition model comprises an input layer, a first convolution module, an attention module, a second convolution module, a third convolution module, a full connection layer and a classification layer; the input layer receives training data and inputs the training data into a first convolution module after normalization, the first convolution module outputs a feature graph x to an attention module, and an attention function adopted by the attention module is as follows:
g(x(i,j)) = 1, if (1 − r)·h/2 ≤ i < (1 + r)·h/2;  0, otherwise
wherein h is the height of x, w is the width of x, x(i,j) is the pixel value in the ith row and jth column of the feature map x, and r is a preset parameter denoting the height ratio occupied by the features of interest for recognition, determined experimentally according to actual needs; the attention function g(·) generates a two-dimensional mask of the same size as the feature map x, whose pixel values are 1 in the region of interest and 0 elsewhere (an illustrative code sketch of this mask is given after step S4), and the feature after attention processing is expressed as follows:
f(g(x(i,j)),x(i,j))=g(x(i,j))×x(i,j),i∈[0,h),j∈[0,w)
feature extraction is then carried out by the second and third convolution modules, which are conventional convolution modules comprising a number of convolution kernels whose size and number are determined experimentally; the output of the third convolution module is the extracted two-dimensional depth feature, which a subsequent fully connected layer converts into a one-dimensional feature vector; the classification layer is implemented with a softmax classifier that takes this feature vector as input, and the label corresponding to the maximum value in the classifier's output vector is taken as the class label of the target;
s3, training the unmanned aerial vehicle target recognition model in the S2 by adopting the training data set in the S1 to obtain a trained unmanned aerial vehicle recognition model;
s4, acquiring a micro Doppler spectrogram of the unmanned aerial vehicle to be identified, and inputting the micro Doppler spectrogram to be identified by adopting a trained unmanned aerial vehicle identification model.
The beneficial effect of the method is that, because the attention module is designed for the characteristics of the micro-Doppler spectrogram, it extracts the local feature information that is useful for classification, and fusing the attention module with the convolution modules yields features with better class separability, so that the recognition performance for the target is further improved. Simulation experiments verify the effectiveness of the method.
Drawings
Fig. 1 is a schematic diagram of an unmanned aerial vehicle target recognition model structure based on a fused attention mechanism.
Fig. 2 is a schematic diagram of the micro-Doppler spectrograms of a two-rotor and a four-rotor unmanned aerial vehicle.
Detailed Description
The invention is described in detail below with reference to the drawings and simulations:
As shown in fig. 1, the unmanned aerial vehicle target recognition model based on the fusion attention mechanism provided by the invention consists of a data input layer, a preprocessing layer, convolution modules, an attention module, a fully connected layer and a softmax classification layer; the convolution modules and the attention module extract features, while the fully connected layer and the classification layer perform the classification and recognition of targets. The input part consists of the data input layer and the preprocessing layer. The input data set is a training set or a test set composed of micro-Doppler spectrograms of unmanned aerial vehicles; the preprocessing layer normalizes the input spectrograms.
The feature extraction part consists of convolution module 1, the attention module, convolution module 2 and convolution module 3. The convolution modules are ordinary convolutional networks, differing mainly in the size of their convolution kernels; the attention module is implemented as follows:
as shown in fig. 2, (a) is a micro-doppler spectrum of the two-rotor unmanned aerial vehicle, and fig. 2 (b) is a micro-doppler spectrum of the four-rotor unmanned aerial vehicle, it can be seen from fig. 2 that the key distinguishing feature is located in the middle region of the image, so that the network is expected to be capable of focusing on the middle region of the feature map more, and an attention mechanism is designed based on the feature. Assuming that the height of the feature map x is h, the width is w, and the height ratio occupied by the main feature is r, the design attention function g (x (i, j)) is as follows:
g(x(i,j)) = 1, if (1 − r)·h/2 ≤ i < (1 + r)·h/2;  0, otherwise
where x(i,j) is the pixel value in the ith row and jth column of the feature map x; that is, g(·) produces a two-dimensional mask of the same size as the feature map x, with pixel values of 1 in the main region of interest and 0 in the other regions. The feature after attention processing is expressed as follows:
f(g(x(i,j)),x(i,j))=g(x(i,j))×x(i,j),i∈[0,h),j∈[0,w)
That is, the feature processed by the attention mechanism retains only the discriminative local region and ignores the other regions, which overcomes the problem that a conventional convolutional network attends to the feature information of the whole image area.
The classification and recognition part consists of the fully connected layer and the classification layer. The fully connected layer converts the extracted two-dimensional depth features into a one-dimensional feature vector to facilitate the subsequent classification; the classification layer is implemented with a softmax classifier, and the label corresponding to the maximum value in the classifier's output vector is taken as the class label of the target.
Simulation example
Six types of unmanned aerial vehicle are designed in the simulation experiment: single-rotor, two-rotor, three-rotor, four-rotor, six-rotor and eight-rotor. The distance from the rotor to the centre of the unmanned aerial vehicle is 0.8 m, the blade length is 0.3 m, and the rotor rotation rate is 30 r/s; the carrier frequency of the simulated radar is 34.6 GHz (Ka band) and the pulse repetition frequency is 125 kHz; the distance from the centre of the unmanned aerial vehicle to the radar is 100 m, the pitch angle of the unmanned aerial vehicle relative to the radar is 10 degrees, the azimuth angle is 45 degrees, and the signal-to-noise ratio is 15 dB.
The total observation time for each type of target is 15 s. The radar echo of every 0.05 s is taken as one frame, with an overlap ratio of 0.6 between adjacent frames, so 15/(0.05 × (1 − 0.6)) = 750 radar echo frames are obtained for each type of target, from which 600 are randomly selected as samples.
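The patent does not state the echo model used to generate these signals; a common simplification is to treat each blade tip as a rotating point scatterer. The following Python sketch illustrates how the parameters above could be used under that assumption; all function and variable names are hypothetical, and the azimuth-dependent geometry is folded into a constant range term for brevity.

import numpy as np

# Parameters taken from the simulation description above
c, fc, prf = 3e8, 34.6e9, 125e3            # speed of light, carrier frequency, PRF
wavelength = c / fc
R0 = 100.0                                  # UAV centre to radar distance, m
d_hub = 0.8                                 # rotor hub offset from the UAV centre, m
L_blade = 0.3                               # blade length, m
f_rot = 30.0                                # rotor rotation rate, r/s
pitch = np.deg2rad(10.0)                    # pitch angle of the UAV relative to the radar
snr_db = 15.0                               # signal-to-noise ratio, dB

def rotor_echo_frame(duration=0.05, n_blades=2, seed=0):
    """One radar echo frame of a single rotor, blade tips modelled as point scatterers."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, 1.0 / prf)
    echo = np.zeros(t.size, dtype=complex)
    for k in range(n_blades):
        phi0 = 2.0 * np.pi * k / n_blades                       # initial blade angle
        # line-of-sight range of the k-th blade tip
        r_t = (R0 + d_hub * np.cos(pitch)
               + L_blade * np.cos(pitch) * np.cos(2.0 * np.pi * f_rot * t + phi0))
        echo += np.exp(-1j * 4.0 * np.pi * r_t / wavelength)    # two-way phase history
    noise_power = np.mean(np.abs(echo) ** 2) / (10.0 ** (snr_db / 10.0))
    noise = np.sqrt(noise_power / 2.0) * (rng.standard_normal(t.size)
                                          + 1j * rng.standard_normal(t.size))
    return echo + noise

frame = rotor_echo_frame()   # in practice, overlapping 0.05 s frames (overlap 0.6)
                             # are cut from the full 15 s echo of each target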
Short-time Fourier transform (STFT) is performed on the obtained radar echo data to obtain micro-Doppler spectrograms, giving 600 micro-Doppler spectrograms for each class of target. The samples of each class are divided into training and test sets at a ratio of 7:3, i.e. 420 training samples and 180 test samples per class, so the 6 classes of targets provide 2520 training samples and 1080 test samples in total.
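A minimal sketch of this data-preparation step, assuming Python with SciPy; the STFT window length and overlap used below are illustrative values that the patent does not specify.

import numpy as np
from scipy.signal import stft

def to_spectrogram(echo, fs=125e3, nperseg=256, noverlap=192):
    """Normalised micro-Doppler spectrogram (dB magnitude) of one complex echo frame."""
    _, _, zxx = stft(echo, fs=fs, nperseg=nperseg, noverlap=noverlap,
                     return_onesided=False)
    s = 20.0 * np.log10(np.abs(np.fft.fftshift(zxx, axes=0)) + 1e-12)
    return (s - s.min()) / (s.max() - s.min())      # normalise to [0, 1]

# example with a synthetic complex frame; in practice the simulated radar echoes are used
frame = np.exp(1j * 2.0 * np.pi * 5e3 * np.arange(0.0, 0.05, 1.0 / 125e3))
spec = to_spectrogram(frame)
# The 600 spectrograms per class are then split 7:3 into training and test sets, e.g. with
# sklearn.model_selection.train_test_split(specs, labels, test_size=0.3, stratify=labels).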
Recognition experiments were conducted on the above data set using the conventional CNN method and the method herein; the results are shown in Table 1. The learning rate is 0.001, the number of iterations is 200, and the cross-entropy loss function is adopted; convolution module 1 contains 32 convolution kernels of size 5×5, convolution module 2 contains 64 convolution kernels of size 5×5, and convolution module 3 contains 32 convolution kernels of size 3×3; the height ratio r in the attention module is 0.7.
Table 1 Recognition results of the two network models
(The table appears as an image in the original publication; it reports the average recognition accuracy and recognition time of the conventional CNN model and of the proposed CNN + attention mechanism model.)
From Table 1 it can be seen that the average recognition accuracy of the CNN + attention mechanism method reaches 97.68%. The recognition time of the plain CNN method is almost the same as that of the CNN + attention mechanism method, yet the average recognition accuracy of the CNN + attention mechanism method is 0.55 percentage points higher than that of the conventional CNN method. These results show that the method proposed herein is effective.
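For reference, the following is a minimal sketch of the network and training configuration described above, written with PyTorch as an assumed framework (the patent names none). The kernel counts and sizes, the attention height ratio r = 0.7, the cross-entropy loss, the learning rate of 0.001 and the 200 iterations follow the description; the pooling layers, padding, input size and the Adam optimizer are illustrative assumptions.

import torch
import torch.nn as nn

class CenterBandAttention(nn.Module):
    """Keeps only the centred horizontal band of height ratio r of the feature map."""
    def __init__(self, r: float = 0.7):
        super().__init__()
        self.r = r

    def forward(self, x: torch.Tensor) -> torch.Tensor:        # x: (N, C, H, W)
        h = x.size(2)
        top = int(round((1.0 - self.r) * h / 2.0))
        bottom = int(round((1.0 + self.r) * h / 2.0))
        mask = torch.zeros(1, 1, h, 1, device=x.device, dtype=x.dtype)
        mask[:, :, top:bottom, :] = 1.0
        return x * mask                                         # f(g(x), x) = g(x) * x

def build_model(num_classes: int = 6, in_size: int = 64) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),   # convolution module 1
        CenterBandAttention(r=0.7),                                    # attention module
        nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),  # convolution module 2
        nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # convolution module 3
        nn.Flatten(),                                                  # 2-D feature to 1-D vector
        nn.Linear(32 * (in_size // 8) ** 2, num_classes),              # fully connected layer
    )                                                                  # softmax is folded into the loss

model = build_model()
criterion = nn.CrossEntropyLoss()                          # cross-entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate 0.001
# for epoch in range(200):                                 # 200 iterations
#     for x_batch, y_batch in train_loader:                # hypothetical DataLoader of spectrograms
#         optimizer.zero_grad()
#         loss = criterion(model(x_batch.unsqueeze(1).float()), y_batch)
#         loss.backward(); optimizer.step()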

Claims (1)

1. An unmanned aerial vehicle target recognition method based on a fusion attention mechanism, characterized by comprising the following steps:
s1, acquiring a micro Doppler spectrogram of an unmanned aerial vehicle to form a training data set;
s2, constructing an unmanned aerial vehicle target recognition model based on a fusion attention mechanism, wherein the unmanned aerial vehicle target recognition model comprises an input layer, a first convolution module, an attention module, a second convolution module, a third convolution module, a full connection layer and a classification layer; the input layer receives training data and inputs the training data into a first convolution module after normalization, the first convolution module outputs a feature graph x to an attention module, and an attention function adopted by the attention module is as follows:
g(x(i,j)) = 1, if (1 − r)·h/2 ≤ i < (1 + r)·h/2;  0, otherwise
wherein h is the height of x, w is the width of x, x(i,j) is the pixel value in the ith row and jth column of the feature map x, and r is a preset parameter denoting the height ratio occupied by the features of interest for recognition; the attention function g(·) generates a two-dimensional mask of the same size as the feature map x, whose pixel values are 1 in the region of interest and 0 elsewhere, and the feature processed by the attention mechanism is expressed as follows:
f(g(x(i,j)),x(i,j))=g(x(i,j))×x(i,j),i∈[0,h),j∈[0,w)
then carrying out feature extraction through the second and third convolution modules, wherein the output of the third convolution module is the extracted two-dimensional depth feature; a fully connected layer is connected in series and converts the extracted two-dimensional depth feature into a one-dimensional feature vector; finally, a softmax classifier implements the classification layer, the one-dimensional feature vector output by the fully connected layer is used as the input of the classification layer, and the label corresponding to the maximum value in the output vector of the classification layer is used as the class label of the target;
s3, training the unmanned aerial vehicle target recognition model in the S2 by adopting the training data set in the S1 to obtain a trained unmanned aerial vehicle recognition model;
s4, acquiring a micro Doppler spectrogram of the unmanned aerial vehicle to be identified, and inputting the micro Doppler spectrogram to be identified by adopting a trained unmanned aerial vehicle identification model.
CN202210548931.XA 2022-05-20 2022-05-20 Unmanned aerial vehicle target recognition method based on fusion attention mechanism Active CN114943251B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210548931.XA CN114943251B (en) 2022-05-20 2022-05-20 Unmanned aerial vehicle target recognition method based on fusion attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210548931.XA CN114943251B (en) 2022-05-20 2022-05-20 Unmanned aerial vehicle target recognition method based on fusion attention mechanism

Publications (2)

Publication Number Publication Date
CN114943251A CN114943251A (en) 2022-08-26
CN114943251B (en) 2023-05-02

Family

ID=82909087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210548931.XA Active CN114943251B (en) 2022-05-20 2022-05-20 Unmanned aerial vehicle target recognition method based on fusion attention mechanism

Country Status (1)

Country Link
CN (1) CN114943251B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11915602B2 (en) * 2020-07-31 2024-02-27 North Carolina State University Drone detection, classification, tracking, and threat evaluation system employing field and remote identification (ID) information

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543606A (en) * 2018-11-22 2019-03-29 中山大学 A kind of face identification method that attention mechanism is added
CN110516596A (en) * 2019-08-27 2019-11-29 西安电子科技大学 Empty spectrum attention hyperspectral image classification method based on Octave convolution
CN111160527A (en) * 2019-12-27 2020-05-15 歌尔股份有限公司 Target identification method and device based on MASK RCNN network model
CN114140683A (en) * 2020-08-12 2022-03-04 天津大学 Aerial image target detection method, equipment and medium
CN112464792A (en) * 2020-11-25 2021-03-09 北京航空航天大学 Remote sensing image ship target fine-grained classification method based on dynamic convolution
CN112200161A (en) * 2020-12-03 2021-01-08 北京电信易通信息技术股份有限公司 Face recognition detection method based on mixed attention mechanism
CN112966555A (en) * 2021-02-02 2021-06-15 武汉大学 Remote sensing image airplane identification method based on deep learning and component prior
CN113191185A (en) * 2021-03-10 2021-07-30 中国民航大学 Method for classifying targets of unmanned aerial vehicle by radar detection through Dense2Net
CN114488140A (en) * 2022-01-24 2022-05-13 电子科技大学 Small sample radar one-dimensional image target identification method based on deep migration learning
CN114445366A (en) * 2022-01-26 2022-05-06 沈阳派得林科技有限责任公司 Intelligent long-distance pipeline radiographic image defect identification method based on self-attention network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Payal Mittal et al. Deep learning-based object detection in low-altitude UAV datasets: A survey. Image and Vision Computing. 2020, vol. 104, pp. 1-24. *
Wei Sun et al. RSOD: Real-time small object detection algorithm in UAV-based traffic monitoring. Applied Intelligence. 2021, vol. 52, pp. 8448-8463. *
Yang Wei. Research on autonomous target detection technology for unmanned aerial vehicles based on airborne vision. China Master's Theses Full-text Database, Engineering Science and Technology II. 2022, C031-457. *
Mo Wenhao. Object detection algorithm for aerial images based on deep learning. China Master's Theses Full-text Database, Engineering Science and Technology II. 2021, C031-557. *

Also Published As

Publication number Publication date
CN114943251A (en) 2022-08-26

Similar Documents

Publication Publication Date Title
CN108388927B (en) Small sample polarization SAR terrain classification method based on deep convolution twin network
CN108921030B (en) SAR automatic target recognition method
Yetgin et al. Power line recognition from aerial images with deep learning
CN107830996B (en) Fault diagnosis method for aircraft control surface system
CN107103338A (en) Merge the SAR target identification methods of convolution feature and the integrated learning machine that transfinites
CN111352086B (en) Unknown target identification method based on deep convolutional neural network
CN109766934B (en) Image target identification method based on depth Gabor network
CN110136162B (en) Unmanned aerial vehicle visual angle remote sensing target tracking method and device
Wan et al. Recognizing the HRRP by combining CNN and BiRNN with attention mechanism
CN110232371B (en) High-precision HRRP radar multi-target identification method based on small samples
CN112446874A (en) Human-computer cooperation autonomous level damage assessment method
CN113191185A (en) Method for classifying targets of unmanned aerial vehicle by radar detection through Dense2Net
CN104732224B (en) SAR target identification methods based on two-dimentional Zelnick moment characteristics rarefaction representation
CN113269203B (en) Subspace feature extraction method for multi-rotor unmanned aerial vehicle recognition
CN114943251B (en) Unmanned aerial vehicle target recognition method based on fusion attention mechanism
CN114821335B (en) Unknown target discrimination method based on fusion of depth features and linear discrimination features
CN116682015A (en) Feature decoupling-based cross-domain small sample radar one-dimensional image target recognition method
CN108106500B (en) Missile target type identification method based on multiple sensors
CN115656958A (en) Detection method and detection device for real-time track initiation and track classification
CN115909086A (en) SAR target detection and identification method based on multistage enhanced network
CN114966587A (en) Radar target identification method and system based on convolutional neural network fusion characteristics
CN114137518A (en) Radar high-resolution range profile open set identification method and device
He et al. A multi-scale radar HRRP target recognition method based on pyramid depthwise separable convolution network
CN112257792A (en) SVM (support vector machine) -based real-time video target dynamic classification method
CN114936576B (en) Subblock differential coding distribution characteristic extraction method in multi-rotor unmanned aerial vehicle identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant