CN111736125A - Radar target identification method based on attention mechanism and bidirectional stacked recurrent neural network - Google Patents

Radar target identification method based on attention mechanism and bidirectional stacked recurrent neural network

Info

Publication number
CN111736125A
CN111736125A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010256158.0A
Other languages
Chinese (zh)
Other versions
CN111736125B (en)
Inventor
潘勉
吕帅帅
李训根
刘爱林
李子璇
张�杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202010256158.0A priority Critical patent/CN111736125B/en
Publication of CN111736125A publication Critical patent/CN111736125A/en
Application granted granted Critical
Publication of CN111736125B publication Critical patent/CN111736125B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G01S7/417 — Radar systems using analysis of the echo signal for target characterisation, involving the use of neural networks
    • G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 — Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio
    • G06N3/045 — Combinations of networks
    • G06N3/047 — Probabilistic or stochastic networks
    • G06N3/08 — Learning methods
    • Y02T10/40 — Engine management systems (climate change mitigation technologies related to transportation)


Abstract

The invention discloses a radar target identification method based on an attention mechanism and a bidirectional stacked recurrent neural network, which comprises the steps of first preprocessing to reduce the sensitivity of the HRRP samples and establishing a dynamic adjustment layer; then selecting the size of a sliding window to segment the HRRP, where the moving distance of the sliding window is less than its length; then adjusting the importance of each slice through an importance network; modeling the temporal correlation of the sample through the bidirectional stacked RNN and extracting high-level features of the sample; and finally adjusting the importance of the hidden-layer states with a multi-level attention mechanism and classifying the target by softmax.

Description

Radar target identification method based on attention mechanism and bidirectional stacked recurrent neural network
Technical Field
The invention belongs to the field of radar target identification, and particularly relates to a radar target identification method based on an attention mechanism and a bidirectional stacked recurrent neural network.
Background
The range resolution of a high-resolution broadband radar is much smaller than the target size, so the target echo is distributed over many range cells; such an echo is called a high-resolution range profile (HRRP) of the target. The HRRP contains extremely valuable structural information, such as the radial size of the target and the distribution of its scattering points, and has broad prospects for engineering application. Therefore, HRRP-based radar automatic target identification has gradually become a research hotspot in the field of radar automatic target recognition.
For most HRRP target recognition systems, the original HRRP samples are often high-dimensional and rarely represent the essential attributes of the object to be recognized directly, so feature extraction is a key step. The main task of feature extraction is to aid the subsequent recognition task (for example by reducing the data dimensionality or strengthening discriminative information) through linear or nonlinear transformations. Effective features not only fully express the data but also distinguish the differences between classes, thereby improving recognition accuracy.
Traditional feature extraction methods can be divided into two categories: (1) feature extraction methods based on dimensionality reduction; (2) transform-based feature extraction methods, such as bispectrum, spectrogram, and spectral amplitude features. The latter project the HRRP signal into the frequency domain and then model and recognize its frequency-domain features. Traditional feature extraction achieves good recognition performance in experiments, but suffers from two problems: (1) most of these methods are unsupervised and lossy, which means that part of the separable information is inevitably lost during feature extraction, to the detriment of the back-end classifier; (2) the choice of feature extraction method depends heavily on the researcher's knowledge of and experience with HRRP data, and a satisfactory effect is difficult to achieve when prior information is lacking.
To solve the problems of the conventional methods in feature extraction, deep-learning-based methods have in recent years been introduced into the field of radar target identification. Deep-learning methods for radar high-resolution range profile recognition fall roughly into three categories: (1) methods based on encoder-decoder structures; (2) methods based on convolutional neural network (CNN) structures; (3) methods based on recurrent neural networks. However, methods (1) and (2) extract features from and model the envelope of the whole HRRP directly, ignoring the sequential dependencies between HRRP range cells that may reflect the target's physical structure. Method (3) models the sequential correlation and thus describes the physical structure, but still has the following problems: (1) range cells with small amplitudes may contain some strongly separable features, yet these features are rarely used; (2) a unidirectional RNN can use only the current moment and the structural information before it during prediction, and cannot make good use of the overall structural prior contained in the HRRP.
Disclosure of Invention
In view of the above technical problems, the present invention aims to provide a radar target identification method based on an attention mechanism and a bidirectional stacked recurrent neural network. The method first preprocesses the HRRP samples to reduce their sensitivity and establishes a dynamic adjustment layer; then selects the size of a sliding window to segment the HRRP, where the moving distance of the sliding window is less than its length; then adjusts the importance of each slice through an importance network; models the temporal correlation of the sample through the bidirectional stacked RNN and extracts high-level features of the sample; and finally adjusts the importance of the hidden-layer states with a multi-level attention mechanism and classifies the target by softmax.
In order to solve the technical problems, the invention adopts the following technical scheme:
A radar target identification method based on an attention mechanism and a bidirectional stacked recurrent neural network comprises the following steps:
S1, collecting the data set: merge the HRRP data sets collected by the radar according to target type, and for each class of sample select training samples and test samples from different data segments; in the process of selecting the training set and the test set, ensure that the target-radar aspect angles covered by the selected training samples cover those of the test samples, with the ratio of the number of training samples to test samples being 8:2 for each target class; denote the selected data set as T = {(x_i, y_k)}, i ∈ [1, n], k ∈ [1, c], where x_i denotes the ith sample, y_k indicates that the sample belongs to the kth class, c classes of targets are collected, and n represents the total number of samples;
S2, preprocessing the original HRRP sample set: the intensity of the HRRP is determined by factors including the radar transmitting power, the target distance, the radar antenna gain and the radar receiver gain; before the HRRP is used for target identification, the original HRRP echoes are processed by the method of l2 intensity normalization so as to alleviate the intensity-sensitivity problem of the HRRP; in addition, the HRRP is intercepted from the radar echo data through a range window, and since the position of the recorded range profile within the range gate is not fixed during interception, the HRRP is translation sensitive, a problem that is alleviated by the center-of-gravity alignment method;
S3, because the amplitude differences of the echoes in the various range cells of the HRRP are large, directly feeding the data into the subsequent network would cause the model to pay more attention to the range cells with large amplitudes; however, the range cells with small amplitudes may contain some strongly separable features that are helpful for radar target identification; therefore, a dynamic adjustment layer is added before segmenting the HRRP to adjust its overall dynamic range, and on the premise that the relative order of the range-cell magnitudes is unchanged, the adjustment layer determines through model training how to adjust the overall dynamics of the HRRP, so as to achieve a better recognition effect;
s4, selecting a sliding window with a fixed length to segment the processed HRRP sample, wherein the data format after segmentation is the input format of a subsequent deep neural network;
S5, establishing an importance-adjustment network to adjust the channels of the processed data: the importance network automatically acquires the importance of each feature channel by learning, and then, according to that importance, enhances useful features and suppresses features of little use for the current task;
S6, building the deep neural classification network, adjusting parameters and optimizing: a bidirectional recurrent neural network is adopted, the HRRP data are input in the forward and reverse directions into two independent RNN models respectively, and the obtained hidden layers are spliced;
S7, carrying out the preprocessing operations of steps S2, S3 and S4 of the training phase on the test data collected in S1;
S8, the samples processed in S7 are sent into the model constructed in S6 for testing to obtain the result, namely, the output of the attention mechanism is finally classified through a softmax layer, and the probability that the ith HRRP test sample x_i^test corresponds to the kth class of radar target in the target set can be calculated as:

P(k | x_i^test) = exp(z_k) / Σ_{j=1}^{c} exp(z_j)

where z_k is the kth component of the input to the softmax layer, exp(·) denotes the exponential operation, and c denotes the number of classes.
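For illustration only (not part of the original disclosure), the softmax classification above can be sketched in plain Python; `logits` stands in for the c-dimensional input that the attention output yields after the final fully connected layer:

```python
import math

def softmax(logits):
    """Turn c class logits z_k into probabilities exp(z_k) / sum_j exp(z_j).

    Subtracting the maximum logit first is a standard numerical-stability
    trick and does not change the result.
    """
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # hypothetical 3-class logits
```

The probabilities sum to one, and the predicted class is simply the index of the largest entry.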
Preferably, the S2 further comprises the following steps:
S201, intensity normalization: assume the original HRRP is represented as x_raw = [x_1, x_2, …, x_L], where L represents the total number of range cells contained in the HRRP; the HRRP after intensity normalization is represented as:

x_norm = x_raw / ||x_raw||_2

where ||·||_2 denotes the l2 norm;
S202, aligning the samples: translate the HRRP so that its center of gravity g moves to around L/2, so that the range cells containing information in the HRRP are distributed near the center; the center of gravity g of the HRRP is calculated as:

g = (Σ_{i=1}^{L} i · x_i) / (Σ_{i=1}^{L} x_i)

where x_i is the ith-dimension signal unit in the original HRRP.
Preferably, the S3 further includes: dynamically adjusting the HRRP sample, that is, applying power transformations with several different exponents to the sample; raising the data to different powers reflects the diversity of target-class differences from multiple angles, embodies the information contained in the radar HRRP in multiple different forms, and makes it convenient for the subsequent network to extract features from multiple angles for identification; the output of the dynamic adjustment layer can be expressed as:

x_power = [x^(1), x^(2), …, x^(M)]

where M is the number of channels of the dynamic adjustment layer, and the ith dynamically adjusted channel x^(i) can be expressed as:

x^(i) = (x_norm)^{α_i}

where α_i represents the exponent of the power transformation.
Preferably, the S4 further includes:
S401, performing sliding-window segmentation on the dynamically adjusted HRRP sample: set the length of the sliding window to N and the sliding distance to d, with d < N, so that two adjacent slices share an overlapping part of length N − d; the overlapped segmentation preserves the sequential characteristics in the HRRP sample to a greater extent, and the subsequent deep neural network can better learn the features in the sample that are useful for classification; the number of slices corresponds to the time-point dimension in the input format of the subsequent deep neural network, and the window length N corresponds to the input signal dimension at each time point;
S402, the output after sliding-window slicing can be represented as:

x_slide = [x_1, x_2, …, x_M]

where M is the number of slices after segmentation, and the tth slice is

x_t = [x((t−1)·d + 1), x((t−1)·d + 2), …, x((t−1)·d + N)]

where d is the sliding distance of the window and N is the length of the sliding window.
Preferably, the S5 further includes:
S501, the importance network performs importance adjustment on the segmented HRRP: by learning the global information of the channels, it selectively emphasizes the input sequences at time points carrying more separable information and suppresses the input sequences at less important time points; after adjustment by the importance network, the model becomes more balanced, so that more important and useful features can be highlighted and the model's ability to characterize the HRRP is improved; the importance adjustment is divided into a feature compression part and a feature excitation part;
S502, the feature compression part: the sample after sliding-window slicing is x_slide = [x_1, x_2, …, x_M]; the feature consists of M sequences, each of which is an N-dimensional vector, and each sequence is compressed through a fully connected layer and an activation function into a real-valued weight x_sq(m) representing the importance of that sequence; the output of x_slide through the fully connected layer can be calculated by:

x_sq = f(W·x_slide + b)

where the activation function f(·) is the sigmoid function, and W and b are the weight matrix and bias of the fully connected layer;
S503, the feature excitation part: selectively adjust the extracted features through the excitation formula to obtain the adjusted feature F_E:

F_E = x_slide ⊙ x_sq

where x_sq = [x_sq(1), x_sq(2), …, x_sq(M)] is an M-dimensional vector, and ⊙ denotes multiplying every element in each channel of x_slide by the number in the corresponding dimension of the vector x_sq; for example, the mth channel in feature F_E is adjusted to:

F_E(m) = x_sq(m) · x_m
preferably, in particular, the S6 further includes:
s601, designing the classification network into a multi-layer stacked bidirectional RNN, and assuming that the input is a characteristic FRNN
Figure BDA0002437405090000062
Wherein M isiDenotes the dimension of each time point of the ith bidirectional RNN, N denotes the length of the input sequence, and the output is assumed to be Foutput
Figure BDA0002437405090000063
Where H is the number of hidden units, and the vector corresponding to the kth time point in the sequence can be represented as:
Figure BDA0002437405090000064
wherein f (-) represents an activation function,
Figure BDA0002437405090000065
represents a hidden layer output matrix corresponding to a forward RNN included in the ith bi-directional RNN,
Figure BDA0002437405090000066
indicating the kth hidden layer state contained in the forward RNN contained in the ith bi-directional RNN, and, similarly,
Figure BDA0002437405090000067
express correspondenceA hidden layer output matrix of a backward RNN included in the ith bi-directional RNN,
Figure BDA0002437405090000068
represents a kth hidden layer state contained in a backward RNN contained in an ith bidirectional RNN, bFiRepresents the output layer bias of the ith bi-directional RNN;
S602, the attention mechanism in the network selects the hidden-layer states obtained at different moments by the last several layers of bidirectional RNNs and splices them, where the spliced hidden-layer state of the ith layer at the kth time point is:

h_ik = [h_Fi,k ; h_Bi,k]

finally, the weighted hidden states of the selected layers are summed to obtain the hidden-layer state c_ATT processed by the attention model:

c_ATT = Σ_{i=N_1−N_0+1}^{N_1} Σ_{k=1}^{M} α_ik · h_ik

where α_ik represents the weight corresponding to the kth time point of the ith layer, M represents the number of hidden states contained in the forward or backward RNN of each layer of the bidirectional RNN model, namely the time-point dimension, N_1 represents the number of stacked layers, and N_0 means that the hidden states of the last N_0 layers of the stacked bidirectional RNN are taken for c_ATT; α_ik is computed as:

α_ik = exp(e_ik) / Σ_{j=1}^{M} exp(e_ij)

where e_ik is the energy of the spliced forward and backward hidden states in the ith bidirectional RNN, represented as:

e_ik = U_ATT · tanh(W_ATT · h_ik)

where U_ATT ∈ R^{1×l} and W_ATT ∈ R^{l×2H} are the parameters used to calculate the energy of the hidden units, l is the dimension of the hidden units, and M is the time-point dimension;
S603, designing the loss function as the cross entropy: learn the parameters by computing the gradient of the loss function with respect to the parameters on the training data, and fix the learned parameters when the model converges; the cost function based on the cross entropy is expressed as:

L = − (1/N) Σ_{n=1}^{N} Σ_{i=1}^{c} e_n(i) · log P(i | x_n^train)

where N represents the number of training samples in a batch, e_n is a one-hot vector representing the true label of the nth training sample, and P(i | x_n^train) represents the probability that the nth training sample corresponds to the ith target.
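A minimal illustrative sketch of the cross-entropy cost above, assuming one-hot labels e_n and predicted probability vectors P(i | x_n); this is a toy computation, not the patent's training code:

```python
import math

def cross_entropy(batch_probs, batch_labels):
    """Mean cross-entropy over a batch.

    batch_probs:  one probability vector P(i | x_n) per sample
    batch_labels: one one-hot label vector e_n per sample
    The `e > 0` guard skips zero label entries, avoiding log(0).
    """
    total = 0.0
    for probs, onehot in zip(batch_probs, batch_labels):
        total -= sum(e * math.log(p) for e, p in zip(onehot, probs) if e > 0)
    return total / len(batch_probs)

loss = cross_entropy([[0.5, 0.5]], [[1.0, 0.0]])  # uniform 2-class guess
```

A perfect prediction gives zero loss, while the uniform two-class guess above gives log 2, matching the formula term by term.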
The invention has the following beneficial effects:
(1) The embodiment of the invention applies a dynamic adjustment layer: some features with good separability may, because of their small relative amplitude, hardly influence the decision of the subsequent classifier; on the premise that the relative order of the range-cell magnitudes is unchanged, the dynamic adjustment layer decides through model training how to adjust the overall dynamics of the HRRP, so as to achieve a better recognition effect.
(2) The embodiment of the invention applies an importance-adjustment network, which, by learning the global information of the convolution channels, can selectively emphasize channels containing more separable information and suppress channels of little use; after adjustment, the model becomes more balanced from the perspective of the spatial (convolution) channels, so that more important and useful features can be highlighted and the model's ability to characterize the HRRP is improved.
(3) A model organized in this way can progressively abstract high-level structural features according to the context of the data, and the hidden states inside each bidirectional recurrent layer contain structural representations at different levels, which can be better applied to HRRP recognition.
(4) The embodiment of the invention applies an attention model: during classification, the weight given to the central signal-bearing region is strengthened, and the weight given to the noise regions on the two sides is reduced.
Drawings
Fig. 1 is a flowchart illustrating steps of a radar target identification method based on an attention mechanism and a bidirectional stacked recurrent neural network according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart illustrating steps of a radar target identification method based on an attention mechanism and a bidirectional stacked recurrent neural network according to an embodiment of the present invention is shown, and specifically, the method includes the following steps:
S1, collecting the data set: merge the HRRP data sets collected by the radar according to target type, and for each class of sample select training samples and test samples from different data segments; in the process of selecting the training set and the test set, ensure that the target-radar aspect angles covered by the selected training samples cover those of the test samples, with the ratio of the number of training samples to test samples being 8:2 for each target class; denote the selected data set as T = {(x_i, y_k)}, i ∈ [1, n], k ∈ [1, c], where x_i denotes the ith sample, y_k indicates that the sample belongs to the kth class, c classes of targets are collected, and n represents the total number of samples;
S2, preprocessing the original HRRP sample set: the intensity of the HRRP is determined by factors including the radar transmitting power, the target distance, the radar antenna gain and the radar receiver gain; before the HRRP is used for target identification, the original HRRP echoes are processed by the method of l2 intensity normalization so as to alleviate the intensity-sensitivity problem of the HRRP; in addition, the HRRP is intercepted from the radar echo data through a range window, and since the position of the recorded range profile within the range gate is not fixed during interception, the HRRP is translation sensitive, a problem that is alleviated by the center-of-gravity alignment method;
specifically, S2 further includes the steps of:
S201, intensity normalization: assume the original HRRP is represented as x_raw = [x_1, x_2, …, x_L], where L represents the total number of range cells contained in the HRRP; the HRRP after intensity normalization can be expressed as:

x_norm = x_raw / ||x_raw||_2
S202, aligning the samples: the HRRP is translated to move its center of gravity g to around L/2, so that the range cells of the HRRP that contain information are distributed around the center; the center of gravity g of the HRRP is calculated as:

g = (Σ_{i=1}^{L} i · x_i) / (Σ_{i=1}^{L} x_i)

where x_i is the ith-dimension signal unit in the original HRRP.
After the original HRRP samples are processed by the intensity normalization and center-of-gravity alignment methods, the amplitude values are limited between 0 and 1, so that the scale is unified; values between 0 and 1 are very favorable for subsequent neural network processing, and HRRP echo signals whose energy is distributed to the right or left are adjusted to around the center point.
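For illustration, the two preprocessing steps (l2 intensity normalization and center-of-gravity alignment) can be sketched as follows; the 8-cell toy profile and the circular shift are simplifying assumptions of this sketch, not details from the patent:

```python
import math

def l2_normalize(hrrp):
    """Divide the profile by its l2 norm, limiting amplitudes to [0, 1]."""
    norm = math.sqrt(sum(x * x for x in hrrp))
    return [x / norm for x in hrrp]

def center_of_gravity(hrrp):
    """g = sum(i * x_i) / sum(x_i), with 1-based range-cell indices i."""
    return sum((i + 1) * x for i, x in enumerate(hrrp)) / sum(hrrp)

def align(hrrp):
    """Circularly shift the profile so its center of gravity lands near L/2."""
    L = len(hrrp)
    shift = int(round(L / 2 - center_of_gravity(hrrp)))
    return [hrrp[(i - shift) % L] for i in range(L)]

raw = [0.0, 0.0, 0.0, 0.0, 0.0, 3.0, 4.0, 0.0]  # toy profile, energy at the right
aligned = align(l2_normalize(raw))
```

After alignment the energy-bearing cells sit near the middle of the profile, and the l2 norm of the result is 1.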
S3, because the amplitude difference of the echo in each distance unit in the HRRP is large, directly sending the data into the convolutional layer can cause the model to pay more attention to the distance unit with large amplitude, however, the distance unit with small amplitude may contain some characteristics with strong separability, which is helpful for radar target identification, a dynamic adjustment layer is added before segmenting the HRRP to adjust the whole dynamic range of the HRRP, and the adjustment layer can determine how to adjust the whole dynamic of the HRRP by model training on the premise that the relative relation of the sizes of the distance units is not changed, so as to achieve better identification effect;
The S3 further includes dynamically adjusting the HRRP sample, that is, applying power transformations with several different exponents to the sample; raising the data to different powers reflects the diversity of target-class differences from multiple angles and embodies the information contained in the radar HRRP in multiple different forms, which facilitates the subsequent network in extracting features from multiple angles for identification; the output of the dynamic adjustment layer may be expressed as:

x_power = [x^(1), x^(2), …, x^(M)]

where M is the number of channels of the dynamic adjustment layer, and the ith dynamically adjusted channel x^(i) can be expressed as:

x^(i) = (x_norm)^{α_i}

where α_i represents the exponent of the power transformation.
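A minimal sketch of the dynamic adjustment (power transform) layer described above; the exponents `alphas` are fixed, hypothetical values here, whereas in the method they are determined by model training:

```python
def dynamic_adjust(hrrp, alphas):
    """Return one channel per exponent alpha_i, each being the profile raised
    element-wise to that power. For amplitudes in [0, 1] and alpha_i > 0 the
    map x -> x ** alpha_i is monotonic, so the relative order of range-cell
    magnitudes is preserved while the overall dynamic range changes."""
    return [[x ** a for x in hrrp] for a in alphas]

channels = dynamic_adjust([0.0, 0.25, 1.0], alphas=[0.5, 1.0, 2.0])
```

Exponents below 1 lift small amplitudes (compressing the dynamic range), exponents above 1 suppress them, giving the network several views of the same profile.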
S4, selecting a sliding window with a fixed length to segment the processed HRRP sample, wherein the data format after segmentation is the input format of a subsequent deep neural network;
The S4 further includes:
S401, performing sliding-window segmentation on the dynamically adjusted HRRP sample: set the length of the sliding window to N and the sliding distance to d, with d < N, so that two adjacent slices share an overlapping part of length N − d; the overlapped segmentation preserves the sequential characteristics in the HRRP sample to a greater extent, and the subsequent deep neural network can better learn the features in the sample that are useful for classification; the number of slices corresponds to the time-point dimension in the input format of the subsequent deep neural network, and the window length N corresponds to the input signal dimension at each time point;
S402, the output after sliding-window slicing can be represented as:

x_slide = [x_1, x_2, …, x_M]

where M is the number of slices after segmentation, and the tth slice is

x_t = [x((t−1)·d + 1), x((t−1)·d + 2), …, x((t−1)·d + N)]

where d is the sliding distance of the window and N is the length of the sliding window.
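The overlapped slicing of S401/S402 can be sketched as follows; the toy signal, window length and stride are illustrative:

```python
def sliding_window(signal, N, d):
    """Cut the signal into overlapping slices of length N with stride d;
    adjacent slices share N - d samples. Only full-length windows are kept."""
    assert d < N, "stride must be smaller than the window so slices overlap"
    return [signal[t:t + N] for t in range(0, len(signal) - N + 1, d)]

# each slice becomes one time point of the RNN input; N is the per-step dimension
slices = sliding_window(list(range(8)), N=4, d=2)
```

With N = 4 and d = 2, consecutive slices share N − d = 2 samples, so the local sequence structure of the profile survives the segmentation.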
S5, establishing an importance adjusting network to adjust the channel of the processed data, automatically acquiring the importance degree of each characteristic channel by the importance network in a learning mode, and then improving useful characteristics according to the importance degree and inhibiting characteristics with little use for the current task;
specifically, the S5 further includes:
s501, the importance network performs importance adjustment on the sliced HRRP: by learning the global information of the channels, it selectively emphasizes the input sequences at time points carrying more separable information and suppresses the input sequences at less important time points. After the importance adjustment the model becomes more balanced, highlighting the more important and more useful features and improving the model's ability to characterize HRRP. The importance adjustment is divided into a squeeze step and an excitation step;
s502, squeeze step: the sample after sliding-window slicing is

x_slide = [x_slide^{(1)}, x_slide^{(2)}, …, x_slide^{(M)}]^T ∈ R^{M×N}

i.e., the feature consists of M sequences, each an N-dimensional vector. Each sequence is compressed by a fully connected layer and an activation function into a real-valued weight x_sq representing the importance of that sequence; the output of x_slide through the fully connected layer is calculated as:

x_sq = f(W x_slide + b)

where the activation function f(·) is the Sigmoid function, and W and b are the weight matrix and bias of the fully connected layer.
s503, excitation step: the extracted features are selectively adjusted by the Excitation formula to obtain the adjusted feature F_E:

F_E = x_slide ⊙ x_sq

where x_sq = [x_sq(1), x_sq(2), …, x_sq(M)] is an M-dimensional vector, and ⊙ denotes multiplying every element in the mth channel of x_slide by the number x_sq(m) in the corresponding dimension of this vector. For example, the mth channel of feature F_E is adjusted to:

F_E^{(m)} = x_sq(m) · x_slide^{(m)}
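A minimal NumPy sketch of the squeeze and excitation steps above. The per-sequence fully connected layer is reduced here to a single weight vector w shared across the M sequences, which is an assumption of this sketch — the text does not fix the layer's exact shape — and the weights are random stand-ins for learned parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def importance_adjust(F_slide, w, b):
    """Squeeze-and-excitation-style reweighting of the M sliced sequences.

    F_slide : (M, N) sample after sliding-window slicing.
    w, b    : (N,) weight vector and scalar bias (hypothetical shapes).
    Squeeze: compress each sequence to one real weight x_sq in (0, 1).
    Excitation: rescale every element of channel m by x_sq(m).
    """
    x_sq = sigmoid(F_slide @ w + b)   # (M,) importance weights
    F_E = F_slide * x_sq[:, None]     # F_E = x_slide ⊙ x_sq
    return F_E, x_sq

rng = np.random.default_rng(0)
F_slide = rng.normal(size=(6, 4))
F_E, x_sq = importance_adjust(F_slide, rng.normal(size=4), 0.0)
print(F_E.shape, x_sq.shape)  # (6, 4) (6,)
```

The Sigmoid keeps every weight in (0, 1), so the excitation can only attenuate or pass through a slice, never amplify it past its original scale.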
s6, building deep neural classification, adjusting parameters and optimizing, adopting a bidirectional recurrent neural network, inputting HRRP data into two independent RNN models in a positive and negative direction respectively, and splicing the obtained hidden layers.
The conventional RNN model is unidirectional: when HRRP data are fed into such a model, they can only be input along one direction, so the input at the current moment is conditionally dependent only on the input data before it, and input information from later moments cannot be exploited at the current moment. However, an HRRP carries the physical-structure prior of the whole target, and considering only one-way information is unfavorable for modeling and recognizing HRRP features. In particular, when a unidirectional RNN is applied, most of the observed data at small times t are noise, and the RNN then has difficulty accurately modeling the target's structural characteristics. Therefore, the embodiment of the invention adopts a bidirectional recurrent neural network: the HRRP data are fed into two independent RNN models in the forward and reverse directions respectively, and the resulting hidden layers are spliced, which mitigates the shortcomings of the unidirectional RNN and better models the physical-structure characteristics contained in the HRRP. The embodiment further uses a stacked bidirectional recurrent neural network to give the model a certain depth. A model organized in this way can, depending on the context of the data, progressively abstract higher-level structural features, and the hidden states inside each bidirectional recurrent layer contain structural representations at different levels, which can be better applied to recognition. An attention model is applied on this basis: during classification, the weight given to the central region where the signal is concentrated is strengthened, while the weight given to the noise regions on both sides is reduced.
That is, the deep neural network model in the embodiment of the invention is formed by stacking five layers of bidirectional LSTMs (long short-term memory networks) with an attention mechanism, and finally a softmax layer classifies the output of the network.
Specifically, the S6 further includes:
s601, suppose the input is the feature F_RNN^{(i)} ∈ R^{M_i×N}, where M_i denotes the dimension of each time point of the ith bidirectional RNN and N denotes the length of the input sequence, and suppose the output is F_output^{(i)} ∈ R^{2H×N}, where H is the number of hidden units. The vector corresponding to the kth time point in the sequence can be represented as:

F_output^{(i)}(k) = f(W_F^{f,i} h_k^{f,i} + W_F^{b,i} h_k^{b,i} + b_{F_i})

where f(·) denotes an activation function, W_F^{f,i} denotes the hidden-layer output matrix corresponding to the forward RNN contained in the ith bidirectional RNN, and h_k^{f,i} denotes the kth hidden-layer state contained in that forward RNN; similarly, W_F^{b,i} denotes the hidden-layer output matrix corresponding to the backward RNN contained in the ith bidirectional RNN, h_k^{b,i} denotes the kth hidden-layer state contained in that backward RNN, and b_{F_i} denotes the output-layer bias of the ith bidirectional RNN.
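As a minimal, framework-free sketch of one bidirectional recurrent layer (NumPy, with hypothetical shapes and randomly initialized weights standing in for the trained five-layer bidirectional LSTM of the embodiment), a vanilla tanh RNN can be run forward and a second one over the reversed sequence, splicing the hidden states per time point:

```python
import numpy as np

def rnn_pass(X, Wx, Wh, b):
    """Vanilla tanh RNN over X of shape (T, D); returns all hidden states (T, H)."""
    h = np.zeros(Wh.shape[0])
    states = []
    for x in X:
        h = np.tanh(Wx @ x + Wh @ h + b)
        states.append(h)
    return np.stack(states)

def bidirectional_rnn(X, params_fwd, params_bwd):
    """Run independent forward and backward passes and splice them, so the
    state at time k is [h_k_forward ; h_k_backward] of size 2H."""
    h_f = rnn_pass(X, *params_fwd)
    h_b = rnn_pass(X[::-1], *params_bwd)[::-1]  # flip back to forward time order
    return np.concatenate([h_f, h_b], axis=1)   # (T, 2H)

rng = np.random.default_rng(0)
T, D, H = 5, 8, 4
make = lambda: (0.1 * rng.normal(size=(H, D)), 0.1 * rng.normal(size=(H, H)), np.zeros(H))
Y = bidirectional_rnn(rng.normal(size=(T, D)), make(), make())
print(Y.shape)  # (5, 8)
```

Stacking simply feeds Y of one layer as the X of the next; each layer's spliced state at time k then sees both past and future context, which is the point of the bidirectional design.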
S602, hidden-layer states obtained by the last few bidirectional RNN layers at different moments are selected and spliced; the spliced hidden-layer state of the ith layer at the kth time point is:

h_{ik} = [h_k^{f,i}; h_k^{b,i}]

Finally, the spliced hidden layers of each layer are weighted and summed to obtain the hidden-layer state c_ATT processed by the attention model:

c_ATT = Σ_{i=N_1-N_0+1}^{N_1} Σ_{k=1}^{M} α_{ik} h_{ik}

where α_{ik} denotes the weight corresponding to the kth time point of the ith layer, M denotes the number of hidden states contained in the forward (or backward) RNN of each layer in the bidirectional RNN model, i.e., the time-point dimension, N_1 denotes the number of stacked layers, and N_0 denotes how many layers, counted from the last layer, contribute their hidden states to c_ATT. α_{ik} is computed as:

α_{ik} = exp(e_{ik}) / Σ_{j=1}^{M} exp(e_{ij})

where e_{ik}, the energy of the summed forward and backward hidden states in the ith bidirectional RNN, can be expressed as:

e_{ik} = U_ATT tanh(W_ATT h_{ik})

where U_ATT ∈ R^{1×l} and W_ATT ∈ R^{l×2H} are the parameters used to calculate the energy of the hidden unit, l is the dimension of the hidden unit, and M is the time-point dimension.
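The attention pooling of S602 can be sketched in NumPy as follows. The shapes of U_ATT and W_ATT follow the text (an l-dimensional energy projection over 2H-dimensional spliced states), but the random weights are stand-ins for learned parameters:

```python
import numpy as np

def attention_pool(Hs, U, W):
    """Hs: (M, 2H) spliced hidden states of one layer.
    Energy e_k = U tanh(W h_k); alpha = softmax(e); return the weighted sum c."""
    e = np.array([U @ np.tanh(W @ h) for h in Hs])  # (M,) energies
    a = np.exp(e - e.max())                          # numerically stable softmax
    a /= a.sum()                                     # attention weights alpha_k
    return a @ Hs, a                                 # c: (2H,), a: (M,)

rng = np.random.default_rng(1)
M, H2, l = 6, 8, 5
c, a = attention_pool(rng.normal(size=(M, H2)),
                      rng.normal(size=l), rng.normal(size=(l, H2)))
print(c.shape, round(a.sum(), 6))  # (8,) 1.0
```

Summing the pooled vectors of the last N_0 layers then yields c_ATT as in the formula above.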
S603, splicing the output after the attention mechanism, and then connecting a full-connection layer with the node number being the radar category number, namely the output of the full-connection layer is the prediction result of the model, and the output can be expressed as:
output=f(C(cATT)Wo)
wherein C (-) is a splicing operation,
Figure BDA0002437405090000141
c represents the number of categories, and f (·) represents the softmax function.
S604, the loss function is designed as cross entropy. The parameters are learned by computing gradients of the loss function with respect to the parameters on the training data, and the learned parameters are fixed once the model converges. The invention adopts a cross-entropy-based cost function, which can be expressed as:

L = -(1/N) Σ_{n=1}^{N} Σ_{i=1}^{c} e_n(i) log P(i | x_train^{(n)})

where N denotes the number of training samples in a batch, e_n is a one-hot vector representing the true label of the nth training sample, and P(i | x_train^{(n)}) denotes the probability that the nth training sample corresponds to the ith target.
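The cross-entropy cost over one-hot labels can be checked numerically with a generic sketch (not code from the patent):

```python
import numpy as np

def cross_entropy(probs, onehot):
    """probs: (N, c) predicted class probabilities; onehot: (N, c) true labels.
    Mean negative log-likelihood over the batch."""
    eps = 1e-12  # guard against log(0)
    return -np.mean(np.sum(onehot * np.log(probs + eps), axis=1))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
onehot = np.array([[1, 0, 0],
                   [0, 1, 0]])
print(round(cross_entropy(probs, onehot), 4))  # 0.2899
```

Because the labels are one-hot, only the log-probability of the true class of each sample contributes, i.e., the loss is the average of -log 0.7 and -log 0.8 here.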
S605, initializing all weights and offsets to be trained in the model, setting training parameters including learning rate, training data volume of each batch and training batch, and starting model training.
S7, carrying out preprocessing operations of steps S2, S3 and S4 in a training phase on the test data collected in the S1;
s8, the sample processed in S7 is fed into the model built in S6 for testing: the output of the attention mechanism is finally classified by the softmax layer. For the ith HRRP test sample x_test^{(i)}, the probability of the kth radar target class in the target set can be calculated as:

P(k | x_test^{(i)}) = exp(z_k) / Σ_{j=1}^{c} exp(z_j)

where z_k is the kth element of the network output, exp(·) denotes the exponential operation, and c denotes the number of categories.

By the maximum a posteriori criterion, the test HRRP sample x_test is assigned to the class k_0 with the largest target probability:

k_0 = argmax_{k ∈ [1, c]} P(k | x_test)
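The softmax posterior and maximum-a-posteriori decision of S8 can be sketched as follows; the logits are made-up stand-ins for the fully connected layer's outputs:

```python
import numpy as np

def predict(logits):
    """Softmax over class logits, then the maximum-a-posteriori class index."""
    z = logits - logits.max()          # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()    # posterior P(k | x_test)
    return int(np.argmax(p)), p        # k0 and the full posterior

k0, p = predict(np.array([1.0, 3.0, 0.5]))
print(k0)  # 1
```

Since softmax is monotonic in the logits, k_0 equals the argmax of the raw network outputs; the posterior p is still useful when a confidence threshold or rejection option is wanted.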
Through the above eight steps, the radar target recognition model based on the attention mechanism and the bidirectional stacked recurrent neural network is obtained.
It is to be understood that the exemplary embodiments described herein are illustrative and not restrictive. Although one or more embodiments of the present invention have been described with reference to the accompanying drawings, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.

Claims (6)

1. A radar target identification method based on an attention mechanism and a bidirectional stacked cyclic neural network is characterized by comprising the following steps:
s1, collecting data sets: merging the HRRP data sets collected by the radar according to target type, and selecting training samples and test samples for each class from different data segments, ensuring during the selection of the training and test sets that the target-radar attitudes covered by the training set encompass those of the test set; the ratio of the number of training samples to test samples for each target class is 8:2, and the selected data set is recorded as T = {(x_i, y_k)}, i ∈ [1, n], k ∈ [1, c], where x_i denotes the ith sample, y_k indicates that the sample belongs to the kth class, c target classes are collected, and n denotes the total number of samples;
s2, preprocessing the original HRRP sample set: the intensity of the HRRP is determined by the radar transmitting power, the target distance, the radar antenna gain and the radar receiver gain; before identifying targets with the HRRP, the original HRRP echo is processed by the l2 intensity-normalization method, thereby mitigating the intensity sensitivity of the HRRP. The HRRP is intercepted from radar echo data through a distance window, and because the position of the recorded range profile within the range gate is not fixed during interception, the HRRP exhibits translation sensitivity, which is mitigated by the center-of-gravity alignment method;
s3, because the amplitudes of the echoes in the different range cells of an HRRP differ greatly, feeding the data directly into the network would make the model attend mostly to range cells with large amplitudes, even though range cells with small amplitudes may contain highly separable characteristics helpful for radar target identification; a dynamic adjustment layer is therefore added before slicing the HRRP to adjust its overall dynamic range, and, on the premise that the relative magnitudes of the range cells are unchanged, this layer learns through model training how to adjust the overall dynamics of the HRRP to achieve a better recognition effect;
s4, selecting a sliding window with a fixed length to segment the processed HRRP sample, wherein the data format after segmentation is the input format of a subsequent deep neural network;
s5, establishing an importance adjusting network to adjust the channel of the processed data, automatically acquiring the importance degree of each characteristic channel by the importance network in a learning mode, and then improving useful characteristics according to the importance degree and inhibiting characteristics with little use for the current task;
s6, building deep neural classification, adjusting parameters and optimizing, adopting a bidirectional recurrent neural network, inputting HRRP data into two independent RNN models in a positive direction and a negative direction respectively, and splicing the obtained hidden layers;
s7, carrying out preprocessing operations of steps S2, S3 and S4 in a training phase on the test data collected in the S1;
s8, the sample processed in S7 is fed into the model built in S6 for testing: the output of the attention mechanism is finally classified by the softmax layer, and for the ith HRRP test sample x_test^{(i)} the probability of the kth radar target class in the target set can be calculated as:

P(k | x_test^{(i)}) = exp(z_k) / Σ_{j=1}^{c} exp(z_j)

where z_k is the kth element of the network output, exp(·) denotes the exponential operation, and c denotes the number of categories.
2. The attention mechanism and bi-directional stacked recurrent neural network-based radar target recognition method of claim 1, wherein said S2 further comprises the steps of:
s201, intensity normalization: assuming the original HRRP is denoted x_raw = [x_1, x_2, …, x_L], where L represents the total number of range cells contained in the HRRP, the intensity-normalized HRRP is represented as:

x_norm = x_raw / ||x_raw||_2

s202, sample alignment: the HRRP is translated so that its center of gravity g moves close to L/2, so that the range cells containing information are distributed near the center; the center of gravity g of the HRRP is calculated as:

g = Σ_{i=1}^{L} i · x_i / Σ_{i=1}^{L} x_i

where x_i is the ith range cell of the original HRRP.
3. The attention mechanism and bi-directional stacked recurrent neural network-based radar target recognition method of claim 1, wherein the S3 further comprises: dynamically adjusting the HRRP sample, i.e., applying power transformations to the sample; the power processing reflects the diversity of inter-class differences from multiple angles and presents the information contained in the radar HRRP in multiple different forms, which facilitates the subsequent network in extracting features from multiple angles for recognition; the output of the dynamic adjustment layer can be expressed as:

F_power = [x^{α_1}, x^{α_2}, …, x^{α_M}]^T

where M is the number of channels of the dynamic adjustment layer; the ith dynamically adjusted channel can be expressed as:

F_power^{(i)} = x^{α_i}

where α_i represents the exponent of the power transformation.
4. The attention mechanism and bi-directional stacked recurrent neural network-based radar target recognition method of claim 3, wherein the S4 further comprises:
s401, performing sliding-window slicing on the dynamically adjusted HRRP sample: the window length is set to N and the sliding distance to d, with d &lt; N, so that two adjacent slices share an overlapping part of length N-d; this overlapped slicing preserves the sequential characteristics within the HRRP sample and allows the subsequent deep neural network to better learn the features in the sample that are useful for classification, where the number of slices corresponds to the time-point dimension of the network's input format and the window length N corresponds to the input-signal dimension at each time point;

s402, the output after sliding-window slicing can be represented as:

F_slide = [x_slide^{(1)}, x_slide^{(2)}, …, x_slide^{(M)}]^T ∈ R^{M×N}

where M is the number of slices, and the tth slice is

x_slide^{(t)} = [x((t-1)d+1), x((t-1)d+2), …, x((t-1)d+N)]

where d is the sliding distance of the window and N is the length of the sliding window.
5. The attention mechanism and bi-directional stacked recurrent neural network-based radar target recognition method of claim 4, wherein the S5 further comprises:
s501, the importance network performs importance adjustment on the sliced HRRP: by learning the global information of the channels, it selectively emphasizes the input sequences at time points carrying more separable information and suppresses the input sequences at less important time points; after the importance adjustment the model becomes more balanced, highlighting the more important and more useful features and improving the model's ability to characterize HRRP, and the importance adjustment is divided into a squeeze step and an excitation step;

s502, squeeze step: the sample after sliding-window slicing is

x_slide = [x_slide^{(1)}, x_slide^{(2)}, …, x_slide^{(M)}]^T ∈ R^{M×N}

i.e., the feature consists of M sequences, each an N-dimensional vector; each sequence is compressed by a fully connected layer and an activation function into a real-valued weight x_sq representing the importance of that sequence, and the output of x_slide through the fully connected layer is calculated as:

x_sq = f(W x_slide + b)

where the activation function f(·) is the Sigmoid function, and W and b are the weight matrix and bias of the fully connected layer;

s503, excitation step: the extracted features are selectively adjusted by the Excitation formula to obtain the adjusted feature F_E:

F_E = x_slide ⊙ x_sq

where x_sq = [x_sq(1), x_sq(2), …, x_sq(M)] is an M-dimensional vector, and ⊙ denotes multiplying every element in the mth channel of x_slide by the number x_sq(m) in the corresponding dimension of this vector; for example, the mth channel of feature F_E is adjusted to:

F_E^{(m)} = x_sq(m) · x_slide^{(m)}
6. the attention mechanism and bidirectional stacked recurrent neural network-based radar target identification method of claim 5, wherein specifically, the S6 further comprises:
s601, the classification network is designed as a multi-layer stacked bidirectional RNN; suppose the input is the feature F_RNN^{(i)} ∈ R^{M_i×N}, where M_i denotes the dimension of each time point of the ith bidirectional RNN and N denotes the length of the input sequence, and suppose the output is F_output^{(i)} ∈ R^{2H×N}, where H is the number of hidden units; the vector corresponding to the kth time point in the sequence can be represented as:

F_output^{(i)}(k) = f(W_F^{f,i} h_k^{f,i} + W_F^{b,i} h_k^{b,i} + b_{F_i})

where f(·) denotes an activation function, W_F^{f,i} denotes the hidden-layer output matrix corresponding to the forward RNN contained in the ith bidirectional RNN, and h_k^{f,i} denotes the kth hidden-layer state contained in that forward RNN; similarly, W_F^{b,i} denotes the hidden-layer output matrix corresponding to the backward RNN contained in the ith bidirectional RNN, h_k^{b,i} denotes the kth hidden-layer state contained in that backward RNN, and b_{F_i} denotes the output-layer bias of the ith bidirectional RNN;
s602, the attention mechanism in the network selects and splices the hidden-layer states obtained by the last few bidirectional RNN layers at different moments; the spliced hidden-layer state of the ith layer at the kth time point is:

h_{ik} = [h_k^{f,i}; h_k^{b,i}]

Finally, the spliced hidden layers of each layer are weighted and summed to obtain the hidden-layer state c_ATT processed by the attention model:

c_ATT = Σ_{i=N_1-N_0+1}^{N_1} Σ_{k=1}^{M} α_{ik} h_{ik}

where α_{ik} denotes the weight corresponding to the kth time point of the ith layer, M denotes the number of hidden states contained in the forward (or backward) RNN of each layer in the bidirectional RNN model, i.e., the time-point dimension, N_1 denotes the number of stacked layers, and N_0 denotes how many layers, counted from the last layer, contribute their hidden states to c_ATT; α_{ik} is computed as:

α_{ik} = exp(e_{ik}) / Σ_{j=1}^{M} exp(e_{ij})

where e_{ik}, the energy of the summed forward and backward hidden states in the ith bidirectional RNN, is represented as:

e_{ik} = U_ATT tanh(W_ATT h_{ik})

where U_ATT ∈ R^{1×l} and W_ATT ∈ R^{l×2H} are the parameters used to calculate the energy of the hidden unit, l is the dimension of the hidden unit, and M is the time-point dimension;
s603, the loss function is designed as cross entropy; parameters are learned by computing gradients of the loss function with respect to the parameters on the training data, and the learned parameters are fixed when the model converges; the cross-entropy-based cost function is expressed as:

L = -(1/N) Σ_{n=1}^{N} Σ_{i=1}^{c} e_n(i) log P(i | x_train^{(n)})

where N denotes the number of training samples in a batch, e_n is a one-hot vector representing the true label of the nth training sample, and P(i | x_train^{(n)}) denotes the probability that the nth training sample corresponds to the ith target.
CN202010256158.0A 2020-04-02 2020-04-02 Radar target identification method based on attention mechanism and bidirectional stacking cyclic neural network Active CN111736125B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010256158.0A CN111736125B (en) 2020-04-02 2020-04-02 Radar target identification method based on attention mechanism and bidirectional stacking cyclic neural network


Publications (2)

Publication Number Publication Date
CN111736125A true CN111736125A (en) 2020-10-02
CN111736125B CN111736125B (en) 2023-07-07

Family

ID=72646547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010256158.0A Active CN111736125B (en) 2020-04-02 2020-04-02 Radar target identification method based on attention mechanism and bidirectional stacking cyclic neural network

Country Status (1)

Country Link
CN (1) CN111736125B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017155660A1 (en) * 2016-03-11 2017-09-14 Qualcomm Incorporated Action localization in sequential data with attention proposals from a recurrent network
CN109086700A (en) * 2018-07-20 2018-12-25 杭州电子科技大学 Radar range profile's target identification method based on depth convolutional neural networks
CN109214452A (en) * 2018-08-29 2019-01-15 杭州电子科技大学 Based on the HRRP target identification method for paying attention to depth bidirectional circulating neural network
CN110334741A (en) * 2019-06-06 2019-10-15 西安电子科技大学 Radar range profile's recognition methods based on Recognition with Recurrent Neural Network
CN110418210A (en) * 2019-07-12 2019-11-05 东南大学 A kind of video presentation generation method exported based on bidirectional circulating neural network and depth


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hou Chunping et al., "Multi-task recognition of radar-based human action and identity using convolutional neural networks", Laser &amp; Optoelectronics Progress *
Shen Mengqi, "Research on radar high-resolution range profile target recognition based on convolutional-recurrent neural networks", China Master's Theses Full-text Database, Information Science and Technology *
Pan Mian, "Research on radar high-resolution range profile target recognition", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112764024A (en) * 2020-12-29 2021-05-07 杭州电子科技大学 Radar target identification method based on convolutional neural network and Bert
CN112782660A (en) * 2020-12-29 2021-05-11 杭州电子科技大学 Radar target identification method based on Bert
CN113238197A (en) * 2020-12-29 2021-08-10 杭州电子科技大学 Radar target identification and data judgment method based on Bert and BiLSTM
CN112731309A (en) * 2021-01-06 2021-04-30 哈尔滨工程大学 Active interference identification method based on bilinear efficient neural network
CN112731309B (en) * 2021-01-06 2022-09-02 哈尔滨工程大学 Active interference identification method based on bilinear efficient neural network
CN112986941A (en) * 2021-02-08 2021-06-18 天津大学 Radar target micro-motion feature extraction method
CN112986941B (en) * 2021-02-08 2022-03-04 天津大学 Radar target micro-motion feature extraction method
CN113486917A (en) * 2021-05-17 2021-10-08 西安电子科技大学 Radar HRRP small sample target identification method based on metric learning
CN113486917B (en) * 2021-05-17 2023-06-02 西安电子科技大学 Radar HRRP small sample target recognition method based on metric learning
CN114509736A (en) * 2022-01-19 2022-05-17 电子科技大学 Radar target identification method based on ultra-wideband electromagnetic scattering characteristics
CN114509736B (en) * 2022-01-19 2023-08-15 电子科技大学 Radar target identification method based on ultra-wide band electromagnetic scattering characteristics

Also Published As

Publication number Publication date
CN111736125B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN111736125A (en) Radar target identification method based on attention mechanism and bidirectional stacked cyclic neural network
CN109214452B (en) HRRP target identification method based on attention depth bidirectional cyclic neural network
CN112364779B (en) Underwater sound target identification method based on signal processing and deep-shallow network multi-model fusion
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN110334741B (en) Radar one-dimensional range profile identification method based on cyclic neural network
CN110045015B (en) Concrete structure internal defect detection method based on deep learning
CN112764024B (en) Radar target identification method based on convolutional neural network and Bert
CN111580097A (en) Radar target identification method based on single-layer bidirectional cyclic neural network
CN111126386A (en) Sequence field adaptation method based on counterstudy in scene text recognition
CN111596276B (en) Radar HRRP target identification method based on spectrogram transformation and attention mechanism circulating neural network
CN110751044A (en) Urban noise identification method based on deep network migration characteristics and augmented self-coding
CN109239670B (en) Radar HRRP (high resolution ratio) identification method based on structure embedding and deep neural network
CN111580058A (en) Radar HRRP target identification method based on multi-scale convolution neural network
Liao et al. ARRU phase picker: Attention recurrent‐residual U‐Net for picking seismic P‐and S‐phase arrivals
CN115047421A (en) Radar target identification method based on Transformer
CN113344045A (en) Method for improving SAR ship classification precision by combining HOG characteristics
CN111596292B (en) Radar target identification method based on importance network and bidirectional stacking cyclic neural network
CN112835008B (en) High-resolution range profile target identification method based on attitude self-adaptive convolutional network
CN113132931B (en) Depth migration indoor positioning method based on parameter prediction
CN112866156B (en) Radio signal clustering method and system based on deep learning
CN117131436A (en) Radiation source individual identification method oriented to open environment
CN113065520A (en) Multi-modal data-oriented remote sensing image classification method
CN116030304A (en) Cross-domain remote sensing image migration resisting method based on weighted discrimination and multiple classifiers
CN111580059A (en) Radar HRRP target identification method based on spectrogram segmentation preprocessing and convolutional neural network
CN112040408B (en) Multi-target accurate intelligent positioning and tracking method suitable for supervision places

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant