CN114636975A - LPI radar signal identification method based on spectrogram fusion and attention mechanism - Google Patents

LPI radar signal identification method based on spectrogram fusion and attention mechanism Download PDF

Info

Publication number
CN114636975A
CN114636975A
Authority
CN
China
Prior art keywords
spectrogram
layer
fusion
network
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210236821.XA
Other languages
Chinese (zh)
Inventor
赵志强
唐京龙
张亚新
潘勉
吕帅帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202210236821.XA priority Critical patent/CN114636975A/en
Publication of CN114636975A publication Critical patent/CN114636975A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/02 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S 7/36 - Means for anti-jamming, e.g. ECCM, i.e. electronic counter-counter measures
    • G01S 13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S 13/89 - Radar or analogous systems specially adapted for specific applications for mapping or imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses an LPI radar signal identification method based on spectrogram fusion and an attention mechanism, which comprises the following steps: S1, constructing a data set comprising time-frequency diagrams, frequency spectrograms and phase spectrograms; S2, preprocessing the data set; S3, training the network model; S4, SVM classification: the softmax layer of the trained network is removed, the low-dimensional features are fed into an SVM classification network, and the SVM network performs the classification; and S5, outputting the classification result. By fusing different features, the characterization capability of the network is enhanced and the problem that the features of a single spectrogram are not distinctive is alleviated; the multi-faceted characteristics of different LPI radar signals are taken into account without incurring a large computational cost, and the identification accuracy of the network is improved.

Description

LPI radar signal identification method based on spectrogram fusion and attention mechanism
Technical Field
The invention belongs to the field of LPI radar signal identification, and particularly relates to an LPI radar signal identification method based on spectrogram fusion and an attention mechanism; it is a non-cooperative target identification method in electronic warfare.
Background
In recent years, electronic warfare has become a decisive factor in whether a war can be won. Radar modulation signal identification is an important link in electronic reconnaissance and a key element of electronic warfare: based on the intra-pulse modulation mode of an intercepted signal, the presence, threat level and other information of a radar can be identified, so that an expert system can take the correct countermeasures. At the present stage, pulse compression and related techniques reduce the power spectral density of radar signals, so accurate identification of the radar intra-pulse modulation mode under low signal-to-noise ratio has become crucial.
Traditional modulation identification methods rely on five conventional parameters, namely direction of arrival and time of arrival (DOA/TOA), radio frequency (RF), pulse width (PW) and pulse repetition interval (PRI); technicians manually extract these as the key features of individual emitters. However, with the appearance of new-system radars and increasingly variable combat environments, this approach can hardly meet the requirements of modern warfare. In recent years, deep learning has attracted wide attention in image, speech and natural language processing. Deep learning can replace manual feature extraction with automatic feature extraction, discover hidden associations among features, and then perform classification. Applying deep learning to the detection and identification of LPI radar signals, researchers have studied the problem of low identification accuracy under low signal-to-noise ratio. Li Peng adopted a multi-feature fusion algorithm that extracts different features with an AE and a CNN, fuses the features, and removes redundant information; it can identify 12 different modulation signals including COSTAS, LFM, NLFM, BPSK, P1-P4 and T1-T4 codes, with an average identification success rate of 95.5% at a signal-to-noise ratio of -6 dB. Xue Ni constructed a multi-resolution deep convolutional network to identify LPI radar signals; at a low signal-to-noise ratio of -8 dB, the overall identification accuracy over 12 different LPI radar signals reaches 95.2%. Nevertheless, there is still considerable room for improving the recognition rate at low signal-to-noise ratios.
Disclosure of Invention
In view of the prior art, the invention aims to provide an LPI radar signal identification method based on spectrogram fusion and an attention mechanism, and to solve the problem that current algorithms for non-cooperative LPI radar signal detection have low detection accuracy under low signal-to-noise ratio. In the method, MATLAB is first used to simulate time-frequency diagram, frequency spectrogram and phase spectrogram data of 12 different radar signals, including COSTAS, LFM, Frank, BPSK, P1-P4 and T1-T4 codes. The time-frequency diagram, frequency spectrogram and phase spectrogram of a single signal are labeled and input to the designed CNN feature extraction layer; the outputs of the CNN feature extraction layer for the three spectrograms are channel-fused, the fused features are passed through a channel attention mechanism (SENet) network, and the result is finally classified and output by an SVM classifier.
To achieve the above object, the invention comprises the following steps.
An LPI radar signal identification method based on spectrogram fusion and an attention mechanism comprises the following steps:
S1, constructing a data set, wherein the data set comprises time-frequency diagrams, frequency spectrograms and phase spectrograms;
S2, preprocessing the data set:
S2-1, dividing the training set and the test set;
S2-2, performing median filtering preprocessing;
S3, training the network model:
S3-1, extracting feature images through a CNN, outputting multi-scale feature images, and performing a channel fusion operation to obtain a multi-channel fusion feature in which the time-frequency diagram, frequency spectrogram and phase spectrogram are fused;
S3-2, passing the multi-channel three-spectrogram fusion features through a channel attention mechanism layer, mapping the output features of the channel attention layer through a linear layer to obtain low-dimensional features, calculating the loss error through a softmax layer, training the network parameters, adjusting the network model parameters by continuously reducing the error, and training to obtain the optimal network model;
S4, SVM classification:
removing the softmax layer of the trained network, sending the low-dimensional features into an SVM classification network, and performing the classification with the SVM network;
and S5, outputting the classification result.
Preferably, in step S1, 12 different LPI radar signals are generated through MATLAB simulation, where the LPI radar signals include COSTAS, LFM, Frank, BPSK, P1 code, P2 code, P3 code, P4 code, T1 code, T2 code, T3 code, and T4 code, and the LPI radar signals are subjected to Choi-Williams distribution time-frequency processing to obtain corresponding time-frequency diagram data sets; and carrying out Fourier transform on the LPI radar signals to obtain a corresponding spectrogram data set and a phase spectrogram data set.
Preferably, in step S1, the method for generating the 12 different LPI radar signals through MATLAB simulation includes: extracting the in-phase component (I) path and the quadrature component (Q) path, and storing the I-path and Q-path data respectively as sampled signals of length N, where N ranges from 600 to 1200.
Preferably, in step S2-1, using an image cropping technique, the size of each obtained image is set to 256 × 256; Gaussian white noise is added during simulation, the signal-to-noise ratio of the LPI radar signals ranges from -10 dB to 10 dB in steps of 2 dB, and 1000 samples are generated for each LPI radar signal at each signal-to-noise ratio, a single sample comprising the three data of a time-frequency diagram, a frequency spectrogram and a phase spectrogram; the training set and the test set are divided by the number of samples in a ratio of 7:3, the data and labels are randomly shuffled, and the training set is recorded as D = {(x_i, y_k)}, i ∈ [1, n], k ∈ [1, c], where x_i denotes the i-th sample, y_k indicates that the sample belongs to the k-th class, c classes of targets are collected, and n is the total number of samples.
Preferably, in step S2-2, all samples are preprocessed by median filtering with the window size set to 3 × 3: the image is scanned, the median of each window is obtained by sorting and taken as the final value of that point, and the image boundary is padded with 0; an image of size [m, n] becomes [m + 2k, n + 2k] after padding, where 2k + 1 is the window size, and the preprocessed image is obtained through the median filtering.
Preferably, in step S3-1, the feature extraction comprises four convolution modules connected in sequence: a first, a second, a third and a fourth convolution module. The first convolution module contains a first convolution layer, a first batch normalization layer and a ReLU activation function; the convolution kernel size of the first convolution layer is 7 × 7, the number of channels is 32, and stride and padding are set to 1. The second convolution module contains a second convolution layer, a second average pooling layer, a second batch normalization layer and a ReLU activation function; the convolution kernel size of the second convolution layer is 7 × 7, the number of channels is 32, the size of the second average pooling layer is 2 × 2, and stride and padding are set to 1. The third convolution module contains a third convolution layer, a third batch normalization layer and a ReLU activation function; the convolution kernel size of the third convolution layer is 3 × 3, the number of channels is 64, and stride and padding are set to 1. The fourth convolution module contains a fourth convolution layer, a fourth average pooling layer, a fourth batch normalization layer and a ReLU activation function; the convolution kernel size of the fourth convolution layer is 3 × 3, the number of channels is 64, the size of the fourth average pooling layer is 2 × 2, and stride and padding are set to 1.
The batch normalization uses:
$$\hat{F}_n(k,l)=\alpha_k\frac{F_n(k,l)-\mathrm{E}[F_n(k,\cdot)]}{\sqrt{\mathrm{Var}[F_n(k,\cdot)]+\epsilon}}+\beta_k$$
where F_n(k, l) denotes the l-th element of the k-th channel of the convolution layer output before batch normalization, \hat{F}_n(k, l) is the image data after batch normalization, α_k and β_k are the trainable parameters of the k-th channel, ε is a very small number (10e-8) that prevents division by zero, E(·) is the averaging operation, and Var(·) is the variance operation.
The activation function is the ReLU:
$$\hat{F}_n=\max(0,F_n)$$
where F_n is the input, \hat{F}_n is the response output of the ReLU, and n denotes the index of the convolution layer. After the four convolution modules, the CNN module outputs the features of the time-frequency diagram, the frequency spectrogram and the phase spectrogram of one sample, denoted {p1, p2, p3} respectively.
Preferably, in step S3-1, the channel fusion method is: a channel-wise concatenation (cat) operation is performed on {p1, p2, p3} output by the CNN module to obtain the channel fusion feature output, denoted ρc:
ρc = cat(p1, p2, p3).
Preferably, in step S3-2, the channel fusion feature output ρc is passed through SENet. First, a Squeeze operation encodes the entire spatial feature of each channel into a global feature, implemented by global average pooling:
$$Z_c=\frac{1}{H\times W}\sum_{i=1}^{H}\sum_{j=1}^{W}\rho_c(i,j)$$
where Zc is the output of the Squeeze operation, H and W are the spatial dimensions of the channel of ρc, and ρc(i, j) is the (i, j)-th value in the channel. An Excitation operation then learns the nonlinear relationship between channels; it is activated by a Sigmoid, taking values between 0 and 1, with a gating mechanism:
s = σ(W2 ReLU(W1 Zc))
Two fully connected layers are used: the first fully connected layer reduces the dimension, a ReLU activation follows, and the second fully connected layer restores the original dimension; W1 and W2 are the parameters of the two fully connected layers. Finally, the learned activation value of each channel is multiplied by the corresponding original feature to obtain the output xc:
xc = s · ρc.
Preferably, in step S3-2, the training method of the network model is as follows:
The obtained xc is passed through a fully connected operation, the output is mapped to one-dimensional data of size 12, and a softmax layer is connected to obtain the classification output. The network parameters are then trained: the preprocessed training set samples are input into the radar radiation source recognition network to train it, and the network weights are updated with the Adam algorithm. The Adam algorithm is:
$$g \leftarrow \nabla_\theta L(\theta)$$
$$m \leftarrow \beta_1 m+(1-\beta_1)g$$
$$\nu \leftarrow \beta_2\nu+(1-\beta_2)g^2$$
$$\theta \leftarrow \theta-\alpha\frac{m}{\sqrt{\nu}+\epsilon}$$
where g is the gradient of the loss function L(θ); θ is the iterated weight; ∇θ is the gradient operator; m is the first-order moment estimate of g, initialized to 0; ν is the second-order moment estimate of g, initialized to 0; β1, the exponential decay rate of the first-moment estimate, is 0.9; β2, the exponential decay rate of the second-moment estimate, is 0.9; T denotes the transpose operation; α is the learning rate, initially set to 0.0001; ε is a smoothing constant that prevents division by zero, with value 10e-8.
A cross-entropy loss function is adopted to avoid overfitting and prevent the generalization ability of the network from decreasing. The cross-entropy loss function is expressed as:
$$H(p,q)=-\sum_{x}p(x)\log q(x)$$
where H(p, q) is the cross-entropy loss function, p(x) is the true distribution of the samples, and q(x) is the distribution predicted by the model. The network is trained in a supervised manner on the training set data so that the loss value reaches an optimum.
Preferably, in step S4, the softmax layer is removed from the trained network model, one-dimensional data of size 12 is output, and an SVM classifier is constructed to find the rule of the decision function y = f(xc) for predicting the class of the data. A one-versus-rest approach is adopted: each class in turn is taken as the +1 class and all samples of the remaining 11 classes as the -1 class, giving 12 binary SVMs; the SVMs are trained, and finally the classification output of the 12 signals is obtained.
The invention has the beneficial effects that:
1. The method extracts time-frequency domain, frequency domain and phase information features through multiple CNN layers in a spectrogram fusion manner. Fusing the different features enhances the characterization capability of the network, alleviates the problem that the features of a single spectrogram are not distinctive, takes the multi-faceted characteristics of different LPI radar signals into account without incurring a large computational cost, and improves the identification accuracy of the network.
2. By adopting the SENet attention mechanism, the importance of the different channels of the CNN feature extraction layer output is characterized through channel attention, reflecting the importance of each channel within the overall features and further strengthening the detailed feature representation of the signals. The network can thus learn more informative features, the subsequent classification task becomes more accurate, the noise robustness is strengthened, and the identification accuracy is higher.
Drawings
FIG. 1 is a schematic diagram of the principle of LPI radar signal identification based on spectrogram fusion and the attention mechanism according to the present invention.
FIG. 2 is a schematic diagram of a channel attention mechanism (SENET) according to an embodiment of the present invention.
Detailed Description
The method of the invention is further described in the following with reference to the figures and examples.
The invention discloses an LPI radar signal identification method based on spectrogram fusion and an attention mechanism; the specific implementation steps are as follows:
S1, constructing a data set, wherein the data set comprises time-frequency diagrams, frequency spectrograms and phase spectrograms;
Specifically, 12 different LPI radar signals are generated by MATLAB simulation, including COSTAS, LFM, Frank, BPSK, P1-P4 and T1-T4 codes. The in-phase component (I) path and the quadrature component (Q) path are extracted, and the I-path and Q-path data are each stored as sampled signals of length N, where N ranges from 600 to 1200; the signal parameters are as follows:
[Table: signal parameters of the 12 simulated LPI radar signals]
Choi-Williams distribution time-frequency processing is performed on the signals to obtain the signal time-frequency diagram data set; a Fourier transform is performed on the signals to obtain the spectrogram data set and the phase spectrogram data set of the signals.
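As an illustration of this data-construction step, the following Python sketch simulates one noisy LFM pulse and derives the three representations used here. It is a minimal sketch under stated assumptions: the chirp parameters, SNR and STFT settings are illustrative, and an STFT magnitude stands in for the Choi-Williams distribution used by the method, since that distribution is not available in SciPy.

```python
import numpy as np
from scipy.signal import stft

def make_lfm(n=1024, fs=1.0, f0=0.05, f1=0.45, snr_db=0.0):
    """Simulate one LFM pulse as I/Q data and add white Gaussian noise at the given SNR."""
    t = np.arange(n) / fs
    k = (f1 - f0) / (n / fs)                       # chirp rate (assumed sweep over the pulse)
    s = np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * k * t**2))
    p_noise = np.mean(np.abs(s) ** 2) / (10 ** (snr_db / 10))
    noise = np.sqrt(p_noise / 2) * (np.random.randn(n) + 1j * np.random.randn(n))
    return s + noise                               # I = real part, Q = imaginary part

def three_spectrograms(x, fs=1.0):
    """Return (time-frequency image, magnitude spectrum, phase spectrum) for one sample."""
    # Stand-in time-frequency image: STFT magnitude (the method itself uses the Choi-Williams distribution).
    _, _, Z = stft(x, fs=fs, nperseg=128, noverlap=96, return_onesided=False)
    tf_img = np.abs(Z)
    X = np.fft.fft(x)                              # Fourier transform of the sampled pulse
    return tf_img, np.abs(X), np.angle(X)          # magnitude spectrum and phase spectrum

sample = make_lfm(snr_db=-6)                       # one LFM sample at -6 dB SNR
tf_img, mag_spec, phase_spec = three_spectrograms(sample)
```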
S2, preprocessing of data set
S2-1, dividing a training set and a testing set;
it can be understood that before the training set and the test set are divided, the signal needs to be sampled, so that the training set and the test set are conveniently divided.
Specifically, the generation method of the sample comprises the following steps: using an image cutting technique, the size of the obtained image is set to 256 × 256, gaussian white noise is added in the simulation process, the signal to noise ratio of the signal is-10 db to 10db, 2db is an interval, and the number of generated samples of each signal is 1000 under each signal to noise ratio.
Further, a training set and a test set are established according to the number of signal samples: training set and test set ratios were kept at 7: 3, randomly disorganizing the data and the labels, and recording the training data set as D { (x)i,yk)}i∈[1,n],k∈[1,c]Wherein x isiDenotes the ith sample, ykRepresenting that the sample belongs to the kth class, collecting c class targets, and representing the total number of the samples by n;
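A minimal sketch of the 7:3 split described above, assuming the spectrogram triplets and labels have already been exported to NumPy arrays; the file names, the stratification and the random seed are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.load("samples.npy")   # hypothetical file: shape (n, 3, 256, 256), three spectrograms per sample
y = np.load("labels.npy")    # hypothetical file: shape (n,), class indices 0..11

# 7:3 split with data and labels shuffled together, as in step S2-1.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, shuffle=True, stratify=y, random_state=0)
```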
s2-2, carrying out median filtering pretreatment;
specifically, a median filtering preprocessing method is adopted for all samples (one sample comprises a time-frequency graph, a frequency spectrogram and a phase spectrogram), the window size is set to be 3 x 3, the images are scanned, the median is obtained through sorting processing, the final value of the point is determined, 0 is filled in the boundary of the images, the size of one image is [ m, k ], the size of the image is changed into [ m +2k, n +2k ] after filling, 2k +1 is the window size, and the preprocessed images are obtained through median filtering processing.
The method for obtaining the median is illustrated as follows: after the description, data of {40, 107, 5, 198, 226, 223, 37, 68, 193} is obtained, and the median 107 is obtained through sorting processing, and then the final value of the point is determined.
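A short sketch of this preprocessing step, assuming SciPy is available; mode="constant" with cval=0 reproduces the zero-padded border described above, and the assertion checks the worked median example.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(img):
    """3x3 median filtering with zero-padded borders, as described in step S2-2."""
    return median_filter(img, size=3, mode="constant", cval=0.0)

# Worked check on the example window from the text: the sorted median is 107.
window = np.array([40, 107, 5, 198, 226, 223, 37, 68, 193])
assert np.median(window) == 107
```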
S3 training network model
S3-1, feature images are extracted through a CNN (convolutional neural network), which extracts the local features of each spectrogram and outputs multi-scale feature images; a channel fusion operation is then performed to obtain a multi-channel fusion feature in which the time-frequency diagram, frequency spectrogram and phase spectrogram are fused, i.e. a data feature fusing the three spectrograms;
specifically, the feature extraction of the preprocessed data by the CNN module includes the following steps:
the method comprises the steps that preprocessed samples are subjected to feature extraction through a CNN module respectively, the feature extraction comprises a first convolution module, a second convolution module, a third convolution module and a fourth convolution module which are connected in sequence, the first convolution module comprises a first convolution layer, the first convolution layer comprises a first normalization layer and a ReLU activation function, the size of a convolution kernel of the first convolution layer is set to be 7 x 7, the number of channels is set to be 32, and stride and padding are set to be 1; the second convolution module comprises a second convolution layer, the second convolution layer comprises a second average pooling layer, a second batch normalization layer and a ReLU activation function, the size of a convolution kernel of the second convolution layer is set to be 7 x 7, the number of channels is set to be 32, the size of the second average pooling layer is set to be 2 x 2, and stride and padding are set to be 1; the third convolution module comprises a third convolution layer, wherein the third convolution layer comprises a third batch normalization layer and a ReLU activation function, the size of a convolution kernel of the third convolution layer is set to be 3 x 3, the number of channels is set to be 64, and stride and padding are set to be 1; the fourth convolution module contains a fourth convolution layer that includes a fourth average pooling layer, a fourth batch of normalization layers, a ReLU activation function, a size of the fourth convolution kernel is set to 3 x 3, the number of channels is set to 64, a size of the fourth average pooling layer is set to 2 x 2, stride and padding are set to 1,
The batch normalization used is:
$$\hat{F}_n(k,l)=\alpha_k\frac{F_n(k,l)-\mathrm{E}[F_n(k,\cdot)]}{\sqrt{\mathrm{Var}[F_n(k,\cdot)]+\epsilon}}+\beta_k$$
where F_n(k, l) denotes the l-th element of the k-th channel of the convolution layer output before batch normalization, \hat{F}_n(k, l) is the image data after batch normalization, α_k and β_k are the trainable parameters of the k-th channel, ε is a very small number (10e-8) that prevents division by zero, E(·) is the averaging operation, and Var(·) is the variance operation.
The activation function is the ReLU:
$$\hat{F}_n=\max(0,F_n)$$
where F_n is the input, \hat{F}_n is the response output of the ReLU, and n denotes the index of the convolution layer. After the four convolution modules, the features of the time-frequency diagram, the frequency spectrogram and the phase spectrogram of one sample are output, denoted {p1, p2, p3} respectively.
Further, the channel fusion method is: a channel-wise concatenation (cat) operation is performed on {p1, p2, p3} output by the CNN module to obtain the channel fusion feature output, denoted ρc:
ρc = cat(p1, p2, p3).
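The following PyTorch sketch mirrors the four convolution modules and the channel concatenation described above. It is an illustrative reconstruction rather than the exact patented implementation: single-channel 256 × 256 inputs are assumed, and one weight-shared extractor is applied to each of the three spectrograms, since the text does not state whether the three branches share weights.

```python
import torch
import torch.nn as nn

class ConvModule(nn.Module):
    """One convolution module: Conv -> (optional 2x2 AvgPool) -> BatchNorm -> ReLU."""
    def __init__(self, c_in, c_out, k, pool=False):
        super().__init__()
        layers = [nn.Conv2d(c_in, c_out, kernel_size=k, stride=1, padding=1)]
        if pool:
            layers.append(nn.AvgPool2d(2))
        layers += [nn.BatchNorm2d(c_out), nn.ReLU(inplace=True)]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

class FeatureExtractor(nn.Module):
    """Four convolution modules with the kernel sizes and channel counts given in step S3-1."""
    def __init__(self, c_in=1):
        super().__init__()
        self.net = nn.Sequential(
            ConvModule(c_in, 32, k=7, pool=False),   # 7x7, 32 channels
            ConvModule(32, 32, k=7, pool=True),      # 7x7, 32 channels, 2x2 average pooling
            ConvModule(32, 64, k=3, pool=False),     # 3x3, 64 channels
            ConvModule(64, 64, k=3, pool=True),      # 3x3, 64 channels, 2x2 average pooling
        )

    def forward(self, x):
        return self.net(x)

cnn = FeatureExtractor()
tf_img   = torch.randn(8, 1, 256, 256)   # time-frequency images
mag_spec = torch.randn(8, 1, 256, 256)   # magnitude spectrograms
ph_spec  = torch.randn(8, 1, 256, 256)   # phase spectrograms
p1, p2, p3 = cnn(tf_img), cnn(mag_spec), cnn(ph_spec)
rho_c = torch.cat([p1, p2, p3], dim=1)   # channel fusion: rho_c = cat(p1, p2, p3)
```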
S3-2, the multi-channel three-spectrogram fusion features are passed through a channel attention mechanism layer (SENet), which reflects the importance of each channel, emphasizes useful information, suppresses invalid information, adaptively adjusts the channel features, and outputs the most effective feature representation; the output features of the channel attention layer are mapped by a linear layer to obtain low-dimensional features, which are restored to the original dimension to recover the original features; the loss error is calculated through a softmax layer, the network parameters are trained, the network model parameters are adjusted by continuously reducing the error, and the optimal network model is obtained through training.
Specifically, the channel fusion feature output ρc is passed through SENet. First, a Squeeze operation encodes the entire spatial feature of each channel into a global feature, implemented by global average pooling:
$$Z_c=\frac{1}{H\times W}\sum_{i=1}^{H}\sum_{j=1}^{W}\rho_c(i,j)$$
where Zc is the output of the Squeeze operation, H and W are the spatial dimensions of the channel of ρc, and ρc(i, j) is the (i, j)-th value in the channel. Then an Excitation operation learns the nonlinear relationship between channels (activated by a Sigmoid, with values between 0 and 1), using a gating mechanism:
s = σ(W2 ReLU(W1 Zc))
Two fully connected layers are used: the first fully connected layer reduces the dimension, a ReLU activation follows, and the second fully connected layer restores the original dimension; W1 and W2 are the parameters of the two fully connected layers. Finally, the learned activation value of each channel is multiplied by the corresponding original feature to obtain the output xc:
xc = s · ρc.
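A minimal PyTorch sketch of this Squeeze-and-Excitation step; the reduction ratio of 16 is an assumption (the text does not specify it), and the 192 fused channels follow from the 3 × 64 channels produced by the extractor sketch above.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation over the fused channels, as in step S3-2."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)   # W1: dimension reduction
        self.fc2 = nn.Linear(channels // reduction, channels)   # W2: restore the original dimension
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, rho_c):
        b, c, h, w = rho_c.shape
        z = rho_c.mean(dim=(2, 3))                               # Squeeze: global average pooling -> Z_c
        s = self.sigmoid(self.fc2(self.relu(self.fc1(z))))       # Excitation: s = sigma(W2 ReLU(W1 Z_c))
        return rho_c * s.view(b, c, 1, 1)                        # x_c = s * rho_c, channel re-weighting

se = SEBlock(channels=192)                    # 3 x 64 fused channels from the extractor sketch
x_c = se(torch.randn(8, 192, 62, 62))
```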
Further, the output xc is passed through a fully connected operation, the output is mapped to one-dimensional data of size 12, and a softmax layer is connected to obtain the classification output. The network parameters are then trained: the preprocessed training set samples are input into the radar radiation source recognition network to train it, and the network updates its weights with the Adam algorithm. The Adam algorithm is:
$$g \leftarrow \nabla_\theta L(\theta)$$
$$m \leftarrow \beta_1 m+(1-\beta_1)g$$
$$\nu \leftarrow \beta_2\nu+(1-\beta_2)g^2$$
$$\theta \leftarrow \theta-\alpha\frac{m}{\sqrt{\nu}+\epsilon}$$
where g is the gradient of the loss function L(θ); θ is the iterated weight; ∇θ is the gradient operator; m is the first-order moment estimate of g, initialized to 0; ν is the second-order moment estimate of g, initialized to 0; β1, the exponential decay rate of the first-moment estimate, is 0.9; β2, the exponential decay rate of the second-moment estimate, is 0.9; T denotes the transpose operation; α is the learning rate, initially set to 0.0001; ε is a smoothing constant that prevents division by zero, with value 10e-8.
A cross-entropy loss function is adopted to avoid overfitting and prevent the generalization ability of the network from decreasing. The cross-entropy loss function is expressed as:
$$H(p,q)=-\sum_{x}p(x)\log q(x)$$
where H(p, q) is the cross-entropy loss function, p(x) is the true distribution of the samples, and q(x) is the distribution predicted by the model. The network is trained in a supervised manner on the training set data so that the loss value reaches an optimum.
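A hedged PyTorch training-loop sketch for this step. The FeatureExtractor and SEBlock classes are taken from the sketches above, the global average pooling before the 12-way linear head and the epoch count are assumptions, and train_loader is an assumed DataLoader yielding ((tf, mag, phase), label) batches; nn.CrossEntropyLoss combines the softmax layer with the cross-entropy loss, and the Adam settings follow the values stated in the text.

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    """Stand-in for the full model: shared extractor + SE block + 12-way linear head."""
    def __init__(self, extractor, se, feat_dim, num_classes=12):
        super().__init__()
        self.extractor = extractor
        self.se = se
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, tf, mag, phase):
        fused = torch.cat([self.extractor(tf), self.extractor(mag), self.extractor(phase)], dim=1)
        x_c = self.se(fused)
        pooled = x_c.mean(dim=(2, 3))            # global pooling before the linear head (assumption)
        return self.fc(pooled)                   # 12 logits; the softmax is applied inside the loss

model = Net(FeatureExtractor(), SEBlock(192), feat_dim=192)   # classes from the earlier sketches
criterion = nn.CrossEntropyLoss()                # softmax + cross-entropy H(p, q)
optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-4,            # alpha = 0.0001
                             betas=(0.9, 0.9),   # beta1 = beta2 = 0.9, as stated in the text
                             eps=1e-8)           # small smoothing constant (the text gives 10e-8)

for epoch in range(50):                          # the number of epochs is an assumption
    for (tf, mag, phase), labels in train_loader:   # train_loader is an assumed DataLoader
        loss = criterion(model(tf, mag, phase), labels)
        optimizer.zero_grad()
        loss.backward()                          # g = grad_theta L(theta)
        optimizer.step()                         # Adam moment updates and parameter step
```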
S4 SVM classification
The softmax layer of the previous network is removed, the low-dimensional features are sent into an SVM classification network, and the SVM network performs the classification.
Specifically, the softmax layer of the trained network model is removed, one-dimensional data of size 12 is output, and an SVM classifier is constructed to find the rule of the decision function y = f(xc) for predicting the class of the data. A one-versus-rest approach is adopted: each class in turn is taken as the +1 class and all samples of the remaining 11 classes as the -1 class, giving 12 binary SVMs; the SVMs are trained, and finally the classification output of the 12 signals is obtained.
And S5, outputting the classification result.
Through steps S1-S4, the LPI radar signal identification model based on spectrogram fusion and the attention mechanism provided by the invention is obtained, and the LPI radar signal identification method based on spectrogram fusion and the attention mechanism is thereby realized. Fusing different features enhances the characterization capability of the network, alleviates the problem that the features of a single spectrogram are not distinctive, takes the multi-faceted characteristics of different LPI radar signals into account without incurring a large computational cost, and improves the identification accuracy of the network.
The embodiments of the invention have been described in detail above with reference to the accompanying drawings, but the invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made to these embodiments, including their components, without departing from the principles and spirit of the invention, and they still fall within the scope of the invention.

Claims (10)

1. An LPI radar signal identification method based on spectrogram fusion and an attention mechanism, characterized by comprising the following steps:
S1, constructing a data set, wherein the data set comprises time-frequency diagrams, frequency spectrograms and phase spectrograms;
S2, preprocessing the data set:
S2-1, dividing the training set and the test set;
S2-2, performing median filtering preprocessing;
S3, training the network model:
S3-1, extracting feature images through a CNN, outputting multi-scale feature images, and performing a channel fusion operation to obtain a multi-channel fusion feature in which the time-frequency diagram, frequency spectrogram and phase spectrogram are fused;
S3-2, passing the multi-channel three-spectrogram fusion features through a channel attention mechanism layer, mapping the output features of the channel attention layer through a linear layer to obtain low-dimensional features, calculating the loss error through a softmax layer, training the network parameters, adjusting the network model parameters by continuously reducing the error, and training to obtain the optimal network model;
S4, SVM classification:
removing the softmax layer of the trained network, sending the low-dimensional features into an SVM classification network, and performing the classification with the SVM network;
and S5, outputting the classification result.
2. The method for identifying LPI radar signals based on spectrogram fusion and an attention mechanism as claimed in claim 1, wherein in said step S1, 12 different LPI radar signals are generated by MATLAB simulation, the LPI radar signals comprising COSTAS, LFM, Frank, BPSK, P1 code, P2 code, P3 code, P4 code, T1 code, T2 code, T3 code and T4 code; the LPI radar signals are subjected to Choi-Williams distribution time-frequency processing to obtain the corresponding time-frequency diagram data set, and a Fourier transform is performed on the LPI radar signals to obtain the corresponding spectrogram data set and phase spectrogram data set.
3. The LPI radar signal identification method based on spectrogram fusion and an attention mechanism as claimed in claim 2, wherein step S1 comprises generating the 12 different LPI radar signals by MATLAB simulation as follows: the in-phase component (I) path and the quadrature component (Q) path are extracted, and the I-path and Q-path data are each stored as sampled signals of length N, where N ranges from 600 to 1200.
4. The LPI radar signal identification method based on spectrogram fusion and an attention mechanism as claimed in claim 2 or 3, wherein in said step S2-1, using an image cropping technique, the size of each obtained image is set to 256 × 256; Gaussian white noise is added during simulation, the signal-to-noise ratio of the LPI radar signals ranges from -10 dB to 10 dB in steps of 2 dB, 1000 samples are generated for each LPI radar signal at each signal-to-noise ratio, and each sample comprises the three data of a time-frequency diagram, a frequency spectrogram and a phase spectrogram; the training set and the test set are divided by the number of samples in a ratio of 7:3, the data and labels are randomly shuffled, and the training set is recorded as D = {(x_i, y_k)}, i ∈ [1, n], k ∈ [1, c], where x_i denotes the i-th sample, y_k indicates that the sample belongs to the k-th class, c classes of targets are collected, and n is the total number of samples.
5. The LPI radar signal identification method based on spectrogram fusion and an attention mechanism as claimed in claim 4, wherein in step S2-2, all the samples are preprocessed by median filtering with the window size set to 3 × 3: the image is scanned, the median of each window is obtained by sorting and taken as the final value of that point, and the image boundary is padded with 0; an image of size [m, n] becomes [m + 2k, n + 2k] after padding, where 2k + 1 is the window size, and the preprocessed image is obtained through the median filtering.
6. The LPI radar signal identification method based on spectrogram fusion and an attention mechanism as claimed in claim 4, wherein in said step S3-1 the feature extraction comprises four convolution modules connected in sequence: a first, a second, a third and a fourth convolution module; the first convolution module contains a first convolution layer, a first batch normalization layer and a ReLU activation function, the convolution kernel size of the first convolution layer is 7 × 7, the number of channels is 32, and stride and padding are set to 1; the second convolution module contains a second convolution layer, a second average pooling layer, a second batch normalization layer and a ReLU activation function, the convolution kernel size of the second convolution layer is 7 × 7, the number of channels is 32, the size of the second average pooling layer is 2 × 2, and stride and padding are set to 1; the third convolution module contains a third convolution layer, a third batch normalization layer and a ReLU activation function, the convolution kernel size of the third convolution layer is 3 × 3, the number of channels is 64, and stride and padding are set to 1; the fourth convolution module contains a fourth convolution layer, a fourth average pooling layer, a fourth batch normalization layer and a ReLU activation function, the convolution kernel size of the fourth convolution layer is 3 × 3, the number of channels is 64, the size of the fourth average pooling layer is 2 × 2, and stride and padding are set to 1;
the batch normalization uses:
$$\hat{F}_n(k,l)=\alpha_k\frac{F_n(k,l)-\mathrm{E}[F_n(k,\cdot)]}{\sqrt{\mathrm{Var}[F_n(k,\cdot)]+\epsilon}}+\beta_k$$
where F_n(k, l) denotes the l-th element of the k-th channel of the convolution layer output before batch normalization, \hat{F}_n(k, l) is the image data after batch normalization, α_k and β_k are the trainable parameters of the k-th channel, ε is a very small number (10e-8) that prevents division by zero, E(·) is the averaging operation, and Var(·) is the variance operation;
the activation function is the ReLU:
$$\hat{F}_n=\max(0,F_n)$$
where F_n is the input, \hat{F}_n is the response output of the ReLU, and n denotes the index of the convolution layer; after the four convolution modules, the features of the time-frequency diagram, the frequency spectrogram and the phase spectrogram of one sample are output through the CNN module, denoted {p1, p2, p3} respectively.
7. The LPI radar signal identification method based on spectrogram fusion and an attention mechanism as claimed in claim 6, wherein in said step S3-1 the channel fusion method is: a channel-wise concatenation (cat) operation is performed on {p1, p2, p3} output by the CNN module to obtain the channel fusion feature output, denoted ρc:
ρc = cat(p1, p2, p3).
8. The LPI radar signal identification method based on spectrogram fusion and an attention mechanism as claimed in claim 7, wherein in said step S3-2 the channel fusion feature output ρc is passed through SENet; first, a Squeeze operation encodes the entire spatial feature of each channel into a global feature, implemented by global average pooling:
$$Z_c=\frac{1}{H\times W}\sum_{i=1}^{H}\sum_{j=1}^{W}\rho_c(i,j)$$
where Zc is the output of the Squeeze operation, H and W are the spatial dimensions of the channel of ρc, and ρc(i, j) is the (i, j)-th value in the channel; then an Excitation operation learns the nonlinear relationship between channels, activated by a Sigmoid with values between 0 and 1, using a gating mechanism:
s = σ(W2 ReLU(W1 Zc))
two fully connected layers are used, the first fully connected layer reduces the dimension, a ReLU activation follows, and the second fully connected layer restores the original dimension, W1 and W2 being the parameters of the two fully connected layers; finally the learned activation value of each channel is multiplied by the corresponding original feature to obtain the output xc:
xc = s · ρc.
9. The LPI radar signal identification method based on spectrogram fusion and an attention mechanism as claimed in claim 8, wherein in said step S3-2 the training method of the network model is:
the obtained xc is passed through a fully connected operation, the output is mapped to one-dimensional data of size 12, and a softmax layer is connected to obtain the classification output; the network parameters are then trained: the preprocessed training set samples are input into the radar radiation source recognition network to train it, and the network weights are updated with the Adam algorithm; the Adam algorithm is:
$$g \leftarrow \nabla_\theta L(\theta)$$
$$m \leftarrow \beta_1 m+(1-\beta_1)g$$
$$\nu \leftarrow \beta_2\nu+(1-\beta_2)g^2$$
$$\theta \leftarrow \theta-\alpha\frac{m}{\sqrt{\nu}+\epsilon}$$
where g is the gradient of the loss function L(θ); θ is the iterated weight; ∇θ is the gradient operator; m is the first-order moment estimate of g, initialized to 0; ν is the second-order moment estimate of g, initialized to 0; β1, the exponential decay rate of the first-moment estimate, is 0.9; β2, the exponential decay rate of the second-moment estimate, is 0.9; T denotes the transpose operation; α is the learning rate, initially set to 0.0001; ε is a smoothing constant that prevents division by zero, with value 10e-8;
a cross-entropy loss function is adopted to avoid overfitting and prevent the generalization ability of the network from decreasing; the cross-entropy loss function is expressed as:
$$H(p,q)=-\sum_{x}p(x)\log q(x)$$
where H(p, q) is the cross-entropy loss function, p(x) is the true distribution of the samples, and q(x) is the distribution predicted by the model; the network is trained in a supervised manner on the training set data so that the loss value reaches an optimum.
10. The LPI radar signal identification method based on spectrogram fusion and an attention mechanism as claimed in claim 9, wherein in step S4 the softmax layer of the trained network model is removed, one-dimensional data of size 12 is output, and an SVM classifier is constructed to find the rule of the decision function y = f(xc) for predicting the class of the data; a one-versus-rest approach is adopted, each class in turn being taken as the +1 class and all samples of the remaining 11 classes as the -1 class, giving 12 binary SVMs; the SVMs are trained, and finally the classification output of the 12 signals is obtained.
CN202210236821.XA 2022-03-10 2022-03-10 LPI radar signal identification method based on spectrogram fusion and attention mechanism Pending CN114636975A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210236821.XA CN114636975A (en) 2022-03-10 2022-03-10 LPI radar signal identification method based on spectrogram fusion and attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210236821.XA CN114636975A (en) 2022-03-10 2022-03-10 LPI radar signal identification method based on spectrogram fusion and attention mechanism

Publications (1)

Publication Number Publication Date
CN114636975A true CN114636975A (en) 2022-06-17

Family

ID=81947657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210236821.XA Pending CN114636975A (en) 2022-03-10 2022-03-10 LPI radar signal identification method based on spectrogram fusion and attention mechanism

Country Status (1)

Country Link
CN (1) CN114636975A (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115604061A (en) * 2022-08-30 2023-01-13 电子科技大学(Cn) Radio frequency signal modulation mode identification method based on external attention mechanism
CN115604061B (en) * 2022-08-30 2024-04-09 电子科技大学 Radio frequency signal modulation mode identification method based on external attention mechanism
CN115828154A (en) * 2022-11-25 2023-03-21 中山大学 LPI radar signal identification method, system, equipment and storage medium
CN115828154B (en) * 2022-11-25 2023-10-03 中山大学 LPI radar signal identification method, system, equipment and storage medium
CN117238320A (en) * 2023-11-16 2023-12-15 天津大学 Noise classification method based on multi-feature fusion convolutional neural network
CN117238320B (en) * 2023-11-16 2024-01-09 天津大学 Noise classification method based on multi-feature fusion convolutional neural network
CN117452368A (en) * 2023-12-21 2024-01-26 西安电子科技大学 SAR load radiation signal detection method and device based on broadband imaging radar
CN117452368B (en) * 2023-12-21 2024-04-02 西安电子科技大学 SAR load radiation signal detection method and device based on broadband imaging radar

Similar Documents

Publication Publication Date Title
CN114636975A (en) LPI radar signal identification method based on spectrogram fusion and attention mechanism
Jin et al. Deep learning for underwater image recognition in small sample size situations
CN107220606B (en) Radar radiation source signal identification method based on one-dimensional convolutional neural network
CN109597043B (en) Radar signal identification method based on quantum particle swarm convolutional neural network
CN110048827B (en) Class template attack method based on deep learning convolutional neural network
CN107563433B (en) Infrared small target detection method based on convolutional neural network
CN109495214B (en) Channel coding type identification method based on one-dimensional inclusion structure
CN113156376B (en) SACNN-based radar radiation source signal identification method
CN111126361B (en) SAR target identification method based on semi-supervised learning and feature constraint
CN109711314B (en) Radar radiation source signal classification method based on feature fusion and SAE
CN107945210B (en) Target tracking method based on deep learning and environment self-adaption
CN111783841A (en) Garbage classification method, system and medium based on transfer learning and model fusion
CN111582236B (en) LPI radar signal classification method based on dense convolutional neural network
CN112820322A (en) Semi-supervised audio event labeling method based on self-supervised contrast learning
CN115982613A (en) Signal modulation identification system and method based on improved convolutional neural network
CN114943245A (en) Automatic modulation recognition method and device based on data enhancement and feature embedding
CN114675249A (en) Attention mechanism-based radar signal modulation mode identification method
CN117331031A (en) LPI radar signal spectrogram fusion identification method
CN115856811A (en) Micro Doppler feature target classification method based on deep learning
CN115616503A (en) Radar interference signal type identification method based on convolutional neural network model
CN114936570A (en) Interference signal intelligent identification method based on lightweight CNN network
CN114296041A (en) Radar radiation source identification method based on DCNN and Transformer
Hou et al. FMRSS net: Fast matrix representation-based spectral-spatial feature learning convolutional neural network for hyperspectral image classification
CN112434716B (en) Underwater target data amplification method and system based on condition countermeasure neural network
CN110646350B (en) Product classification method, device, computing equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination