CN117331031A - LPI radar signal spectrogram fusion identification method - Google Patents


Info

Publication number
CN117331031A
Authority
CN
China
Prior art keywords: output, time, data, dimensional, channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311298963.XA
Other languages
Chinese (zh)
Inventor
朱贺
赵志强
潘勉
Current Assignee
Hangzhou Institute of Advanced Studies of UCAS
Original Assignee
Hangzhou Institute of Advanced Studies of UCAS
Priority date
Filing date
Publication date
Application filed by Hangzhou Institute of Advanced Studies of UCAS
Publication of CN117331031A


Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01S — RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 — Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 — Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/021 — Auxiliary means for detecting or identifying radar signals or the like, e.g. radar jamming signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses an LPI radar signal spectrogram fusion identification method, which comprises the following steps: S1, construct a data set comprising the time-frequency diagram together with the time-phase and time-amplitude sequences obtained by Fourier transformation; S2, extract features from the time-frequency data with an SKNet module; S3, pass the SKNet output to a Transformer module for global feature association; S4, fuse the Concat-combined time-phase and time-amplitude sequences with the processed time-frequency data in the multi-head attention mechanism of the Transformer module; S5, output the classification through the network; S6, train the network; S7, test with held-out data. The invention provides an LPI radar signal spectrogram fusion identification method to address the low detection accuracy of current algorithms for detecting non-cooperative LPI radar signals at low signal-to-noise ratio.

Description

LPI radar signal spectrogram fusion identification method
Technical Field
The invention relates to the technical field of LPI radar signal identification, in particular to an LPI radar signal spectrogram fusion identification method.
Background
Intra-pulse modulation of radar signals is a critical element of modern electronic warfare. Because the electronic environment is saturated with noise, identifying radar intra-pulse modulation signals at low signal-to-noise ratio has become a major problem to solve. The wide range of applications for identifying low probability of intercept (LPI) radar signals in complex battlefield environments has made it a major focus of electronic warfare.
Traditional LPI radar modulation signal identification relies mainly on hand-crafted feature extraction; its performance is poor and it struggles to discover subtle correlations between signals. Traditional modulation identification depends on five conventional parameters: direction of arrival and time of arrival (DOA/TOA), radio frequency (RF), pulse width (PW), and pulse repetition interval (PRI). These parameters cannot characterize complex signals, so they no longer meet current requirements. Existing results suffice to show that deep learning trained on large amounts of data can replace manual feature extraction, overcome the influence of noise, and achieve higher accuracy; examples include convolutional neural networks (CNN) applied to feature extraction and recurrent neural networks (RNN) applied to natural language processing. As research deepened, Google proposed the Transformer model in the paper "Attention Is All You Need", which better exploits network parallel computation. With deep learning, the detection and identification of LPI radar signals has also entered a new stage. Zhang Ming converted radar signals into time-frequency domain information via the Choi-Williams distribution (CWD) and identified them with an Elman neural network (ENN) and a CNN, achieving an overall identification accuracy of 94.7% at an SNR of -2 dB and 96.1% at -6 dB.
E. R. Zilberman applied Choi-Williams distribution time-frequency processing to radar signals and extracted effective picture features, identifying 5 waveforms: BPSK, Frank, P codes, FMCW, and T1 codes. Performing STFT on polyphase code signals and using a 9-layer deep convolutional network achieved an identification accuracy of 91.8% on 5 polyphase code signals at -8 dB; however, there remains considerable room to improve the identification rate of LPI radar signals at low signal-to-noise ratio.
Disclosure of Invention
The invention aims to provide an LPI radar signal spectrogram fusion identification method, which solves the problem that the detection accuracy of existing non-cooperative LPI radar signal detection algorithms at low signal-to-noise ratio is low.
The invention discloses a method for fusion recognition of LPI radar signal spectrograms, which adopts the following technical scheme:
an LPI radar signal spectrogram fusion identification method comprises the following steps:
S1, constructing a data set: 12 different LPI radar signals are generated by MATLAB simulation, comprising COSTAS, LFM, Frank, BPSK, P1-P4 codes, and T1-T4 codes; the time-frequency diagram, time-phase, and time-amplitude sequences of each signal are obtained through time-frequency transformation and Fourier transformation, so a single sample comprises three pieces of data: the time-frequency diagram, the time phase, and the time amplitude; the samples are divided into a training set and a test set at a ratio of 8:2, and median-filtering preprocessing is applied to all sample data;
S2, performing feature extraction on the preprocessed time-frequency picture data with the SKNet module, extracting local features of the spectrogram, obtaining and characterizing the importance of each channel, and outputting multi-scale feature images;
S3, for the feature images extracted by the SKNet module, the output features weighted by the channel attention mechanism are processed by a Transformer, compensating for the weak global modeling capability of SKNet's local feature extraction;
S4, flattening the Fourier-transformed two-dimensional time-phase and time-amplitude sequences into two one-dimensional sequences via a Transformer network, and feeding them, after a Concat operation, into the Multi-Head Attention submodule of the Transformer network of S3 for fusion processing;
S5, passing the fused Transformer network output through a fully connected layer to obtain 12 one-dimensional outputs and classify the 12 signals;
s6, training a radar radiation source identification network model through training set data;
S7, sending the test set into the model trained in step S6 for testing.
Preferably, the step S1 comprises the following steps:
S1.1, 12 different LPI radar signals are generated by MATLAB simulation; the in-phase component (I channel) and quadrature component (Q channel) are extracted, and the I and Q data are each stored as sampled signals of length N, where N ranges from 600 to 1200. Choi-Williams distribution time-frequency processing is applied to the signals to obtain the signal time-frequency-diagram data set; Fourier transformation of the signals yields the two-dimensional time-phase and time-amplitude sequences. The time-frequency images are set to size 256 x 256 by image cropping; Gaussian white noise is added during simulation, with the signal-to-noise ratio starting from -10 dB in 2 dB intervals, and 1500 samples are generated per signal at each SNR;
S1.2, a training set and a test set are established from the signal samples, keeping the training-to-test ratio at 8:2, with data and labels randomly shuffled correspondingly. The training data set is denoted D = {(x_i, y_k)}, i ∈ [1, n], k ∈ [1, c], where x_i denotes the i-th sample, y_k indicates that the sample belongs to the k-th class, c target classes are collected in total, and n denotes the total number of samples;
S1.3, each sample comprises a time-frequency diagram, a time-phase sequence, and a time-amplitude sequence. Median-filter preprocessing is applied to the time-frequency diagram with the window size set to 3 x 3. The image is scanned and its boundary is zero-padded: an image of size [m, n] becomes [m+2k, n+2k] after padding, where 2k+1 is the window size, and the preprocessed image is obtained by median filtering.
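The padding arithmetic above can be sketched in a few lines; this is an illustrative implementation of zero-padding for a (2k+1) x (2k+1) window, not code from the patent, and the function name `zero_pad` is an assumption.

```python
# Sketch: zero-padding an m x n image for a (2k+1) x (2k+1) median-filter
# window, so the padded image has size (m+2k) x (n+2k) as stated above.
def zero_pad(image, k):
    m = len(image)
    n = len(image[0])
    padded = [[0] * (n + 2 * k) for _ in range(m + 2 * k)]
    for i in range(m):
        for j in range(n):
            padded[i + k][j + k] = image[i][j]  # original pixels shifted by k
    return padded

img = [[1, 2], [3, 4]]   # m = n = 2
out = zero_pad(img, 1)   # 3 x 3 window -> k = 1, padded size 4 x 4
```

For the 3 x 3 window used in the patent, k = 1, so a 256 x 256 time-frequency image would pad to 258 x 258 before filtering.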
Preferably, the step S2 comprises the following steps:
The preprocessed time-frequency picture first passes through a convolution layer with input channels 1, output channels 64, kernel size 3 x 3, stride 1, and padding 0, then through two stages of SKNet operations. The first stage comprises three SK units. SK unit 1 first passes through a convolution layer with output channels 128 and stride and padding 1 to obtain output Z1. Z1 is then decomposed through two convolution layers with different kernels, one 3 x 3 and one 5 x 5, each with 64 output channels; padding is set to 1 for the 3 x 3 convolution layer and 2 for the 5 x 5 convolution layer. The two branch outputs are superimposed and the per-channel mean is taken; a fully connected (fc) operation outputs one-dimensional data of half the channel count, which fc expands back to the channel count per branch, and a softmax function over these gives the final weight values of the two channels. The two final convolution-layer outputs are multiplied by their weights and superimposed along the channels; finally, a convolution layer with 64 x 2 output channels gives output Z2, which is added to Z1 to obtain the output of SK unit 1. SK unit 2 performs the same operations as SK unit 1, except that its input and output channels are both set to 128; since the input channels equal the output channels, no addition operation is performed. SK unit 3 operates the same as SK unit 2. Each SK unit is followed by a ReLU activation function and a batch normalization operation. The second stage takes the output of the first stage as input, sets the channels to 256, and performs the same operations as the first stage to obtain the output of SKNet. Batch normalization adopts:
F̂_n(k,l) = α_k · (F_n(k,l) − E[F_n(k,·)]) / sqrt(Var[F_n(k,·)] + ε) + β_k

where F_n(k,l) denotes the l-th element in the k-th channel of the image's convolution-layer output before batch normalization, F̂_n(k,l) is the image data after batch normalization, α_k and β_k are trainable parameters for the k-th channel, ε is a small number of size 10^-8 that prevents division by zero, E is the mean operation, and Var is the variance operation;
the activation function employs the ReLU function:

F̂_n(k,l) = max(0, F_n(k,l))

where F_n(k,l) is the input, F̂_n(k,l) is the response output of the ReLU, and n denotes the index of the convolution layer. After the four convolution module layers, the time-frequency diagram of one sample is output and marked as p1.
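The per-channel normalization and ReLU above can be sketched numerically; this is a minimal illustration with made-up shapes and identity parameters (alpha = 1, beta = 0), not the trained network.

```python
import numpy as np

# Sketch of per-channel batch normalization followed by ReLU, as described
# above: each channel k is normalized with its own mean and variance, scaled
# by trainable alpha_k and shifted by beta_k, then passed through max(0, .).
def channel_batchnorm_relu(F, alpha, beta, eps=1e-8):
    # F: (channels, length) feature map; normalize each channel independently
    mean = F.mean(axis=1, keepdims=True)
    var = F.var(axis=1, keepdims=True)
    F_hat = alpha[:, None] * (F - mean) / np.sqrt(var + eps) + beta[:, None]
    return np.maximum(F_hat, 0.0)  # ReLU

F = np.array([[1.0, 2.0, 3.0], [-1.0, 0.0, 1.0]])
out = channel_batchnorm_relu(F, alpha=np.ones(2), beta=np.zeros(2))
```

Note that all outputs are non-negative after the ReLU, which is what makes the subsequent channel-weighting softmax well behaved.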
Preferably, the step S3 comprises the following steps:
S3.1, although the data p1 produced by the SKNet operations characterizes the importance of each channel after the convolution layers, the existing model cannot effectively capture the correlation between local features. A Transformer is therefore introduced to operate on the image and serve as the global feature-association part for classifying signals;
the SKNet output is flattened by a to_patch operation to obtain patches x_p ∈ N x (P x P, C); each P x P patch is mapped to low-dimensional data dim by a linear layer, and the final flattened output is N x (dim, C), where (H, W) is the resolution after feature extraction, (P, P) is the resolution of each patch, dim is the linear-layer mapping size, and N is the effective sequence length of the Transformer.
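The flatten-to-patch step can be sketched with array reshapes; this is an illustrative implementation (the helper name `to_patches` and the toy 8 x 8 input are assumptions, not values from the patent).

```python
import numpy as np

# Sketch of the to_patch flattening described above: an (H, W, C) feature map
# is split into N = (H/P)*(W/P) non-overlapping P x P patches, each flattened
# to length P*P*C, giving the N x (P*P*C) token matrix fed to the linear layer.
def to_patches(x, P):
    H, W, C = x.shape
    x = x.reshape(H // P, P, W // P, P, C)  # split both spatial axes
    x = x.transpose(0, 2, 1, 3, 4)          # group the two patch indices first
    return x.reshape((H // P) * (W // P), P * P * C)

feat = np.arange(8 * 8 * 1, dtype=float).reshape(8, 8, 1)
patches = to_patches(feat, P=4)             # N = (8/4)*(8/4) = 4 patches
```

Each row of `patches` is then mapped by the linear layer to the dim-sized token the Transformer consumes.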
Preferably, the step S4 comprises the following steps:
S4.1, the Fourier-transformed time-amplitude (and likewise time-phase) two-dimensional sequence of size 256 x 2 is segmented into 16 x 2 blocks by the Transformer network, with N = 16; each 16 x 2 block is mapped by a linear layer to low-dimensional data dim, with dim = 128; the final flattened output is N x (dim, C), where C is the channel number, 1;
S4.2, global effective features are extracted by a multi-head attention mechanism. The multi-head attention mechanism splits the 128-dimensional output, after block-cutting and positional encoding, into 8 head groups; each group's features are multiplied by three randomly initialized matrices W_q, W_k, W_v to obtain the three matrices Q, K, and V, self-attention is performed within each group, and the 8 group results are spliced, where each head is expressed as:

head_i = Attention(Q_i, K_i, V_i) = softmax(Q_i · K_iᵀ / sqrt(d_k)) · V_i

MultiHead(Q, K, V) = Concat(head_0, ..., head_7) · W_O

where d_k is the input dimension number; head_0 through head_7 are connected by columns and multiplied by a randomly initialized matrix W_O (learnable parameters) to obtain the final global effective feature MultiHead(Q, K, V), denoted Z;
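The 8-head attention above can be sketched with plain matrix operations; all weight matrices here are random stand-ins for trained parameters, and the shapes (N = 16 tokens, 128 dimensions) follow the text.

```python
import numpy as np

# Sketch of the multi-head attention described above: a 128-dim token sequence
# is split into 8 heads of d_k = 16; each head computes
# softmax(Q K^T / sqrt(d_k)) V, and the heads are concatenated and projected
# by W_O to form MultiHead(Q, K, V).
rng = np.random.default_rng(0)
N, d_model, h = 16, 128, 8
d_k = d_model // h                      # 16 dimensions per head

X = rng.standard_normal((N, d_model))
Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)) * 0.1
                  for _ in range(4))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

Q, K, V = X @ Wq, X @ Wk, X @ Wv
heads = []
for i in range(h):                      # slice each projection into 8 heads
    s = slice(i * d_k, (i + 1) * d_k)
    A = softmax(Q[:, s] @ K[:, s].T / np.sqrt(d_k))
    heads.append(A @ V[:, s])
Z = np.concatenate(heads, axis=1) @ Wo  # MultiHead(Q, K, V)
```

Each attention matrix row sums to 1, so every output token is a convex combination of the value vectors within its head.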
S4.3, the obtained global effective features are input to a feed-forward neural network; a linear transformation realizes high-dimensional-to-low-dimensional conversion, further retaining the effective features, and a ReLU activation function outputs a nonlinear result;
S4.4, the output of the feed-forward neural network is normalized to obtain the final output;
S4.5, the outputs obtained separately for time phase and time amplitude undergo a Concat operation to give the final output;
S4.6, the output of step S4.5 is fused together with the output of step S3 in the Multi-Head Attention module that serves as the time-frequency-diagram Transformer.
Preferably, the step S5 comprises the following steps:
S5.1, the fused output is passed through the feed-forward neural network and normalized to obtain the final output;
S5.2, the output is processed by the fully connected layer: the normalized token sequence is mapped by the fully connected layer to the low-dimensional feature dim, with each term given by:

ŷ = Σ_i W_i · t_i + b_i

where t_i is the i-th neuron in the token sequence, W_i is the weight value of each neuron, b_i is the bias value, and ŷ is the response output of the fully connected layer;
S5.3, classification is performed on the output by a softmax layer; the mathematical model of the activation function softmax is expressed as:

p_j = exp(z_j) / Σ_{k=1..m} exp(z_k)

where z_j represents the j-th element, m is the number of categories, and p_j is the response of the activation function softmax.
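The softmax classification over the 12 waveform classes can be sketched directly from the formula above; the logit values below are illustrative, not model outputs.

```python
import math

# Sketch of the softmax layer described above: 12 logits z_j are mapped to
# probabilities p_j = exp(z_j) / sum_k exp(z_k), one per modulation class,
# and the predicted class is the arg-max probability.
def softmax(z):
    mx = max(z)                          # subtract max for numerical stability
    exps = [math.exp(v - mx) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

logits = [0.1 * j for j in range(12)]    # one logit per LPI waveform class
probs = softmax(logits)
pred = probs.index(max(probs))           # arg-max class index
```

The probabilities always sum to 1 regardless of the logit scale, which is why the cross-entropy loss in S6 can treat them as a predicted distribution.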
Preferably, the step S6 comprises the following detailed steps:
The preprocessed training-set samples are input to the training network of the radar radiation source identification network, and the network weights are updated with the Adam algorithm, as follows:

g ← ∇_θ L(θ)
m ← β_1 · m + (1 − β_1) · g
v ← β_2 · v + (1 − β_2) · g²
m̂ ← m / (1 − β_1^t),  v̂ ← v / (1 − β_2^t)
θ ← θ − α · m̂ / (sqrt(v̂) + ε)

where g denotes the gradient of the loss function L(θ); θ denotes the iteration weights; ∇_θ denotes the gradient operator; m denotes the first-moment estimate of g, initialized to 0; v denotes the second-moment estimate of g, initialized to 0; β_1 is the exponential decay rate of the first-moment estimate, with value 0.9; β_2 is the exponential decay rate of the second-moment estimate, with value 0.9; t denotes the iteration step; α is the learning rate, initially set to 0.001; ε is a smoothing constant that prevents division by zero, with value 10^-8;
a cross-entropy loss function is adopted; to avoid overfitting and prevent degradation of the network's generalization ability, an early-stopping mechanism is introduced with test accuracy as the criterion, together with learning-rate decay with the minimum learning rate set to 0. The cross-entropy loss function is expressed as:

H(p, q) = −Σ_x p(x) · log q(x)

where H(p, q) denotes the cross-entropy loss function, p(x) the true distribution of the samples, and q(x) the distribution predicted by the model; the smaller the cross-entropy loss, the closer the model's predicted distribution is to the true sample distribution. The maximum number of training rounds is set to 100 and the batch_size to 32; with test accuracy as the criterion, the network model with the highest recognition accuracy is saved.
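A single Adam update can be traced numerically; this sketch uses the hyperparameters stated above (β_1 = β_2 = 0.9 per the text, α = 0.001, ε = 10^-8) on a scalar weight, with an illustrative gradient value.

```python
import math

# Sketch of one Adam step as described above: m and v are the first- and
# second-moment estimates of the gradient g, bias-corrected by the step
# count t before the weight theta is updated.
def adam_step(theta, g, m, v, t, alpha=0.001, b1=0.9, b2=0.9, eps=1e-8):
    m = b1 * m + (1 - b1) * g            # first-moment estimate
    v = b2 * v + (1 - b2) * g * g        # second-moment estimate
    m_hat = m / (1 - b1 ** t)            # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - alpha * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
theta, m, v = adam_step(theta, g=2.0, m=m, v=v, t=1)
```

At t = 1 the bias correction makes m̂ = g and v̂ = g², so the first step moves θ by almost exactly α regardless of the gradient magnitude.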
Preferably, step S7 comprises the following detailed steps: the divided test set is placed into the trained network to obtain the final result.
The LPI radar signal spectrogram fusion identification method disclosed by the invention has the following beneficial effects: 1. An SKNet network that characterizes the weights of the convolved channels performs feature extraction on the time-frequency diagram, reflecting the importance of each channel within the total extracted features and further enhancing the fine feature characterization of signals. Combining it with the Transformer network unites local features with global features, giving more comprehensive consideration, so the network learns more features of interest, subsequent classification is more accurate, and noise immunity is enhanced.
2. Through sequence fusion, the time-phase and time-amplitude sequences obtained by Fourier transformation are fused, via the Transformer network, with the time-frequency-diagram features extracted by SKNet. This strengthens the network's characterization capability, alleviates the problems that the features of a single time-frequency diagram are not distinctive and that time-frequency diagrams of phase-modulation schemes are difficult to identify, and, without sacrificing much computational cost, considers the many-sided features of different LPI radar signals, improving the network's identification performance.
Drawings
FIG. 1 is a diagram of the LPI radar signal recognition structure with multi-Transformer model fusion in the LPI radar signal spectrogram fusion identification method of the present invention.
FIG. 2 is a schematic diagram of time-frequency diagram feature extraction (SKNet) in the LPI radar signal spectrogram fusion identification method of the present invention.
Detailed Description
The invention is further illustrated and described below in conjunction with the specific embodiments and the accompanying drawings:
referring to fig. 1 and 2, an LPI radar signal spectrogram fusion recognition method includes the following steps:
s1, constructing a data set; 12 different LPI radar signals are generated through MATLAB simulation, the LPI radar signals comprise COSTAS, LFM, frank, BPSK, P-P4 codes and T1-T4 codes, time-frequency diagram, time phase and time amplitude sequences of the signals are obtained through time-frequency transformation and Fourier transformation of the signals, and a single sample comprises three data of the time-frequency diagram, the time phase and the time amplitude, and according to the number of samples 8:2 is divided into a training set and a testing set, and median filtering data preprocessing is carried out on all sample data.
The method comprises the following specific steps:
S1.1, 12 different LPI radar signals are generated by MATLAB simulation, comprising COSTAS, LFM, Frank, BPSK, P1-P4 codes, and T1-T4 codes; the in-phase component (I channel) and quadrature component (Q channel) are extracted, and the I and Q data are each stored as sampled signals of length N, where N ranges from 600 to 1200. The signal parameters are as follows:
Choi-Williams distribution time-frequency processing is then applied to the signals to obtain the signal time-frequency-diagram data set; Fourier transformation of the signals yields the time-phase and time-amplitude sequence data sets. The images are set to size 256 x 256 by image cropping; Gaussian white noise is added during simulation, with the signal-to-noise ratio starting from -10 dB in 2 dB intervals, and 1500 samples are generated per signal at each SNR.
S1.2, a training set and a test set are established from the signal samples, keeping the training-to-test ratio at 8:2, with data and labels randomly shuffled correspondingly. The training data set is denoted D = {(x_i, y_k)}, i ∈ [1, n], k ∈ [1, c], where x_i denotes the i-th sample, y_k indicates that the sample belongs to the k-th class, c target classes are collected in total, and n denotes the total number of samples;
S1.3, each sample comprises a time-frequency diagram, a time-phase sequence, and a time-amplitude sequence. The time-frequency image is scanned with a median-filter window of size 3 x 3; for example, for the window data {40, 107, 5, 198, 226, 223, 37, 68, 193}, sorting yields the median 107, which determines the final value of that point. The image boundary is zero-padded: an image of size [m, n] becomes [m+2k, n+2k] after padding, where 2k+1 is the window size, and the preprocessed image is obtained by median filtering.
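The worked example above can be checked directly: sorting the nine window values and taking the middle one reproduces the stated median.

```python
# Check of the median-filter example given above: for the 3 x 3 window values
# {40, 107, 5, 198, 226, 223, 37, 68, 193}, the sorted middle value is the
# median, which becomes the filtered value of the centre pixel.
window = [40, 107, 5, 198, 226, 223, 37, 68, 193]
median = sorted(window)[len(window) // 2]   # middle of the 9 sorted values
```

Because the window always holds an odd number of values (9 here), the median is a single element with no averaging needed.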
S2, performing feature extraction on the preprocessed time-frequency picture data by using the SKNet module, extracting local features of a spectrogram, obtaining importance of each channel, performing characterization, and outputting a feature image with multiple scales.
The method comprises the following specific steps:
First, the data passes through a convolution layer with input channels 1, output channels 64, kernel size 3 x 3, stride 1, and padding 0, then through two stages of SKNet operations. The first stage comprises three SK units. SK unit 1 first passes through a convolution layer with output channels 128 and stride and padding 1 to obtain output Z1. Z1 is decomposed through two convolution layers with two different kernels, one 3 x 3 and one 5 x 5, each with 64 output channels; padding is set to 1 for the 3 x 3 convolution layer and 2 for the 5 x 5 convolution layer.
The two outputs are superimposed and the per-channel mean is computed; a fully connected (fc) operation outputs one-dimensional data of half the channel count, which fc expands back to the channel count per branch, and a softmax function gives the final weight values of the two channels. The two final convolution-layer outputs are multiplied by their weights and superimposed along the channels; finally, a convolution layer with 64 x 2 output channels gives output Z2, which is added to Z1 to obtain the output of SK unit 1. SK unit 2 performs the same operations as SK unit 1, except that its input and output channels are both set to 128; since the input channels equal the output channels, no addition operation is performed. SK unit 3 operates the same as SK unit 2.
Each SK unit is followed by a ReLU activation function and a batch normalization operation. The second stage takes the output of the first stage as input, sets the channels to 256, and performs the same operations as the first stage to obtain the output of SKNet. Batch normalization adopts:
F̂_n(k,l) = α_k · (F_n(k,l) − E[F_n(k,·)]) / sqrt(Var[F_n(k,·)] + ε) + β_k

where F_n(k,l) denotes the l-th element in the k-th channel of the image's convolution-layer output before batch normalization, F̂_n(k,l) is the image data after batch normalization, α_k and β_k are trainable parameters for the k-th channel, ε is a small number of size 10^-8 that prevents division by zero, E is the mean operation, and Var is the variance operation;
the activation function employs the ReLU function:

F̂_n(k,l) = max(0, F_n(k,l))

where F_n(k,l) is the input, F̂_n(k,l) is the response output of the ReLU, and n denotes the index of the convolution layer.
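The selective-kernel weighting inside each SK unit can be sketched with plain arrays; the fc weights below are random stand-ins for trained parameters, and the flattened spatial size L = 32 is an illustrative assumption.

```python
import numpy as np

# Sketch of the SK channel weighting described above: the 3x3 and 5x5 branch
# outputs are summed, the per-channel mean is reduced by one fc layer to half
# the channel count and expanded back once per branch, and a softmax across
# the two branches gives per-channel weights that recombine the branches.
rng = np.random.default_rng(1)
C, L = 64, 32                              # channels, flattened spatial size
U1 = rng.standard_normal((C, L))           # 3 x 3-kernel branch output
U2 = rng.standard_normal((C, L))           # 5 x 5-kernel branch output

s = (U1 + U2).mean(axis=1)                 # per-channel mean of the sum
W_reduce = rng.standard_normal((C // 2, C)) * 0.1
W_expand = rng.standard_normal((2, C, C // 2)) * 0.1
z = np.maximum(W_reduce @ s, 0.0)          # fc down to C/2, then ReLU
logits = W_expand @ z                      # (2, C): one logit set per branch
weights = np.exp(logits) / np.exp(logits).sum(axis=0)  # softmax over branches
out = weights[0][:, None] * U1 + weights[1][:, None] * U2
```

For every channel the two branch weights sum to 1, so the output is a per-channel convex blend of the 3 x 3 and 5 x 5 receptive fields.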
S3, for the feature images extracted through the SKNnet module, the output features processed through the channel attention mechanism module are processed through a transducer, and the problem that the overall modeling capacity of the SKNnet local feature extraction is weak is solved.
The step S3 comprises the following detailed steps:
S3.1, although the data p1 produced by the SKNet operations characterizes the importance of each channel after the convolution layers, the existing model cannot effectively capture the correlation between local features. A Transformer is therefore introduced to operate on the image and serve as the global feature-association part for classifying signals;
the SKNet output is flattened by a to_patch operation to obtain patches x_p ∈ N x (P x P, C); each P x P patch is mapped to low-dimensional data dim by a linear layer, and the final flattened output is N x (dim, C), where (H, W) is the resolution after feature extraction, (P, P) is the resolution of each patch, dim is the linear-layer mapping size, and N is the effective sequence length of the Transformer.
S4, forming two one-dimensional sequences by the two-dimensional sequences of the time phase and the time amplitude after Fourier transformation through a transducer network, and inputting the two-dimensional sequences into a Multi-Head Attention sub-module in the transducer network of S3 through Contact operation for fusion processing.
The step S4 comprises the following detailed steps:
S4.1, the Fourier-transformed time-amplitude (and likewise time-phase) two-dimensional sequence of size 256 x 2 is segmented into 16 x 2 blocks by the Transformer network, with N = 16; each 16 x 2 block is mapped by a linear layer to low-dimensional data dim, with dim = 128; the final flattened output is N x (dim, C), where C is the channel number, 1;
S4.2, global effective features are extracted by a multi-head attention mechanism, which splits the 128-dimensional output, after block-cutting and positional encoding, into 8 head groups; each group's features are multiplied by three randomly initialized matrices W_q, W_k, W_v to obtain the three matrices Q, K, and V, self-attention is performed within each group, and the 8 group results are spliced, where each head_i is expressed as:

head_i = Attention(Q_i, K_i, V_i) = softmax(Q_i · K_iᵀ / sqrt(d_k)) · V_i

MultiHead(Q, K, V) = Concat(head_0, ..., head_7) · W_O

where d_k is the input dimension number; head_0 through head_7 are connected by columns and multiplied by a randomly initialized matrix W_O (learnable parameters) to obtain the final global effective feature MultiHead(Q, K, V), denoted Z;
S4.3, the obtained global effective features are input to a feed-forward neural network; a linear transformation realizes high-dimensional-to-low-dimensional conversion, further retaining the effective features, and a ReLU activation function outputs a nonlinear result;
S4.4, the output of the feed-forward neural network is normalized to obtain the final output;
S4.5, the outputs obtained separately for time phase and time amplitude undergo a Concat operation to give the final output;
S4.6, the output of step S4.5 is fused together with the output of step S3 in the Multi-Head Attention module that serves as the time-frequency-diagram Transformer.
S5, performing fusion processing on the fused transducer network, obtaining 12 one-dimensional outputs through the full-connection layer, and classifying 12 signals.
The step S5 comprises the following detailed steps:
S5.1, the fused output is passed through the feed-forward neural network and normalized to obtain the final output;
S5.2, the output is processed by the fully connected layer: the normalized token sequence is mapped by the fully connected layer to the low-dimensional feature dim, with each term given by:

ŷ = Σ_i W_i · t_i + b_i

where t_i is the i-th neuron in the token sequence, W_i is the weight value of each neuron, b_i is the bias value, and ŷ is the response output of the fully connected layer;
S5.3, classification is performed on the output by a softmax layer; the mathematical model of the activation function softmax is expressed as:

p_j = exp(z_j) / Σ_{k=1..m} exp(z_k)

where z_j represents the j-th element, m is the number of categories, and p_j is the response of the activation function softmax.
And S6, training a radar radiation source identification network model through training set data.
The step S6 comprises the following detailed steps:
The preprocessed training-set samples are input to the training network of the radar radiation source identification network, and the network weights are updated with the Adam algorithm, as follows:

g ← ∇_θ L(θ)
m ← β_1 · m + (1 − β_1) · g
v ← β_2 · v + (1 − β_2) · g²
m̂ ← m / (1 − β_1^t),  v̂ ← v / (1 − β_2^t)
θ ← θ − α · m̂ / (sqrt(v̂) + ε)

where g denotes the gradient of the loss function L(θ); θ denotes the iteration weights; ∇_θ denotes the gradient operator; m denotes the first-moment estimate of g, initialized to 0; v denotes the second-moment estimate of g, initialized to 0; β_1 is the exponential decay rate of the first-moment estimate, with value 0.9; β_2 is the exponential decay rate of the second-moment estimate, with value 0.9; t denotes the iteration step; α is the learning rate, initially set to 0.001; ε is a smoothing constant that prevents division by zero, with value 10^-8;
adopting a cross entropy loss function; to avoid the occurrence of overfitting to prevent degradation of the generalization ability of the network; the cross entropy loss function is expressed as follows:
where H(p, q) denotes the cross-entropy loss function, p(x) denotes the true distribution of the samples, and q(x) denotes the distribution predicted by the model; the smaller the cross-entropy loss, the closer the model's predicted distribution is to the true distribution of the samples. An early-stopping mechanism is introduced with test accuracy as the criterion, and learning rate decay is introduced with the minimum learning rate set to 0; the maximum number of training rounds is set to 100 and the batch_size to 32; with test accuracy as the criterion, the network model with the highest recognition accuracy is saved.
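The cross-entropy criterion above can be illustrated with a minimal pure-Python sketch; the distributions below are illustrative, not from the patent:

```python
import math

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum_x p(x) * log q(x); eps guards against log(0)."""
    return -sum(px * math.log(qx + eps) for px, qx in zip(p, q))

true = [0, 0, 1, 0]                 # one-hot true distribution p(x)
good = [0.05, 0.05, 0.85, 0.05]     # prediction close to the truth
bad  = [0.40, 0.30, 0.10, 0.20]     # prediction far from the truth
```

As the text states, the prediction closer to the true distribution yields the smaller loss.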
S7, the test set is sent into the model trained in step S6 for testing; step S7 comprises: placing the divided test set into the trained network to obtain the final result.
The invention provides an LPI radar signal spectrogram fusion identification method. 1. An SKNet network, which characterizes the weights of the convolved channels, is adopted for feature extraction on the time-frequency diagram; it reflects the importance of each channel within the overall extracted feature and further enhances the fine-grained feature representation of the signal. Combined with a Transformer network, local features are fused with global features for a more comprehensive view, so the network learns more discriminative features, the subsequent classification task is more accurate, and noise resistance is enhanced.
2. The time-phase and time-amplitude sequences obtained through Fourier transform are fused, by a Transformer network in a sequence-fusion manner, with the time-frequency diagram features extracted by SKNet, which strengthens the characterization capability of the network. This alleviates the problems that the features of a single time-frequency diagram are not distinctive and that the time-frequency diagrams of phase-modulated signals are difficult to identify, and it improves the recognition performance by considering the multi-faceted features of different LPI radar signals without incurring a large computational cost.
Finally, it should be noted that the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the scope of the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (8)

1. The LPI radar signal spectrogram fusion identification method is characterized by comprising the following steps of:
S1, constructing a data set: 12 different LPI radar signals are generated through MATLAB simulation, comprising COSTAS, LFM, Frank, BPSK, P1-P4 codes and T1-T4 codes; the time-frequency diagram, time-phase and time-amplitude sequences of each signal are obtained through time-frequency transformation and Fourier transformation, so a single sample comprises three pieces of data: the time-frequency diagram, the time phase and the time amplitude; the samples are divided into a training set and a test set at a ratio of 8:2 by sample count, and median-filtering preprocessing is applied to all sample data;
S2, feature extraction is performed on the preprocessed time-frequency picture data with an SKNet module, which extracts local features of the spectrogram, obtains and characterizes the importance of each channel, and outputs a multi-scale feature image;
S3, the feature images extracted through the SKNet module, i.e. the output features processed by the channel attention mechanism module, are processed through a Transformer, which compensates for the weak global modeling capability of the SKNet local feature extraction;
S4, the two-dimensional time-phase and time-amplitude sequences obtained by Fourier transform are formed into two one-dimensional sequences through a Transformer network, and are input via a Concat operation into the Multi-Head Attention sub-module of the Transformer network of S3 for fusion processing;
S5, the output of the fused Transformer network is passed through a fully connected layer to obtain 12 one-dimensional outputs, classifying the 12 signal types;
s6, training a radar radiation source identification network model through training set data;
and S7, sending the test set into the model trained in the step S6 for testing.
2. The LPI radar signal spectrogram fusion identification method of claim 1, wherein the step S1 comprises the following steps:
S1.1, 12 different LPI radar signals are generated by MATLAB simulation; the in-phase component (I channel) and quadrature component (Q channel) are extracted, and the I-channel and Q-channel data are each stored as sampled signals of length N, where N ranges from 600 to 1200; Choi-Williams distribution time-frequency processing is applied to the signals to obtain a signal time-frequency diagram data set; Fourier transformation is applied to the signals to obtain their two-dimensional time-phase and two-dimensional time-amplitude sequences; the obtained time-frequency images are set to a size of 256×256 with an image cropping technique; Gaussian white noise is added during simulation, with the signal-to-noise ratio starting at −10 dB at an interval of 2 dB, and 1500 samples are generated for each signal at each signal-to-noise ratio;
S1.2, a training set and a test set are established from the signal samples: the training set to test set ratio is maintained at 7:3, the data and labels are randomly shuffled in correspondence, and the training data set is denoted D = {(x_i, y_k)}, i ∈ [1, n], k ∈ [1, c], where x_i denotes the i-th sample, y_k indicates that the sample belongs to the k-th class, c target classes are collected in total, and n denotes the total number of samples;
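The 7:3 split with paired shuffling of data and labels described in step S1.2 can be sketched as follows; the sample names and the fixed seed are illustrative assumptions:

```python
import random

def split_dataset(samples, labels, train_ratio=0.7, seed=0):
    """Shuffle sample/label pairs together, then split 7:3 as in step S1.2."""
    pairs = list(zip(samples, labels))
    random.Random(seed).shuffle(pairs)   # shuffling pairs keeps x_i and y_k aligned
    cut = int(len(pairs) * train_ratio)
    return pairs[:cut], pairs[cut:]

samples = [f"sample_{i}" for i in range(10)]   # stand-ins for time-frequency samples
labels = [i % 2 for i in range(10)]            # stand-ins for class indices y_k
train, test = split_dataset(samples, labels)
```

Shuffling the zipped pairs, rather than the two lists independently, is what keeps each label attached to its sample.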
S1.3, for all samples, where one sample comprises a time-frequency diagram, a time-phase sequence and a time-amplitude sequence, a median-filtering preprocessing method is adopted for the time-frequency diagram: the window size is set to 3×3, the image is scanned, and the image boundary is zero-padded, so that an image of size [m, n] becomes [m+2k, n+2k] after padding, where 2k+1 is the window size; the preprocessed image is obtained through median filtering.
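The zero-padded 3×3 median filtering of step S1.3 can be sketched in pure Python; the 3×3 test image is illustrative, whereas real inputs are 256×256 time-frequency images:

```python
def median_filter(img, k=1):
    """Median filter with window 2k+1 (3x3 for k=1) and zero padding, as in step S1.3."""
    m, n = len(img), len(img[0])
    # zero-pad: [m, n] -> [m + 2k, n + 2k]
    pad = [[0] * (n + 2 * k) for _ in range(m + 2 * k)]
    for i in range(m):
        for j in range(n):
            pad[i + k][j + k] = img[i][j]
    out = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            window = [pad[i + di][j + dj]
                      for di in range(2 * k + 1) for dj in range(2 * k + 1)]
            out[i][j] = sorted(window)[len(window) // 2]   # median of the window
    return out

# a single bright impulse (noise speckle) is removed by the median
img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
filtered = median_filter(img)
```

The isolated impulse disappears because the median of each 3×3 window is dominated by the surrounding zeros; this is why median filtering suppresses speckle noise while preserving edges.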
3. The LPI radar signal spectrogram fusion identification method of claim 2, wherein the step S2 comprises the following steps:
The preprocessed time-frequency picture first passes through a convolution layer whose input channel is set to 1, output channels to 64, convolution kernel size to 3×3, stride to 1 and padding to 0; SKNet then operates in two stages, the first stage comprising three SK units. SK unit 1 first passes through a convolution layer with 128 output channels and stride and padding of 1 to obtain output Z1; Z1 is decomposed through two convolution branches with kernels of 3×3 and 5×5 respectively, both with 64 output channels, the first branch with padding 1 and the second with padding 2. The two branch outputs are summed, the mean of each channel is taken, a fully connected layer fc reduces this to one-dimensional data of half the channel count, fc expansion restores it to the channel count, and a softmax function yields the final weights of the two branches. The two branch outputs, multiplied by their weights, are superposed along the channel dimension; finally a convolution layer with 64×2 output channels gives output Z2, and Z2 is added to Z1 to give the output of SK unit 1. SK unit 2 performs the same operations as SK unit 1, but its input channels are set to 128; since the input channels equal the output channels, no addition operation is performed. SK unit 3 performs the same operations as SK unit 2. A ReLU activation function and a batch normalization operation follow each SK unit. In the second stage, the output of the first stage is used as input, the channels are set to 256, and the same operations as in the first stage are performed to obtain the output of SKNet. Batch normalization adopts:

F̂_n(k, l) = α_k · (F_n(k, l) − E[F_n(k, ·)]) / √(Var[F_n(k, ·)] + ε) + β_k

where F_n(k, l) denotes the l-th element in the k-th channel of the convolution layer output before batch normalization, F̂_n(k, l) is the image data after batch normalization, α_k and β_k are the trainable parameters corresponding to the k-th channel, ε is a small number of size 10e-8 that prevents the divisor from being 0, E is the mean operation, and Var is the variance operation;
The activation function employs the ReLU function:

F̃_n(k, l) = max(0, F̂_n(k, l))

where F̂_n(k, l) is the input, F̃_n(k, l) is the response output of the ReLU, and n denotes the index of the convolution layer; after the four convolution module layers, the time-frequency diagram of one sample is output and denoted p1.
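The batch normalization and ReLU of claim 3 can be illustrated on a single channel; taking the trainable parameters α_k = 1 and β_k = 0 is an assumption for the sketch:

```python
import math

def batch_norm(x, alpha_k=1.0, beta_k=0.0, eps=1e-8):
    """Per-channel normalization: alpha_k * (x - E[x]) / sqrt(Var[x] + eps) + beta_k."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [alpha_k * (v - mean) / math.sqrt(var + eps) + beta_k for v in x]

def relu(x):
    """Element-wise max(0, x)."""
    return [max(0.0, v) for v in x]

channel = [2.0, 4.0, 6.0, 8.0]      # one channel of illustrative activations
normed = batch_norm(channel)        # zero mean, approximately unit variance
activated = relu(normed)            # negative values clipped to 0
```

After normalization the channel has zero mean, and the ReLU then zeroes the negative half.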
4. The LPI radar signal spectrogram fusion identification method of claim 3, wherein the step S3 comprises the following steps:
S3.1, although the data p1 produced by the SKNet operation can, after the convolution layers, represent the importance of each channel, the existing model cannot effectively capture the correlations between local features; a Transformer is therefore introduced to operate on the image and serve as the global feature-correlation part for classifying the signals;
for output through SKNetFlattening to_patch processing is carried out to obtain patches x_p epsilon N× (P×P, C), the P×P is mapped into low-dimensional data dim through linear layer mapping, and final flattening output is N× (dim, C), wherein (H, W) is resolution after feature extraction, (P, P) is resolution of each patch, dim is linear layer mapping size, and N is effective sequence length of a transducer.
5. The LPI radar signal spectrogram fusion identification method of claim 4, wherein the step S4 comprises the following steps:
S4.1, the two-dimensional time-amplitude and time-phase sequences of size 256×2 after Fourier transform are segmented into 16×2 patches through the Transformer network, so N is 16; each 16×2 patch is mapped to low-dimensional data dim through a linear layer, with dim set to 128; the final flattened output is N × (dim, C), where C is the channel number, 1;
S4.2, global effective features are extracted through a multi-head attention mechanism: the 128-dimensional outputs of the patch segmentation and the position encoding are divided into 8 groups of heads, and each one-dimensional feature is multiplied by three randomly initialized matrices W_q, W_k, W_V to obtain the three matrices Q, K and V; self-attention is performed within each group and the results of the 8 groups are spliced, where each head_i is expressed as:

head_i = Attention(Q_i, K_i, V_i) = softmax(Q_i·K_i^T / √d_k)·V_i

MultiHead(Q, K, V) = Concat(head_0, ..., head_7)·W_O

where d_k is the input dimension; head_0 through head_7 are connected by columns and multiplied by the randomly initialized matrix W_O (a learnable parameter) to obtain the final global effective feature MultiHead(Q, K, V), denoted Z;
S4.3, the obtained global effective features are input into a forward neural network, where a linear transformation realizes the conversion from high dimension to low dimension and further retains the effective features, and a ReLU activation function outputs a nonlinear result;
S4.4: the output of the forward neural network is normalized to obtain the final output;
S4.5: the outputs obtained separately for the time phase and the time amplitude are combined through a Concat operation to obtain the final fused output;
S4.6: the output of step S4.5, together with the output of step S3, is fed into the Multi-Head Attention module of the time-frequency diagram Transformer for fusion processing.
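The per-head scaled dot-product attention of step S4.2 can be sketched for a single head with tiny matrices; the 8-head split and the W_O projection are omitted here for brevity:

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of scores."""
    zmax = max(z)
    exps = [math.exp(v - zmax) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V for one head; matrices as lists of row vectors."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)                       # attention weights over keys
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])         # weighted sum of values
    return out

Q = [[1.0, 0.0]]                  # one query, d_k = 2
K = [[1.0, 0.0], [0.0, 1.0]]      # two keys
V = [[1.0], [0.0]]                # two values
Z = attention(Q, K, V)            # query aligns with the first key, so Z leans toward V[0]
```

Because the query matches the first key, its attention weight dominates and the output lies between the two values but closer to V[0].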
6. The LPI radar signal spectrogram fusion identification method of claim 5, wherein the step S5 comprises the following steps:
S5.1, the output of the forward neural network is normalized, giving the final output;
S5.2, the normalized output is processed through a fully connected layer, which maps the normalized token sequence to a low-dimensional feature dim; each term of dim is given by:

y_i = W_i · x_i + b_i

where i indexes the i-th neuron in the token sequence, W_i is the weight value of each neuron, b_i is the bias value, and y_i is the response output of the fully connected layer;
S5.3, classification is performed on the fully connected layer output by a softmax layer; the mathematical model of the activation function softmax is expressed as:

p_j = exp(z_j) / Σ_{k=1}^{m} exp(z_k)

where z_j denotes the j-th element, m is the number of categories, and p_j is the response of the activation function softmax.
7. The LPI radar signal spectrogram fusion identification method of claim 6, wherein the step S6 comprises the following steps:
The preprocessed training set samples are input into the radar radiation source identification network for training, and the network weights are updated with the Adam algorithm; the Adam algorithm is as follows:
g ← ∇_θ L(θ)
m ← β_1·m + (1 − β_1)·g
ν ← β_2·ν + (1 − β_2)·g^T g
θ ← θ − α·m / (√ν + ε)
where g denotes the gradient of the loss function L(θ); θ denotes the iteration weights; ∇_θ denotes the gradient operator; m denotes the first-moment estimate of g, initialized to 0; ν denotes the second-moment estimate of g, initialized to 0; β_1 is the exponential decay rate of the first-moment estimate, set to 0.9; β_2 is the exponential decay rate of the second-moment estimate, set to 0.9; T denotes a transpose operation; α is the learning rate, initially set to 0.001; ε is a smoothing constant that prevents the divisor from being 0, with value 10e-8;
A cross-entropy loss function is adopted to avoid overfitting and prevent degradation of the network's generalization ability; the cross-entropy loss function is expressed as follows:

H(p, q) = −Σ_x p(x) · log q(x)
where H(p, q) denotes the cross-entropy loss function, p(x) denotes the true distribution of the samples, and q(x) denotes the distribution predicted by the model; the smaller the cross-entropy loss, the closer the model's predicted distribution is to the true distribution of the samples. An early-stopping mechanism is introduced with test accuracy as the criterion, and learning rate decay is introduced with the minimum learning rate set to 0; the maximum number of training rounds is set to 100 and the batch_size to 32; with test accuracy as the criterion, the network model with the highest recognition accuracy is saved.
8. The LPI radar signal spectrogram fusion identification method of claim 7, wherein the step S7 comprises: placing the divided test set into the trained network to obtain the final result.
CN202311298963.XA 2023-01-06 2023-10-09 LPI radar signal spectrogram fusion identification method Pending CN117331031A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310018126 2023-01-06
CN2023100181260 2023-01-06

Publications (1)

Publication Number Publication Date
CN117331031A true CN117331031A (en) 2024-01-02

Family

ID=89276789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311298963.XA Pending CN117331031A (en) 2023-01-06 2023-10-09 LPI radar signal spectrogram fusion identification method

Country Status (1)

Country Link
CN (1) CN117331031A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117743946A (en) * 2024-02-19 2024-03-22 山东大学 Signal type identification method and system based on fusion characteristics and group convolution ViT network
CN117743946B (en) * 2024-02-19 2024-04-30 山东大学 Signal type identification method and system based on fusion characteristic and group convolution ViT network
CN118520335A (en) * 2024-07-18 2024-08-20 长鹰恒容电磁科技(成都)有限公司 Radar signal identification system, method, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107220606B (en) Radar radiation source signal identification method based on one-dimensional convolutional neural network
CN117331031A (en) LPI radar signal spectrogram fusion identification method
Chen et al. Spatial–temporal convolutional gated recurrent unit network for significant wave height estimation from shipborne marine radar data
Cain et al. Convolutional neural networks for radar emitter classification
CN112990334A (en) Small sample SAR image target identification method based on improved prototype network
Ghadimi et al. Deep learning-based approach for low probability of intercept radar signal detection and classification
CN113673312B (en) Deep learning-based radar signal intra-pulse modulation identification method
CN109711314B (en) Radar radiation source signal classification method based on feature fusion and SAE
CN114636975A (en) LPI radar signal identification method based on spectrogram fusion and attention mechanism
CN112990082B (en) Detection and identification method of underwater sound pulse signal
Wei et al. Intra-pulse modulation radar signal recognition based on Squeeze-and-Excitation networks
CN114895263A (en) Radar active interference signal identification method based on deep migration learning
Nuhoglu et al. Image segmentation for radar signal deinterleaving using deep learning
Bhatti et al. Radar signals intrapulse modulation recognition using phase-based stft and bilstm
CN116797796A (en) Signal identification method based on time-frequency analysis and deep learning under DRFM intermittent sampling
CN116797846A (en) Method and device for identifying small sample radar radiation source based on RoAtten-PN network
Ye et al. Recognition algorithm of emitter signals based on PCA+ CNN
CN114898773A (en) Synthetic speech detection method based on deep self-attention neural network classifier
CN114296041A (en) Radar radiation source identification method based on DCNN and Transformer
Yao et al. Wideband DOA estimation based on deep residual learning with Lyapunov stability analysis
Delamou et al. Deep learning-based estimation for multitarget radar detection
Yang et al. A Lightweight Theory-Driven Network and Its Validation on Public Fully-Polarized Ship Detection Dataset
CN112686297A (en) Radar target motion state classification method and system
CN116894207A (en) Intelligent radiation source identification method based on Swin transducer and transfer learning
CN115267713B (en) Intermittent sampling interference identification and suppression method based on semantic segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination