CN112906591A - Radar radiation source identification method based on multi-stage jumper residual error network - Google Patents

Radar radiation source identification method based on multi-stage jumper residual error network

Info

Publication number
CN112906591A
CN112906591A
Authority
CN
China
Prior art keywords
residual error
image
radiation source
time
residual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110232097.9A
Other languages
Chinese (zh)
Inventor
闫文君
谭凯文
凌青
张立民
张兵强
徐涛
方君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
School Of Aeronautical Combat Service Naval Aeronautical University Of People's Liberation Army
Original Assignee
School Of Aeronautical Combat Service Naval Aeronautical University Of People's Liberation Army
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by School Of Aeronautical Combat Service Naval Aeronautical University Of People's Liberation Army filed Critical School Of Aeronautical Combat Service Naval Aeronautical University Of People's Liberation Army
Priority to CN202110232097.9A priority Critical patent/CN112906591A/en
Publication of CN112906591A publication Critical patent/CN112906591A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching


Abstract

The invention discloses a radar radiation source identification method based on a multi-stage jumper residual error network, belonging to the field of image processing. The method comprises the following steps: performing a time-frequency transform on the radar radiation source signal to generate a time-frequency image of the signal; preprocessing the time-frequency image with an ImageDataGenerator to obtain a one-dimensional time-frequency image; and inputting the one-dimensional time-frequency image into a depth residual error network for feature extraction and signal classification, thereby identifying the radar radiation source. In this method, eight sequentially connected residual error blocks are arranged in the depth residual error network and joined by jumpers into four residual error units, so that the network, with 18 convolution layers in total, can extract the deep-level information of the signal time-frequency image while avoiding problems such as gradient vanishing and gradient explosion.

Description

Radar radiation source identification method based on multi-stage jumper residual error network
Technical Field
The invention relates to the field of image processing, in particular to a radar radiation source identification method based on a multi-stage jumper residual error network.
Background
With the rapid development of artificial intelligence technology in recent years, the intelligent processing of radar signals has become an important development trend. Deep learning has made breakthrough progress in image classification, computer vision, language processing and the like by virtue of its automatic feature extraction and strong model generalization.
In recent years, feature extraction methods based on machine learning have greatly improved on traditional algorithms, but the following defects remain: first, the characteristics of the signal are not fully considered, which easily causes feature loss; second, deep feature learning ability and recognition performance at low signal-to-noise ratio still need improvement; third, too many input features drive the number of network layers ever higher, resulting in the "curse of dimensionality".
Disclosure of Invention
In order to solve at least one aspect of the above problems and disadvantages in the related art, the present invention provides a radar radiation source identification method based on a multi-stage jumper residual error network. The technical scheme is as follows:
the invention aims to provide a radar radiation source identification method based on a multi-stage jumper residual error network.
According to one aspect of the invention, a radar radiation source identification method based on a multi-stage jumper residual error network is provided, and comprises the following steps:
step S1: performing time-frequency transformation on the radar radiation source signal to generate a time-frequency image of the radar radiation source signal;
step S2: preprocessing the time-frequency image through an ImageDataGenerator to obtain a one-dimensional time-frequency image;
step S3: inputting the one-dimensional time-frequency image into a depth residual error network for feature extraction and signal classification so as to identify a radar radiation source;
the depth residual error network comprises a plurality of residual error units and a full connection layer; each residual error unit comprises two residual error blocks connected end to end, the two residual error blocks being bridged by a beta-times jumper; each of the two residual error blocks comprises, arranged in sequence, a convolution layer, a batch normalization layer, an activation layer, a convolution layer, a batch normalization layer and an activation layer, and is spanned by a λi-times jumper. In the depth residual error network, the convolution kernels of different residual error units are different, while the convolution kernels of the four convolution layers within the same residual error unit are set the same.
Specifically, in step S1, performing time-frequency transformation on the radar radiation source signal includes the following steps:
step S11: performing Hilbert transform on the radar radiation source signal to obtain an analytic signal;
step S12: and performing time-frequency transformation on the analytic signal through smooth pseudo Wigner-Ville distribution to generate the time-frequency image.
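Steps S11 and S12 can be sketched in Python. scipy provides the Hilbert transform but no built-in smoothed pseudo Wigner-Ville distribution, so a minimal direct implementation is shown below; the window types, window lengths and FFT size are illustrative choices, not values taken from the patent.

```python
import numpy as np
from scipy.signal import hilbert, get_window

def spwvd(x, n_fft=128, g_len=5, h_len=33):
    """Smoothed pseudo Wigner-Ville distribution of a real signal.

    g is the time-smoothing window, h the frequency-smoothing (lag) window.
    Returns an (n_fft, len(x)) real time-frequency image.
    """
    z = hilbert(np.asarray(x, dtype=float))   # step S11: analytic signal
    N = len(z)
    g = get_window("hamming", g_len)
    g = g / g.sum()                           # normalized time-smoothing window
    h = get_window("hamming", h_len)
    Lg, Lh = g_len // 2, h_len // 2
    r = np.zeros((n_fft, N), dtype=complex)
    for n in range(N):
        tau_max = min(Lh, n_fft // 2 - 1, n, N - 1 - n)
        for tau in range(-tau_max, tau_max + 1):
            acc = 0.0 + 0.0j
            for s in range(-Lg, Lg + 1):      # time smoothing by g around n
                m = n + s
                if 0 <= m + tau < N and 0 <= m - tau < N:
                    acc += g[s + Lg] * z[m + tau] * np.conj(z[m - tau])
            r[tau % n_fft, n] = h[tau + Lh] * acc
    return np.real(np.fft.fft(r, axis=0))     # step S12: lag axis -> frequency
```

For a pure tone at normalized frequency f0, the energy concentrates near frequency bin 2·f0·n_fft; the factor 2 comes from the bilinear Wigner-Ville kernel z(n+τ)z*(n−τ).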
Specifically, in step S2, the preprocessing the time-frequency image by the ImageDataGenerator includes the following steps:
step S21: converting the time-frequency image into a single-channel gray image through gray processing;
step S22: performing a morphological opening operation on the single-channel gray-scale image to obtain an enhanced energy signal image;
step S23: carrying out mean value filtering on the enhanced energy signal image to obtain a noise-reduced signal image;
step S24: and sequentially carrying out normalization and size resetting on the denoised signal image to obtain a one-dimensional time-frequency image.
Further, in step S23, the mean filtering processes noise through a mean filter;
in step S24, the time-frequency image (originally a three-channel time-frequency image) has its size reset by a bicubic interpolation method so as to obtain a high-resolution one-dimensional time-frequency image.
Specifically, in step S22, the opening operation includes the steps of:
step S221: applying an erosion operation to the single-channel gray-scale map to obtain an image with reduced linear noise;
step S222: processing the low-linear-noise image through dilation to obtain an enhanced energy signal map.
Specifically, in step S3, inputting the enhanced signal data set into a depth residual error network for feature extraction and signal classification includes the following steps:
step S31: applying zero-padding to the one-dimensional time-frequency image to fill its edges;
step S32: inputting the edge-filtered image into a convolutional layer to extract shallow features of the image;
step S33: inputting the feature map after extracting the shallow feature into the pooling layer to reduce the size of the feature map;
step S34: passing the pooled feature maps through a plurality of residual error units to obtain high-dimensional feature vectors;
step S35: sequentially passing the high-dimensional feature vector through a pooling layer and a flattening layer to obtain a one-dimensional feature vector;
step S36: and inputting the one-dimensional characteristic vector into a full-connection layer, and outputting the maximum prediction probability value predicted by each type of signal modulation mode through a Softmax function to obtain the type of the signal.
Preferably, in step S34, passing the pooled feature maps through a plurality of residual units to obtain a high-dimensional feature vector includes the following steps:
step S341: inputting the pooled feature map into the two residual error blocks of the first residual error unit among the plurality of residual error units, sequentially performing convolution, batch normalization, activation, convolution, batch normalization and activation in each residual error block, and then adding the output of the latter residual error block to the original feature mapping carried by the beta-times jumper, to obtain the feature vector output of the first residual error unit;
step S342: and outputting the feature vector of the previous residual error unit as the input of the next residual error unit, and outputting the high-dimensional feature vector after passing through all the residual error units.
Preferably, in step S342, the method of image processing in each subsequent residual error unit is the same as the method of image processing in the first residual error unit.
Further, in step S33, the pooling layer reduces the size of the feature map in a manner of maximum pooling;
in step S35, the pooling layer reduces the dimensionality of the high-dimensional feature vector by means of average pooling.
Specifically, the value range of the jumper coefficient beta is 0.5–1.0, and the value range of the jumper coefficient λi is 0.5–1.0;
in step S32, the convolution kernel size of the convolution layer is 7 × 7, the number of convolution kernels is 64, and the step size is 2;
in step S34, the residual error units include a first residual error unit, a second residual error unit, a third residual error unit, and a fourth residual error unit that are sequentially arranged, and convolution kernels used in the first residual error unit to the fourth residual error unit have the same size, the same step size, and the number of the convolution kernels is sequentially increased.
The radar radiation source identification method based on the multi-stage jumper residual error network has at least one of the following advantages:
(1) in the radar radiation source identification method based on the multi-stage jumper residual error network provided by the invention, eight sequentially connected residual error blocks are arranged in the depth residual error network and joined by jumpers into four residual error units, so that the depth residual error network, with 18 convolution layers in total, can extract the deep-level information of the signal time-frequency image while avoiding problems such as gradient vanishing and gradient explosion;
(2) the radar radiation source identification method based on the multi-stage jumper residual error network can improve the back propagation efficiency of the gradient through a multi-stage short connection structure, greatly reduces the fitting difficulty of the original convolutional neural network, can fully utilize the convolutional layer information between two adjacent residual error blocks through jumper connection between the two residual error blocks, and improves the efficiency of extracting the fine features of the image;
(3) the radar radiation source identification method based on the multi-level jumper residual error network provided by the invention can fit deep and shallow features of an image on multiple scales and fit multiple functions with high efficiency, so that its fine-feature extraction capability for the signal is superior to that of AlexNet;
(4) according to the radar radiation source identification method based on the multi-level jumper residual error network, cross terms can be effectively inhibited through a time-frequency image obtained by performing smooth pseudo Wigner-Ville distribution analysis on signals, so that cross term noise is reduced;
(5) according to the radar radiation source identification method based on the multi-level jumper residual error network, the image data generator is used for preprocessing the time-frequency image obtained by smooth pseudo Wigner-Ville distribution transformation, so that pattern noise caused by cross terms can be effectively suppressed, and further, the characteristic information of signals can be effectively enhanced;
(6) compared with the ResNet-101 network with 101 convolution layers, the depth residual error network with a plurality of residual error units in the radar radiation source identification method provided by the invention has fewer training parameters, a more streamlined structure and a higher identification rate.
Drawings
These and/or other aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the preferred embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart of a method for radar radiation source identification based on a multi-stage jumper residual error network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the structure of the depth residual network shown in FIG. 1;
FIG. 3 is a schematic diagram of the structure of the residual unit shown in FIG. 2;
FIG. 4 is a flow diagram of the time-frequency image pre-processing shown in FIG. 1;
FIG. 5 is a graph of the results of each step process shown in FIG. 4;
FIG. 6 is a graph comparing the accuracy of the present invention with other network identifications;
FIG. 7 is a comparison of different jumper coefficient identification accuracy rates.
Detailed Description
The technical scheme of the invention is further specifically described by the following embodiments and the accompanying drawings. In the specification, the same or similar reference numerals denote the same or similar components. The following description of the embodiments of the present invention with reference to the accompanying drawings is intended to explain the general inventive concept of the present invention and should not be construed as limiting the invention.
Referring to fig. 1, the flow of a radar radiation source identification method based on a multi-stage jumper residual error network according to an embodiment of the present invention is shown. The invention provides a depth residual error network based on time-frequency feature extraction. First, a smooth pseudo Wigner-Ville time-frequency transform is applied to the radiation source signal to generate a time-frequency image; after preprocessing, the time-frequency image is input into the optimal residual error network model to obtain the maximum predicted probability for each signal modulation type, from which the corresponding radiation source type is obtained.
In one example, a radar radiation source identification method includes the steps of:
step S1: performing time-frequency transformation on the radar radiation source signal to generate a time-frequency image of the radar radiation source signal;
step S2: preprocessing the time-frequency image through an ImageDataGenerator to obtain a one-dimensional time-frequency image;
step S3: and inputting the one-dimensional time-frequency image into a depth residual error network for feature extraction and signal classification so as to identify and obtain a radar radiation source.
Because the feature information between two adjacent layers in an existing residual error network is easily lost, the identification accuracy of the time-frequency image is reduced and signal classification suffers. To solve the problem of extracting the correlated features of the convolution layers in the front and rear residual error blocks, the depth residual error network (shown in figs. 2 and 3) is designed as a neural network with 18 convolution layers in total. The network comprises a zero-padding layer, a convolution layer, a zero-padding layer, a pooling layer, four residual error units connected sequentially in series, and a full connection layer. Eight sequentially connected residual error blocks are arranged in the four residual error units; every two residual error blocks connected end to end and bridged by jumpers form one residual error unit. Each residual error unit thus contains four convolution layers, four batch normalization layers and four activation functions; that is, each residual error block is provided with two convolution layers, i.e. a convolution layer, a batch normalization layer, an activation layer, a convolution layer, a batch normalization layer and an activation layer arranged in sequence. The former residual error block is spanned by a λ1-times jumper and the latter residual error block by a λ2-times jumper, and the two residual error blocks are then joined into one residual error unit by a beta-times jumper. Through this design, deep information in the signal time-frequency image can be extracted while the problems of gradient vanishing and gradient explosion are avoided. In one example, the jumper coefficients λ1 and λ2 may be set to 0.5, 0.8 or 1, preferably 1, as in figs. 2 and 3; the jumper coefficient beta (i.e. λ3 in figs. 2 and 3) may likewise be set to 0.5, 0.8 or 1, preferably 1. This example is only illustrative: those skilled in the art can set the jumper coefficients λi and beta to the same or different values as needed, as long as the accuracy of the residual error network identification can be improved.
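The λ1/λ2/beta wiring described above can be expressed compactly. The sketch below is one interpretation of the described structure, with the two Conv-BN-ReLU-Conv-BN-ReLU stacks abstracted into callables f1 and f2; it is not the patented implementation itself.

```python
def residual_unit(x, f1, f2, lam1=1.0, lam2=1.0, beta=1.0):
    """One residual unit built from two residual blocks.

    f1, f2 stand in for the Conv-BN-ReLU-Conv-BN-ReLU stacks of the two
    residual blocks; lam1/lam2 scale the per-block jumpers and beta scales
    the unit-level jumper spanning both blocks.
    """
    y1 = f1(x) + lam1 * x      # first residual block with its lambda_1 jumper
    y2 = f2(y1) + lam2 * y1    # second residual block with its lambda_2 jumper
    return y2 + beta * x       # beta jumper re-adds the original feature map
```

With f1 = f2 = identity and all coefficients set to 1, the unit reduces to 5·x, which shows how the multi-level jumpers keep shallow information flowing alongside the learned mappings.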
Referring again to fig. 1, in the radar radiation source identification method, the signal is first sampled to obtain an original signal s(t); in one example the sampling frequency is 200 Hz. Because the radar radiation source signal s(t) is non-stationary, a time-frequency transform of s(t) yields the precise law of variation of signal energy with frequency and time, so the original signal s(t) must be time-frequency transformed after it is obtained. The time-frequency transform comprises the following steps:
step S11: performing Hilbert transform on the original signal s (t) to obtain an analytic signal x (t);
step S12: the analytic signal x (t) is analyzed by a smooth pseudo Wigner-Ville distribution, thereby generating a time-frequency image of the signal.
The combination of the Hilbert transform and the smooth pseudo Wigner-Ville distribution effectively suppresses cross terms in the resulting time-frequency image, reduces the noise they generate, and improves the identification precision of the image.
As shown in fig. 4, after obtaining a time-frequency image of a signal, the image is preprocessed by:
step S21: converting the RGB time-frequency image with three channels into a gray image with a single channel through an ImageDataGenerator, wherein gray values of different pixel points in the gray image correspond to energy values of time-frequency points in the time-frequency image;
step S22: obtaining an enhanced energy signal diagram by performing a morphological opening operation in the ImageDataGenerator on the single-channel gray-level image, the specific method being as follows:
step S221: performing an erosion operation on the single-channel gray image to reduce the linear noise stripes caused by cross terms;
step S222: dilating the eroded image to enhance the energy part of the signal in the image, thereby retaining the subtle characteristics of different signals;
step S23: performing mean filtering on the energy-enhanced signal image to remove Gaussian noise in the signal. In an example, the mean filtering uses the filter2 command in MATLAB (i.e., a mean filter) with a 3 × 3 mean kernel, replacing each central pixel by the mean over its surrounding pixels to perform the noise reduction. This example is only illustrative: those skilled in the art can select a different window size, such as 5 × 5, as needed, and this example should not be understood as limiting the invention;
step S24: normalizing the denoised image by compressing its pixel values into the range (0, 1), which avoids the gradient-vanishing problem in the neural network and accelerates its convergence; then resizing the image to 224 × 224 using a bicubic interpolation method, so that the scaled image has higher resolution and smoother edges; finally obtaining the one-dimensional time-frequency image.
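Steps S21–S24 can be sketched with scipy.ndimage; the luminance weights, the 3 × 3 structuring element and the order-3 (bicubic) spline resize are assumptions consistent with the text, not an extract of the patented code.

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation, uniform_filter, zoom

def preprocess_tf_image(rgb, out_size=224):
    """Steps S21-S24 on an (H, W, 3) RGB time-frequency image."""
    # S21: three-channel RGB -> single-channel grayscale (luminance weights)
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # S22: morphological opening = erosion then dilation, 3x3 structuring element
    opened = grey_dilation(grey_erosion(gray, size=(3, 3)), size=(3, 3))
    # S23: 3x3 mean filter to suppress Gaussian noise
    denoised = uniform_filter(opened, size=3)
    # S24: normalize pixel values into (0, 1), then bicubic (order-3) resize
    lo, hi = denoised.min(), denoised.max()
    norm = (denoised - lo) / (hi - lo + 1e-12)
    factors = (out_size / norm.shape[0], out_size / norm.shape[1])
    return zoom(norm, factors, order=3)
```

The output is a single-channel 224 × 224 array ready to be fed to the depth residual error network.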
In an example, taking a binary amplitude shift keying signal (SNR = −3 dB), the result of the time-frequency image preprocessing is shown in fig. 5: the cross-term linear noise of the preprocessed image is obviously reduced, the energy of the signal in the image is obviously enhanced, and the image size and feature dimension are both obviously reduced, making the image well suited to being fed into the depth residual error network for feature extraction and image classification.
The convolutional neural network and the various network models derived from it, such as VGGNet, AlexNet and GoogLeNet, have been applied with good effect in computer vision, image recognition, target detection and the like. These networks usually adopt deeper structures and more parameters to extract more image features, with the result that network performance begins to degrade once the number of layers reaches a certain point and model accuracy drops severely. The invention therefore adopts a depth residual error network to extract image features; with reference to figs. 1-3, the specific method is as follows:
step S31: inputting the one-dimensional time-frequency image obtained in step S24 into the neural network and, by zero-padding, filling, for example, an image of dimension (224, 224, 1) into an image of dimension (230, 230, 1), thereby filling the edges of the image; in one example, the zero-padding has a width of 3.
Step S32: and carrying out image shallow feature extraction on the zero-padded image through the convolution layer. In one example, the convolution kernel size of the convolutional layer is 7 × 7, the number of convolution kernels is 64, and the step size is 2.
Step S33: the feature map obtained after convolution is again padded with zero-padding of width 1, after which the feature map is reduced in size by a max-pooling operation while part of the signal energy is retained. In one example, the max-pooling window size is 3 × 3 with a step size of 2.
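The sizes quoted in steps S31–S33 follow the standard convolution arithmetic out = ⌊(n + 2p − k) / s⌋ + 1; a small helper makes them easy to check.

```python
def conv_out(n, kernel, stride, pad=0):
    # spatial output length of a convolution or pooling layer along one axis
    return (n + 2 * pad - kernel) // stride + 1

# 224 zero-padded by 3 on each side -> 230; 7x7 conv, stride 2 -> 112
# 112 zero-padded by 1 -> 114; 3x3 max pool, stride 2 -> 56
```

These values reproduce the 224 → 230 → 112 → 56 progression of the network's stem.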
Step S34: and passing the pooled feature maps through a plurality of residual error units to obtain high-dimensional feature vectors, wherein the specific method comprises the following steps:
step S341: inputting the pooled feature map into the first residual error unit and processing it sequentially through the two residual error blocks (convolution, batch normalization, activation, convolution, batch normalization and activation in each block), then adding the feature map output by the latter residual error block to the original feature map carried by the beta-times jumper (as shown in figs. 2 and 3) to obtain the output of the first residual error unit. In one example, the plurality of residual error units includes a first, a second, a third and a fourth residual error unit, and the four convolution layers within the same residual error unit use convolution kernels of the same size, number and step size, whereas the number of convolution kernels differs between residual error units: the kernel size of the first to fourth residual error units is 3 × 3 with a step size of 2, and the number of kernels per convolution layer is 64 in the first unit, 128 in the second, 256 in the third and 512 in the fourth. This example is only illustrative, and those skilled in the art can set the convolution kernel size, number and step size of the different residual error units as needed. In one example, the activation function in each residual error block is the ReLU function.
Step S342: the output of the first residual error unit serves as the input of the second residual error unit; that is, the output of each residual error unit is the input of the next. Because the second to fourth residual error units are designed exactly like the first, the image processing in them is the same as in the first residual error unit. After the four residual error units, a high-dimensional feature vector is output; for example, the padded image of dimension (230, 230, 1) yields a feature map of dimension (7, 7, 512) after the four residual error units.
Step S35: the high-dimensional feature vector is turned into a one-dimensional feature vector by a pooling layer and a flattening layer operating in sequence. In one example, the pooling layer uses average pooling to reduce the dimensionality of the high-dimensional feature vector, for example changing the (7, 7, 512)-dimensional feature map into a (3, 3, 512)-dimensional feature map; the flattening layer then unrolls the (3, 3, 512)-dimensional feature map into a one-dimensional feature vector of length 3 × 3 × 512 = 4608.
Step S36: and inputting the one-dimensional characteristic vector into a full-connection layer, and outputting the maximum prediction probability value of each type of signal through a Softmax function, so that the type corresponding to the signal can be obtained, and the corresponding radar radiation source can be obtained.
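Step S36's softmax decision can be sketched in a few lines of numpy; the class names in the usage example are hypothetical placeholders, not labels from the patent's data set.

```python
import numpy as np

def softmax_decide(logits, class_names):
    """Turn fully-connected-layer logits into (predicted class, max probability)."""
    e = np.exp(logits - np.max(logits))   # subtract max for numerical stability
    probs = e / e.sum()
    k = int(np.argmax(probs))
    return class_names[k], float(probs[k])
```

In the experiments described below, such a head would output one of the 12 signal modulation classes together with its maximum predicted probability.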
As shown in fig. 6, in order to verify the radar identification method provided by the present invention, four network models, namely ResNet-101, AlexNet, VGG-16 and google lenet, which are commonly used in the field of image classification at present are selected for comparison, an experimental data set is a 12-class signal time-frequency image under 8 signal-to-noise ratios, wherein the size of a training set is 28800, the size of a test set is 9600, the number of iterations is 50, and an average value of 10 test results is selected for comparison.
As can be seen from fig. 6, the accuracy of all five networks reaches more than 90% at a signal-to-noise ratio of −3 dB, and about 95% above −1 dB. At high signal-to-noise ratio, the signal time-frequency image obtained through the SPWVD transform shows obvious characteristic differences in the energy region of the signal, and after image preprocessing the pattern noise caused by cross terms is effectively suppressed while the characteristic information of the signal is effectively enhanced.
The radar identification method provided by the invention fits the deep and shallow features of the image at multiple scales and fits multiple functions efficiently, so its fine-feature extraction capability is better than that of AlexNet: the identification accuracy of the DRN reaches 95% at a signal-to-noise ratio of -5 dB and about 99% at -1 dB. The ResNet-101 network, despite its 101 convolutional layers, performs worse because it does not exploit the correlation among the feature maps of the input image.
In an example, after step S2 of the radar radiation source identification method, that is, after the time-frequency grayscale image is preprocessed with the image data generator to realize data enhancement and an iterator is constructed, the depth residual error network may be trained; the specific training steps are:
step S25: dividing the samples into a training set and a test set, setting the batch training sample size and the number of iterations, selecting the jumper parameter values λ1, λ2, λ3, and inputting the training set into the network for training;
step S26: observing the convergence rate of the depth residual error network model; if the model has not converged and the maximum number of iterations has not been reached, continuing training, otherwise saving the model;
step S27: inputting the test set into the model for prediction, and retaining the network parameters with the highest accuracy and the best convergence performance.
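Step S25 can be sketched in plain NumPy (the per-class counts echo the patent's example of 400 images per class split 300/100; the class count, seed, and batch/iteration settings are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n_per_class, n_classes = 400, 12       # 400 time-frequency images per signal class

# random 300/100 split of the per-class sample indices (step S25)
idx = rng.permutation(n_per_class)
train_idx, test_idx = idx[:300], idx[300:]

batch_size, max_iters = 64, 50         # batch training sample size and iteration count
n_batches = int(np.ceil(len(train_idx) * n_classes / batch_size))  # batches per epoch
```

The resulting index arrays would drive the iterator built in step S2, while `max_iters` bounds the convergence check of step S26.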
In one example, the training process of the present invention includes forward propagation and backward propagation. Forward propagation computes the error between the network output and the true value; backward propagation updates the network weight value w and the network bias b according to the loss function so as to reduce this error.
In forward propagation, the loss function employed by the present invention is the cross-entropy function J(w, b), where w is the network weight value and b is the network bias. The cross-entropy function J(w, b) can be expressed as:

J(w, b) = -(1/n) Σ_{i=1}^{n} [ y_i log ŷ_i + (1 - y_i) log(1 - ŷ_i) ]    (1)

in formula (1), ŷ_i is the prediction of the network for the input time-frequency image x_i (i = 1, 2, …, n), and y_i is the true value.
In the back propagation, the update formulas of the network weight value w and the network bias b are:

w ← w - α · ∂J(w, b)/∂w    (2)

b ← b - α · ∂J(w, b)/∂b    (3)

in formulas (2) and (3), α is the learning rate. Adam is selected as the optimization algorithm for updating the gradient, accuracy is used as the index for measuring network performance, the learning rate α is set to 0.001, and one-hot coding is adopted to convert the labels.
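Formulas (1)–(3) can be sketched numerically as follows (a plain gradient step is used as a stand-in for Adam, and the sample values are illustrative assumptions):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """J(w, b) as in formula (1): mean cross-entropy over n samples."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def update_step(w, b, dj_dw, dj_db, alpha=0.001):
    """Formulas (2) and (3): w <- w - alpha*dJ/dw, b <- b - alpha*dJ/db."""
    return w - alpha * dj_dw, b - alpha * dj_db

y_true = np.array([1.0, 0.0, 1.0])     # one-hot-style true labels
y_pred = np.array([0.9, 0.1, 0.8])     # network predictions
loss = cross_entropy(y_true, y_pred)

w_new, b_new = update_step(np.array([0.5]), 0.0, np.array([0.2]), 0.1)
```

In the patent's training setup the gradients ∂J/∂w and ∂J/∂b would come from back propagation through the residual network, with Adam refining the raw step shown here.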
In one example, the building, training and testing of the deep residual network model of the invention are completed under the Keras deep learning framework on TensorFlow. Twelve types of radar signals (CW, LFM, BFSK, QFSK, P1, P2, P3, P4, 2ASK, QPSK, BPSK+LFM) are simulated with MATLAB R2016a, and the SPWVD time-frequency image of each type of radar signal is drawn with the built-in time-frequency analysis toolbox tftb-0.2 in the MATLAB environment. The signal-to-noise ratio of the signals ranges from -9 dB to 5 dB with a step of 2 dB, the sampling frequency is 200 MHz, and the modulation frequencies of BFSK and QFSK are in the kHz range. Each type of signal generates 400 pulses under the same signal-to-noise ratio, with 1024 sampling points per signal; after time-frequency transformation and preprocessing, 400 normalized grayscale images (224, 1) are obtained, of which 300 are randomly selected as the training set and 100 as the test set.
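The signal generation can be sketched in Python rather than MATLAB (the LFM start frequency and chirp rate are illustrative assumptions; only the sampling frequency, pulse length, and SNR sweep come from the example above):

```python
import numpy as np

fs = 200e6                      # sampling frequency, 200 MHz
n = 1024                        # sampling points per pulse
t = np.arange(n) / fs
f0, k = 10e6, 5e12              # assumed LFM start frequency and chirp rate
sig = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))  # LFM pulse

snr_db = -3                     # one of the SNRs in the -9 dB .. 5 dB sweep
noise_power = sig.var() / 10 ** (snr_db / 10)
noise = np.sqrt(noise_power) * np.random.default_rng(0).standard_normal(n)
x = sig + noise                 # noisy pulse fed to SPWVD time-frequency analysis
```

Each such noisy pulse would then be passed through the SPWVD transformation (tftb-0.2 in the patent's setup) to produce one time-frequency image.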
Because different parameters have a large influence on network performance, the invention adopts a variable-parameter strategy to optimize the network during training: the parameters examined are mainly the batch sample number (BatchSize) and the jumper coefficients, and the recognition rate and the loss during fitting are used as the indexes for measuring network performance.
As the batch sample number gradually increases toward 64, the accuracy under each signal-to-noise-ratio condition rises, and the identification accuracy basically reaches more than 90% when the signal-to-noise ratio is above -3 dB; this is because the larger the batch, the more accurate the gradient-descent direction of the model, the smaller the training oscillation, and the more easily the convergence state is reached. However, when the batch sample number is increased beyond 64, the accuracy decreases instead: the number of parameter updates per pass over the data set is reduced, so the parameters are corrected slowly and more training time is needed, and the accuracy therefore drops. Hence, for the deep residual network model of the present invention, the optimal batch sample number is 64.
Another important factor affecting network performance is the jumper coefficient. As can be seen from fig. 7, as the jumper coefficient increases, the accuracy gradually rises, because the transfer of feature parameters between the convolution layers inside a residual block and between adjacent residual blocks is strengthened, which is more beneficial to extracting the correlation information between preceding and following feature maps; identification accuracy therefore increases with the jumper coefficient value. When a jumper coefficient is set to 0.2, the loss value of the network suddenly increases during iteration, because the coefficient is too small and the weight during gradient back-propagation is therefore too small, so the parameters are not updated in time. Consequently, for the invention, the jumper coefficients λi (i.e., Lambda1 and Lambda2 in fig. 7) and β (i.e., Lambda3 in fig. 7) should all be set to 1.0.
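The multi-level jumper structure can be sketched as scaled skip connections: a λ-weighted identity inside each residual block and a β-weighted connection from the unit input to the unit output. This toy NumPy version replaces the convolution/batch-normalization stages with linear maps (an assumption for brevity; it is not the patent's full architecture):

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def residual_block(x, w1, w2, lam=1.0):
    # conv-BN-ReLU-conv-BN-ReLU collapsed into two linear+ReLU stages,
    # with a lambda-times identity jumper added before the final activation
    h = relu(x @ w1)
    return relu(h @ w2 + lam * x)

def residual_unit(x, weights, lam1=1.0, lam2=1.0, beta=1.0):
    # two residual blocks in series, plus a beta-times jumper from the
    # unit input to the unit output
    (w1, w2), (w3, w4) = weights
    out = residual_block(residual_block(x, w1, w2, lam1), w3, w4, lam2)
    return out + beta * x

d = 4
eye = np.eye(d)                         # identity weights for a traceable example
x = np.arange(d, dtype=float)
y = residual_unit(x, [(eye, eye), (eye, eye)])  # with identity weights: y = 5*x
```

With identity weights each block doubles its non-negative input, so the unit output is 4x plus the β-jumper term x, making the contribution of each coefficient easy to trace.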
The radar radiation source identification method based on the multi-stage jumper residual error network has at least one of the following advantages:
(1) according to the radar radiation source identification method based on the multi-level jumper residual error network, 8 sequentially connected residual error blocks are arranged in the depth residual error network and form 4 residual error units through jumper connections, so that the depth residual error network, with 18 convolutional layers in total, can extract the deep-level information of the signal time-frequency image while avoiding problems such as gradient vanishing and gradient explosion;
(2) the radar radiation source identification method based on the multi-stage jumper residual error network can improve the back propagation efficiency of the gradient through a multi-stage short connection structure, greatly reduces the fitting difficulty of the original convolutional neural network, can fully utilize the convolutional layer information between two adjacent residual error blocks through jumper connection between the two residual error blocks, and improves the efficiency of extracting the fine features of the image;
(3) the radar radiation source identification method based on the multi-level jumper residual error network can fit deep and shallow features of an image at multiple scales and fit multiple functions with high efficiency, so that its fine-feature extraction capability for the signal is superior to that of AlexNet;
(4) according to the radar radiation source identification method based on the multi-level jumper residual error network, cross terms can be effectively inhibited through a time-frequency image obtained by performing smooth pseudo Wigner-Ville transformation on a signal, so that cross term noise is reduced;
(5) according to the radar radiation source identification method based on the multi-level jumper residual error network, the image data generator is used for preprocessing the time-frequency image obtained by smooth pseudo Wigner-Ville transformation, so that pattern noise caused by cross terms can be effectively inhibited, and further characteristic information of signals can be effectively enhanced;
(6) compared with the ResNet-101 network with 101 convolutional layers, the radar radiation source identification method based on the multi-level jumper residual error network provided by the invention has the advantages that the number of training parameters of the deep residual error network with a plurality of residual error units is less, the network is more simplified, and the identification rate is higher.
Although a few embodiments of the present general inventive concept have been shown and described, it will be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the general inventive concept, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. A radar radiation source identification method based on a multi-stage jumper residual error network comprises the following steps:
step S1: performing time-frequency transformation on the radar radiation source signal to generate a time-frequency image of the radar radiation source signal;
step S2: preprocessing the time-frequency image through an ImageDataGenerator to obtain a one-dimensional time-frequency image;
step S3: inputting the one-dimensional time-frequency image into a depth residual error network for feature extraction and signal classification so as to identify and obtain a radar radiation source;
the depth residual error network comprises a plurality of residual error units and a full connection layer, each residual error unit in the residual error units comprises two residual error blocks which are sequentially connected end to end, the two residual error blocks are connected through a β-times jumper, each residual error block in the two residual error blocks comprises a convolution layer, a batch normalization layer, an activation layer, a convolution layer, a batch normalization layer and an activation layer which are sequentially arranged, and each of the two residual error blocks is internally connected through a λi-times jumper,
in the depth residual error network, convolution kernels set by different residual error units are different, and the convolution kernels of four convolution layers in the same residual error unit are set to be the same.
2. The radar radiation source identification method based on the multi-level jumper residual error network according to claim 1,
in step S1, the time-frequency transforming the radar radiation source signal includes the following steps:
step S11: performing Hilbert transform on the radar radiation source signal to obtain an analytic signal;
step S12: and performing time-frequency transformation on the analytic signal through smooth pseudo Wigner-Ville distribution to generate the time-frequency image.
3. The radar radiation source identification method based on the multi-level jumper residual error network according to claim 2,
in step S2, preprocessing the time-frequency image by ImageDataGenerator includes the following steps:
step S21: converting the time-frequency image into a single-channel gray image through gray processing;
step S22: performing an opening operation on the single-channel gray scale image to obtain an enhanced energy signal image;
step S23: carrying out mean value filtering on the enhanced energy signal image to obtain a noise-reduced signal image;
step S24: and sequentially carrying out normalization and size resetting on the denoised signal image to obtain a one-dimensional time-frequency image.
4. The radar radiation source identification method based on the multi-level jumper residual error network according to claim 3,
in step S23, the mean filtering is to process noise through a mean filter;
the time-frequency image is a three-channel time-frequency image, and the size resetting is to reset the image size by adopting a bicubic interpolation method so as to obtain a high-resolution one-dimensional time-frequency image.
5. The radar radiation source identification method based on the multi-level jumper residual error network according to the claim 3 or 4,
in step S22, the opening operation includes the steps of:
step S221: passing the single-channel grayscale map through an erosion operation to obtain an image with reduced linear noise;
step S222: and processing the noise-reduced image through a dilation operation to obtain the enhanced energy signal map.
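The opening operation of steps S221–S222 (erosion followed by dilation) can be sketched with plain NumPy grayscale morphology; the 3×3 neighborhood size is an assumed choice, not specified by the patent:

```python
import numpy as np

def erode(img, k=3):
    """Grayscale erosion: minimum over a k x k neighborhood."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i+k, j:j+k].min()
    return out

def dilate(img, k=3):
    """Grayscale dilation: maximum over a k x k neighborhood."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i+k, j:j+k].max()
    return out

def opening(img, k=3):
    """Opening = erosion followed by dilation (steps S221 then S222)."""
    return dilate(erode(img, k), k)

speck = np.zeros((9, 9)); speck[4, 4] = 1.0  # isolated bright pixel (noise)
clean = opening(speck)                       # the speck is removed
```

Opening removes bright structures smaller than the neighborhood (such as thin cross-term streaks) while leaving larger energy regions of the time-frequency image essentially unchanged.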
6. The radar radiation source identification method based on the multi-stage jumper residual error network according to any one of the claims 1 to 5,
in step S3, inputting the enhanced signal data set into a depth residual network for feature extraction and signal classification includes the following steps:
step S31: performing zero padding on the one-dimensional time-frequency image to pad the edges of the one-dimensional time-frequency image;
step S32: inputting the edge-filtered image into a convolutional layer to extract shallow features of the image;
step S33: inputting the feature map after extracting the shallow feature into the pooling layer to reduce the size of the feature map;
step S34: passing the pooled feature maps through a plurality of residual error units to obtain high-dimensional feature vectors;
step S35: sequentially passing the high-dimensional feature vector through a pooling layer and a flattening layer to obtain a one-dimensional feature vector;
step S36: and inputting the one-dimensional feature vector into a full-connection layer, and outputting the maximum prediction probability value of each type of signal through a Softmax function to obtain signal classification.
7. The radar radiation source identification method based on the multi-level jumper residual error network according to claim 6,
in step S34, the step of passing the pooled feature maps through a plurality of residual units to obtain high-dimensional feature vectors includes the steps of:
step S341: inputting the pooled feature map into two residual blocks of a first residual unit in a plurality of residual units, sequentially performing convolution, batch standardization, activation, convolution, batch standardization and activation in each residual block, and then adding the output of the latter residual block and the original feature mapping connected by beta-time jumpers to obtain the feature vector output of the first residual unit;
step S342: and outputting the feature vector of the previous residual error unit as the input of the next residual error unit, and outputting the high-dimensional feature vector after passing through all the residual error units.
8. The radar radiation source identification method based on the multi-level jumper residual error network according to claim 7,
in step S342, the method of image processing in the residual unit is the same as that in the first residual unit.
9. The radar radiation source identification method based on the multi-level jumper residual error network according to claim 6,
in step S33, the pooling layer reduces the feature size by maximizing pooling;
in step S35, the pooling layer reduces the dimensionality of the high-dimensional feature vector by means of average pooling.
10. The radar radiation source identification method based on the multi-level jumper residual error network according to claim 7,
the value range of the jumper coefficient β is 0.5-1.0, and the value range of the jumper coefficient λi is 0.5-1.0;
in step S32, the convolution kernel size of the convolution layer is 7 × 7, the number of convolution kernels is 64, and the step size is 2;
in step S34, the residual error units include a first residual error unit, a second residual error unit, a third residual error unit, and a fourth residual error unit that are sequentially arranged, and convolution kernels used in the first residual error unit to the fourth residual error unit have the same size, the same step size, and the number of the convolution kernels is sequentially increased.
CN202110232097.9A 2021-03-02 2021-03-02 Radar radiation source identification method based on multi-stage jumper residual error network Pending CN112906591A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110232097.9A CN112906591A (en) 2021-03-02 2021-03-02 Radar radiation source identification method based on multi-stage jumper residual error network


Publications (1)

Publication Number Publication Date
CN112906591A true CN112906591A (en) 2021-06-04

Family

ID=76108606


Country Status (1)

Country Link
CN (1) CN112906591A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887502A (en) * 2021-10-21 2022-01-04 西安交通大学 Communication radiation source time-frequency feature extraction and individual identification method and system
CN114509604A (en) * 2022-04-18 2022-05-17 国网江西省电力有限公司电力科学研究院 GIS shell transient state ground potential rise waveform spectrum analysis method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778610A (en) * 2016-12-16 2017-05-31 哈尔滨工程大学 A kind of intra-pulse modulation recognition methods based on time-frequency image feature
CN109507648A (en) * 2018-12-19 2019-03-22 西安电子科技大学 Recognition Method of Radar Emitters based on VAE-ResNet network
CN110147812A (en) * 2019-04-04 2019-08-20 中国人民解放军战略支援部队信息工程大学 Recognition Method of Radar Emitters and device based on expansion residual error network
CN111612130A (en) * 2020-05-18 2020-09-01 吉林大学 Frequency shift keying communication signal modulation mode identification method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QIN Xin et al.: "Radar Emitter Signal Recognition Based on Dilated Residual Network", Acta Electronica Sinica *
ZHAO Xiaoqiang et al.: "Super-Resolution Reconstruction via Deep Residual Network with Multi-Level Skip Connections", Journal of Electronics & Information Technology *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210604