CN116087880A - Radar radiation source signal sorting system based on deep learning - Google Patents

Radar radiation source signal sorting system based on deep learning

Info

Publication number
CN116087880A
CN116087880A
Authority
CN
China
Prior art keywords: layer, output, convolution, size, pooling
Prior art date
Legal status
Pending
Application number
CN202211656352.3A
Other languages
Chinese (zh)
Inventor
Wan Yanling (万艳玲)
Yan Su (闫肃)
Zhang Song (张颂)
Ye Song (叶松)
Sun Wei (孙伟)
Liu Jiaqi (刘佳琪)
Chen Xia (谌夏)
Yang Shuifeng (杨水锋)
Liu Fang (刘芳)
Li Yang (李洋)
Zhao Huiya (赵慧雅)
Zhang Yongjian (张永健)
Liu Xinggao (刘兴高)
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202211656352.3A
Publication of CN116087880A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep-learning-based radar radiation source signal sorting system composed mainly of a database and an upper computer, where the upper computer comprises a training set creation module, a signal transformation module, a feature extraction module, and a model training module. A radar radiation source signal data set is acquired as the training set; time-frequency analysis is applied to the radar radiation source signals so that the time-domain data are converted into time-frequency images; a convolutional neural network is constructed; and the network is trained on the time-frequency images, realizing classification and identification of the signals to be sorted. The invention mainly addresses the low training and recognition accuracy of existing radar radiation source identification systems: by converting time-domain signals into time-frequency images and fully exploiting the feature extraction capability of the convolutional neural network, it achieves high-accuracy classification and identification, yielding a high-accuracy, high-robustness, task-adaptive radar signal sorting system.

Description

Radar radiation source signal sorting system based on deep learning
Technical Field
The invention relates to a radar radiation source signal sorting system based on deep learning.
Background
In modern high-technology warfare, electronic warfare extends across the land, sea, air, and space domains and has become one of the most important means of combat. Electronic reconnaissance leads electronic warfare and is its most critical part: reconnaissance results directly determine the success or failure of subsequent fire-control guidance and jamming defense, and the battlefield situation assessment obtained by electronic reconnaissance plays an important role in choosing combat strategy, selecting equipment, and deploying forces. Electronic reconnaissance searches for and intercepts the electromagnetic emissions of enemy electronic systems, such as radar transmissions, to obtain the adversary's tactical or equipment information, so that defensive measures or jamming can be issued in time. Radar signal sorting and identification are key links of a radar reconnaissance signal processing system; sorting is the first step of radar signal processing and the most important step for radiation source operating-mode identification and situation analysis.
Researchers have proposed many signal sorting methods, including time-domain analysis, frequency-domain analysis, and spectral correlation, and machine-learning methods such as support vector machines and template matching have also been applied to radar radiation source recognition. These methods suffer from poor applicability and low recognition accuracy.
Disclosure of Invention
To overcome the poor applicability and low recognition accuracy of traditional machine-learning signal sorting methods, the invention provides a radar radiation source signal sorting system based on deep learning.
The technical scheme adopted for solving the technical problems is as follows:
A radar radiation source signal sorting system based on deep learning comprises a database and an upper computer, where the upper computer includes: a training set creation module, a signal transformation module, a feature extraction module, and a model training module.
Further, the training set creation module creates the radar radiation source signal data set: the operating parameters of radar models are studied and used as features describing radar signals, a simulation platform is built, and a radar signal data set is generated, of which 80% is used as the training set and 20% as the test set.
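As a minimal sketch of this 80/20 split (the file names, array shapes, and use of scikit-learn are illustrative assumptions, not part of the disclosure):

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Simulated radar pulses and their radar-model labels; both arrays are
    # hypothetical stand-ins for the output of the simulation platform.
    signals = np.load("radar_signals.npy")
    labels = np.load("radar_labels.npy")

    # 80% training set, 20% test set, stratified so every class appears in both.
    X_train, X_test, y_train, y_test = train_test_split(
        signals, labels, test_size=0.2, stratify=labels, random_state=0)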
Further, the signal transformation module performs a wavelet transform on the radar radiation source signal, converting the time-domain signal into a time-frequency image:
WT(\alpha,\tau)=\frac{1}{\sqrt{\alpha}}\int_{-\infty}^{+\infty}f(t)\,\psi^{*}\left(\frac{t-\tau}{\alpha}\right)\mathrm{d}t
where α is the scaling variable, τ is the translation variable, ψ is the wavelet (scale) function, f(t) is the input time signal, and WT(α, τ) is the output time-frequency representation.
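A minimal sketch of this transform using PyWavelets (the Morlet wavelet, sampling rate, and test pulse are assumptions; the patent does not name a specific mother wavelet):

    import numpy as np
    import pywt

    fs = 1e6                                   # assumed sampling rate (Hz)
    t = np.arange(0, 1e-3, 1 / fs)             # a 1 ms observation window
    f_t = np.cos(2 * np.pi * (1e5 * t + 5e7 * t ** 2))  # toy LFM pulse as f(t)

    scales = np.arange(1, 129)                 # 128 scales -> image height
    coefs, freqs = pywt.cwt(f_t, scales, 'morl', sampling_period=1 / fs)
    tf_image = np.abs(coefs)                   # |WT(alpha, tau)| as a 2-D time-frequency image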
The wavelet transform's strength in extracting both high-frequency and low-frequency content, together with its ability to handle real non-Gaussian, non-stationary signals, allows the important parameter information of radar signals to be fully extracted, so that the characteristic parameters of radar radiation sources are accurately characterized and dynamically tracked and learned, preparing the data for the network training that follows.
Further, a depthwise separable convolutional neural network is constructed in the feature extraction module:
3.1) The depthwise separable convolutional neural network comprises the following basic modules:
3.1.1) Convolution layer: the convolution layer extracts feature information from the input and consists of several convolution units whose parameters are optimized by the backpropagation algorithm; a filter (receptive field) slides over the input image in a regular pattern and is convolved with the corresponding region to extract features. Low-level convolutions can only extract low-level features such as edges and lines, while higher-level convolutions extract deeper features. The parameters of a convolution layer are the filter size, the stride, and the boundary padding (pad), which together determine the size of the layer's output feature map (see the size-rule sketch after section 3.1.4). The receptive field is smaller than the input image, and the larger it is, the more complex the extractable features; the stride defines how far the filter moves between adjacent positions; padding fills the border of the feature map so that edge information of the output features is not lost excessively, and the pad value gives the number of padding layers.
3.1.2) Pooling layer: the pooling layer compresses the feature map, extracts the main features, and reduces the computational complexity of the network. There are generally two pooling modes, average pooling and maximum pooling. Maximum pooling takes the maximum within the filter region; average pooling takes the mean of each 2×2 region to obtain the main features. Pooling shrinks the feature map and may affect network accuracy, but this can be compensated by increasing network depth.
3.1.3) Dense module: each dense module outputs 12 feature-map channels, which both improves parameter efficiency and prevents the network from becoming too wide. Before entering a dense module, the input feature map passes through a convolution layer with 3×3 kernels. A variable classification module at the end of the network classifies the feature information extracted from the image.
3.1.4) Fully connected layer: the fully connected layer sits at the end of the convolutional neural network and gives the final classification result. In it the feature map loses its spatial structure and is flattened into a feature vector; the output, obtained by a nonlinear combination of the features extracted by the preceding layers, can be expressed by the following formula.
f(x)=W·x+b
where x is the input of the fully connected layer, W is the weight matrix, and b is the bias. The fully connected layer passes all features to the output layer. For image classification problems, the output layer produces classification labels using a logistic function or the normalized exponential (softmax) function; in recognition (object detection) problems, the output layer produces the center coordinates, size, and class of the object; in semantic segmentation, a classification result is output directly for each pixel.
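The size rule referenced in sections 3.1.1 and 3.1.2 can be checked in a few lines (a generic sketch of the standard convolution and pooling arithmetic, not code from the patent):

    def output_size(n_in: int, kernel: int, stride: int, pad: int = 0) -> int:
        # Standard rule for both convolution and pooling windows:
        # floor((n_in - kernel + 2*pad) / stride) + 1
        return (n_in - kernel + 2 * pad) // stride + 1

    assert output_size(256, kernel=3, stride=1) == 254  # convolution step of layer one
    assert output_size(254, kernel=2, stride=2) == 127  # its 2x2 max pooling step
    assert output_size(125, kernel=2, stride=2) == 62   # pooling step of layer two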
3.2) The depthwise separable convolutional neural network structure is composed as follows:
3.2.1) Convolution-pooling layer one: the 256×256 input is convolved with 64 kernels of size 3×3 and stride 1, giving an output of size 254×254; the convolution output passes through a ReLU nonlinearity (its parameters are trained by backpropagation) and then through max pooling with a 2×2 window and stride 2, so the final output feature map is 127×127;
3.2.2) Convolution-pooling layer two: the 127×127 input is convolved with 64 kernels of size 3×3 and stride 1, giving an output of size 125×125; the convolution output passes through a ReLU nonlinearity and then through max pooling with a 2×2 window and stride 2, so the final output feature map is 62×62;
3.2.3) Densely connected residual block one: the input size is 62×62; the block comprises 4 connection layers, 1 transition layer, and 1 residual connection layer (a PyTorch sketch of this block structure follows section 3.2.7). Each connection layer applies, in order, a ReLU nonlinearity, a convolution with 48 kernels of size 3×3 and stride 1, and a convolution with 12 kernels of size 3×3 and stride 1. The transition layer convolves the feature map output by the last connection layer with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, and then max pooling with a 2×2 window and stride 2. The residual connection layer convolves the input of the whole block with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, then max pooling with a 2×2 window and stride 2, and then another ReLU nonlinearity. Finally, the transition layer output and the residual connection layer output are added to give the output of densely connected residual block one, which serves as the input of densely connected residual block two.
3.2.4) Densely connected residual block two: the input size is 31×31; the block comprises 4 connection layers, 1 transition layer, and 1 residual connection layer. Each connection layer applies, in order, a ReLU nonlinearity, a convolution with 48 kernels of size 3×3 and stride 1, and a convolution with 12 kernels of size 3×3 and stride 1. The transition layer convolves the feature map output by the last connection layer with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, and then max pooling with a 2×2 window and stride 2. The residual connection layer convolves the input of the whole block with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, then max pooling with a 2×2 window and stride 2, and then another ReLU nonlinearity. Finally, the transition layer output and the residual connection layer output are added to give the output of densely connected residual block two, which serves as the input of densely connected residual block three.
3.2.5) Densely connected residual block three: the input size is 15×15; the block comprises 4 connection layers, 1 transition layer, and 1 residual connection layer. Each connection layer applies, in order, a ReLU nonlinearity, a convolution with 48 kernels of size 3×3 and stride 1, and a convolution with 12 kernels of size 3×3 and stride 1. The transition layer convolves the feature map output by the last connection layer with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, and then max pooling with a 2×2 window and stride 2. The residual connection layer convolves the input of the whole block with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, then max pooling with a 2×2 window and stride 2, and then another ReLU nonlinearity.
3.2.6) Pooling layer: the input size is 15×15 and the output size is 7×7 after max pooling with a 2×2 window.
3.2.7) Variable fully connected classification layer: a 1×1 convolution kernel is used, followed in turn by flattening, a fully connected layer, dropout, and softmax; the model training module then inputs the grayscale time-frequency image training set into the model to carry out training.
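A PyTorch sketch of one densely connected residual block and the classification head of sections 3.2.3 to 3.2.7 (the padding choice, the channel count at the head, the dropout rate, and the class count are assumptions; the patent fixes only the kernel counts, kernel sizes, and strides):

    import torch
    import torch.nn as nn

    class ConnectionLayer(nn.Module):
        """ReLU -> 3x3 conv (48 maps) -> 3x3 conv (12 maps); the 12 new
        channels are concatenated onto the input (growth rate 12)."""
        def __init__(self, in_ch: int):
            super().__init__()
            self.body = nn.Sequential(
                nn.ReLU(),
                nn.Conv2d(in_ch, 48, kernel_size=3, stride=1, padding=1),
                nn.Conv2d(48, 12, kernel_size=3, stride=1, padding=1),
            )

        def forward(self, x):
            return torch.cat([x, self.body(x)], dim=1)

    class DenseResidualBlock(nn.Module):
        """4 connection layers, a transition layer, and a residual branch,
        as in densely connected residual blocks one to three."""
        def __init__(self, in_ch: int, growth: int = 12, out_ch: int = 136):
            super().__init__()
            layers, ch = [], in_ch
            for _ in range(4):
                layers.append(ConnectionLayer(ch))
                ch += growth
            self.dense = nn.Sequential(*layers)
            # Transition: 1x1 conv to 136 maps, ReLU, 2x2 max pooling, stride 2.
            self.transition = nn.Sequential(
                nn.Conv2d(ch, out_ch, kernel_size=1), nn.ReLU(), nn.MaxPool2d(2, 2))
            # Residual branch on the block input: 1x1 conv, ReLU, pooling, ReLU.
            self.residual = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=1), nn.ReLU(),
                nn.MaxPool2d(2, 2), nn.ReLU())

        def forward(self, x):
            # Transition output plus residual output, as in section 3.2.3.
            return self.transition(self.dense(x)) + self.residual(x)

    num_classes = 8  # assumed number of radar signal classes
    head = nn.Sequential(
        nn.Conv2d(136, 16, kernel_size=1),   # 1x1 convolution; 16 maps assumed
        nn.Flatten(),                        # 16 x 7 x 7 -> 784 features
        nn.Linear(16 * 7 * 7, num_classes),  # fully connected layer
        nn.Dropout(p=0.5),                   # dropout rate assumed
        nn.Softmax(dim=1),
    )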
The model is optimized with a stochastic gradient descent algorithm using Nesterov momentum. It is trained for 15 epochs with a batch size of 32; the other hyperparameters are as follows: the initial learning rate is 0.1 and the weight decay is 1e-6; after 60% of the training epochs are completed, the learning rate is reduced to 0.01, and the Nesterov momentum is 0.9. Class weight parameters weight the loss function during training, which tells the model to pay more attention to classes with few samples; in this experiment a weight of 1.0 is used. To address the imbalance of radar radiation source data sets, the system proposes a mean square cross entropy loss function that automatically adjusts the weights of hard and easy samples, performs effective online hard-example mining, and gives the losses of hard samples larger weight values, improving the sorting of unknown radar radiation sources.
L_{CE}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}y_{i,k}\log y'_{i,k}
where L_CE is the mean square cross entropy loss value, y_{i,k} is the (one-hot) label of the ith sample for class k, y'_{i,k} is the network output for the ith sample and class k, K is the number of classes, and N is the total number of samples.
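A sketch of this training configuration in PyTorch (the placeholder model, the scheduler choice, and the loss reading are assumptions; the exact "mean square" weighting of the published loss is not recoverable from the text):

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(16 * 7 * 7, 8),
                          nn.Softmax(dim=1))  # stand-in for the network above

    epochs, batch_size = 15, 32
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                                nesterov=True, weight_decay=1e-6)
    # Drop the learning rate from 0.1 to 0.01 once 60% of the epochs are done.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[int(0.6 * epochs)], gamma=0.1)

    def cross_entropy_loss(y_pred, y_onehot):
        # Cross entropy averaged over the N samples, one plausible reading of
        # the L_CE formula above; y_pred are the softmax outputs.
        eps = 1e-12
        return -(y_onehot * torch.log(y_pred + eps)).sum(dim=1).mean()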
Further, the signals to be sorted are converted into time-frequency images, input into the model for recognition, and the recognition result is output.
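End to end, the recognition path might look like this sketch (the stand-in pulse and stand-in network only illustrate the data flow):

    import numpy as np
    import pywt
    import torch
    import torch.nn as nn

    signal = np.random.randn(1000)  # stand-in pulse to sort
    model = nn.Sequential(nn.Flatten(), nn.LazyLinear(8),
                          nn.Softmax(dim=1))  # stand-in trained network

    coefs, _ = pywt.cwt(signal, np.arange(1, 129), 'morl')  # wavelet transform
    x = torch.tensor(np.abs(coefs), dtype=torch.float32)[None, None]  # 1x1x128x1000
    label = model(x).argmax(dim=1).item()  # predicted class index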
The beneficial effects of the invention are mainly: the learned nonlinear network structure captures deep features, approximates complex functions, and forms distributed representations of the input data, showing a strong ability to learn the essential features of a data set from a small set of samples; at the same time, the loss automatically adjusts the weights of hard and easy samples, performs effective online hard-example mining, and gives the losses of hard samples larger weight values, so the radiation source signal sorting accuracy is significantly improved.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention.
Detailed Description
The invention is described in detail below with reference to the drawings.
A radar radiation source signal sorting system based on deep learning comprises a database 1 and an upper computer 2, where the upper computer 2 includes: a training set creation module 3, a signal transformation module 4, a feature extraction module 5, and a model training module 6.
The training set creation module 3 creates the radar radiation source signal data set: the operating parameters of radar models are studied and used as features describing radar signals, a simulation platform is built, and a radar signal data set is generated, of which 80% is used as the training set and 20% as the test set.
The signal transformation module 4 performs a wavelet transform on the radar radiation source signal, converting the time-domain signal into a time-frequency image:
WT(\alpha,\tau)=\frac{1}{\sqrt{\alpha}}\int_{-\infty}^{+\infty}f(t)\,\psi^{*}\left(\frac{t-\tau}{\alpha}\right)\mathrm{d}t
where α is the scaling variable, τ is the translation variable, ψ is the wavelet (scale) function, and f(t) is the input signal.
The wavelet transform's strength in extracting both high-frequency and low-frequency content, together with its ability to handle real non-Gaussian, non-stationary signals, allows the important parameter information of radar signals to be fully extracted, so that the characteristic parameters of radar radiation sources are accurately characterized and dynamically tracked and learned, preparing the data for the network training that follows.
A depthwise separable convolutional neural network is constructed in the feature extraction module 5:
3.1) The depthwise separable convolutional neural network comprises the following basic modules:
3.1.1) Convolution layer: the convolution layer extracts feature information from the input and consists of several convolution units whose parameters are optimized by the backpropagation algorithm; a filter (receptive field) slides over the input image in a regular pattern and is convolved with the corresponding region to extract features. Low-level convolutions can only extract low-level features such as edges and lines, while higher-level convolutions extract deeper features. The parameters of a convolution layer are the filter size, the stride, and the boundary padding (pad), which together determine the size of the layer's output feature map. The receptive field is smaller than the input image, and the larger it is, the more complex the extractable features; the stride defines how far the filter moves between adjacent positions; padding fills the border of the feature map so that edge information of the output features is not lost excessively, and the pad value gives the number of padding layers.
3.1.2) Pooling layer: the pooling layer compresses the feature map, extracts the main features, and reduces the computational complexity of the network. There are generally two pooling modes, average pooling and maximum pooling. Maximum pooling takes the maximum within the filter region; average pooling takes the mean of each 2×2 region to obtain the main features. Pooling shrinks the feature map and may affect network accuracy, but this can be compensated by increasing network depth.
3.1.3) Dense module: each dense module outputs 12 feature-map channels, which both improves parameter efficiency and prevents the network from becoming too wide. Before entering a dense module, the input feature map passes through a convolution layer with 3×3 kernels. A variable classification module at the end of the network classifies the feature information extracted from the image.
3.1.4) Fully connected layer: the fully connected layer sits at the end of the convolutional neural network and gives the final classification result. In it the feature map loses its spatial structure and is flattened into a feature vector; the output, obtained by a nonlinear combination of the features extracted by the preceding layers, can be expressed by the following formula.
f(x)=W·x+b
where x is the input of the fully connected layer, W is the weight matrix, and b is the bias. The fully connected layer passes all features to the output layer. For image classification problems, the output layer produces classification labels using a logistic function or the normalized exponential (softmax) function; in recognition (object detection) problems, the output layer produces the center coordinates, size, and class of the object; in semantic segmentation, a classification result is output directly for each pixel.
3.2) The depthwise separable convolutional neural network structure is composed as follows:
3.2.1) Convolution-pooling layer one: the 256×256 input is convolved with 64 kernels of size 3×3 and stride 1, giving an output of size 254×254; the convolution output passes through a ReLU nonlinearity (its parameters are trained by backpropagation) and then through max pooling with a 2×2 window and stride 2, so the final output feature map is 127×127;
3.2.2) Convolution-pooling layer two: the 127×127 input is convolved with 64 kernels of size 3×3 and stride 1, giving an output of size 125×125; the convolution output passes through a ReLU nonlinearity and then through max pooling with a 2×2 window and stride 2, so the final output feature map is 62×62;
3.2.3) Densely connected residual block one: the input size is 62×62; the block comprises 4 connection layers, 1 transition layer, and 1 residual connection layer. Each connection layer applies, in order, a ReLU nonlinearity, a convolution with 48 kernels of size 3×3 and stride 1, and a convolution with 12 kernels of size 3×3 and stride 1. The transition layer convolves the feature map output by the last connection layer with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, and then max pooling with a 2×2 window and stride 2. The residual connection layer convolves the input of the whole block with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, then max pooling with a 2×2 window and stride 2, and then another ReLU nonlinearity. Finally, the transition layer output and the residual connection layer output are added to give the output of densely connected residual block one, which serves as the input of densely connected residual block two.
3.2.4) Densely connected residual block two: the input size is 31×31; the block comprises 4 connection layers, 1 transition layer, and 1 residual connection layer. Each connection layer applies, in order, a ReLU nonlinearity, a convolution with 48 kernels of size 3×3 and stride 1, and a convolution with 12 kernels of size 3×3 and stride 1. The transition layer convolves the feature map output by the last connection layer with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, and then max pooling with a 2×2 window and stride 2. The residual connection layer convolves the input of the whole block with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, then max pooling with a 2×2 window and stride 2, and then another ReLU nonlinearity. Finally, the transition layer output and the residual connection layer output are added to give the output of densely connected residual block two, which serves as the input of densely connected residual block three.
3.2.5) Densely connected residual block three: the input size is 15×15; the block comprises 4 connection layers, 1 transition layer, and 1 residual connection layer. Each connection layer applies, in order, a ReLU nonlinearity, a convolution with 48 kernels of size 3×3 and stride 1, and a convolution with 12 kernels of size 3×3 and stride 1. The transition layer convolves the feature map output by the last connection layer with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, and then max pooling with a 2×2 window and stride 2. The residual connection layer convolves the input of the whole block with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, then max pooling with a 2×2 window and stride 2, and then another ReLU nonlinearity.
3.2.6) Pooling layer: the input size is 15×15 and the output size is 7×7 after max pooling with a 2×2 window.
3.2.7) Variable fully connected classification layer: a 1×1 convolution kernel is used, followed in turn by flattening, a fully connected layer, dropout, and softmax; the model training module 6 then inputs the grayscale time-frequency image training set into the model to carry out training.
The model is optimized with a stochastic gradient descent algorithm using Nesterov momentum. It is trained for 15 epochs with a batch size of 32; the other hyperparameters are as follows: the initial learning rate is 0.1 and the weight decay is 1e-6; after 60% of the training epochs are completed, the learning rate is reduced to 0.01, and the Nesterov momentum is 0.9. Class weight parameters weight the loss function during training, which tells the model to pay more attention to classes with few samples; in this experiment a weight of 1.0 is used. To address the imbalance of radar radiation source data sets, the system proposes a mean square cross entropy loss function that automatically adjusts the weights of hard and easy samples, performs effective online hard-example mining, and gives the losses of hard samples larger weight values, improving the sorting of unknown radar radiation sources.
L_{CE}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}y_{i,k}\log y'_{i,k}
where L_CE is the mean square cross entropy loss value, y_{i,k} is the (one-hot) label of the ith sample for class k, y'_{i,k} is the network output for the ith sample and class k, K is the number of classes, and N is the total number of samples.
Further, the signals to be sorted are converted into time-frequency images, input into the model for recognition, and the recognition result is output.
The above-described embodiments are intended to illustrate rather than limit the present invention; any modifications and variations made within the spirit of the invention and the scope of the appended claims fall within its protection.

Claims (6)

1. A radar radiation source signal sorting system based on deep learning, characterized in that: the system consists of a database and an upper computer, where the upper computer comprises: a training set creation module, a signal transformation module, a feature extraction module, and a model training module.
2. The deep-learning-based radar radiation source signal sorting system of claim 1, wherein: the training set creation module studies the operating parameters of radar models, uses them as features describing radar signals, builds a simulation platform, and generates a radar signal data set.
3. The deep-learning-based radar radiation source signal sorting system of claim 1, wherein: the signal transformation module performs a wavelet transform on the radar radiation source signal, converting the time-domain signal into a time-frequency image signal:
WT(\alpha,\tau)=\frac{1}{\sqrt{\alpha}}\int_{-\infty}^{+\infty}f(t)\,\psi^{*}\left(\frac{t-\tau}{\alpha}\right)\mathrm{d}t
where α is the scaling variable, τ is the translation variable, ψ is the wavelet (scale) function, and f(t) is the input signal.
4. The deep-learning-based radar radiation source signal sorting system of claim 1, wherein: the feature extraction module constructs a depthwise separable convolutional neural network to extract features, specifically:
3.1) The depthwise separable convolutional neural network comprises the following basic modules:
3.1.1) Convolution layer: the convolution layer extracts feature information from the input and consists of several convolution units whose parameters are optimized by the backpropagation algorithm; a filter (receptive field) slides over the input image in a regular pattern and is convolved with the corresponding region to extract features; low-level convolutions can only extract low-level features such as edges and lines, while higher-level convolutions extract deeper features; the parameters of a convolution layer are the filter size, the stride, and the boundary padding (pad), which together determine the size of the layer's output feature map; the receptive field is smaller than the input image, and the larger it is, the more complex the extractable features; the stride defines how far the filter moves between adjacent positions; padding fills the border of the feature map so that edge information of the output features is not lost excessively, and the pad value gives the number of padding layers;
3.1.2) Pooling layer: the pooling layer compresses the feature map, extracts the main features, and reduces the computational complexity of the network; there are generally two pooling modes, average pooling and maximum pooling; maximum pooling takes the maximum within the filter region; average pooling takes the mean of each 2×2 region to obtain the main features; pooling shrinks the feature map and may affect network accuracy, but this can be compensated by increasing network depth;
3.1.3) Dense module: each dense module outputs 12 feature-map channels, which both improves parameter efficiency and prevents the network from becoming too wide; before entering a dense module, the input feature map passes through a convolution layer with 3×3 kernels; a variable classification module at the end of the network classifies the feature information extracted from the image;
3.1.4) Fully connected layer: the fully connected layer sits at the end of the convolutional neural network and gives the final classification result; in it the feature map loses its spatial structure and is flattened into a feature vector; the output, obtained by a nonlinear combination of the features extracted by the preceding layers, can be expressed by the following formula;
f(x)=W·x+b
where x is the input of the fully connected layer, W is the weight matrix, and b is the bias; the fully connected layer passes all features to the output layer; for image classification problems, the output layer produces classification labels using a logistic function or the normalized exponential (softmax) function; in object detection problems, the output layer produces the center coordinates, size, and class of the object; in semantic segmentation, a classification result is output directly for each pixel;
3.2) The depthwise separable convolutional neural network structure is composed as follows:
3.2.1) Convolution-pooling layer one: the 256×256 input is convolved with 64 kernels of size 3×3 and stride 1, giving an output of size 254×254; the convolution output passes through a ReLU nonlinearity (its parameters are trained by backpropagation) and then through max pooling with a 2×2 window and stride 2, so the final output feature map is 127×127;
3.2.2) Convolution-pooling layer two: the 127×127 input is convolved with 64 kernels of size 3×3 and stride 1, giving an output of size 125×125; the convolution output passes through a ReLU nonlinearity and then through max pooling with a 2×2 window and stride 2, so the final output feature map is 62×62;
3.2.3) Densely connected residual block one: the input size is 62×62; the block comprises 4 connection layers, 1 transition layer, and 1 residual connection layer; each connection layer applies, in order, a ReLU nonlinearity, a convolution with 48 kernels of size 3×3 and stride 1, and a convolution with 12 kernels of size 3×3 and stride 1; the transition layer convolves the feature map output by the last connection layer with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, and then max pooling with a 2×2 window and stride 2; the residual connection layer convolves the input of the whole block with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, then max pooling with a 2×2 window and stride 2, and then another ReLU nonlinearity; finally, the transition layer output and the residual connection layer output are added to give the output of densely connected residual block one, which serves as the input of densely connected residual block two;
3.2.4) Densely connected residual block two: the input size is 31×31; the block comprises 4 connection layers, 1 transition layer, and 1 residual connection layer; each connection layer applies, in order, a ReLU nonlinearity, a convolution with 48 kernels of size 3×3 and stride 1, and a convolution with 12 kernels of size 3×3 and stride 1; the transition layer convolves the feature map output by the last connection layer with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, and then max pooling with a 2×2 window and stride 2; the residual connection layer convolves the input of the whole block with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, then max pooling with a 2×2 window and stride 2, and then another ReLU nonlinearity; finally, the transition layer output and the residual connection layer output are added to give the output of densely connected residual block two, which serves as the input of densely connected residual block three;
3.2.5) Densely connected residual block three: the input size is 15×15; the block comprises 4 connection layers, 1 transition layer, and 1 residual connection layer; each connection layer applies, in order, a ReLU nonlinearity, a convolution with 48 kernels of size 3×3 and stride 1, and a convolution with 12 kernels of size 3×3 and stride 1; the transition layer convolves the feature map output by the last connection layer with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, and then max pooling with a 2×2 window and stride 2; the residual connection layer convolves the input of the whole block with 136 kernels of size 1×1 and stride 1, applies a ReLU nonlinearity, then max pooling with a 2×2 window and stride 2, and then another ReLU nonlinearity;
3.2.6) Pooling layer: the input size is 15×15 and the output size is 7×7 after max pooling with a 2×2 window;
3.2.7) Variable fully connected classification layer: a 1×1 convolution kernel is used, followed in turn by flattening, a fully connected layer, dropout, and softmax.
5. The deep-learning-based radar radiation source signal sorting system of claim 1, wherein: the model training module inputs the grayscale time-frequency image training set into the model for training;
the model is optimized with a stochastic gradient descent algorithm using Nesterov momentum; it is trained for 15 epochs with a batch size of 32, with the other hyperparameters as follows: the initial learning rate is set to 0.1 and the weight decay to 1e-6; after 60% of the training epochs are completed, the learning rate is reduced to 0.01, and the Nesterov momentum is 0.9; a mean square cross entropy loss function is used to address the imbalance of the radar radiation source data set:
L_{CE}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}y_{i,k}\log y'_{i,k}
where L_CE is the mean square cross entropy loss value, y_{i,k} is the (one-hot) label of the ith sample for class k, y'_{i,k} is the network output for the ith sample and class k, K is the number of classes, and N is the total number of samples.
6. The deep-learning-based radar radiation source signal sorting system of claim 1, wherein: the data to be sorted are transformed and input into the multi-layer convolutional neural network, and the radiation source signal sorting result is output.
CN202211656352.3A 2022-12-22 2022-12-22 Radar radiation source signal sorting system based on deep learning Pending CN116087880A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211656352.3A CN116087880A (en) 2022-12-22 2022-12-22 Radar radiation source signal sorting system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211656352.3A CN116087880A (en) 2022-12-22 2022-12-22 Radar radiation source signal sorting system based on deep learning

Publications (1)

Publication Number Publication Date
CN116087880A true CN116087880A (en) 2023-05-09

Family

ID=86207412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211656352.3A Pending CN116087880A (en) 2022-12-22 2022-12-22 Radar radiation source signal sorting system based on deep learning

Country Status (1)

Country Link
CN (1) CN116087880A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116774154A (en) * 2023-08-23 2023-09-19 吉林大学 Radar signal sorting method
CN116774154B (en) * 2023-08-23 2023-10-31 吉林大学 Radar signal sorting method
CN117994636A (en) * 2024-04-03 2024-05-07 华中科技大学同济医学院附属协和医院 Puncture target identification method, system and storage medium based on interactive learning

Similar Documents

Publication Publication Date Title
WO2021134871A1 (en) Forensics method for synthesized face image based on local binary pattern and deep learning
CN109993220B (en) Multi-source remote sensing image classification method based on double-path attention fusion neural network
CN107563433B (en) Infrared small target detection method based on convolutional neural network
CN109902715B (en) Infrared dim target detection method based on context aggregation network
CN108182450A (en) A kind of airborne Ground Penetrating Radar target identification method based on depth convolutional network
CN110033473B (en) Moving target tracking method based on template matching and depth classification network
CN106709486A (en) Automatic license plate identification method based on deep convolutional neural network
CN111126134B (en) Radar radiation source deep learning identification method based on non-fingerprint signal eliminator
CN110619352A (en) Typical infrared target classification method based on deep convolutional neural network
CN110197205A (en) A kind of image-recognizing method of multiple features source residual error network
CN114564982B (en) Automatic identification method for radar signal modulation type
CN113705580B (en) Hyperspectral image classification method based on deep migration learning
CN112257741B (en) Method for detecting generative anti-false picture based on complex neural network
CN112613350A (en) High-resolution optical remote sensing image airplane target detection method based on deep neural network
CN112684427A (en) Radar target identification method based on serial quadratic reinforcement training
CN116087880A (en) Radar radiation source signal sorting system based on deep learning
CN115859142A (en) Small sample rolling bearing fault diagnosis method based on convolution transformer generation countermeasure network
CN112560948A (en) Eye fundus map classification method and imaging method under data deviation
CN112766381B (en) Attribute-guided SAR image generation method under limited sample
CN114444544A (en) Signal classification and identification method based on convolutional neural network and knowledge migration
CN115205602A (en) Zero-sample SAR target identification method based on optimal transmission distance function
CN115329821A (en) Ship noise identification method based on pairing coding network and comparison learning
CN114782822A (en) Method and device for detecting state of power equipment, electronic equipment and storage medium
CN110555483B (en) Polarized SAR classification method based on FW-DCGAN feature generation
CN114283296B (en) Target identification self-evaluation method based on structural similarity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination