CN112346056A - Resolution characteristic fusion extraction method and identification method of multi-pulse radar signals - Google Patents


Info

Publication number
CN112346056A
CN112346056A · CN202110028601A (application CN202110028601.3A)
Authority
CN
China
Prior art keywords
pulse radar
radar signal
characteristic diagram
resolution
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110028601.3A
Other languages
Chinese (zh)
Other versions
CN112346056B (en)
Inventor
李骥
张会强
王威
王新
欧建平
李刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha University of Science and Technology
Original Assignee
Changsha University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha University of Science and Technology filed Critical Changsha University of Science and Technology
Priority to CN202110028601.3A priority Critical patent/CN112346056B/en
Publication of CN112346056A publication Critical patent/CN112346056A/en
Application granted granted Critical
Publication of CN112346056B publication Critical patent/CN112346056B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G01S 13/89 — Radar or analogous systems specially adapted for mapping or imaging
    • G01S 7/36 — Means for anti-jamming, e.g. ECCM, i.e. electronic counter-counter measures
    • G01S 7/41 — Systems using analysis of echo signal for target characterisation; target signature; target cross-section
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045 — Neural network architectures; combinations of networks
    • G06N 3/084 — Learning methods; backpropagation, e.g. using gradient descent
    • G06F 2218/12 — Aspects of pattern recognition specially adapted for signal processing; classification; matching

Abstract

The application relates to a resolution feature fusion extraction method and an identification method for multi-pulse radar signals. The method comprises: performing maximum pooling and average pooling on the multi-pulse radar signal feature map over the spatial dimension, respectively, to obtain a first and a second channel weight coefficient; obtaining a channel weight feature map from the first channel weight coefficient, the second channel weight coefficient, the activation functions and the multi-pulse radar signal feature map, then performing maximum pooling and average pooling on the channel weight feature map over the channel dimension, respectively, to obtain a two-channel feature map; obtaining a high-resolution feature map from the two-channel feature map and the channel weight feature map, adding the high-resolution feature map to the multi-pulse radar signal feature map to obtain an information-complete high-resolution feature map, and performing feature fusion extraction with multiple convolution kernels to obtain the high-resolution features. With this method, the high-resolution features of multi-pulse radar signals can be extracted quickly and accurately.

Description

Resolution characteristic fusion extraction method and identification method of multi-pulse radar signals
Technical Field
The application relates to the technical field of radar signal identification, and in particular to a resolution feature fusion extraction method and an identification method for multi-pulse radar signals.
Background
With the rapid development of digital technology, radar signal modulation schemes are increasingly complex, modulation types increasingly numerous, and the external electromagnetic environment increasingly harsh, posing a severe challenge to electronic reconnaissance and electronic countermeasure systems. In electronic countermeasures, identifying intercepted signals quickly and accurately secures information control first and plays a vital role in battlefield situation awareness. Intercepted enemy signals, however, are not merely monopulse signals, so quickly and accurately identifying multi-pulse radar signals at low signal-to-noise ratios is a key issue in the field of electronic countermeasures.
Traditional radar signal identification generally matches conventional parameters using pulse description words (PDWs) and relies on hand-designed feature extraction algorithms and classifiers. With the increasingly complex electromagnetic environment of the modern battlefield, however, signal features are easily submerged by external interference, and the traditional approach requires complex feature design, which is hard to implement and generalizes poorly.
With the development of artificial intelligence, deep learning is widely applied, and the Convolutional Neural Network (CNN) is a hot spot of research. CNNs have representation-learning capability: they extract high-order features from the input, respond with translation invariance to input features, and identify similar features at different spatial positions, and they are widely used in image classification, semantic segmentation, target detection and other directions. However, the two-dimensional time-frequency maps of different radar signals share large similar regions while the distinguishing feature regions are relatively small, so a plain convolutional neural network identifies multi-pulse radar signals poorly and carries a large parameter count and computational load.
Disclosure of Invention
In view of the above, there is a need for a resolution feature fusion extraction method and an identification method for multi-pulse radar signals that can extract the high-resolution features of such signals.
A resolution feature fusion extraction method for a multi-pulse radar signal, the method comprising:
Acquiring a multi-pulse radar signal feature map.
Performing maximum pooling and average pooling on the multi-pulse radar signal feature map over the spatial dimension, respectively, to obtain a first channel weight coefficient and a second channel weight coefficient of the feature map.
Obtaining a channel weight feature map of the multi-pulse radar signal from the first channel weight coefficient, the second channel weight coefficient, a preset first activation function, a preset second activation function and the multi-pulse radar signal feature map.
Performing maximum pooling and average pooling on the channel weight feature map over the channel dimension, respectively, to obtain a first spatial weight feature matrix and a second spatial weight feature matrix of the channel weight feature map.
Obtaining a two-channel feature map from the first and second spatial weight feature matrices, and obtaining from it and the channel weight feature map a high-resolution feature map of the multi-pulse radar signal over space and channel.
Adding the high-resolution feature map to the multi-pulse radar signal feature map to obtain an information-complete high-resolution feature map.
Performing multi-scale fusion extraction with multiple convolution kernels on the information-complete high-resolution feature map to obtain the high-resolution features of the multi-pulse radar signal.
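Taken together, the steps above can be sketched in NumPy. This is a minimal illustration with hypothetical sizes: the learned 7×7 convolution of the spatial stage is replaced by a fixed sum of the two pooled channels, and the final multi-scale convolution stage is omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical input: a feature map with C channels and H x W spatial size.
C, H, W = 4, 8, 8
rng = np.random.default_rng(0)
x = rng.standard_normal((C, H, W))

# Spatial max/avg pooling -> two per-channel weight vectors.
w_max = x.max(axis=(1, 2))          # first channel weight coefficients, shape (C,)
w_avg = x.mean(axis=(1, 2))         # second channel weight coefficients, shape (C,)

# ReLU-activate, superpose, sigmoid, then rescale the channels.
w = sigmoid(relu(w_max) + relu(w_avg))   # one-dimensional channel weight vector
x_ch = x * w[:, None, None]              # channel weight feature map, same size as x

# Channel-wise max/avg pooling -> two H x W spatial weight matrices,
# stacked into a 2-channel map (a learned 7x7 convolution would follow here).
s = np.stack([x_ch.max(axis=0), x_ch.mean(axis=0)])   # shape (2, H, W)
s_weight = sigmoid(s.sum(axis=0))                     # stand-in for the 7x7 conv
x_hr = x_ch * s_weight                                # high-resolution feature map

# Skip connection restores information lost by the weighting.
out = relu(x_hr + x)
assert out.shape == x.shape
```

The channel and spatial weights are both bounded in (0, 1) by the sigmoid, so the weighting only rescales the input; the skip connection then re-adds the unweighted map before the ReLU.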
In one embodiment, the first activation function is a ReLU function and the second activation function is a sigmoid function. Obtaining the channel weight feature map of the multi-pulse radar signal from the first channel weight coefficient, the second channel weight coefficient, the first activation function, the second activation function and the multi-pulse radar signal feature map further comprises:
Activating the first and second channel weight coefficients with the ReLU function and superposing them to obtain a one-dimensional channel weight vector.
Multiplying the output obtained by activating the one-dimensional channel weight vector with the sigmoid function by the multi-pulse radar signal feature map to obtain the channel weight feature map of the multi-pulse radar signal.
In one embodiment, obtaining the two-channel feature map from the first and second spatial weight feature matrices, and obtaining from it and the channel weight feature map the high-resolution feature map of the multi-pulse radar signal over space and channel, further comprises: concatenating the first and second spatial weight feature matrices along the channel dimension into a two-channel feature map.
Activating with a sigmoid function the values obtained by convolving the two-channel feature map with a 7×7 convolution kernel, yielding a comprehensive spatial weight feature matrix.
Multiplying the comprehensive spatial weight feature matrix with the channel weight feature map to obtain the high-resolution feature map of the multi-pulse radar signal over space and channel.
In one embodiment, performing multi-scale fusion extraction with multiple convolution kernels on the high-resolution feature map to obtain the high-resolution features of the multi-pulse radar signal further comprises: performing multi-scale fusion extraction on the high-resolution feature map with 7×7, 5×5, 3×3 and 1×1 convolution kernels, respectively, to obtain the high-resolution features of the multi-pulse radar signal.
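The multi-scale branch structure can be sketched as follows. The kernels here are random placeholders (in the network they are learned), the convolution is a naive 'same'-padded loop for illustration, and a single channel is processed; only the shape behavior is meaningful.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2D cross-correlation (illustration only)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))   # zero padding keeps the output size
    out = np.empty_like(x)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i+kh, j:j+kw] * k)
    return out

rng = np.random.default_rng(1)
fmap = rng.standard_normal((8, 8))   # one channel of the high-resolution map

# One hypothetical random kernel per scale: 7x7, 5x5, 3x3, 1x1.
branches = [conv2d_same(fmap, rng.standard_normal((k, k)) / k**2)
            for k in (7, 5, 3, 1)]

# Multi-scale fusion: each odd-sized 'same' convolution preserves H x W,
# so the branch outputs can be stacked/fused directly.
fused = np.stack(branches)
assert fused.shape == (4, 8, 8)
```

Because every branch preserves the spatial size, the outputs of the four scales align position-by-position and can be fused without resampling.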
A method of identifying a multi-pulse radar signal, the method comprising:
a multi-pulse radar signal is acquired.
Performing CWD (Choi-Williams distribution) time-frequency analysis on the multi-pulse radar signal to obtain a two-dimensional time-frequency map of the signal, and labeling the two-dimensional time-frequency map as a training sample.
Constructing a high-resolution feature fusion extraction network comprising a convolution network, a feature extraction network and an output network.
The feature extraction network comprises an attention feature extraction module and a resolution feature fusion feature extraction module, and is used to extract the high-resolution features of the training samples. The attention feature extraction module extracts the spatial and channel attention features of a training sample to obtain the attention feature map of the multi-pulse radar signal; the resolution feature fusion feature extraction module executes any one of the resolution feature fusion extraction methods above and extracts the resolution features of the attention feature map.
Training the high-resolution feature fusion extraction network by back-propagation on the training samples to obtain a multi-pulse radar signal identification model.
Acquiring the multi-pulse radar signal to be detected.
Performing CWD time-frequency analysis on the multi-pulse radar signal to be detected to obtain the two-dimensional time-frequency map to be detected.
Inputting the two-dimensional time-frequency map to be detected into the multi-pulse radar signal identification model to obtain the category of the multi-pulse radar signal.
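For reference, the Choi-Williams distribution used in the time-frequency analysis step is commonly written (notation is not from the original document) as:

```latex
\mathrm{CWD}_x(t,\omega)
 = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}
   \sqrt{\frac{\sigma}{4\pi\tau^{2}}}\,
   e^{-\sigma(u-t)^{2}/(4\tau^{2})}\,
   x\!\left(u+\tfrac{\tau}{2}\right)
   x^{*}\!\left(u-\tfrac{\tau}{2}\right)
   e^{-j\omega\tau}\,du\,d\tau
```

The exponential kernel, controlled by σ, suppresses the cross terms that a plain Wigner-Ville distribution produces for multi-component signals, which is why the CWD is a common choice for multi-pulse radar waveforms; smaller σ suppresses cross terms more strongly at some cost in time-frequency resolution.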
In one embodiment, the attention feature extraction module consists of six layers: a first convolutional layer, a second convolutional layer, a channel pooling layer, a spatial pooling layer, a third convolutional layer and a fourth convolutional layer.
The convolution kernels of the first, second, third and fourth convolutional layers are 1×1, 3×3, 7×7 and 1×1, respectively.
In one embodiment, the method further comprises inputting the training sample into the convolution network to obtain convolution features, then inputting the convolution features into the feature extraction network and outputting high-resolution features. The feature extraction network consists, in order, of n1 attention feature extraction modules and a resolution feature fusion extraction module, n2 attention feature extraction modules and a resolution feature fusion extraction module, n3 attention feature extraction modules and a resolution feature fusion extraction module, and n4 attention feature extraction modules, where n1, n2, n3 and n4 are integers greater than 0 [a further constraint relating them is given only as an equation image in the source and is not reproduced]. The high-resolution features are input into the output network, which outputs a classification prediction result, and back-propagation training is performed according to the classification prediction result and the training samples to obtain the multi-pulse radar signal identification model.
In one embodiment, training the high-resolution feature fusion extraction network with the training samples to obtain the multi-pulse radar signal identification model further comprises inputting the high-resolution features into the global average pooling layer of the output network to obtain a pooled feature map [its size is given only as an equation image in the source and is not reproduced].
The pooled feature map is input into the fully-connected layer of the output network, and the classification prediction result is output.
Back-propagation training is performed according to the classification prediction result and the training samples to obtain the multi-pulse radar signal identification model.
In one embodiment, the method further comprises computing Softmax over the values obtained by feeding the pooled feature map through the fully-connected layer of the output network, and outputting the classification prediction result.
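The Softmax step can be sketched as follows; the logits are hypothetical fully-connected outputs for ten signal classes (the class count is an assumption for illustration).

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits from the fully-connected layer for 10 signal classes.
logits = np.array([2.1, 0.3, -1.0, 0.5, 0.0, 1.7, -0.2, 0.9, -1.5, 0.4])
probs = softmax(logits)
pred = int(np.argmax(probs))     # predicted signal category

assert abs(probs.sum() - 1.0) < 1e-9
```

Softmax maps the raw fully-connected outputs to a probability distribution over the signal categories; the predicted category is the index of the largest probability.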
An apparatus for resolution feature fusion extraction of a multi-pulse radar signal, the apparatus comprising:
A multi-pulse radar signal feature map acquisition module, used to acquire the multi-pulse radar signal feature map.
A channel weight coefficient determination module, used to perform maximum pooling and average pooling on the multi-pulse radar signal feature map over the spatial dimension, respectively, to obtain the first channel weight coefficient and the second channel weight coefficient of the feature map.
A channel weight feature map determination module, used to obtain the channel weight feature map of the multi-pulse radar signal from the first channel weight coefficient, the second channel weight coefficient, a preset first activation function, a preset second activation function and the multi-pulse radar signal feature map.
A weight feature matrix determination module, used to perform maximum pooling and average pooling on the channel weight feature map over the channel dimension, respectively, to obtain the first and second spatial weight feature matrices of the channel weight feature map.
A high-resolution feature map determination module, used to obtain a two-channel feature map from the first and second spatial weight feature matrices, and to obtain from it and the channel weight feature map the high-resolution feature map of the multi-pulse radar signal over space and channel.
An information-complete high-resolution feature map determination module, used to add the high-resolution feature map to the multi-pulse radar signal feature map to obtain the information-complete high-resolution feature map.
A high-resolution feature extraction module, used to perform multi-scale fusion extraction with multiple convolution kernels on the information-complete high-resolution feature map to obtain the high-resolution features of the multi-pulse radar signal.
In the method, maximum pooling and average pooling are performed on the multi-pulse radar signal feature map over the spatial dimension to obtain a first and a second channel weight coefficient. A channel weight feature map is obtained from the two channel weight coefficients, the two activation functions and the multi-pulse radar signal feature map; it highlights the highly correlated channels in the feature map, suppresses the irrelevant ones, and focuses on the channels with higher resolution. Maximum pooling and average pooling over the channel dimension then yield a two-channel feature map, from which, together with the channel weight feature map, a high-resolution feature map over space and channel is obtained. Adding this map to the multi-pulse radar signal feature map yields an information-complete high-resolution feature map and counters the image information loss and network degradation that come with increasing network depth. Finally, multi-scale fusion extraction with multiple convolution kernels yields the high-resolution features of the multi-pulse radar signal.
Drawings
FIG. 1 is a schematic flow chart of a resolution feature fusion extraction method for a multi-pulse radar signal in one embodiment;
FIG. 2 is a diagram of a network structure implementing the resolution feature fusion extraction method in one embodiment;
FIG. 3 is a schematic flow chart of a method for identifying a multi-pulse radar signal in another embodiment;
FIG. 4 is a block diagram of an apparatus for resolution feature fusion extraction of a multi-pulse radar signal in one embodiment;
FIG. 5 shows two-dimensional time-frequency maps of 10 radar signals in one embodiment;
FIG. 6 plots the recognition rate of HRF-Nets at different network depths in one embodiment;
FIG. 7 is the confusion matrix of HRF-Net157 at −14 dB noise in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to illustrate the application, not to limit it.
In one embodiment, as shown in FIG. 1, a resolution feature fusion extraction method for a multi-pulse radar signal is provided, comprising the following steps:
Step 100: acquire a multi-pulse radar signal feature map.
The multi-pulse radar signal feature map may be a time-frequency feature map obtained by time-frequency processing of the multi-pulse radar signal, or the feature map that a training sample produces at some layer of the feature extraction network.
Step 102: perform maximum pooling and average pooling on the multi-pulse radar signal feature map over the spatial dimension, respectively, to obtain the first and second channel weight coefficients of the feature map.
Maximum pooling (MaxPool) and average pooling (AvgPool) are applied to the input multi-pulse radar signal feature map over the spatial dimension; compressing the spatial dimension yields the first and second channel weight coefficients.
Both channel weight coefficients are one-dimensional vectors.
Step 104: obtain the channel weight feature map of the multi-pulse radar signal from the first channel weight coefficient, the second channel weight coefficient, the preset first activation function, the preset second activation function and the multi-pulse radar signal feature map.
The first and second channel weight coefficients are passed through ReLU and sigmoid activation functions to strengthen the nonlinear feature expression capability of the network. The two channel weight coefficients are then added for joint analysis, which highlights the highly correlated channels, suppresses the irrelevant ones, and focuses on the more discriminative input channels. Finally, the resulting one-dimensional channel weight vector is multiplied with the input multi-pulse radar signal feature map, keeping the input size unchanged, to obtain the channel weight feature map of the multi-pulse radar signal.
Step 106: perform maximum pooling and average pooling on the channel weight feature map over the channel dimension, respectively, to obtain the first and second spatial weight feature matrices of the channel weight feature map.
Over the channel dimension, MaxPool and AvgPool are applied to the channel weight feature map produced by the previous layer; compressing the channel dimension yields the first and second spatial weight feature matrices.
Both spatial weight feature matrices are two-dimensional.
Step 108: obtain a two-channel feature map from the first and second spatial weight feature matrices, and from it and the channel weight feature map obtain the high-resolution feature map of the multi-pulse radar signal over space and channel.
The first and second spatial weight feature matrices are concatenated along the channel dimension into a two-channel feature map. A convolution is then applied and activated with a sigmoid function, emphasizing spatial positions of high correlation, weakening those of low correlation, and focusing on the regions of the input image with higher resolution. Finally, the resulting comprehensive spatial weight feature matrix is multiplied with the channel weight feature map to obtain the high-resolution feature map of the multi-pulse radar signal over space and channel.
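The spatial-weighting stage of this step, in isolation, can be sketched as follows (NumPy, hypothetical sizes; a fixed average of the two stacked channels stands in for the learned 7×7 convolution):

```python
import numpy as np

rng = np.random.default_rng(3)
out1 = rng.standard_normal((4, 8, 8))   # channel weight feature map from step 104

m1 = out1.max(axis=0)                   # MaxPool along the channel dimension, (H, W)
m2 = out1.mean(axis=0)                  # AvgPool along the channel dimension, (H, W)
stacked = np.stack([m1, m2])            # two-channel feature map, (2, H, W)

# A learned 7x7 convolution normally reduces the 2 channels to 1;
# a fixed per-position average stands in for it here.
s = 1.0 / (1.0 + np.exp(-stacked.mean(axis=0)))   # comprehensive spatial weight matrix

out2 = out1 * s                         # high-resolution map over space and channel
assert out2.shape == out1.shape
```

The spatial weight matrix `s` broadcasts across all channels, so each spatial position is scaled by the same weight in every channel.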
Step 110: add the high-resolution feature map to the multi-pulse radar signal feature map to obtain the information-complete high-resolution feature map.
As network depth grows, problems such as image information loss and network degradation arise; adding a skip connection handles them well. The obtained feature map Out2 is therefore added to the initially input multi-pulse radar signal feature map and passed through the ReLU activation function, yielding the high-resolution feature map over space and channel.
Step 112: perform multi-scale fusion extraction with multiple convolution kernels on the information-complete high-resolution feature map to obtain the high-resolution features of the multi-pulse radar signal.
Multi-scale feature extraction is applied to the feature map using several convolution kernels. A larger kernel has a larger receptive field and represents semantic information well; a smaller kernel has a smaller receptive field, represents geometric detail well, and gives high resolution. Feature fusion extraction over the high-resolution feature map with kernels of several sizes therefore extracts the high-resolution features.
In one embodiment, step 104 further comprises: the first activation function is a ReLU function and the second a sigmoid function. The first and second channel weight coefficients are activated with the ReLU function and superposed to obtain a one-dimensional channel weight vector.
The output obtained by activating the one-dimensional channel weight vector with the sigmoid function is multiplied with the initially input multi-pulse radar signal feature map to obtain the channel weight feature map of the multi-pulse radar signal.
In one embodiment, step 108 further includes: splicing the first spatial weight feature matrix and the second spatial weight feature matrix together along the channel dimension to obtain a feature map with a channel number of 2; activating, with a sigmoid function, the value obtained by convolving this two-channel feature map with a 7 × 7 convolution kernel to obtain a comprehensive spatial weight feature matrix; and multiplying the comprehensive spatial weight feature matrix by the channel weight feature map to obtain a feature map of the multi-pulse radar signal with high resolution in both space and channel.
The comprehensive spatial weight feature matrix is a two-dimensional spatial weight feature matrix.
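The spatial-weighting step can likewise be sketched in NumPy; the fixed averaging weights of the 7 × 7 kernel are a placeholder for what would be learned in training, and the naive loop convolution is only illustrative:

```python
import numpy as np

def spatial_attention(out1: np.ndarray, kernel: np.ndarray = None) -> np.ndarray:
    """out1: channel weight feature map of shape (C, H, W). kernel: (2, 7, 7)
    weights of the 7x7 convolution (hypothetical fixed values by default)."""
    c, h, w = out1.shape
    max_map = out1.max(axis=0)            # first spatial weight feature matrix
    avg_map = out1.mean(axis=0)           # second spatial weight feature matrix
    feat2 = np.stack([max_map, avg_map])  # spliced: feature map with channel number 2
    if kernel is None:
        kernel = np.full((2, 7, 7), 1.0 / 98)
    pad = 3
    padded = np.pad(feat2, ((0, 0), (pad, pad), (pad, pad)))
    conv = np.zeros((h, w))
    for i in range(h):                    # naive 'same' 7x7 convolution
        for j in range(w):
            conv[i, j] = (padded[:, i:i + 7, j:j + 7] * kernel).sum()
    s = 1.0 / (1.0 + np.exp(-conv))       # sigmoid -> comprehensive spatial weight matrix
    return out1 * s[None, :, :]           # Out2: high resolution in space and channel
```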
In one embodiment, step 112 further comprises: performing multi-scale fusion extraction on the high-resolution feature map with 7 × 7, 5 × 5, 3 × 3 and 1 × 1 convolution kernels, respectively, to obtain the high-resolution features of the multi-pulse radar signal.
Reason for the choice of convolution kernels: even-sized kernels shift image position information and make the channel number awkward to determine, so odd-sized kernels are generally used. The 7 × 7 and 5 × 5 kernels are large kernels while 3 × 3 and 1 × 1 are small kernels, so the complementary advantages of large- and small-kernel feature extraction are fused; choosing still larger kernels would only increase the parameter count and computation without meaningful benefit.
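The multi-scale extraction with 7 × 7, 5 × 5, 3 × 3 and 1 × 1 kernels can be sketched as below; mean-valued kernels and channel concatenation as the fusion step are our assumptions, since the patent does not fix either:

```python
import numpy as np

def conv_same(x: np.ndarray, k: int) -> np.ndarray:
    """Naive 'same' convolution of a 2-D map x with a k x k mean kernel."""
    pad = k // 2
    p = np.pad(x, pad)
    h, w = x.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def multiscale_extract(out3: np.ndarray) -> np.ndarray:
    """out3: (C, H, W). Apply 7x7, 5x5, 3x3 and 1x1 kernels per channel and
    fuse the four branches by channel concatenation (an assumption)."""
    branches = [np.stack([conv_same(ch, k) for ch in out3]) for k in (7, 5, 3, 1)]
    return np.concatenate(branches, axis=0)
```

The large-kernel branches capture context over a wide receptive field while the 1 × 1 branch passes geometric detail through unchanged.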
In one embodiment, as shown in fig. 2, a network structure that implements the resolution feature fusion extraction method for multi-pulse radar signals is provided.
First, MaxPool and AvgPool are applied simultaneously to the input feature map (Base) in the spatial dimension; compressing the spatial dimension yields two one-dimensional vectors that represent the channel weight coefficients of the two pooled feature maps. The two channel weight coefficients are then passed through a ReLU activation function and added together for joint analysis, which highlights highly correlated channels, suppresses irrelevant ones and indicates which input channels carry higher resolution; a Sigmoid activation function then strengthens the nonlinear feature expression capability of the network. Finally, the resulting one-dimensional channel weight vector is multiplied by the input feature map, keeping the input size unchanged, to obtain the channel weight feature map Out1.
In the channel dimension, MaxPool and AvgPool are then applied to the channel weight feature map Out1 obtained from the previous layer; compressing the channel dimension yields two two-dimensional spatial weight feature matrices, which are spliced together along the channel dimension to obtain a feature map with a channel number of 2. A convolution is then performed with Conv7 and activated with a Sigmoid function to obtain a comprehensive two-dimensional spatial weight feature matrix, in which spatial positions with high correlation are emphasized, positions with low correlation are weakened, and the regions of the input image with higher resolution are brought into focus. Finally, the comprehensive two-dimensional spatial weight feature matrix is multiplied by Out1, keeping the input size unchanged, to obtain Out2, the feature map of the multi-pulse radar signal with high resolution in both space and channel.
As the network depth increases, problems such as loss of image information and network degradation arise, and adding a skip connection alleviates them well. The resulting high-resolution feature map Out2 is therefore added to the initially input feature map Base and passed through the ReLU activation function, yielding Out3, a feature map with more complete information that remains high-resolution in space and channel. Multi-scale feature extraction is then performed on Out3 with Conv7, Conv5, Conv3 and Conv1: larger convolution kernels have larger receptive fields and strong semantic representation, while smaller kernels have smaller receptive fields, strong representation of geometric detail and high resolution. Convolution kernels of several sizes are therefore used for fusion extraction on Out3 to extract high-resolution features.
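The skip connection can be sketched as follows (a minimal NumPy sketch; the element-wise addition and ReLU follow the text, the function name is ours):

```python
import numpy as np

def residual_fuse(out2: np.ndarray, base: np.ndarray) -> np.ndarray:
    """Add the high-resolution feature map Out2 to the initial input Base,
    then apply ReLU to obtain Out3 (complete information, high resolution)."""
    return np.maximum(out2 + base, 0.0)
```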
In one embodiment, as shown in fig. 3, there is provided a method of identifying a multi-pulse radar signal, the method comprising:
step 300: a multi-pulse radar signal is acquired.
Through GNU Radio with USRP N210 and USRP-LW N210 Universal Software Radio Peripherals, the radar signal transmitting and receiving processes are simulated, and 10 kinds of multi-pulse radar signals are generated at signal-to-noise ratios between -14 dB and -4 dB: Barker, Chaotic, EQFM, Frank, FSK, LFM, LOFM, OFDM, P1 and P2. The resulting multi-pulse radar signals are close to real ones, so the reliability is high.
Step 302: performing CWD time-frequency analysis on the multi-pulse radar signal to obtain a two-dimensional time-frequency graph of the multi-pulse radar signal; and marking the two-dimensional time-frequency graph as a training sample.
The two-dimensional time-frequency diagram is a digital image; it loses little image information and facilitates processing and analysis by computer. Time-frequency analysis transforms a one-dimensional time-domain signal onto a two-dimensional time-frequency plane, and different time-frequency analysis methods have their own time-frequency characteristics. The Gabor transform localizes time and frequency simultaneously and describes transient structure in the signal well, with a frequency resolution determined entirely by its Gaussian window. The continuous wavelet transform (CWT) correlates the original signal with a family of dilated and translated wavelets; by adjusting the scale, wavelets of different time-frequency widths can be matched to different parts of the original signal, achieving local analysis. The Wigner-Ville distribution (WVD) spreads the signal's energy over the time-frequency plane and has good time-frequency focusing, but suffers from cross-term interference; its various smoothed variants suppress the cross terms to some extent at the cost of reduced time-frequency focus.
To improve the signal recognition rate, the invention needs high-resolution images, so the Choi-Williams distribution (CWD), one of the Cohen's class distributions, is selected; it uses an exponential kernel function to filter out cross terms, exhibits minimal cross-term interference, and offers higher definition and resolution for different signals.
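For reference, the Choi-Williams distribution of a signal $x(t)$ may be written as follows (notation is ours, not from the patent; $\sigma$ is the scale factor of the exponential kernel that controls cross-term suppression):

```latex
\mathrm{CWD}_x(t,\omega)=\iint
\sqrt{\frac{\sigma}{4\pi\tau^{2}}}\,
\exp\!\left(-\frac{\sigma\,(s-t)^{2}}{4\tau^{2}}\right)
x\!\left(s+\frac{\tau}{2}\right)
x^{*}\!\left(s-\frac{\tau}{2}\right)
e^{-j\omega\tau}\,\mathrm{d}s\,\mathrm{d}\tau
```

Larger $\sigma$ retains more cross terms but sharpens auto-terms; smaller $\sigma$ suppresses cross terms more strongly.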
Step 304: constructing a high-resolution feature fusion extraction network comprising a convolution network, a feature extraction network and an output network. The feature extraction network comprises an attention feature extraction module and a resolution feature fusion extraction module, and is used for extracting the high-resolution features of the training samples; the attention feature extraction module extracts the spatial attention features and channel attention features of the training samples to obtain an attention feature map of the multi-pulse radar signal; the resolution feature fusion extraction module executes the resolution feature fusion extraction method of any of the above embodiments and extracts the resolution features of the attention feature map.
The convolutional network includes convolutional layers and a maximum pooling layer. The output network includes a global average pooling layer and a fully connected layer.
Step 306: and training the high-resolution feature fusion extraction network according to the training samples to obtain a multi-pulse radar signal identification model.
Inputting the training sample into a convolution network to obtain convolution characteristics; and inputting the convolution characteristics into a characteristic extraction network, outputting high-resolution characteristics, inputting the high-resolution characteristics into an output network, outputting a classification prediction result, and performing reverse training according to the classification prediction result and a training sample to obtain a multi-pulse radar signal identification model.
Step 308: and acquiring the multi-pulse radar signal to be detected.
Step 310: and performing CWD time-frequency analysis on the multi-pulse radar signal to be detected to obtain a two-dimensional time-frequency diagram to be detected.
Step 312: and inputting the two-dimensional time-frequency diagram to be detected into the multi-pulse radar signal identification model to obtain the category of the multi-pulse radar signal.
In one embodiment, step 304 further comprises that the attention feature extraction module is composed of 6 layers, including: a first convolution layer, a second convolution layer, a channel pooling layer, a space pooling layer, a third convolution layer and a fourth convolution layer; the convolution kernel of the first convolution layer is convolution kernel 1 × 1, the convolution kernel of the second convolution layer is convolution kernel 3 × 3, the convolution kernel of the third convolution layer is convolution kernel 7 × 7, and the convolution kernel of the fourth convolution layer is convolution kernel 1 × 1.
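The six-layer structure of this embodiment can be written as a simple layer specification; this data structure is a hypothetical representation for illustration, not prescribed by the patent:

```python
# Hypothetical specification of the 6-layer attention feature extraction module
attention_module_layers = [
    ("conv", {"kernel": (1, 1)}),   # first convolution layer
    ("conv", {"kernel": (3, 3)}),   # second convolution layer
    ("channel_pool", {}),           # channel pooling layer
    ("spatial_pool", {}),           # spatial pooling layer
    ("conv", {"kernel": (7, 7)}),   # third convolution layer
    ("conv", {"kernel": (1, 1)}),   # fourth convolution layer
]
```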
In one embodiment, step 306 further includes inputting the training samples into the convolution network to obtain convolution features, and inputting the convolution features into the feature extraction network to output high-resolution features. The feature extraction network comprises, connected in series: a attention feature extraction modules, a multi-pulse radar signal resolution feature fusion extraction module, b attention feature extraction modules, a multi-pulse radar signal resolution feature fusion extraction module, c attention feature extraction modules, a multi-pulse radar signal resolution feature fusion extraction module and d attention feature extraction modules, wherein a, b, c and d are each integers greater than 0. The high-resolution features are input into the output network to output a classification prediction result, and reverse training is carried out according to the classification prediction result and the training samples to obtain the multi-pulse radar signal identification model.
In one embodiment, the steps further comprise inputting the high-resolution features into the global average pooling layer of the output network to obtain a pooled feature map; inputting this feature map into the fully connected layer of the output network to output a classification prediction result; and carrying out reverse training according to the classification prediction result and the training samples to obtain the multi-pulse radar signal identification model.
In one embodiment, the steps further comprise calculating, with Softmax, the value obtained by inputting the pooled feature map into the fully connected layer of the output network, and outputting the classification prediction result.
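The output head described here (global average pooling, a single fully connected layer, then Softmax over the 10 signal classes) can be sketched in NumPy; `fc_weight` and `fc_bias` stand in for learned parameters and are assumptions:

```python
import numpy as np

def output_head(features: np.ndarray, fc_weight: np.ndarray, fc_bias: np.ndarray) -> np.ndarray:
    """features: (C, H, W). fc_weight: (10, C), fc_bias: (10,) -- hypothetical
    learned parameters. Returns the classification prediction (10 probabilities)."""
    gap = features.mean(axis=(1, 2))     # global average pooling: one value per channel
    logits = fc_weight @ gap + fc_bias   # single fully connected layer
    e = np.exp(logits - logits.max())    # numerically stable Softmax
    return e / e.sum()
```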
It should be understood that although the steps in the flowcharts of figs. 1 and 3 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not restricted to the exact order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 1 and 3 may comprise multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or in alternation with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4, there is provided a device for extracting resolution feature fusion of a multi-pulse radar signal, including: the device comprises a multi-pulse radar signal characteristic diagram acquisition module, a channel weight coefficient determination module, a channel weight characteristic diagram determination module, a weight characteristic matrix determination module, a high-resolution characteristic diagram determination module with complete information and a high-resolution characteristic extraction module, wherein:
the multi-pulse radar signal characteristic diagram acquisition module: the method is used for acquiring the multi-pulse radar signal characteristic diagram.
A channel weight coefficient determination module: used for respectively carrying out maximum pooling and average pooling on the multi-pulse radar signal characteristic diagram in the spatial dimension to obtain a first channel weight coefficient and a second channel weight coefficient of the multi-pulse radar signal characteristic diagram.
A channel weight feature map determination module: the channel weight characteristic diagram of the multi-pulse radar signal is obtained according to the first channel weight coefficient, the second channel weight coefficient, the preset first activation function, the preset second activation function and the multi-pulse radar signal characteristic diagram.
The weight feature matrix determination module: and the method is used for respectively carrying out maximum value pooling and average value pooling on the channel weight characteristic diagram on the channel dimension to obtain a first space weight characteristic matrix and a second space weight characteristic matrix of the channel weight characteristic diagram.
A high resolution profile determination module: the characteristic diagram is used for obtaining a characteristic diagram with the channel number of 2 according to the first space weight characteristic matrix and the second space weight characteristic matrix; and obtaining a high-resolution characteristic diagram of the multi-pulse radar signal on the space and the channel according to the characteristic diagram with the channel number of 2 and the channel weight characteristic diagram.
The complete information high-resolution characteristic diagram determining module: and the method is used for adding the high-resolution characteristic diagram and the multi-pulse radar signal characteristic diagram to obtain the high-resolution characteristic diagram with complete information.
High-resolution feature extraction module: the method is used for carrying out multi-scale fusion extraction by adopting various convolution kernels according to the high-resolution characteristic diagram with complete information to obtain the high-resolution characteristic of the multi-pulse radar signal.
In one embodiment, the channel weight feature map determination module further includes: the first activation function is a ReLU function and the second activation function is a sigmoid function; the first channel weight coefficient and the second channel weight coefficient are activated by the ReLU function and superposed to obtain a one-dimensional channel weight vector.
The output value obtained by activating the one-dimensional channel weight vector with the sigmoid function is then multiplied by the initially input multi-pulse radar signal characteristic diagram to obtain the channel weight characteristic diagram of the multi-pulse radar signal.
In one embodiment, the high-resolution feature map determining module further includes a feature map obtaining module configured to obtain a feature map with a channel number of 2 according to the first spatial weight feature matrix and the second spatial weight feature matrix; and obtaining a high-resolution characteristic diagram of the multi-pulse radar signal on the space and the channel according to the characteristic diagram with the channel number of 2 and the channel weight characteristic diagram.
In one embodiment, the high-resolution feature extraction module further comprises: performing multi-scale fusion extraction on the high-resolution feature map with 7 × 7, 5 × 5, 3 × 3 and 1 × 1 convolution kernels, respectively, to obtain the high-resolution features of the multi-pulse radar signal.
For specific limitations of the device for extracting the fusion of the resolution features of the multi-pulse radar signal, reference may be made to the above limitations of the method for extracting the fusion of the resolution features of the multi-pulse radar signal, and details are not repeated here. All or part of the modules in the resolution characteristic fusion extraction device of the multi-pulse radar signals can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, three high-resolution feature fusion extraction network (HRF-Net) structures are constructed: HRF-Net157, HRF-Net187 and HRF-Net217. Here "Conv" denotes a composite structure comprising convolution, batch normalization and an activation function; C-[MaxPool, AvgPool] denotes compressing the spatial dimension of the image to obtain feature maps carrying channel weight coefficients; and S-[MaxPool, AvgPool] denotes compressing the channel dimension of the image to obtain feature maps carrying spatial weight coefficients. The network structure is shown in Table 1.
TABLE 1 HRF-Net parameters
When 10 kinds of multi-pulse radar signals are identified with different classifiers, the parameter counts and computation of the networks differ, depending on the size of the network's final output feature map. With 3 fully connected layers the classifier holds far more parameters than with a single fully connected layer; and when GAP is used for classification, global average pooling replaces the fully connected layers, and since pooling layers have no parameters the parameter count is further reduced and memory is saved. The parameters of the different networks are shown in Table 2, and the computation of the different networks in Table 3.
Table 2: parameter quantities of different networks
Table 3: computational load of different networks
As can be seen from Table 2, the parameter count of networks of the same type increases gradually with network depth, indicating that depth has a certain influence on the parameter count. The VGG-Nets use three fully connected layers as the classifier while the HRF-Nets use GAP plus a single fully connected layer; although VGG13 is only 13 layers deep, its parameter count is 4.18 times that of HRF-Net157, 3.64 times that of HRF-Net187 and 3.16 times that of HRF-Net217, so the classifier is a key factor in the network parameter count. The parameter counts of SKNet152, SENet152 and ResNet152 are all larger than that of HRF-Net157; the parameter count of ResNet152 is 1.89 times that of HRF-Net157, about 28.37 million more. ResNet152 and HRF-Net157 use the same classifier, yet ResNet152, though shallower, still has more parameters than HRF-Net157, because ResNet152 consists mostly of convolutional layers; ignoring bias, the parameter count of a convolutional layer is k × k × C_in × C_out, where k is the convolution kernel size, C_in the number of input channels and C_out the number of output channels. The HRF-Nets use a large number of pooling layers, which have no parameters, so the parameter count of the HRF-Nets is reduced more markedly at greater depths.
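The parameter-count formula for a convolutional layer can be expressed directly in code (the function name is ours):

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Parameter count of a k x k convolutional layer, bias ignored:
    k * k * C_in * C_out."""
    return k * k * c_in * c_out

# e.g. a 3x3 convolution from 64 to 128 channels: conv_params(3, 64, 128) -> 73728
```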
As can be seen from Table 3, the VGG networks are very computation-heavy: the 13-layer VGG performs 11.321 billion floating-point operations, 1.56 times as many as the 157-layer HRF-Net. HRF-Net157 is deeper than ResNet152, yet the computation of ResNet152 is 1.59 times that of HRF-Net157, about 4.309 billion operations more. This is because ResNet152 uses a large number of convolutional layers; ignoring bias, the computation of a convolutional layer is H_out × W_out × k × k × C_in × C_out, where H_out and W_out are the height and width of the output feature map. The HRF-Nets contain many pooling layers, whose computational cost is comparatively small, so HRF-Net157 requires less computation than ResNet152, with the gap widening as network depth increases. Network structure and depth have a large influence on computation: HRF-Net217 requires 30.32% more computation than HRF-Net157, and HRF-Net187 15.21% more. Radar systems demand high real-time performance; in small platforms such as missile-borne equipment in particular, memory is scarce and the hardware cannot support excessive parameters and computation. Since HRF-Net157 has relatively few parameters and little computation, it offers the best cost-performance when the loss in signal recognition rate is small, and is the better choice.
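The computation formula for a convolutional layer can likewise be coded (function name ours; the count is multiply-accumulate operations with bias ignored):

```python
def conv_flops(h_out: int, w_out: int, k: int, c_in: int, c_out: int) -> int:
    """Computation of a convolutional layer, bias ignored:
    H_out * W_out * k * k * C_in * C_out."""
    return h_out * w_out * k * k * c_in * c_out
```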
In this embodiment, the experimental data set is generated by simulating the transmitting and receiving processes of real radar signals with GNU Radio, USRP N210 and USRP-LW N210. Since intercepted enemy signals contain only a few pulses, the pulse number is set to 4, so that the generated radar signals are closer to real ones. CWD is applied to the generated signals to obtain TFIs; a TFI is a digital image with little loss of image information, which facilitates processing and analysis by computer.
The radar signal data set in this embodiment contains 10 types of signals; each type yields 2880 TFIs, with 288 samples every 2 dB within the signal-to-noise range of -14 to -4 dB, for 28800 samples in total. Fig. 5 shows the TFIs of the 10 types of radar signals after the CWD: (a) Barker; (b) Chaotic; (c) EQFM; (d) Frank; (e) FSK; (f) LFM; (g) LOFM; (h) OFDM; (i) P1; (j) P2.
As can be seen from fig. 5, the TFIs of different radar signals all share large numbers of similar repeated regions, while the regions of distinctive features are relatively small; the distinguishing feature fusion extraction (DFFE) module designed herein therefore focuses on extracting strongly distinctive regional features, thereby improving the signal recognition rate.
In the experiments, the training and test set samples are downsampled to a fixed resolution of 224 × 224 and then augmented by random horizontal flipping, random vertical flipping and random 90° rotation, expanding the data set by a factor of 3 and preventing network overfitting. To keep the experiments uniform, they are all performed on the same platform and environment; the signal generation platform is shown in Table 4.
TABLE 4 Signal generation platform parameters
The software and hardware configuration of the experimental platform is shown in Table 5. In the experiments, the batch size is 16, the learning rate 0.001, the weight decay 5e-4 and the momentum 0.9, trained for 60 epochs in total.
TABLE 5 test platform parameters
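The reported training hyperparameters can be collected in one place (interpreting the source's "blocksize" as batch size and "impulse" as momentum):

```python
# Training hyperparameters reported in the experiments
train_config = {
    "batch_size": 16,
    "learning_rate": 1e-3,
    "weight_decay": 5e-4,
    "momentum": 0.9,
    "epochs": 60,
}
```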
To make the radar signal recognition more realistic, noise is added to the signals to simulate interference from a complex external environment, keeping the signal-to-noise ratio between -14 and -4 dB. GNU Radio with USRP N210 and USRP-LW N210 then generates more realistic multi-pulse radar signals. Multi-pulse radar signals at different signal-to-noise ratios are identified by HRF-Nets of different depths; the experimental results are shown in fig. 6.
As can be seen from fig. 6, at signal-to-noise ratios of -8 dB and above the HRF-Nets achieve signal recognition rates over 99%; at -14 dB, where noise already interferes heavily with the signal, the recognition rate is still over 97%, indicating that the HRF-Net networks are robust. The recognition rate of HRF-Net157 is slightly lower than that of the other two networks, but within 1%, indicating that as network depth keeps increasing the capacity for extracting signal features approaches saturation and the recognition rate no longer improves significantly. HRF-Net217 and HRF-Net187 have 32.46% and 14.93% more parameters than HRF-Net157, respectively, and 30.32% and 15.21% more computation. The experiments show that HRF-Net217 has the highest recognition rate, but its improvement over HRF-Net157 is under 1% while the parameters and computation grow substantially; on balance, HRF-Net157 offers the best cost-performance. HRF-Net157 is further compared with other CNN networks, with the results shown in Table 6.
Table 6 Recognition rates of other CNN networks (%)
As can be seen from Table 6, other CNNs achieve good recognition rates for radar signals at signal-to-noise ratios above -2 dB, but the HRF-Nets perform relatively better at the low signal-to-noise ratios from -14 dB to -2 dB. As the electromagnetic environment of the modern battlefield grows more complex and interference with signals increases, radar signal recognition at low signal-to-noise ratio becomes both more important and more difficult. At -14 dB, the signal recognition rate of the VGG-Nets is about 7% lower than that of HRF-Net157: their shallow depth prevents them from fully extracting image features, and their parameter count and computation are too large, demanding too much of the hardware and taking too long to compute, making them unsuitable for the highly real-time field of radar electronic countermeasures.
The recognition rate of HRF-Net157 is about 2% higher than those of ResNet152, SENet152 and SKNet152, and 2.418% higher than ResNet152, with relatively less computation and fewer parameters. Although ResNet152 employs skip connections, which to some extent solve the "network degradation" problem and preserve information integrity, the distinctive feature regions of multi-pulse radar signal TFIs are small and the repetitive regions large; when ResNet152 extracts image features it treats the whole image uniformly, without targeting the distinctive regions. The DFFE module proposed herein addresses this to a certain extent by focusing on and extracting the high-resolution regional features of the image, improving the signal recognition rate and enhancing generalization.
Further comparison of HRF-Net157 with other radar signal identification methods gives the results shown in Table 7.
According to Table 7, the CLDNN network achieves recognition rates above 90% at -8 dB and above, but performs relatively poorly between -14 and -8 dB, while the comprehensive recognition rate of HRF-Net157 at -14 dB still reaches 97.500%, indicating that HRF-Net157 can fully extract image features even at low signal-to-noise ratio, with stronger anti-interference capability and better robustness. FCBF-AdaBoost adopts traditional feature selection and classifier design and recognizes well under light interference, but it mainly targets a particular type of image, and its recognition rate is relatively poor in multi-task, low signal-to-noise environments. CNN-KCRDP, AlexNet and I-CNN combine deep learning to some extent and can extract image features adaptively; above -6 dB their recognition rates differ little from HRF-Net157's, but under heavier interference the signal features are submerged in noise and ordinary CNNs struggle to extract them, whereas HRF-Net157, through the DFFE module, extracts higher-resolution features to a greater extent and still achieves a high recognition rate at low signal-to-noise ratio.
TABLE 7 Recognition rates (%) of HRF-Net157 and other radar signal recognition methods
At the 3 depths of HRF-Net, the recognition rates for the different radar signal types are shown in Table 8.
TABLE 8 radar signal recognition rate of different types (-14dB) (%)
As can be seen from Table 8, in a low signal-to-noise environment (-14 dB) the HRF-Net networks of the 3 depths differ little in their recognition of the same type of radar signal, indicating that once the network depth reaches a certain level the signal features can be fully extracted, and further increasing the depth adds large numbers of parameters and computation without significantly improving recognition. Within the same network, the recognition rates of different signal types differ considerably: Barker, Chaotic, Frank, OFDM, FSK, LFM, EQFM and LOFM are all recognized at over 94%, while P1 and P2 are recognized relatively poorly, at about 90%, with large fluctuation.
HRF-Net157, which offers the best cost-performance trade-off, is selected and its confusion matrix at -14 dB is generated for further analysis, as shown in FIG. 7. It can be seen from FIG. 7 that all misclassified P1 samples are recognized as P2, and 5 of the 6 misclassified P2 samples are recognized as P1, because the TFIs of P1 and P2 are very similar; at -14 dB the noise energy far exceeds the signal energy and the signal features are masked by noise, which makes P1 and P2 even more alike and greatly increases the recognition difficulty. Nevertheless, the recognition rates of P1 and P2 still reach about 90%, and the comprehensive recognition rate of HRF-Net157 at -14 dB reaches 97.500%, higher than that of the other methods.
The HRF-Nets provided by the invention focus on and extract high-resolution image features for images whose distinguishing regions are small, and thereby obtain a better recognition effect. Traditional methods, by contrast, rely on classifier design and feature selection tailored to specific images; when the images vary widely their recognition degrades, and the hand-designed feature extraction algorithms are complex and generalize poorly. Compared with other CNNs, the HRF-Nets still achieve a 97.500% recognition rate at a -14 dB signal-to-noise ratio, higher than the other CNNs, and show better robustness.
According to the experimental results, the signal recognition rate exceeds 99% at -6 dB and still reaches 97.500% at -14 dB. As the network depth increases, the recognition rates of the three networks differ by less than 1%, while the parameters and computation grow substantially; considering both, HRF-Net157 offers the best cost-performance trade-off. In comparison with other CNNs, the recognition rate of HRF-Net157 between -14 dB and -6 dB is higher, and its advantage is more pronounced at low signal-to-noise ratio. Compared with other methods, HRF-Net157 likewise achieves a higher recognition rate at -14 dB and better robustness, and at -14 dB the HRF-Nets recognize the different radar signal types well.
In a radar signal TFI, the similarity regions between different images are large and the distinguishing regions are small, so feature extraction must weigh the importance of different image regions and focus on extracting the higher-resolution region features. The network depth should also be kept appropriate: too shallow a network cannot fully extract the image features, while beyond a certain depth the recognition rate no longer improves significantly, network degradation can occur, and the parameters and computation grow sharply; skip connections alleviate the degradation problem to some extent and preserve the integrity of the image information. The classifier uses GAP followed by a single fully connected layer, which reduces the parameters and computation of the network. The DFFE module designed herein performs resolution feature fusion extraction on images: first, MaxPool and AvgPool compress the spatial dimensions to obtain a channel-weighted feature map; then MaxPool and AvgPool compress the channel dimension to obtain a high-resolution feature map; the high-resolution feature map is added to the input feature map to obtain a high-resolution feature map with complete information, whose features are then fused and extracted at multiple scales through Conv1, Conv3, Conv5 and Conv7. This yields features of higher resolution and improves the signal recognition rate.
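The fusion steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patented implementation: fixed mean filters stand in for the learned Conv1/Conv3/Conv5/Conv7 kernels, a plain average replaces the learned 7×7 convolution that fuses the 2-channel map, and the shared weights of a trained network are omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_weight_map(x):
    # Compress spatial dims by max- and average-pooling into two C-dim
    # coefficients, ReLU and add them, sigmoid into per-channel weights.
    # (A full implementation would pass both through a shared MLP first.)
    mx = x.max(axis=(1, 2))                       # first channel weight coefficient
    av = x.mean(axis=(1, 2))                      # second channel weight coefficient
    w = sigmoid(np.maximum(mx, 0) + np.maximum(av, 0))
    return x * w[:, None, None]

def high_resolution_map(x):
    # Compress the channel dim by max- and average-pooling into a
    # 2-channel map; a plain average stands in for the learned 7x7
    # convolution that fuses it into one spatial weight per position.
    two_ch = np.stack([x.max(axis=0), x.mean(axis=0)])   # (2, H, W)
    w = sigmoid(two_ch.mean(axis=0))                     # (H, W)
    return x * w[None, :, :]

def box_filter(x, k):
    # 'Same'-padded k x k mean filter per channel: a fixed stand-in for
    # a learned Conv k x k.
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[1]):
        for j in range(x.shape[2]):
            out[:, i, j] = xp[:, i:i + k, j:j + k].mean(axis=(1, 2))
    return out

def dffe(x):
    # Resolution feature fusion extraction on a (C, H, W) feature map.
    hi = high_resolution_map(channel_weight_map(x))
    hi = hi + x                # residual add keeps complete information
    # Multi-scale fusion with Conv1/Conv3/Conv5/Conv7 stand-ins -> (4C, H, W).
    return np.concatenate([box_filter(hi, k) for k in (1, 3, 5, 7)])

rng = np.random.default_rng(0)
features = dffe(rng.normal(size=(4, 8, 8)))   # 4-channel 8x8 map in, 16-channel out
```

The residual add before the multi-scale stage mirrors the skip connection discussed above: even if a weight collapses toward zero, the original feature map still reaches the convolution branches.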
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction among these combinations, they should all be considered to be within the scope of this specification.
The above-mentioned embodiments express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the scope of protection of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (9)

1. A method for extracting resolution feature fusion of a multi-pulse radar signal, the method comprising:
acquiring a multi-pulse radar signal characteristic diagram;
respectively carrying out maximum pooling and average pooling on the multi-pulse radar signal characteristic diagram in a space dimension to obtain a first channel weight coefficient and a second channel weight coefficient of the multi-pulse radar signal characteristic diagram;
obtaining a channel weight characteristic diagram of the multi-pulse radar signal according to the first channel weight coefficient, the second channel weight coefficient, a preset first activation function, a preset second activation function and the multi-pulse radar signal characteristic diagram;
respectively carrying out maximum pooling and average pooling on the channel weight characteristic diagram in channel dimensions to obtain a first space weight characteristic matrix and a second space weight characteristic matrix of the channel weight characteristic diagram;
obtaining a feature map with the channel number of 2 according to the first space weight feature matrix and the second space weight feature matrix; obtaining a high-resolution characteristic diagram of the multi-pulse radar signal on the space and the channel according to the characteristic diagram with the channel number of 2 and the channel weight characteristic diagram;
adding the high-resolution characteristic diagram and the multi-pulse radar signal characteristic diagram to obtain a high-resolution characteristic diagram with complete information;
and performing multi-scale fusion extraction by adopting various convolution kernels according to the high-resolution characteristic diagram with complete information to obtain the high-resolution characteristic of the multi-pulse radar signal.
2. The method of claim 1, wherein the first activation function is a ReLU function and the second activation function is a sigmoid function;
obtaining a channel weight characteristic diagram of the multi-pulse radar signal according to the first channel weight coefficient, the second channel weight coefficient, the first activation function, the second activation function and the multi-pulse radar signal characteristic diagram, wherein the channel weight characteristic diagram comprises:
activating the first channel weight coefficient and the second channel weight coefficient by using a ReLU function, and then adding to obtain a one-dimensional channel weight vector;
and multiplying an output value obtained by activating the one-dimensional channel weight vector by using a sigmoid function by the multi-pulse radar signal characteristic diagram to obtain the channel weight characteristic diagram of the multi-pulse radar signal.
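As a tiny numeric illustration of this claim (the coefficient values below are made up), the two pooled coefficient vectors pass through the ReLU, are added into one one-dimensional channel weight vector, and are squashed by the sigmoid so that every channel weight lies in (0, 1):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical pooled coefficients for a 3-channel feature map.
w_max = np.array([1.2, -0.7, 0.3])   # from spatial maximum pooling
w_avg = np.array([0.4, -0.1, 0.8])   # from spatial average pooling

# ReLU each coefficient, then add into one 1-D channel weight vector...
v = np.maximum(w_max, 0) + np.maximum(w_avg, 0)
# ...then sigmoid: every channel weight lies strictly in (0, 1),
# ready to scale the corresponding channel of the feature map.
weights = sigmoid(v)
```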
3. The method according to claim 1, wherein a feature map with a channel number of 2 is obtained according to the first spatial weight feature matrix and the second spatial weight feature matrix; obtaining a high-resolution characteristic diagram of the multi-pulse radar signal on a space and a channel according to the characteristic diagram with the channel number of 2 and the channel weight characteristic diagram, wherein the high-resolution characteristic diagram comprises the following steps:
splicing the first space weight characteristic matrix and the second space weight characteristic matrix together according to channel dimensions to obtain a characteristic diagram with the channel number being 2;
activating, by using a sigmoid function, a value obtained by performing a convolution operation on the feature map with the channel number of 2 by using a 7 x 7 convolution kernel, to obtain a comprehensive space weight feature matrix;
and multiplying the comprehensive space weight characteristic matrix and the channel weight characteristic diagram to obtain a high-resolution characteristic diagram of the multi-pulse radar signal on the space and the channel.
4. The method of claim 1, wherein performing multi-scale fusion extraction using a plurality of convolution kernels according to the high-resolution feature map with complete information to obtain high-resolution features of the multi-pulse radar signal comprises:
and respectively carrying out multi-scale fusion extraction on the high-resolution characteristic map by adopting a convolution kernel 7 x 7, a convolution kernel 5 x 5, a convolution kernel 3 x 3 and a convolution kernel 1 x 1 to obtain the high-resolution characteristic of the multi-pulse radar signal.
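A brief check of why the four branches can be fused: with stride 1 and padding p = k // 2, the standard convolution output-size formula H + 2p - k + 1 returns H for every odd kernel size, so all four branches preserve the input's spatial size and stay aligned (illustrative arithmetic, not part of the claim):

```python
# With stride 1 and padding p = k // 2, an odd k x k convolution maps an
# H x H input to an output of size H + 2*p - k + 1 = H, so the four
# branches stay aligned and can be fused channel-wise without resizing.
H = 32
sizes = [H + 2 * (k // 2) - k + 1 for k in (1, 3, 5, 7)]
```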
5. A method of identifying a multi-pulse radar signal, the method comprising:
acquiring a multi-pulse radar signal;
performing CWD time-frequency analysis on the multi-pulse radar signal to obtain a two-dimensional time-frequency graph of the multi-pulse radar signal; marking the two-dimensional time-frequency graph as a training sample;
constructing a high-resolution feature fusion extraction network, wherein the high-resolution feature fusion extraction network comprises a convolution network, a feature extraction network and an output network; the feature extraction network comprises an attention feature extraction module and a resolution feature fusion feature extraction module; the attention feature extraction module is used for extracting the space attention feature and the channel attention feature of the training sample to obtain an attention feature map of the multi-pulse radar signal; the resolution feature fusion feature extraction module executes the resolution feature fusion extraction method of a multi-pulse radar signal according to any one of claims 1 to 4, for extracting high-resolution features of the attention feature map;
training the high-resolution feature fusion extraction network according to the training samples to obtain a multi-pulse radar signal identification model;
acquiring a multi-pulse radar signal to be detected;
performing CWD time-frequency analysis on the multi-pulse radar signal to be detected to obtain a two-dimensional time-frequency diagram to be detected;
and inputting the two-dimensional time-frequency diagram to be detected into the multi-pulse radar signal identification model to obtain the category of the multi-pulse radar signal.
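The time-frequency step can be illustrated with a short-time Fourier transform stand-in. The claim specifies CWD (Choi-Williams distribution) analysis, which suppresses cross-terms better than an STFT; the sketch below merely shows how a pulse becomes a two-dimensional time-frequency image, and the chirp signal and window parameters are illustrative choices, not values from the patent.

```python
import numpy as np

def spectrogram(sig, n_fft=64, hop=16):
    # Magnitude STFT: frame the signal, window each frame, FFT, stack.
    win = np.hanning(n_fft)
    frames = [sig[s:s + n_fft] * win
              for s in range(0, len(sig) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T   # (freq bins, time frames)

# A toy linear-FM (chirp) pulse whose frequency sweeps upward in time;
# its TFI shows a rising ridge, the kind of pattern the recognizer learns.
t = np.arange(1024) / 1024.0
chirp = np.cos(2 * np.pi * (50 * t + 200 * t ** 2))
tfi = spectrogram(chirp)                           # the 2-D time-frequency image
```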
6. The method of claim 5, wherein the attention feature extraction module comprises 6 layers, including: a first convolution layer, a second convolution layer, a channel pooling layer, a space pooling layer, a third convolution layer and a fourth convolution layer;
the convolution kernel of the first convolution layer is convolution kernel 1 × 1, the convolution kernel of the second convolution layer is convolution kernel 3 × 3, the convolution kernel of the third convolution layer is convolution kernel 7 × 7, and the convolution kernel of the fourth convolution layer is convolution kernel 1 × 1.
7. The method of claim 5, wherein training the high-resolution feature fusion extraction network according to the training samples to obtain a multi-pulse radar signal recognition model comprises:
inputting the training sample into a convolution network to obtain convolution characteristics;
inputting the convolution characteristic into the feature extraction network, and outputting a high-resolution feature; the feature extraction network comprises, connected in series: a attention feature extraction modules, a multi-pulse radar signal resolution feature fusion extraction module, b attention feature extraction modules, a multi-pulse radar signal resolution feature fusion extraction module, c attention feature extraction modules, a multi-pulse radar signal resolution feature fusion extraction module and d attention feature extraction modules, wherein a, b, c and d are each an integer greater than 0 and satisfy a predetermined relation;
and inputting the high-resolution features into an output network, outputting a classification prediction result, and performing reverse training according to the classification prediction result and the training samples to obtain a multi-pulse radar signal identification model.
8. The method of claim 7, wherein inputting the high-resolution features into an output network, outputting a classification prediction result, and performing reverse training according to the classification prediction result and the training samples to obtain a multi-pulse radar signal identification model comprises:
inputting the high-resolution features into a global average pooling layer of the output network to obtain 1 x 1 feature maps, one per channel;
inputting the 1 x 1 feature maps into a fully connected layer of the output network, and outputting a classification prediction result;
and performing reverse training according to the classification prediction result and the training samples to obtain the multi-pulse radar signal identification model.
9. The method of claim 8, wherein inputting the 1 x 1 feature maps into the fully connected layer of the output network and outputting a classification prediction result comprises:
calculating, by using Softmax, the value obtained by inputting the 1 x 1 feature maps into the fully connected layer of the output network, and outputting the classification prediction result.
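A minimal sketch of the classifier head described in claims 8 and 9, with made-up weights standing in for a trained fully connected layer: global average pooling collapses each channel to one value, a single fully connected layer produces class logits, and Softmax converts them into a probability over signal classes.

```python
import numpy as np

def classifier_head(feat, w, b):
    # GAP: one value per channel, then a single fully connected layer,
    # then a numerically stable Softmax over the class logits.
    gap = feat.mean(axis=(1, 2))          # (C,)
    logits = w @ gap + b                  # (num_classes,)
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(1)
feat = rng.normal(size=(8, 4, 4))         # 8-channel high-resolution features
w = rng.normal(size=(10, 8))              # made-up weights: 10 signal classes
b = np.zeros(10)
probs = classifier_head(feat, w, b)
```

Because GAP removes the spatial dimensions before the single fully connected layer, the head carries far fewer parameters than a stack of dense layers over the flattened feature map, which is the cost saving noted in the description.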
CN202110028601.3A 2021-01-11 2021-01-11 Resolution characteristic fusion extraction method and identification method of multi-pulse radar signals Active CN112346056B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110028601.3A CN112346056B (en) 2021-01-11 2021-01-11 Resolution characteristic fusion extraction method and identification method of multi-pulse radar signals


Publications (2)

Publication Number Publication Date
CN112346056A true CN112346056A (en) 2021-02-09
CN112346056B CN112346056B (en) 2021-03-26

Family

ID=74428076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110028601.3A Active CN112346056B (en) 2021-01-11 2021-01-11 Resolution characteristic fusion extraction method and identification method of multi-pulse radar signals

Country Status (1)

Country Link
CN (1) CN112346056B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106680775A (en) * 2016-12-12 2017-05-17 清华大学 Method and system for automatically identifying radar signal modulation modes
CN109375186A (en) * 2018-11-22 2019-02-22 中国人民解放军海军航空大学 Radar target identification method based on the multiple dimensioned one-dimensional convolutional neural networks of depth residual error
CN109407067A (en) * 2018-10-13 2019-03-01 中国人民解放军海军航空大学 Radar moving targets detection and classification integral method based on time-frequency figure convolutional neural networks
CN109753903A (en) * 2019-02-27 2019-05-14 北航(四川)西部国际创新港科技有限公司 A kind of unmanned plane detection method based on deep learning
US10402692B1 (en) * 2019-01-22 2019-09-03 StradVision, Inc. Learning method and learning device for fluctuation-robust object detector based on CNN using target object estimating network adaptable to customers' requirements such as key performance index, and testing device using the same
US10410120B1 (en) * 2019-01-25 2019-09-10 StradVision, Inc. Learning method and testing method of object detector to be used for surveillance based on R-CNN capable of converting modes according to aspect ratios or scales of objects, and learning device and testing device using the same
US10783639B2 (en) * 2016-10-19 2020-09-22 University Of Iowa Research Foundation System and method for N-dimensional image segmentation using convolutional neural networks
CN111712725A (en) * 2018-12-26 2020-09-25 北京航迹科技有限公司 Multi-pulse fusion analysis for laser radar ranging
CN111783935A (en) * 2020-05-15 2020-10-16 北京迈格威科技有限公司 Convolutional neural network construction method, device, equipment and medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAN Weiqin et al.: "PointPillars+ 3D object detection based on an attention mechanism", Journal of Jiangsu University (Natural Science Edition) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112836054A (en) * 2021-03-08 2021-05-25 重庆大学 Service classification method based on symbiotic attention representation learning
CN112836054B (en) * 2021-03-08 2022-07-26 重庆大学 Service classification method based on symbiotic attention representation learning

Also Published As

Publication number Publication date
CN112346056B (en) 2021-03-26

Similar Documents

Publication Publication Date Title
CN110109060B (en) Radar radiation source signal sorting method and system based on deep learning network
CN110361778B (en) Seismic data reconstruction method based on generation countermeasure network
CN110490265B (en) Image steganalysis method based on double-path convolution and feature fusion
CN112446357B (en) SAR automatic target recognition method based on capsule network
CN109242097B (en) Visual representation learning system and method for unsupervised learning
CN111126134A (en) Radar radiation source deep learning identification method based on non-fingerprint signal eliminator
CN113297572B (en) Deep learning sample-level anti-attack defense method and device based on neuron activation mode
CN112288026B (en) Infrared weak and small target detection method based on class activation diagram
Li et al. Densely connected network for impulse noise removal
CN112149526A (en) Lane line detection method and system based on long-distance information fusion
CN112346056B (en) Resolution characteristic fusion extraction method and identification method of multi-pulse radar signals
Montúfar et al. Can neural networks learn persistent homology features?
CN116047427A (en) Small sample radar active interference identification method
Kamal et al. Generative adversarial learning for improved data efficiency in underwater target classification
Wang et al. Fused adaptive receptive field mechanism and dynamic multiscale dilated convolution for side-scan sonar image segmentation
CN111291712B (en) Forest fire recognition method and device based on interpolation CN and capsule network
CN110969203B (en) HRRP data redundancy removing method based on self-correlation and CAM network
Lin et al. Optimization of a multi-stage ATR system for small target identification
CN115272865A (en) Target detection method based on adaptive activation function and attention mechanism
CN115661627A (en) Single-beam underwater target identification method based on GAF-D3Net
Chen et al. Feature fusion based on convolutional neural network for SAR ATR
CN115049054A (en) Channel self-adaptive segmented dynamic network pruning method based on characteristic diagram response
Isaacs et al. Signal diffusion features for automatic target recognition in synthetic aperture sonar
Liao et al. Convolution filter pruning for transfer learning on small dataset
Barngrover Automated detection of mine-like objects in side scan sonar imagery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Li Ji; Zhang Huiqiang; Wang Wei; Wang Xin; Li Gang
Inventor before: Li Ji; Zhang Huiqiang; Wang Wei; Wang Xin; Ou Jianping; Li Gang