CN113343796A - Knowledge distillation-based radar signal modulation mode identification method - Google Patents

Knowledge distillation-based radar signal modulation mode identification method

Info

Publication number
CN113343796A
CN113343796A (application number CN202110569016.4A)
Authority
CN
China
Prior art keywords
network
teacher
training
radar signal
frequency
Prior art date
Legal status
Granted
Application number
CN202110569016.4A
Other languages
Chinese (zh)
Other versions
CN113343796B (en)
Inventor
曲志昱
李�根
司伟建
许翎靖
邓志安
张春杰
汲清波
侯长波
Current Assignee
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date
Filing date
Publication date
Application filed by Harbin Engineering University
Priority to CN202110569016.4A
Publication of CN113343796A
Application granted
Publication of CN113343796B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00: Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02: Preprocessing
    • G06F2218/04: Denoising
    • G06F2218/06: Denoising by applying a scale-space analysis, e.g. using wavelet analysis
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00: Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08: Feature extraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00: Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12: Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention belongs to the technical field of deep learning and radar signal modulation identification, and particularly relates to a knowledge distillation-based radar signal modulation mode identification method. Drawing on the idea of knowledge distillation, the invention uses this network compression method to design a lightweight network that places little demand on device memory and is therefore well suited to being integrated into a chip and deployed on terminal equipment. Knowledge distillation training is completed with two teacher networks: radar signals whose time-frequency structure is severely damaged at low signal-to-noise ratio are used to train the second teacher network separately, and the soft labels it produces serve as supervision information, which improves the identification accuracy of the lightweight network at low signal-to-noise ratio. All of the networks in the invention adopt residual structures, which can extract deeper features from the time-frequency images and adapt well to various radar signals. The invention makes the identification network lightweight while achieving high identification accuracy for radar signal modulation modes at low signal-to-noise ratio.

Description

Knowledge distillation-based radar signal modulation mode identification method
Technical Field
The invention belongs to the technical field of deep learning and radar signal modulation identification, and particularly relates to a knowledge distillation-based radar signal modulation mode identification method.
Background
Radar signal identification is an important component of a radar reconnaissance system: it provides accurate information about the enemy and thus an important reference for judging the type and threat level of enemy radars. With the rapid development of modern radar technology, the interception probability of radar signals keeps decreasing while modulation types become more complex and diverse. To cope with these changes, modern radar modulation identification methods need to accurately identify a wide range of signal modulation types in low signal-to-noise ratio (SNR) environments.
Meanwhile, existing deep-learning-based radar signal identification methods can exploit the automatic feature extraction of neural networks to achieve high identification accuracy. In 2017, Guo Li proposed using an AlexNet network model to identify 7 radar signals, with an accuracy above 90% at a signal-to-noise ratio of -6 dB. In 2020, Qin proposed a radar signal identification method based on an extended residual network, achieving an identification accuracy above 93% for 16 radar signals at a signal-to-noise ratio of -6 dB. However, as the number of network layers keeps growing, the number of network parameters keeps increasing; although networks with large parameter quantities improve radar signal modulation type identification performance, training so many parameters demands a large amount of device memory, and an oversized model cannot be stored on a chip. Therefore, on the premise of ensuring the accuracy of radar signal modulation type identification, a network with fewer parameters and higher efficiency needs to be designed.
In deep learning, knowledge distillation is an effective model compression method that transfers the 'knowledge' of a trained complex model to a model with a simpler structure. The key idea is that the soft labels generated by the complex model are used as supervision information and, together with the real labels, train the simple model; meanwhile, a temperature variable T is added during training of the complex model so that the soft labels carry more information.
Disclosure of Invention
The invention aims to provide a knowledge distillation-based radar signal modulation mode identification method which can accurately identify a wide range of radar signal modulation types in a low signal-to-noise ratio environment.
The purpose of the invention is realized by the following technical scheme: the method comprises the following steps:
step 1: performing smooth pseudo Wigner-Ville transformation on the intercepted radar signal to obtain a two-dimensional time-frequency image;
the digital model of the radar signal is:
x(t)=s(t)+n(t)
wherein x (t) is a signal received by the receiver; s (t) is a radar signal; n (t) is channel noise;
the specific formula for performing smooth pseudo Wigner-Ville transformation on radar signals x (t) is as follows:
$$\mathrm{SPWVD}_x(t,f)=\iint h(\tau)\,g(s-t)\,x\!\left(s+\frac{\tau}{2}\right)x^{*}\!\left(s-\frac{\tau}{2}\right)e^{-j2\pi f\tau}\,\mathrm{d}s\,\mathrm{d}\tau$$
wherein t and f respectively represent variables in time domain and frequency domain in time-frequency analysis; h (tau) and g (s-t) are window functions of a frequency domain and a time domain respectively;
step 2: preprocessing a two-dimensional time-frequency image, including adjusting the size and reducing the image channel mean value for standardization; taking the modulation type of the radar signal as a label, and constructing a first training set; constructing a second training set of the radar signals with the damaged time-frequency structure under the condition of low signal-to-noise ratio;
step 3: designing three different deep convolutional neural networks: two teacher networks and one student network, where the two teacher networks have large parameter quantities and the student network has a small parameter quantity;
step 4: training the two teacher networks and the student network to obtain a lightweight model for identifying the modulation type of an unknown radar signal;
step 4.1: adding a temperature variable T into the softmax layers of the two teacher networks, so that the softmax function is defined as shown in the following formula:
$$q_i=\frac{\exp(z_i/T)}{\sum_j \exp(z_j/T)}$$
where z_i is the output of the last layer of the teacher network; adding the temperature variable T makes each category produce a smoother probability distribution, so that the information carried by the negative labels is relatively amplified and model training pays more attention to the negative labels;
step 4.2: training a first teacher network incorporating the temperature variable T using a first training set; training a second teacher network incorporating the temperature variable T using a second training set;
step 4.3: training the student network with the first training set; during training, the softmax-layer outputs of the two teacher networks on the training set data are added as additional supervision information, the two outputs being defined as the first soft label and the second soft label respectively, and a loss function is reconstructed together with the real labels of the training set, as shown in the following formula:
L_total = αL_hard + βL_soft1 + δL_soft2
wherein L_hard is the cross-entropy loss between the real labels and the student network output; L_soft1 is the cross-entropy loss between the first soft label, generated by the first teacher network, and the student network output; L_soft2 is the cross-entropy loss between the second soft label, generated by the second teacher network, and the student network output; and α, β, and δ are the coefficients of the corresponding loss terms;
step 4.4: training to obtain a final student model, and keeping student network parameters as a lightweight model for identifying unknown radar modulation types;
step 5: when a radar signal of unknown type is intercepted, performing smooth pseudo Wigner-Ville transformation on it to obtain a two-dimensional time-frequency image; preprocessing the two-dimensional time-frequency image and inputting it into the lightweight model for identifying unknown radar modulation types to obtain the identification result.
The present invention may further comprise:
the method for preprocessing the two-dimensional time-frequency image in the step 2 specifically comprises the following steps: uniformly adjusting the time-frequency images to 128 × 128 by adopting a nearest neighbor interpolation method; and subtracting the pixel average value in the channel where each pixel value of the time-frequency image is positioned from each pixel value of the time-frequency image, reducing the pixel value of the input image and reducing the common characteristics in the image.
In step 3, both teacher networks adopt the resnet34 network structure: the preprocessed two-dimensional time-frequency image undergoes feature extraction through 16 groups of residual units, each group containing two convolutional layers, and the extracted features pass through an Avgpooling layer and then enter a fully connected layer for classification. The student network comprises 4 groups of convolution units: the first group is a convolutional layer with 7 x 7 convolution kernels, and the remaining groups are residual units with two convolutional layers each. The preprocessed two-dimensional time-frequency image enters the student network, first undergoes the first group of convolution and max pooling, then sequentially passes through the 3 groups of residual units for feature extraction, and the extracted features pass through an Avgpooling layer and then enter a fully connected layer for classification.
The invention has the beneficial effects that:
the invention combines the thought of knowledge distillation, designs a lightweight network by utilizing a network compression method of knowledge distillation, has low requirement on the memory of equipment, and is beneficial to being integrated into a chip and deployed to terminal equipment. The knowledge distillation training is completed by utilizing the two teacher networks, the radar signals with seriously damaged time-frequency structures under the condition of low signal-to-noise ratio are independently trained to the second teacher network, the soft labels are obtained to be used as supervision information, and the identification accuracy of the lightweight network under the condition of low signal-to-noise ratio can be improved. The networks mentioned by the invention all adopt residual error networks, can extract deeper features of time-frequency images, and have good adaptability to various radar signals. The invention can realize the light weight of the identification network and can realize higher identification accuracy on the modulation mode of the radar signal under lower signal to noise ratio.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a block diagram of a student network in an embodiment of the invention.
Fig. 3 is a graph of the identification accuracy versus signal-to-noise ratio for the 14 classes of radar signal modulation types in an embodiment of the present invention.
Fig. 4 is a relationship diagram of the recognition accuracy of the final lightweight model and the original student model in the embodiment of the present invention.
FIG. 5 shows the parameter quantities of the teacher models and the student model in an embodiment of the invention.
FIG. 6 is a table of types and parameters of 14 simulated radar modulation signals according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention aims to provide a light convolutional neural network which can accurately identify a wide range of radar signal modulation types in a low signal-to-noise ratio environment. The invention provides a knowledge distillation-based radar signal modulation mode identification method by combining the thought of knowledge distillation, so that the identification network is light in weight, and the radar signal modulation mode can be identified with high accuracy under a low signal-to-noise ratio.
The object of the invention is achieved by the following steps:
(1) performing smooth pseudo Wigner-Ville transform (SPWVD) on the intercepted radar signal to obtain a two-dimensional time-frequency image;
(2) preprocessing the obtained time-frequency image, including adjusting its size and subtracting the image channel mean for standardization; then, training set 1 is made with the modulation type of the radar signals as the label, and meanwhile the radar signals whose time-frequency structure is damaged at low signal-to-noise ratio are separately used as training set 2;
(3) three different deep convolutional neural networks were designed: two teacher networks, one student network;
(4) training to obtain a lightweight network model: respectively training two teacher models by using a training set 1 and a training set 2, storing parameters after training, redesigning a loss function when training a student network, and adding soft labels generated by the two teacher models to obtain a final lightweight model;
(5) when the intercepted unknown radar signal arrives, the signal is input into a trained lightweight identification network after the same pretreatment to automatically complete identification, and the modulation type is judged.
Specifically:
the step (1) is specifically as follows:
the specific formula for performing smooth pseudo-Wigner-Ville transform (SPWVD) on the received radar signal x (t) is as follows:
$$\mathrm{SPWVD}_x(t,f)=\iint h(\tau)\,g(s-t)\,x\!\left(s+\frac{\tau}{2}\right)x^{*}\!\left(s-\frac{\tau}{2}\right)e^{-j2\pi f\tau}\,\mathrm{d}s\,\mathrm{d}\tau$$
wherein t and f represent the time-domain and frequency-domain variables of the time-frequency analysis, and h(τ) and g(s-t) are the frequency-domain and time-domain window functions, respectively, which perform smoothing filtering to reduce cross-term interference.
The time-frequency image preprocessing and training set classification in the step (2) specifically comprises the following steps:
(1) adjusting the size of the image to 128 × 128 by adopting a nearest neighbor interpolation method;
(2) subtracting the per-channel pixel mean from each pixel value of the time-frequency image, which reduces the magnitude of the input pixel values and suppresses features common to all images;
(3) for the low signal-to-noise ratio portion of the training set, the signal time-frequency structure is seriously damaged, so these easily confused time-frequency images are separately listed as the data of training set 2.
The specific method for designing 3 deep convolutional neural networks in the step (3) is as follows: the two networks with large parameters and deep layers are respectively used as a teacher network I and a teacher network II, so that the radar signal can be identified with high accuracy and high generalization capability, and the network with small parameters is used as a student network, so that the training speed is high;
the specific steps of training in the step (4) to obtain the lightweight network are as follows:
(1) adding a temperature variable T to the softmax layers of the two teacher networks in the step 3, so that the softmax function is defined as shown in the following formula:
$$q_i=\frac{\exp(z_i/T)}{\sum_j \exp(z_j/T)}$$
where z_i is the output of the last layer of the teacher model; adding the temperature variable T makes each category produce a smoother probability distribution, so that the information carried by the negative labels is relatively amplified and model training pays more attention to the negative labels;
(2) respectively training a teacher network I and a teacher network II which are added with the temperature variable T by using the training set 1 and the training set 2 generated in the step 2, and keeping parameters of the two networks;
(3) training a student network by using a training set 1, adding two teacher networks to output of a softmax layer of training set data as additional supervision information during training, defining the two outputs as a soft label 1 and a soft label 2 respectively, and reconstructing a loss function with a real label of the training set, wherein the following formula is as follows:
L_total = αL_hard + βL_soft1 + δL_soft2
wherein L_hard is the cross-entropy loss between the real labels and the student network output, L_soft1 and L_soft2 are the cross-entropy losses between the soft labels generated by the two teacher networks and the student network output, respectively, and α, β and δ are the coefficients of the corresponding loss terms;
(4) training to obtain a final student model, and keeping student network parameters as a lightweight model for identifying unknown radar modulation types.
Compared with the prior art, the invention has the beneficial effects that:
1. the invention provides a light-weight network designed by using a knowledge distillation network compression method, has low requirement on equipment memory, and is beneficial to being integrated into a chip and deployed to terminal equipment.
2. The proposed method completes knowledge distillation training with two teacher networks: the radar signals whose time-frequency structure is severely damaged at low signal-to-noise ratio are used to train the second teacher network separately, and the resulting soft labels serve as supervision information, which improves the identification accuracy of the lightweight network at low signal-to-noise ratio.
3. The networks mentioned by the invention all adopt residual error networks, can extract deeper features of time-frequency images, and have good adaptability to various radar signals.
Example 1:
fig. 1 is a flow chart of a knowledge distillation-based radar signal modulation mode identification method of the present invention, and the steps and the principle of the method are described in detail below with reference to fig. 1.
Step 1: and obtaining a time-frequency image of the radar modulation signal. The radar signals aimed by the invention mainly comprise 2FSK signals, 4FSK signals, BPSK signals, Frank signals, LFM-SFM signals, MLFM signals, DLFM signals, P1 signals, P2 signals, P3 signals, P4 signals, EQFM signals and SFM signals. And converting the radar signal received by the receiver into a time-frequency image by using smooth pseudo Wigner-Ville transform (SPWVD).
In this step, the digital model of the radar signal can be written as:
x(t)=s(t)+n(t) (1)
where x (t) is the signal received by the receiver, s (t) is the radar signal, and n (t) is the channel noise. The smooth pseudo Wigner-Ville transformation (SPWVD) formula adopted by the invention is as follows:
$$\mathrm{SPWVD}_x(t,f)=\iint h(\tau)\,g(s-t)\,x\!\left(s+\frac{\tau}{2}\right)x^{*}\!\left(s-\frac{\tau}{2}\right)e^{-j2\pi f\tau}\,\mathrm{d}s\,\mathrm{d}\tau \qquad (2)$$
where t and f respectively represent the time-domain and frequency-domain variables of the time-frequency distribution, and h(τ) and g(s-t) are the frequency-domain and time-domain window functions, respectively, which perform smoothing filtering to reduce cross-term interference.
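Purely as an illustrative sketch (not part of the patent text), the following Python function shows one way the discrete smoothed pseudo Wigner-Ville distribution of formula (2) could be computed. The window type (Hamming), the window lengths and the output size are assumptions chosen for the example; the analytic (complex) form of x could be obtained beforehand, for instance with scipy.signal.hilbert.

```python
import numpy as np

def spwvd(x, n_freq=128, time_win=33, freq_win=65):
    """Discrete smoothed pseudo Wigner-Ville distribution (illustrative only).

    x        : complex analytic signal, shape (N,)
    n_freq   : number of frequency bins in the output image
    time_win : length of the time-smoothing window g(s - t)
    freq_win : length of the lag (frequency-smoothing) window h(tau)
    Returns an (n_freq, N) real-valued time-frequency image.
    """
    N = len(x)
    g = np.hamming(time_win)      # time-domain smoothing window
    h = np.hamming(freq_win)      # frequency-domain (lag) smoothing window
    half_g, half_h = time_win // 2, freq_win // 2
    tfr = np.zeros((n_freq, N), dtype=complex)

    for n in range(N):                            # time index t
        for m in range(-half_h, half_h + 1):      # half-lag index (tau = 2m)
            acc = 0.0 + 0.0j
            for k in range(-half_g, half_g + 1):  # time smoothing around t
                i1, i2 = n + k + m, n + k - m
                if 0 <= i1 < N and 0 <= i2 < N:
                    acc += g[k + half_g] * x[i1] * np.conj(x[i2])
            tfr[m % n_freq, n] = h[m + half_h] * acc

    # DFT over the lag variable yields the frequency axis
    return np.abs(np.fft.fft(tfr, axis=0))
```

The two windows g and h correspond to the time-domain and frequency-domain smoothing described above; a practical implementation would vectorize the loops, but the direct form keeps the correspondence with formula (2) visible.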
Step 2: and preprocessing the time-frequency image to obtain a training set. The preprocessing of the time-frequency image comprises the steps of adjusting the size of the image, subtracting a channel mean value, carrying out standardization, and then adding a label to obtain a training set.
When the image size is adjusted, the time-frequency images are uniformly resized to 128 × 128 with the computationally simple nearest neighbor interpolation method. Meanwhile, to improve identification accuracy, the channel mean is subtracted from the time-frequency image, which reduces the magnitude of the input pixel values and suppresses features common to all images; specifically, the mean pixel value of a channel is subtracted from every pixel value in that channel. The preprocessed time-frequency images are labeled according to modulation type; for example, the 14 signal types in this embodiment are assigned the labels 1, 2, 3, …, 14 and together form training set 1. Then, the signals in training set 1 whose time-frequency structure is seriously damaged at low signal-to-noise ratio and which are easily confused with each other are taken out separately as training set 2.
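As an illustration of this preprocessing, a routine along the following lines could perform the nearest-neighbour resize to 128 x 128 and the channel-mean subtraction. Whether the mean is taken per image or over the whole training set is not specified in the text, so the sketch uses the per-image channel mean, and the function name is hypothetical.

```python
import numpy as np
from PIL import Image

def preprocess_tf_image(tf_image: np.ndarray) -> np.ndarray:
    """Resize a time-frequency image to 128 x 128 with nearest-neighbour
    interpolation and subtract the per-channel pixel mean (step 2)."""
    img = Image.fromarray(tf_image.astype(np.uint8))
    img = img.resize((128, 128), resample=Image.NEAREST)  # nearest neighbour keeps it cheap
    arr = np.asarray(img, dtype=np.float32)

    # subtract the mean of the channel each pixel belongs to, shrinking pixel
    # values and suppressing features shared by all images
    if arr.ndim == 2:                                      # grayscale image
        arr -= arr.mean()
    else:                                                  # e.g. RGB time-frequency plot
        arr -= arr.mean(axis=(0, 1), keepdims=True)
    return arr
```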
Step 3: three deep convolutional neural networks are designed, in which two networks with deep layers and large parameter quantities are used as the teacher networks and one shallower network with a small parameter quantity is used as the student network, i.e., the final lightweight network. Taking the identification of the 14 types of signals in this embodiment as an example, the designed networks are as follows:
(3.1) the two teacher networks with large parameter quantity adopt a resnet34 network structure. After the step 2, the teacher network I corresponds to the training set 1, and the teacher network II corresponds to the training set 2. And (3) extracting the characteristics of the preprocessed time-frequency image through 16 groups of residual error units, wherein each group of residual error units has two layers of convolution respectively, and the extracted characteristics enter a full-link layer for classification after passing through an Avgpoling layer.
(3.2) the student network structure with a smaller parameter quantity is shown in FIG. 2. The network comprises 4 groups of convolution units, the first group is convolution layers with convolution kernels of 7 x 7, the rest groups are residual error units and respectively have two layers of convolution, a time-frequency image enters the network, firstly undergoes the first group of convolution and maximum pooling, then sequentially enters the 3 groups of residual error units for feature extraction, and the extracted features enter a full-link layer for classification after passing through an Avgpooling layer. The training set 1 is used for training the student network, and the soft labels generated in the training process of the two teacher networks are combined to be used as extra monitoring information to obtain a final lightweight network. The parameter quantities of the teacher model and the student model are shown in fig. 5.
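To make the structure concrete, the following PyTorch sketch is one possible reading of the student network in fig. 2: a 7 x 7 convolution with max pooling, followed by 3 two-layer residual units, average pooling and a fully connected classifier. The channel widths, strides and batch-normalisation layers are assumptions, since the patent does not list them; the teacher networks could similarly reuse a standard resnet34 (e.g. torchvision.models.resnet34) with its final layer resized to 14 classes.

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Two-layer residual unit with an optional 1x1 projection on the shortcut."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch))

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))

class StudentNet(nn.Module):
    """Lightweight student: 7x7 conv + max pool, 3 residual units,
    average pooling and a fully connected classifier (14 classes here)."""
    def __init__(self, num_classes=14, in_ch=1):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, 32, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1))
        self.blocks = nn.Sequential(
            ResidualUnit(32, 32),
            ResidualUnit(32, 64, stride=2),
            ResidualUnit(64, 128, stride=2))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.blocks(self.stem(x))
        x = self.pool(x).flatten(1)
        return self.fc(x)          # logits; softmax is applied in the loss
```

For example, StudentNet()(torch.randn(1, 1, 128, 128)) returns a tensor of 14 class logits for one preprocessed time-frequency image.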
Step 4: training the three deep convolutional neural networks to obtain the final training model.
And (4.1) respectively training the two teacher networks designed in the step 3 by using the training set 1 and the training set 2 in the step 2. During training, a temperature variable T is added to the softmax layer, so that the softmax function is shown as the following formula:
$$q_i=\frac{\exp(z_i/T)}{\sum_j \exp(z_j/T)} \qquad (3)$$
where z_i is the output of the last layer of the teacher model; the temperature variable T is added so that each category produces a smoother probability distribution, the information carried by the negative labels is relatively amplified, and model training pays more attention to the negative labels. After the two teacher networks are trained, the output of the softmax layer is used as the soft label.
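As a minimal sketch of equation (3) (the temperature value of 4.0 is only an assumed example), the soft labels of a trained teacher could be produced as follows:

```python
import torch
import torch.nn.functional as F

def soft_labels(logits: torch.Tensor, T: float = 4.0) -> torch.Tensor:
    """Temperature-scaled softmax q_i = exp(z_i/T) / sum_j exp(z_j/T);
    a larger T flattens the distribution so negative labels carry more weight."""
    return F.softmax(logits / T, dim=-1)

# e.g. soft1 = soft_labels(teacher1(images)), soft2 = soft_labels(teacher2(images))
```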
(4.2) training a student network by using the training set 1 in the step 2, adding two teacher networks to generate soft labels for the training set data during training, and reconstructing a loss function with real labels of the training set data, wherein the loss function is represented by the following formula:
L_total = αL_hard + βL_soft1 + δL_soft2    (4)
wherein L_hard is the cross-entropy loss between the real labels and the student network output, L_soft1 and L_soft2 are the cross-entropy losses between the soft labels generated by the two teacher networks and the student network output, respectively, and α, β and δ are the coefficients of the corresponding loss terms; taking the 14 types of signals in this embodiment as an example, α, β and δ are set to 0.1, 0.2 and 0.7, respectively.
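Purely as an illustration of formula (4) with the example coefficients α = 0.1, β = 0.2, δ = 0.7, the combined loss could be written as below. The soft-label terms are computed as cross entropy between the temperature-scaled teacher and student distributions; the T² rescaling sometimes used in knowledge distillation is omitted because the text does not mention it, and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, labels, teacher1_logits, teacher2_logits,
                      T=4.0, alpha=0.1, beta=0.2, delta=0.7):
    """L_total = alpha*L_hard + beta*L_soft1 + delta*L_soft2 (formula (4))."""
    # hard loss: cross entropy between the real labels and the student output
    l_hard = F.cross_entropy(student_logits, labels)

    # soft losses: cross entropy between each teacher's soft labels and the
    # student's temperature-scaled distribution
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    soft1 = F.softmax(teacher1_logits / T, dim=-1)
    soft2 = F.softmax(teacher2_logits / T, dim=-1)
    l_soft1 = -(soft1 * log_p_student).sum(dim=-1).mean()
    l_soft2 = -(soft2 * log_p_student).sum(dim=-1).mean()

    return alpha * l_hard + beta * l_soft1 + delta * l_soft2
```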
And (4.3) training to obtain a final student model, and keeping student network parameters as a lightweight model for identifying unknown radar modulation types.
Step 5: identifying the intercepted radar signals after the final lightweight model is obtained. A radar signal of unknown type undergoes the same preprocessing as in step 2 and then directly enters the lightweight network, which automatically completes the identification of the modulation mode.
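For completeness, an inference path corresponding to this step might look like the sketch below; spwvd, preprocess_tf_image and StudentNet refer to the illustrative helpers above and are hypothetical names, as is the scaling of the time-frequency image to the 0-255 range before preprocessing.

```python
import numpy as np
import torch

def identify_modulation(received_signal: np.ndarray, model, class_names):
    """Step 5 sketch: time-frequency transform, preprocessing, one forward pass."""
    tf_image = spwvd(received_signal)                        # 2-D time-frequency image
    tf_image = 255.0 * tf_image / (tf_image.max() + 1e-12)   # scale into image range
    x = preprocess_tf_image(tf_image)                        # resize + channel-mean subtraction
    x = torch.from_numpy(x).float().unsqueeze(0).unsqueeze(0)  # shape (1, 1, 128, 128)
    model.eval()
    with torch.no_grad():
        pred = model(x).argmax(dim=-1).item()
    return class_names[pred]
```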
Specifically, the method is verified by simulation in the present embodiment.
The simulated radar modulation signals comprise 14 types in total; the types and parameters are shown in fig. 6, and the signal length N is 1024. The training set samples have signal-to-noise ratios ranging from -10 dB to 8 dB; at every 2 dB step, 600 samples per signal satisfying the parameters of fig. 6 are generated as training set 1. The training samples from -10 dB to 0 dB in training set 1 are taken out separately as training set 2. The test set samples have signal-to-noise ratios ranging from -12 dB to 0 dB, with 400 samples per signal at every 1 dB step.
Further, fig. 3 shows a recognition accuracy curve of the final lightweight model obtained by training in the embodiment of the present invention at different signal-to-noise ratios. When the signal-to-noise ratio is-6 dB, the identification accuracy of most signals is more than 96%; when the signal-to-noise ratio is-4 dB, the identification accuracy of all signals is more than 98%. FIG. 4 shows the average recognition accuracy of the original student model and the final lightweight model trained after knowledge distillation under different signal-to-noise ratios in the embodiment of the invention. Two soft labels are added as additional supervision information, so that the identification accuracy of the network model can be obviously improved.
The method is effective, the network model is lightened, the 14-class radar signal modulation types can be identified, and the identification accuracy is high under the condition of a low signal-to-noise ratio.
Other step details and functions of the radar signal intra-pulse modulation mode identification algorithm of the embodiment of the present invention are known to those skilled in the art, and are not described herein in detail in order to reduce redundancy.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (3)

1. A knowledge distillation-based radar signal modulation mode identification method is characterized by comprising the following steps:
step 1: performing smooth pseudo Wigner-Ville transformation on the intercepted radar signal to obtain a two-dimensional time-frequency image;
the digital model of the radar signal is:
x(t)=s(t)+n(t)
wherein x (t) is a signal received by the receiver; s (t) is a radar signal; n (t) is channel noise;
the specific formula for performing smooth pseudo Wigner-Ville transformation on radar signals x (t) is as follows:
$$\mathrm{SPWVD}_x(t,f)=\iint h(\tau)\,g(s-t)\,x\!\left(s+\frac{\tau}{2}\right)x^{*}\!\left(s-\frac{\tau}{2}\right)e^{-j2\pi f\tau}\,\mathrm{d}s\,\mathrm{d}\tau$$
wherein t and f respectively represent variables in time domain and frequency domain in time-frequency analysis; h (tau) and g (s-t) are window functions of a frequency domain and a time domain respectively;
step 2: preprocessing a two-dimensional time-frequency image, including adjusting the size and reducing the image channel mean value for standardization; taking the modulation type of the radar signal as a label, and constructing a first training set; constructing a second training set of the radar signals with the damaged time-frequency structure under the condition of low signal-to-noise ratio;
and step 3: three different deep convolutional neural networks were designed: two teacher networks, one student network; the parameter quantity of the two teacher networks is larger, and the parameter quantity of the student networks is smaller;
and 4, step 4: training two teacher networks and a student network to obtain a lightweight model for identifying the modulation type of the unknown radar;
step 4.1: adding a temperature variable T into the softmax layers of the two teacher networks, so that the softmax function is defined as shown in the following formula:
$$q_i=\frac{\exp(z_i/T)}{\sum_j \exp(z_j/T)}$$
where z_i is the output of the last layer of the teacher network; adding the temperature variable T makes each category produce a smoother probability distribution, so that the information carried by the negative labels is relatively amplified and model training pays more attention to the negative labels;
step 4.2: training a first teacher network incorporating the temperature variable T using a first training set; training a second teacher network incorporating the temperature variable T using a second training set;
step 4.3: training a student network by using a first training set, adding two teacher network outputs of a training set data softmax layer as additional supervision information during training, defining the two outputs as a first soft label and a second soft label respectively, and reconstructing a loss function with real labels of the training set, wherein the following formula is as follows:
L_total = αL_hard + βL_soft1 + δL_soft2
wherein L_hard is the cross-entropy loss between the real labels and the student network output; L_soft1 is the cross-entropy loss between the first soft label, generated by the first teacher network, and the student network output; L_soft2 is the cross-entropy loss between the second soft label, generated by the second teacher network, and the student network output; and α, β, and δ are the coefficients of the corresponding loss terms;
step 4.4: training to obtain a final student model, and keeping student network parameters as a lightweight model for identifying unknown radar modulation types;
and 5: when an unknown type radar signal is intercepted, carrying out smooth pseudo Wigner-Ville transformation on the unknown type radar signal to obtain a two-dimensional time-frequency image; and preprocessing the two-dimensional time-frequency images, and inputting the preprocessed two-dimensional time-frequency images into a lightweight model for identifying unknown radar modulation types to obtain an identification result.
2. The knowledge distillation-based radar signal modulation mode identification method according to claim 1, wherein the knowledge distillation-based radar signal modulation mode identification method comprises the following steps: the method for preprocessing the two-dimensional time-frequency image in the step 2 specifically comprises the following steps: uniformly adjusting the time-frequency images to 128 × 128 by adopting a nearest neighbor interpolation method; and subtracting the pixel average value in the channel where each pixel value of the time-frequency image is positioned from each pixel value of the time-frequency image, reducing the pixel value of the input image and reducing the common characteristics in the image.
3. The knowledge distillation-based radar signal modulation scheme identification method according to claim 1 or 2, wherein: In the step 3, both teacher networks adopt a resnet34 network structure, the two-dimensional time-frequency image after pretreatment is subjected to feature extraction through 16 groups of residual error units, each group of residual error units is respectively provided with two layers of convolution, and the extracted features enter a full connection layer for classification after passing through an Avgpooling layer; the student network comprises 4 groups of convolution units, wherein the first group is a convolution layer with convolution kernel of 7 x 7, and the rest groups are residual error units and respectively have two layers of convolution; the preprocessed two-dimensional time-frequency image enters a student network, firstly, after the two-dimensional time-frequency image is subjected to first group convolution and maximum pooling, the two-dimensional time-frequency image sequentially enters 3 groups of residual error units for feature extraction, and the extracted features enter a full connection layer for classification after passing through an Avgpooling layer.
CN202110569016.4A 2021-05-25 2021-05-25 Knowledge distillation-based radar signal modulation mode identification method Active CN113343796B (en)

Priority Applications (1)

Application Number: CN202110569016.4A (CN113343796B)
Priority Date: 2021-05-25
Filing Date: 2021-05-25
Title: Knowledge distillation-based radar signal modulation mode identification method

Applications Claiming Priority (1)

Application Number: CN202110569016.4A (CN113343796B)
Priority Date: 2021-05-25
Filing Date: 2021-05-25
Title: Knowledge distillation-based radar signal modulation mode identification method

Publications (2)

Publication Number Publication Date
CN113343796A 2021-09-03
CN113343796B CN113343796B (en) 2022-04-05

Family

ID=77471181

Family Applications (1)

Application Number: CN202110569016.4A (Active, CN113343796B)
Title: Knowledge distillation-based radar signal modulation mode identification method
Priority Date: 2021-05-25
Filing Date: 2021-05-25

Country Status (1)

Country Link
CN (1) CN113343796B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115577305A (en) * 2022-10-31 2023-01-06 中国人民解放军军事科学院系统工程研究院 Intelligent unmanned aerial vehicle signal identification method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097084A (en) * 2019-04-03 2019-08-06 浙江大学 Pass through the knowledge fusion method of projection feature training multitask student network
CN112086103A (en) * 2020-08-17 2020-12-15 广东工业大学 Heart sound classification method
CN112116030A (en) * 2020-10-13 2020-12-22 浙江大学 Image classification method based on vector standardization and knowledge distillation
CN112417973A (en) * 2020-10-23 2021-02-26 西安科锐盛创新科技有限公司 Unmanned system based on car networking

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097084A (en) * 2019-04-03 2019-08-06 浙江大学 Pass through the knowledge fusion method of projection feature training multitask student network
CN112086103A (en) * 2020-08-17 2020-12-15 广东工业大学 Heart sound classification method
CN112116030A (en) * 2020-10-13 2020-12-22 浙江大学 Image classification method based on vector standardization and knowledge distillation
CN112417973A (en) * 2020-10-23 2021-02-26 西安科锐盛创新科技有限公司 Unmanned system based on car networking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
高钦泉 et al.: "Knowledge distillation-based compression method for super-resolution convolutional neural networks" (基于知识蒸馏的超分辨率卷积神经网络压缩方法), 《计算机应用》 (Journal of Computer Applications) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115577305A (en) * 2022-10-31 2023-01-06 中国人民解放军军事科学院系统工程研究院 Intelligent unmanned aerial vehicle signal identification method and device

Also Published As

Publication number Publication date
CN113343796B (en) 2022-04-05

Similar Documents

Publication Publication Date Title
WO2021134871A1 (en) Forensics method for synthesized face image based on local binary pattern and deep learning
CN110086737B (en) Communication signal modulation mode identification method based on graph neural network
CN110532932B (en) Method for identifying multi-component radar signal intra-pulse modulation mode
CN108564006B (en) Polarized SAR terrain classification method based on self-learning convolutional neural network
CN113591145A (en) Federal learning global model training method based on difference privacy and quantification
CN109471074B (en) Radar radiation source identification method based on singular value decomposition and one-dimensional CNN network
CN114092769B (en) Transformer substation multi-scene inspection analysis method based on federal learning
CN111428817A (en) Defense method for resisting attack by radio signal identification
CN112949387A (en) Intelligent anti-interference target detection method based on transfer learning
CN113572708B (en) DFT channel estimation improvement method
CN114912486A (en) Modulation mode intelligent identification method based on lightweight network
CN113343796B (en) Knowledge distillation-based radar signal modulation mode identification method
CN114943245A (en) Automatic modulation recognition method and device based on data enhancement and feature embedding
CN112749663A (en) Agricultural fruit maturity detection system based on Internet of things and CCNN model
CN115186712A (en) Modulated signal identification method and system
CN115392285A (en) Deep learning signal individual recognition model defense method based on multiple modes
CN114980122A (en) Small sample radio frequency fingerprint intelligent identification system and method
CN116566777B (en) Frequency hopping signal modulation identification method based on graph convolution neural network
CN114826459B (en) Spectrum map accurate construction method based on cross-domain reasoning
CN113343924B (en) Modulation signal identification method based on cyclic spectrum characteristics and generation countermeasure network
CN115422977A (en) Radar radiation source signal identification method based on CNN-BLS network
CN114529766A (en) Heterogeneous source SAR target identification method based on domain adaptation
CN115062690A (en) Bearing fault diagnosis method based on domain adaptive network
CN114724245A (en) CSI-based incremental learning human body action identification method
CN114070688A (en) Multi-standard underwater acoustic communication signal modulation identification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant