CN114004250A - Method and system for identifying open set of modulation signals of deep neural network - Google Patents
- Publication number
- CN114004250A (application CN202111070105.0A)
- Authority
- CN
- China
- Prior art keywords
- data set
- signal
- generator
- discriminator
- modulation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/08—Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Signal Processing (AREA)
- Digital Transmission Methods That Use Modulated Carrier Waves (AREA)
Abstract
The invention provides a method and a system for open-set identification of modulation signals with a deep neural network. The method comprises the following steps: establishing a data set X of known modulation types and a data set V of unknown modulation types in a first environment, and initializing the parameters of a generator G and a discriminator D; obtaining a preset parameter-update count threshold; updating the parameters of the discriminator D and the generator G according to that threshold, and performing signal enhancement on the data set X and the data set V; modeling the modulation signals through a one-dimensional convolution residual network to obtain a 1D-ResNet model; and inputting a test set S into the 1D-ResNet model to obtain a first identification result. The method addresses the technical problems of the prior art that modulation identification algorithms are sensitive to noise, do not consider open-set identification in real environments, cannot distinguish signals of unknown modulation types, and place high demands on signal quality.
Description
Technical Field
The invention relates to the technical field of communication, in particular to a method and a system for identifying an open set of a modulation signal of a deep neural network.
Background
In recent years, with the emergence of neural network (NN) technology, neural networks have been applied to modulation identification tasks thanks to their excellent feature extraction and data mapping capabilities. In a complex real electromagnetic environment, it is difficult for a training set to cover data of all modulation types, so the test set may contain modulation types that never appear in the training set; during identification, both signals of known modulation types and signals of unknown modulation types must be handled, and this task is called open-set recognition. Because wireless signals are easily affected by various kinds of interference and noise during real-world propagation, the quality of the signals arriving at the receiver is often poor, which may cause modulation identification to fail.
However, in the process of implementing the technical solution of the invention in the embodiments of the present application, the inventors of the present application find that the above-mentioned technology has at least the following technical problems:
in the prior art, a modulation identification algorithm is sensitive to noise, the problem of open set identification in a real environment is not considered, unknown modulation type signals cannot be distinguished, the requirement on signal quality is high, and the robustness and the generalization of the algorithm are poor.
Disclosure of Invention
The embodiments of the present application provide a method and a system for open-set identification of modulation signals with a deep neural network, solving the technical problems of the prior art that modulation identification algorithms are sensitive to noise, do not consider open-set identification in real environments, cannot distinguish signals of unknown modulation types, and place high demands on signal quality. Signal enhancement addresses the noise sensitivity, the inability to distinguish unknown modulation types, and the high signal-quality requirements; open-set identification is achieved by adding signals of unknown modulation types to the training data and adding a loss function, which improves identification accuracy and enhances robustness and generalization.
In view of the foregoing problems, embodiments of the present application provide a method and system for identifying open sets of modulation signals of a deep neural network.
In a first aspect, the embodiments of the present application provide a deep neural network modulation signal open-set identification method, the method comprising the following steps: in a first environment, creating a data set X of known modulation types and a data set V of unknown modulation types, where X = {x1, x2, ..., xn}, n = 1, 2, ..., N, and V = {v1, v2, ..., vm}, m = 1, 2, ..., M; initializing the parameters of a generator G and a discriminator D; obtaining a preset parameter-update count threshold; updating the parameters of the discriminator D and the generator G according to that threshold, and performing signal enhancement on the data set X and the data set V; modeling the modulation signals through a one-dimensional convolution residual network according to the signal-enhanced data sets X and V to obtain a 1D-ResNet model; and inputting a test set S into the 1D-ResNet model to obtain a first identification result.
In another aspect, an embodiment of the present application provides a deep neural network modulation signal open-set identification system, the system comprising: a first establishing unit for establishing, in a first environment, a data set X of known modulation types and a data set V of unknown modulation types, where X = {x1, x2, ..., xn}, n = 1, 2, ..., N, and V = {v1, v2, ..., vm}, m = 1, 2, ..., M; a first execution unit for initializing the parameters of a generator G and a discriminator D; a first obtaining unit for obtaining a preset parameter-update count threshold; a second execution unit for updating the parameters of the discriminator D and the generator G according to the preset threshold and performing signal enhancement on the data set X and the data set V; a second obtaining unit for modeling the modulation signals through a one-dimensional convolution residual network according to the signal-enhanced data sets X and V to obtain a 1D-ResNet model; and a third obtaining unit for inputting a test set S into the 1D-ResNet model to obtain a first identification result.
In a third aspect, an embodiment of the present application provides a deep neural network modulation signal open set identification system, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of the first aspect when executing the program.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
with the above method, a data set X of known modulation types and a data set V of unknown modulation types are established in a first environment, where X = {x1, x2, ..., xn}, n = 1, 2, ..., N, and V = {v1, v2, ..., vm}, m = 1, 2, ..., M; the parameters of a generator G and a discriminator D are initialized; a preset parameter-update count threshold is obtained; the parameters of the discriminator D and the generator G are updated according to that threshold, and signal enhancement is performed on the data set X and the data set V; the modulation signals are modeled through a one-dimensional convolution residual network according to the signal-enhanced data sets X and V to obtain a 1D-ResNet model; and a test set S is input into the 1D-ResNet model to obtain a first identification result. Signal enhancement solves the problems of noise sensitivity, inability to distinguish signals of unknown modulation types, and high signal-quality requirements, while adding signals of unknown modulation types to the training data and adding a loss function achieves open-set identification, improves identification accuracy, and enhances robustness and generalization.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the application clearer, so that it can be implemented according to the description, and to make the above and other objects, features, and advantages more readily understandable, the detailed description of the application follows.
Drawings
Fig. 1 is a schematic flowchart of a method for identifying an open set of modulation signals of a deep neural network according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart illustrating a data set establishment process of a deep neural network modulation signal open set identification method according to an embodiment of the present application;
fig. 3 is a schematic flow chart illustrating a single parameter update of a method for identifying open sets of modulation signals of a deep neural network according to an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of a training discriminator of a deep neural network modulation signal open set identification method according to an embodiment of the present disclosure;
FIG. 5 is a schematic flowchart of a training generator of a deep neural network modulation signal open set identification method according to an embodiment of the present disclosure;
fig. 6 is a schematic flowchart of a method for identifying an open set of modulation signals of a deep neural network according to the present application to generate a noise reduction modulation data set and an unknown modulation signal data set;
FIG. 7 is a schematic flowchart of modeling a modulation signal in a method for identifying a modulation signal open set of a deep neural network according to an embodiment of the present application;
fig. 8 is a schematic flowchart of obtaining an identification accuracy rate of a deep neural network modulation signal open set identification method according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a deep neural network modulation signal open set identification system according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Description of reference numerals: the system comprises a first establishing unit 11, a first executing unit 12, a first obtaining unit 13, a second executing unit 14, a second obtaining unit 15, a third obtaining unit 16, an electronic device 300, a memory 301, a processor 302, a communication interface 303 and a bus architecture 304.
Detailed Description
The embodiments of the present application provide a method and a system for open-set identification of modulation signals with a deep neural network, solving the technical problems of the prior art that modulation identification algorithms are sensitive to noise, do not consider open-set identification in real environments, cannot distinguish signals of unknown modulation types, and place high demands on signal quality. Signal enhancement addresses the noise sensitivity, the inability to distinguish unknown modulation types, and the high signal-quality requirements; open-set identification is achieved by adding signals of unknown modulation types to the training data and adding a loss function, which improves identification accuracy and enhances robustness and generalization.
Summary of the application
In recent years, with the emergence of neural network (NN) technology, neural networks have been applied to modulation identification tasks thanks to their excellent feature extraction and data mapping capabilities. In a complex real electromagnetic environment, it is difficult for a training set to cover data of all modulation types, so the test set may contain modulation types that never appear in the training set; during identification, both signals of known modulation types and signals of unknown modulation types must be handled, and this task is called open-set recognition. Because wireless signals are easily affected by various kinds of interference and noise during real-world propagation, the quality of the signals arriving at the receiver is often poor, which may cause modulation identification to fail. The present application addresses the technical problems of the prior art that modulation identification algorithms are sensitive to noise, do not consider open-set identification in real environments, cannot distinguish signals of unknown modulation types, and place high demands on signal quality.
In view of the above technical problems, the technical solution provided by the present application has the following general idea:
the embodiment of the application provides a method for identifying an open set of modulation signals of a deep neural network, wherein the method comprises the following steps: in a first context, a data set X of a known modulation type and a data set V of an unknown modulation type are created, where X ═ X1,x2,...,xn},n=1,2,...,N,V={v1,v2,...,vm1,2, say, M; initializing parameters of a generator G and a discriminator D; obtaining a preset parameter updating time threshold; updating the parameters of the discriminator D and the generator G according to the preset parameter updating time threshold value, and updating the data set X and the generator GCarrying out signal enhancement on the data set V; modeling a modulation signal through a one-dimensional convolution residual error network according to the data set X and the data set V after signal enhancement to obtain a 1D-ResNet network model; and inputting the test set S into the 1D-ResNet network model to obtain a first identification result.
Having thus described the general principles of the present application, various non-limiting embodiments thereof will now be described in detail with reference to the accompanying drawings.
Example one
As shown in fig. 1, an embodiment of the present application provides a deep neural network modulation signal open-set identification method, where the method includes:
S100: in a first environment, creating a data set X of known modulation types and a data set V of unknown modulation types, where X = {x1, x2, ..., xn}, n = 1, 2, ..., N, and V = {v1, v2, ..., vm}, m = 1, 2, ..., M;
Specifically, the first environment refers to any complex environment, including white Gaussian noise environments with different signal-to-noise ratios, real-environment noise channels, and the like, and the known modulation types include Amplitude Modulation (AM), Frequency Modulation (FM), Frequency Shift Keying (FSK), Phase Shift Keying (PSK), Binary Phase Shift Keying (BPSK), Quadrature Phase Shift Keying (QPSK), and so on. White Gaussian noise with different signal-to-noise ratios and real-environment noise are added to a clean signal data set of known modulation types to obtain the noisy data set X, which can be expressed as X = {x1, x2, ..., xn}, n = 1, 2, ..., N, where N is the number of signal samples. Further, Amplitude Shift Keying (ASK) and Minimum Shift Keying (MSK) signals are generated under white Gaussian noise with different signal-to-noise ratios and under real-environment noise to form the noisy modulated-signal data set V = {v1, v2, ..., vm}, m = 1, 2, ..., M, where M is the number of signal samples. This lays the foundation for the subsequent signal enhancement and open-set modulation identification operations.
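The SNR-controlled noise expansion described above can be sketched in a few lines of pure Python. This is a minimal illustration, not the patent's pipeline: the function name `add_awgn`, the toy sine tone, and the fixed random seed are our assumptions.

```python
import math
import random

def add_awgn(signal, snr_db, rng=None):
    """Add white Gaussian noise to `signal` at the requested SNR (dB).

    The noise power is derived from the measured signal power, so the
    noisy output has approximately the requested signal-to-noise ratio.
    """
    rng = rng or random.Random(0)
    sig_power = sum(s * s for s in signal) / len(signal)
    noise_power = sig_power / (10.0 ** (snr_db / 10.0))
    sigma = math.sqrt(noise_power)
    return [s + rng.gauss(0.0, sigma) for s in signal]

# Toy "data set X": one clean 512-point tone expanded over several SNRs,
# mirroring the -10 dB .. 20 dB expansion described in the text.
clean = [math.sin(2 * math.pi * 5.0 * t / 512) for t in range(512)]
X = {snr: add_awgn(clean, snr) for snr in (-10, 0, 10, 20)}
```

Lower SNR entries carry visibly more noise energy, which is what lets the later network learn denoising across a range of channel conditions.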
S200: initializing parameters of a generator G and a discriminator D;
specifically, a conditional generative adversarial network (CGAN) is used to perform signal enhancement on the modulation signals under various complex channels; the CGAN converts a noisy modulation signal into a denoised signal of the same modulation type by learning a nonlinear mapping function F. The CGAN consists of two deep neural networks, a Generator (G) and a Discriminator (D). The generator G acts as the mapping function F: given a noisy modulation signal xn as input, it generates a denoised modulation signal. The discriminator D measures the difference between the denoised modulation signal generated by the generator G and the clean signal yn. Initializing the parameters refers to initializing the weights and biases of the network nodes before training. Parameter initialization affects whether the network can be trained to a good result and how quickly it converges, so proper initialization helps obtain a good training result.
S300: obtaining a preset parameter updating time threshold;
s400: according to the preset parameter updating time threshold, performing parameter updating on the discriminator D and the generator G, and performing signal enhancement on the data set X and the data set V;
specifically, the predetermined parameter update number threshold is used to set the number of times the generator G and the discriminator D are updated, and if 50, 50 parameter updates are performed on the generator G and the discriminator D. And performing signal enhancement on the data set X and the data set V through parameter updating, constructing a data set required by a subsequent modulation signal open set identification task by using a generator after the last parameter updating, namely sequentially inputting signals in the noisy modulation signal data set X into a generator G for enhancement to obtain a noise reduction modulation data set, and sequentially inputting signals in the data set V into output signals of the generator G to form an unknown modulation signal data set. Thereby achieving the effect of signal enhancement of the data set using the CGAN. The countermeasure network is generated to enhance the signal, so that the identification can be carried out in the environment with low signal to noise ratio, and the robustness is good.
S500: modeling a modulation signal through a one-dimensional convolution residual error network according to the data set X and the data set V after signal enhancement to obtain a 1D-ResNet network model;
s600: and inputting the test set S into the 1D-ResNet network model to obtain a first identification result.
Specifically, since signals of both known and unknown modulation types exist in a real environment, a one-dimensional convolution residual network (1D-ResNet) is built to model the modulation signals, and an open-set identification data set is constructed: the enhanced noise-reduced modulation data set (i.e., the data set of known modulation types) and the unknown-modulation-signal data set are merged into an open-set identification data set U = {u1, u2, ..., un}, n = 1, 2, ..., N, where N is the number of signal samples. The samples in the data set U are then shuffled and divided into a training set T, a validation set E, and a test set S at a ratio of 8:1:1. A 1D-ResNet network structure is further constructed, consisting of two residual modules and taking the 512-dimensional modulation signals in the data set U as input. The samples in the training set T are fed into the 1D-ResNet for training, and after each training round the loss of the model on the training set and the validation set is computed once. When the preset number of training rounds is reached, or the network loss has not decreased for 5 consecutive rounds, the training process of the neural network is complete and the trained 1D-ResNet is obtained. Inputting the test set S into the trained 1D-ResNet yields the identification result, i.e., the first identification result. The one-dimensional convolution residual network automatically extracts the features of the input modulation signal for identification, avoiding the complicated manual feature-extraction steps of traditional prior-art methods.
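The residual modules mentioned above add an identity shortcut from a block's input to its output. A minimal numeric illustration follows, with a stand-in `transform` callable in place of the real convolution stack (both names are ours, not the patent's):

```python
def residual_block(x, transform):
    """Minimal 1-D residual module: output = transform(x) + x.

    `transform` stands in for the module's convolution stack; the
    identity shortcut added here is what makes the module residual.
    """
    fx = transform(x)
    return [a + b for a, b in zip(fx, x)]

x = [1.0, 2.0, 3.0]
out = residual_block(x, lambda v: [2.0 * a for a in v])  # f(x) = 2x, so out = 3x
```

The shortcut lets gradients flow around the convolution stack, which is what allows deeper networks such as the 1D-ResNet to train stably.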
Meanwhile, signals of unknown modulation types are added to the training data, and a center-error loss function is added at the fully connected layer of the network, so that the network can perform open-set identification of modulation signals, improving the generalization capability of the model.
Further, as shown in fig. 2, establishing the data set X of known modulation types and the data set V of unknown modulation types in the first environment, step S100 further includes:
S110: generating a clean signal data set Y = {y1, y2, ..., yn}, n = 1, 2, ..., N, of multiple modulation types, including amplitude modulation, frequency shift keying, binary frequency shift keying, and quaternary frequency shift keying, where N is the number of signal samples;
S120: adding white Gaussian noise with different signal-to-noise ratios and real-environment noise to the clean signals in the clean signal data set Y to obtain a noisy signal data set as the data set X = {x1, x2, ..., xn}, n = 1, 2, ..., N, of known modulation types;
S130: pairing the clean signal data set Y with the data set X to form a training data pair set T = {(x1, y1), (x2, y2), ..., (xn, yn)}, n = 1, 2, ..., N;
S140: generating amplitude shift keying and minimum shift keying data sets under white Gaussian noise with different signal-to-noise ratios and under real-environment noise as the data set V = {v1, v2, ..., vm}, m = 1, 2, ..., M, of unknown modulation types, where M is the number of signal samples.
In particular, the software-defined radio toolkit GNU Radio is used to generate the modulation signals. GNU Radio is a Python-based platform on which users can write a variety of wireless applications in software. The GNU Radio platform is usually combined with the USRP radio peripheral so that transmission and reception are defined in software, forming a complete hardware/software communication system. The noisy modulation signals are generated as follows: GNU Radio generates 100 clean signals for each of the seven modulation types, including amplitude modulation, frequency shift keying, binary frequency shift keying, phase shift keying, amplitude shift keying, and minimum shift keying. Each signal is 512 points long, with a modulation frequency of 1 kHz. White Gaussian noise of different energies is added to each clean modulation signal, expanding it into 1000 noisy signals with signal-to-noise ratios ranging from -10 dB to 20 dB. USRP hardware is deployed in a real environment with a distance of 50 m between the transmitting and receiving devices; the seven clean modulation signals are sent at the transmitter, and the modulation signals affected by environmental noise are received at the receiver.
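To make the 512-point, bit-keyed waveforms concrete, here is a toy binary FSK generator in pure Python. It is purely illustrative: the patent synthesizes its signals with GNU Radio and USRP hardware, and the `bfsk` helper and its frequency parameters are our assumptions.

```python
import math

def bfsk(bits, samples_per_bit=64, f0=2.0, f1=4.0):
    """Toy binary FSK waveform: frequency f0 (in cycles per bit
    period) for bit 0 and f1 for bit 1."""
    out = []
    for b in bits:
        f = f1 if b else f0
        for k in range(samples_per_bit):
            out.append(math.sin(2.0 * math.pi * f * k / samples_per_bit))
    return out

sig = bfsk([0, 1, 0, 1, 1, 0, 1, 0])  # 8 bits x 64 samples = 512 points
```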
White Gaussian noise with different signal-to-noise ratios is added to the clean signals in the clean signal data set Y to obtain the noisy signal data set X, and Y and X are paired to form the training data pair set T, which the neural network fits against during training. The data set X serves as the data set of known modulation types and the data set V as the data set of unknown modulation types for open-set identification, taking the open-set identification problem in real environments into account and thereby improving identification accuracy.
Further, as shown in fig. 3, updating the parameters of the discriminator D and the generator G according to the predetermined parameter-update count threshold and performing signal enhancement on the data set X and the data set V, a single parameter update of the discriminator D and the generator G in step S400 further includes:
s410: fixing the parameters of the generator G and training the discriminator D;
s420: and fixing the parameters of the discriminator D and training the generator G.
Specifically, the generator G is composed of an Encoder module and a Decoder module. The encoder consists of 6 one-dimensional convolution layers (kernel size 6, stride 2), each followed by a PReLU activation function, with the convolution depth gradually increasing (16, 32, 64, 128, 256, 512) and the signal length gradually decreasing (512, 256, 128, 64, 32, 16). The decoder input is the sum of the output of the last encoder layer and a normally distributed noise vector Z. The decoder likewise consists of 6 one-dimensional convolution layers, with the convolution depth gradually decreasing (256, 128, 64, 32, 16, 1), each layer followed by a PReLU activation function, and the output signal length gradually increasing (16, 32, 64, 128, 256, 512); each decoder layer concatenates the output of the corresponding encoder layer to its input, i.e., a skip connection. The discriminator has a structure similar to the encoder, comprising 6 one-dimensional convolution layers with the same convolution parameters as the encoder. Using RMSprop as the optimizer (learning rate 0.0001), the generator G and the discriminator D are trained for 50 epochs. The updated generator G lays the foundation for the subsequent construction of the data sets.
Further, as shown in fig. 4, fixing the parameters of the generator G and training the discriminator D, step S410 further includes:
S411: randomly selecting N signal pairs (xn, yn) from the training data pair set T and inputting the noisy modulation signal xn into the generator G to obtain the output G(xn);
S412: splicing G(xn) and xn into a two-dimensional signal (G(xn), xn) and inputting it into the discriminator D, with the target output of the discriminator set to '0';
S413: splicing yn and xn into a two-dimensional signal (xn, yn) and inputting it into the discriminator D, with the target output of the discriminator set to '1'.
Specifically, the process of training the discriminator D minimizes the loss function L_CGAN(D), and the parameters of the discriminator D are updated by error back-propagation. A least-squares form consistent with the target outputs '0' and '1' above is

$$\min_{D} L_{CGAN}(D)=\frac{1}{N}\sum_{n=1}^{N}\Big[\big(D(x_n,y_n)-1\big)^{2}+D\big(G(x_n),x_n\big)^{2}\Big]$$

wherein yn is the nth clean modulation signal, xn is the nth noisy modulation signal, and the summation runs from the 1st to the Nth modulation signal. G(xn) denotes the denoised modulation signal generated by the generator G from the nth noisy modulation signal, D(·) denotes the discriminator's judgment, for its input signal, of whether it is a clean modulation signal or one generated by the generator G, and min over D indicates that the parameters of the discriminator D are updated by minimizing the loss function. Training the discriminator D continuously improves its discrimination ability, so that it can distinguish generated modulation signals from clean signals.
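The discriminator objective can be exercised numerically. The following pure-Python sketch assumes a least-squares form matching the '0'/'1' target outputs described above; the stub networks `G` and `D` and the scalar "signals" are illustrative inventions, not the patent's networks.

```python
def d_loss(D, G, pairs):
    """Least-squares discriminator loss: real pairs (x_n, y_n) are
    pushed toward output 1, generated pairs (G(x_n), x_n) toward 0."""
    total = 0.0
    for x, y in pairs:
        total += (D(x, y) - 1.0) ** 2 + D(G(x), x) ** 2
    return total / len(pairs)

# Stub 1-sample "signals" and networks; a discriminator that already
# separates real from generated pairs attains loss 0 on this toy data.
G = lambda x: x + 100.0                       # obviously wrong "denoising"
D = lambda a, b: 0.0 if abs(a - b) > 50.0 else 1.0
loss = d_loss(D, G, [(1.0, 1.5), (2.0, 2.5)])
```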
Further, as shown in fig. 5, fixing the parameters of the discriminator D and training the generator G, step S420 includes:
S421: inputting the two-dimensional signal (G(xn), xn) into the discriminator D and training the parameters of the generator G by driving the discriminator output toward '1';
S422: training the parameters of the generator G by making the generator output G(xn) approach yn.
Specifically, the two-dimensional signal (G(xn), xn) is spliced from G(xn) and xn, and the above steps are equivalent to minimizing the loss function L_CGAN(G) and performing error back-propagation to update the parameters of the generator G. A form consistent with the targets above is

$$\min_{G} L_{CGAN}(G)=\frac{1}{N}\sum_{n=1}^{N}\Big[\big(D(G(x_n),x_n)-1\big)^{2}+\lambda\,\big\lVert G(x_n)-y_n\big\rVert_{1}\Big]$$

where yn is the nth clean modulation signal, xn is the nth noisy modulation signal, and the summation runs from the 1st to the Nth modulation signal. G(xn) denotes the denoised modulation signal generated by the generator G from the nth noisy modulation signal, D(·) denotes the output of the discriminator D for its input signal, λ is a weight between 0 and 1, and ‖·‖₁ denotes the L1 norm, i.e., the sum of the absolute values of the elements of a vector; min over G indicates that the parameters of the generator G are updated by minimizing the loss function. Training the generator G continuously improves its generative ability, to the point of confusing the discriminator D.
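The generator objective described above, an adversarial term plus a λ-weighted L1 term, can be sketched in the same style. The stub generator and discriminator and the weight `lam=0.5` are assumed values for illustration only.

```python
def g_loss(D, G, pairs, lam=0.5):
    """Generator loss: adversarial term pushing D(G(x), x) toward 1,
    plus a lambda-weighted L1 term pulling G(x) toward the clean y."""
    total = 0.0
    for x, y in pairs:
        gx = G(x)
        l1 = sum(abs(a - b) for a, b in zip(gx, y))
        total += (D(gx, x) - 1.0) ** 2 + lam * l1
    return total / len(pairs)

G = lambda x: x                        # identity "generator"
D = lambda a, b: 1.0                   # fully fooled discriminator
pairs = [([1.0, 2.0], [1.0, 2.5])]     # one (noisy, clean) pair
loss = g_loss(D, G, pairs)             # only the L1 term survives: 0.5 * 0.5
```

With the discriminator fully fooled, the remaining L1 term is what pulls the generator output toward the clean reference signal.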
Further, as shown in fig. 6, the step S400 further includes, in accordance with the predetermined parameter update time threshold, performing parameter update on the discriminator D and the generator G:
s430: sequentially inputting the signals in the data set X into the generator G for enhancement to obtain a noise reduction modulation data set
S440: and sequentially inputting the signals in the data set V into the generator G, the output signals forming an unknown modulation signal data set
Specifically, inputting the noisy modulation signal x_n into the generator G produces a denoised modulation signal. Once the generator G has been trained to the point where its generation capability confuses the discriminator D, its output can be regarded as an estimate of the clean signal y_n. Sequentially inputting the signals in the data set X and the data set V into the trained generator G yields the noise-reduced modulation data set and the unknown modulation signal data set, achieving signal enhancement and overcoming the interference of Gaussian white noise or environmental noise.
Further, as shown in fig. 7, the step S500 of modeling the modulation signal through a one-dimensional convolution residual network according to the signal-enhanced data set X and data set V to obtain a 1D-ResNet network model further includes:
s510: constructing a signal open set identification data set U, wherein the data set U comprises the data set X after signal enhancement and signals in the data set V;
s520: randomly shuffling the samples in the data set U and dividing them into a training set T, a verification set E and the test set S according to a predetermined ratio;
s530: constructing a 1D-ResNet network structure;
s540: inputting the training set T into the 1D-ResNet network structure for training;
s550: carrying out weighted summation on a multi-classification cross entropy loss function and a central error loss function to obtain a 1D-ResNet loss function;
s560: calculating loss values of the current model on the training set T and the verification set E according to the 1D-ResNet loss function;
s570: and obtaining the 1D-ResNet network model when the preset number of training rounds is reached or the loss value of the network does not decrease for a predetermined number of consecutive rounds.
Specifically, a signal open set identification data set U is constructed, wherein the data set U contains signals of known modulation types and signals of unknown modulation types, namely the signal-enhanced data set X and data set V. The samples in the data set U are randomly shuffled and divided into a training set T, a validation set E and a test set S at a ratio of 8:1:1. Further, the 1D-ResNet network structure is constructed, composed of two residual modules. The convolution layers of each residual module consist of a one-dimensional convolution layer combined with a batch normalization operation and a ReLU activation function; a skip connection is added across the corresponding convolution layers; the convolution kernel size is 5 and the stride is 3. The output of each residual module is followed by a max-pooling operation with a pooling window size of 2. The fully-connected layers are configured with 64 neurons in the first fully-connected layer, 3 neurons in the second fully-connected layer, and 6 output neural units in the output fully-connected layer, with a softmax activation function. A Dropout operation is also added to the fully-connected layers to prevent overfitting.
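The shuffle-and-split step above can be sketched as follows; `split_dataset` is a hypothetical helper, not a name from the patent, and the fixed seed is only for reproducibility of the illustration:

```python
import random

def split_dataset(samples, ratios=(8, 1, 1), seed=0):
    """Shuffle the samples and split them into training, validation
    and test subsets at the given ratio (8:1:1 in this document)."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    total = sum(ratios)
    n = len(shuffled)
    n_train = n * ratios[0] // total
    n_val = n * ratios[1] // total
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

U = list(range(1000))          # stand-in for the data set U
T, E, S = split_dataset(U)     # training, validation, test sets
```

Shuffling before splitting ensures that known and unknown modulation types are mixed across all three subsets.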
The input of 1D-ResNet is the 512-dimensional modulation signals in the data set U, i.e. the training set T. The parameters of the 7 one-dimensional convolution layers and 3 fully-connected layers in the 1D-ResNet are then set, and the weighted sum of the multi-class cross-entropy loss function and the center error loss function serves as the loss function of the 1D-ResNet.
The multi-class cross-entropy loss function is as follows:

J_1 = -(1/N) Σ_{n=1}^{N} Σ_{k=1}^{K} t_{nk} · log(p_{nk})

where t_{nk} takes the value 0 or 1, indicating whether the nth modulation signal belongs to the kth category, and p_{nk} is the probability, predicted by the network, that the nth modulation signal belongs to the kth category. Σ_{k=1}^{K} accumulates the error over the K categories (K equals 6, consisting of the 5 known modulation categories AM, FM, FSK, BPSK and QPSK, plus the 'Unknown' category to which ASK and MSK jointly belong). N is the number of signal samples, and Σ_{n=1}^{N} accumulates the errors over all samples.
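A minimal pure-Python sketch of this loss, with one-hot labels t and predicted probabilities p (toy values, not from the patent):

```python
import math

def cross_entropy(t, p):
    """Multi-class cross-entropy: -(1/N) * sum over samples n and
    categories k of t_nk * log(p_nk), with t_nk a one-hot label and
    p_nk the network's predicted probability."""
    n = len(t)
    return -sum(t_nk * math.log(p_nk)
                for t_n, p_n in zip(t, p)
                for t_nk, p_nk in zip(t_n, p_n)) / n

# Two samples over K = 3 toy categories; only the true class's
# log-probability contributes for each sample.
t = [[1, 0, 0], [0, 1, 0]]
p = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
J1 = cross_entropy(t, p)
```

Because t is one-hot, each sample contributes only -log of the probability assigned to its true category.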
The central error loss function is given by:

J_2 = (1/2) Σ_{n=1}^{N} ||g_n − c_{y_n}||_2^2

where g_n denotes the output value at the last fully-connected layer after the input signal is propagated forward through 1D-ResNet, and c_{y_n} is the class center of the corresponding category, learned through error back-propagation. ||·||_2^2 is the squared Euclidean distance between the two vectors, and the summation accumulates the error between each sample and its class center. By adding the center error loss function, signals of known modulation types can be mapped to their respective class centers in the high-dimensional space, while signals of unknown modulation types are gathered at the class center of the 'Unknown' class, achieving a clustering effect.
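The center error term can be sketched as below; the 1/2 factor follows the common center-loss convention, and the class names and values are illustrative, not from the patent:

```python
def center_loss(features, labels, centers):
    """Accumulates the squared Euclidean distance between each
    sample's fully-connected output g_n and the learned center
    c_{y_n} of its class, halved by convention."""
    total = 0.0
    for g, y in zip(features, labels):
        c = centers[y]
        total += sum((gi - ci) ** 2 for gi, ci in zip(g, c))
    return 0.5 * total

# Two toy 2-D features, each close to its own class center.
centers = {"AM": [0.0, 0.0], "Unknown": [5.0, 5.0]}
features = [[0.1, -0.1], [4.8, 5.1]]
labels = ["AM", "Unknown"]
J2 = center_loss(features, labels, centers)
```

Minimizing J2 pulls known-type samples toward their class centers and unknown-type samples toward the 'Unknown' center, producing the clustering effect described above.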
The 1D-ResNet is trained by inputting the samples of the training set T into the 1D-ResNet, with the adaptive optimization algorithm Adam selected as the model optimization method. The number of training rounds of the network is set, and the network parameters are trained through error back-propagation by minimizing the loss function J_1D-ResNet, as shown in the following equation.
min J_1D-ResNet = J_1 + β · J_2
where J_1 represents the multi-class cross-entropy loss, J_2 represents the center error loss, β is a weight between 0 and 1, and min(·) denotes training the network parameters by minimizing the loss function. After each round of training, the loss values of the model on the training set and the validation set are calculated. When the preset number of training rounds is reached, or the loss value of the network does not decrease for 5 consecutive rounds, the training process of the neural network is complete and the trained 1D-ResNet is obtained. The one-dimensional convolution residual network automatically extracts the features of the input modulation signals for recognition, avoiding the complex step of manually extracting signal features in traditional methods; meanwhile, by adding signals of unknown modulation types to the training data and adding the center error loss function to the fully-connected layers of the network, the network can perform open-set recognition of modulation signals.
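The weighted loss and the stopping rule described above can be illustrated with a small sketch (function names and the loss sequence are hypothetical):

```python
def combined_loss(j1, j2, beta):
    """J_1D-ResNet = J_1 + beta * J_2, with beta between 0 and 1."""
    return j1 + beta * j2

def train_with_early_stopping(val_losses, max_rounds=100, patience=5):
    """Walks a sequence of per-round validation losses and returns the
    round at which training stops: either max_rounds is reached or the
    loss has not decreased for `patience` consecutive rounds."""
    best = float("inf")
    stall = 0
    for round_idx, loss in enumerate(val_losses[:max_rounds], start=1):
        if loss < best:
            best = loss
            stall = 0
        else:
            stall += 1
            if stall >= patience:
                return round_idx
    return min(len(val_losses), max_rounds)

# Loss improves for 3 rounds, then plateaus; training stops after
# 5 consecutive non-decreasing rounds.
stop = train_with_early_stopping([1.0, 0.8, 0.6, 0.6, 0.6,
                                  0.6, 0.6, 0.6, 0.6])
```

This mirrors the patent's criterion of stopping when the preset round count is reached or the loss fails to decrease for 5 consecutive rounds.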
Further, as shown in fig. 8, after the test set S is input into the 1D-ResNet network model and the first recognition result is obtained, the step S600 further includes:
s610: acquiring real category data;
s620: and comparing the first identification result with the real category data to obtain identification accuracy.
Specifically, the real category data is obtained, and the test set S is input into the trained 1D-ResNet to obtain the recognition result. The recognition result is compared with the real categories, and the recognition accuracy, i.e. the open-set recognition accuracy of the modulation signals, is counted as the percentage of correctly recognized test samples among all test samples. Observing the open-set recognition accuracy of the modulation signals allows the accuracy of the recognition method to be evaluated.
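The accuracy computation amounts to a one-line comparison; the helper name and toy predictions below are illustrative:

```python
def open_set_accuracy(predictions, truths):
    """Percentage of test samples whose predicted modulation category
    (including the 'Unknown' class) matches the true category."""
    correct = sum(p == t for p, t in zip(predictions, truths))
    return 100.0 * correct / len(truths)

preds  = ["AM", "FM", "Unknown", "BPSK", "Unknown"]
truths = ["AM", "FM", "Unknown", "QPSK", "Unknown"]
acc = open_set_accuracy(preds, truths)   # 4 of 5 correct
```

Note that correctly flagging an unknown-type signal as 'Unknown' counts as a correct recognition in the open-set setting.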
To sum up, the method and the system for identifying the open set of the modulation signals of the deep neural network provided by the embodiment of the application have the following technical effects:
1. Due to the adoption of the method, a data set X of a known modulation type and a data set V of an unknown modulation type are established in a first environment, where X = {x_1, x_2, ..., x_n}, n = 1, 2, ..., N, and V = {v_1, v_2, ..., v_m}, m = 1, 2, ..., M; the parameters of a generator G and a discriminator D are initialized; a predetermined parameter update time threshold is obtained; according to the predetermined parameter update time threshold, the parameters of the discriminator D and the generator G are updated, and signal enhancement is performed on the data set X and the data set V; and the modulation signal is modeled through a one-dimensional convolution residual network according to the signal-enhanced data set X and data set V to obtain a 1D-ResNet network model. The embodiment of the application provides a method and a system for open-set recognition of modulation signals by a deep neural network, solves the problems of sensitivity to noise, inability to distinguish signals of unknown modulation types, and high requirements on signal quality through signal enhancement, and achieves the technical effects of open-set recognition, improved recognition accuracy, and enhanced robustness and generalization capability by adding signals of unknown modulation types to the training data and adding loss functions.
2. The features of the input modulation signals are automatically extracted for recognition, avoiding the complex step of manually extracting signal features in traditional methods; meanwhile, by adding signals of unknown modulation types to the training data and adding the center error loss function to the fully-connected layers of the network, the network can perform open-set recognition of modulation signals.
Example two
Based on the same inventive concept as the method for identifying the open set of the modulation signal of the deep neural network in the foregoing embodiment, as shown in fig. 9, the embodiment of the present application provides a system for identifying the open set of the modulation signal of the deep neural network, wherein the system includes:
a first establishing unit 11, the first establishing unit 11 being configured to establish a data set X of a known modulation type and a data set V of an unknown modulation type in a first environment, wherein X = {x_1, x_2, ..., x_n}, n = 1, 2, ..., N, and V = {v_1, v_2, ..., v_m}, m = 1, 2, ..., M;
A first execution unit 12, said first execution unit 12 being configured to initialize the parameters of generator G and of discriminator D;
a first obtaining unit 13, where the first obtaining unit 13 is configured to obtain a threshold of the number of updating times of a predetermined parameter;
a second executing unit 14, where the second executing unit 14 is configured to perform parameter updating on the discriminator D and the generator G according to the predetermined parameter updating number threshold, and perform signal enhancement on the data set X and the data set V;
a second obtaining unit 15, where the second obtaining unit 15 is configured to model a modulation signal through a one-dimensional convolution residual network according to the data set X and the data set V after signal enhancement, and obtain a 1D-ResNet network model;
a third obtaining unit 16, where the third obtaining unit 16 is configured to input the test set S into the 1D-ResNet network model to obtain a result of the first identification.
Further, the system comprises:
a first generation unit, configured to generate a clean signal data set Y = {y_1, y_2, ..., y_n}, n = 1, 2, ..., N, of multiple modulation types including amplitude modulation, frequency modulation, frequency shift keying, binary phase shift keying and quadrature phase shift keying, N being the number of signal samples;
a second execution unit, configured to add white gaussian noise and real environment noise with different signal-to-noise ratios to the clean signal in the clean signal data set Y to obtain a noisy signal data set, which is used as the data set X ═ of the known modulation type1,x2,...,xn},n=1,2,...,N;
A third execution unit, configured to pair the clean signal data set Y with the data set X to form a training data pair set T = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, n = 1, 2, ..., N;
A fourth execution unit, configured to generate two noisy modulation signal data sets, amplitude shift keying and minimum shift keying, with different signal-to-noise ratios as the data set V = {v_1, v_2, ..., v_m}, m = 1, 2, ..., M, of the unknown modulation type, M being the number of signal samples.
Further, the system comprises:
a first training unit for fixing the parameters of the generator G and training the discriminator D;
and the second training unit is used for fixing the parameters of the discriminator D and training the generator G.
Further, the system comprises:
a fifth execution unit, configured to randomly select N groups of signal pairs (x_n, y_n) from the training data pair set T, and input the noisy modulation signal x_n into the generator G to obtain an output G(x_n);
A sixth execution unit, configured to splice the G(x_n) and the x_n into a two-dimensional signal (G(x_n), x_n) and input it into the discriminator D so that the target output of the discriminator is '0';
a seventh execution unit, configured to splice the y_n and the x_n into a two-dimensional signal (x_n, y_n) and input it into the discriminator D so that the target output of the discriminator is '1'.
Further, the system comprises:
an eighth execution unit, configured to input the two-dimensional signal (G(x_n), x_n) into the discriminator D, and train the parameters of the generator G by making the output of the discriminator approach '1';
a ninth execution unit, configured to train the parameters of the generator G by making the output G(x_n) of the generator G approach the y_n.
Further, the system comprises:
a fourth obtaining unit, configured to sequentially input the signals in the data set X to the generator G for enhancement to obtain a noise reduction modulation data set
A tenth execution unit, configured to sequentially input the signals in the data set V into the generator G, the output signals forming an unknown modulation signal data set.
Further, the system comprises:
the first construction unit is used for constructing a signal open set identification data set U, and the data set U comprises the data set X after signal enhancement and signals in the data set V;
an eleventh execution unit, configured to randomly shuffle the samples in the data set U and divide them into a training set T, a verification set E and the test set S according to a predetermined ratio;
the second construction unit is used for constructing a 1D-ResNet network structure;
a twelfth execution unit, configured to input the training set T into the 1D-ResNet network structure for training;
a fifth obtaining unit, configured to perform weighted summation on the multi-class cross entropy loss function and the central error loss function to obtain a 1D-ResNet loss function;
a thirteenth execution unit, configured to calculate loss values of the current model on the training set T and the validation set E according to the 1D-ResNet loss function;
a sixth obtaining unit, configured to obtain the 1D-ResNet network model when the preset number of training rounds is reached or the loss value of the network does not decrease for a predetermined number of consecutive rounds.
Further, the system comprises:
a seventh obtaining unit configured to obtain real category data;
an eighth obtaining unit, configured to compare the first identification result with the real category data, and obtain an identification accuracy.
Exemplary electronic device
The electronic device of the embodiment of the present application is described below with reference to fig. 10.
based on the same inventive concept as the method for identifying the open set of the modulation signals of the deep neural network in the foregoing embodiments, the embodiment of the present application further provides a system for identifying the open set of the modulation signals of the deep neural network, which includes: a processor coupled to a memory, the memory storing a program that, when executed by the processor, causes the system to perform the method of any of the first aspects.
The electronic device 300 includes: processor 302, communication interface 303, memory 301. Optionally, the electronic device 300 may also include a bus architecture 304. Wherein, the communication interface 303, the processor 302 and the memory 301 may be connected to each other through a bus architecture 304; the bus architecture 304 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus architecture 304 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 10, but this is not intended to represent only one bus or type of bus.
The communication interface 303 is a system using any transceiver or the like, and is used for communicating with other devices or communication networks, such as ethernet, Radio Access Network (RAN), Wireless Local Area Network (WLAN), wired access network, and the like.
The memory 301 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an electrically erasable Programmable read-only memory (EEPROM), a compact-read-only-memory (CD-ROM) or other optical disk storage, optical disk storage (including compact disk, laser disk, optical disk, digital versatile disk, blu-ray disk, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and coupled to the processor through a bus architecture 304. The memory may also be integral to the processor.
The memory 301 is used for storing computer-executable instructions for executing the present application, and is controlled by the processor 302 to execute. The processor 302 is configured to execute the computer-executable instructions stored in the memory 301, so as to implement a deep neural network modulation signal open set identification method provided by the above-mentioned embodiment of the present application.
Optionally, the computer-executable instructions in the embodiments of the present application may also be referred to as application program codes, which are not specifically limited in the embodiments of the present application.
The embodiment of the application provides a method for identifying an open set of modulation signals of a deep neural network, wherein the method comprises the following steps: in a first environment, a data set X of a known modulation type and a data set V of an unknown modulation type are created, where X = {x_1, x_2, ..., x_n}, n = 1, 2, ..., N, and V = {v_1, v_2, ..., v_m}, m = 1, 2, ..., M; initializing parameters of a generator G and a discriminator D; obtaining a predetermined parameter update time threshold; updating the parameters of the discriminator D and the generator G according to the predetermined parameter update time threshold, and performing signal enhancement on the data set X and the data set V; modeling the modulation signal through a one-dimensional convolution residual network according to the signal-enhanced data set X and data set V to obtain a 1D-ResNet network model; and inputting the test set S into the 1D-ResNet network model to obtain a first recognition result.
Those of ordinary skill in the art will understand that: the various numbers of the first, second, etc. mentioned in this application are only used for the convenience of description and are not used to limit the scope of the embodiments of this application, nor to indicate the order of precedence. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one" means one or more. At least two means two or more. "at least one," "any," or similar expressions refer to any combination of these items, including any combination of singular or plural items. For example, at least one (one ) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable system. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device including one or more available media integrated servers, data centers, and the like. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The various illustrative logical units and circuits described in this application may be implemented or operated upon by general purpose processors, digital signal processors, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic systems, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing systems, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in the embodiments herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be disposed in a terminal. In the alternative, the processor and the storage medium may reside in different components within the terminal. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations.
Claims (10)
1. A deep neural network modulation signal open set identification method, wherein the method comprises the following steps:
in a first environment, a data set X of a known modulation type and a data set V of an unknown modulation type are created, where X = {x_1, x_2, ..., x_n}, n = 1, 2, ..., N, and V = {v_1, v_2, ..., v_m}, m = 1, 2, ..., M;
Initializing parameters of a generator G and a discriminator D;
obtaining a preset parameter updating time threshold;
according to the preset parameter updating time threshold, performing parameter updating on the discriminator D and the generator G, and performing signal enhancement on the data set X and the data set V;
modeling a modulation signal through a one-dimensional convolution residual error network according to the data set X and the data set V after signal enhancement to obtain a 1D-ResNet network model;
and inputting the test set S into the 1D-ResNet network model to obtain a first identification result.
2. The method of claim 1, wherein said establishing a data set X of a known modulation type and a data set V of an unknown modulation type in a first environment comprises:
generating a clean signal data set Y = {y_1, y_2, ..., y_n}, n = 1, 2, ..., N, of multiple modulation types including amplitude modulation, frequency shift keying, binary frequency shift keying and quaternary frequency shift keying, N being the number of signal samples;
adding white Gaussian noise with different signal-to-noise ratios to the clean signals in the clean signal data set Y to obtain a noisy signal data set as the data set X = {x_1, x_2, ..., x_n}, n = 1, 2, ..., N, of the known modulation type;
Pairing the clean signal data set Y with the data set X to form a training data pair set T = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, n = 1, 2, ..., N;
Generating two data sets of amplitude shift keying and minimum shift keying with Gaussian white noise and real environment noise at different signal-to-noise ratios as the data set V = {v_1, v_2, ..., v_m}, m = 1, 2, ..., M, of the unknown modulation type, M being the number of signal samples.
3. The method of claim 1, wherein the performing parameter updates to the discriminator D and the generator G in accordance with the predetermined parameter update time threshold performs signal enhancement to the data set X and the data set V, wherein a single parameter update to the discriminator D and the generator G comprises:
fixing the parameters of the generator G and training the discriminator D;
and fixing the parameters of the discriminator D and training the generator G.
4. The method of claim 3, wherein said fixing the parameters of said generator G, training said discriminator D, comprises:
randomly selecting N groups of signal pairs (x_n, y_n) from the training data pair set T, and inputting the noisy modulation signal x_n into the generator G to obtain an output G(x_n);
splicing the G(x_n) and the x_n into a two-dimensional signal (G(x_n), x_n) and inputting it into the discriminator D so that the target output of the discriminator is '0';
splicing the y_n and the x_n into a two-dimensional signal (x_n, y_n) and inputting it into the discriminator D so that the target output of the discriminator is '1'.
5. The method of claim 4, wherein said fixing the parameters of said arbiter D, training said generator G, comprises:
inputting the two-dimensional signal (G(x_n), x_n) into the discriminator D, and training the parameters of the generator G by making the output of the discriminator approach '1';
training the parameters of the generator G by making the output G(x_n) of the generator G approach the y_n.
6. The method of claim 1, wherein said updating parameters of said discriminator D and said generator G according to said predetermined parameter update time threshold comprises:
sequentially inputting the signals in the data set X into the generator G for enhancement to obtain a noise reduction modulation data set
7. The method of claim 1, wherein the modeling the modulated signal by a one-dimensional convolution residual network according to the data set X and the data set V after signal enhancement to obtain a 1D-ResNet network model comprises:
constructing a signal open set identification data set U, wherein the data set U comprises the data set X after signal enhancement and signals in the data set V;
randomly shuffling the samples in the data set U and dividing them into a training set T, a verification set E and the test set S according to a predetermined ratio;
constructing a 1D-ResNet network structure;
inputting the training set T into the 1D-ResNet network structure for training;
carrying out weighted summation on a multi-classification cross entropy loss function and a central error loss function to obtain a 1D-ResNet loss function;
calculating loss values of the primary model on a training set T and a verification set E according to the 1D-ResNet loss function;
and obtaining the 1D-ResNet network model when the preset number of training rounds is reached or the loss value of the network does not decrease for a predetermined number of consecutive rounds.
8. The method of claim 1, wherein said inputting a test set S into said 1D-ResNet network model, after obtaining a result of a first recognition, comprises:
acquiring real category data;
and comparing the first identification result with the real category data to obtain identification accuracy.
9. A deep neural network modulation signal open set identification system, wherein the system comprises:
a first establishing unit, configured to establish a data set X of a known modulation type and a data set V of an unknown modulation type in a first environment, wherein X = {x_1, x_2, ..., x_n}, n = 1, 2, ..., N, and V = {v_1, v_2, ..., v_m}, m = 1, 2, ..., M;
A first execution unit for initializing the parameters of generator G and of discriminator D;
a first obtaining unit, configured to obtain a threshold of a predetermined parameter update time;
a second execution unit, configured to perform parameter updating on the discriminator D and the generator G according to the predetermined parameter updating time threshold, and perform signal enhancement on the data set X and the data set V;
a second obtaining unit, configured to model a modulation signal through a one-dimensional convolution residual network according to the data set X and the data set V after signal enhancement, and obtain a 1D-ResNet network model;
and the third obtaining unit is used for inputting the test set S into the 1D-ResNet network model to obtain a first identification result.
10. A deep neural network modulation signal open set identification system, comprising: a processor coupled to a memory, the memory for storing a program that, when executed by the processor, causes a system to perform the method of any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111070105.0A CN114004250A (en) | 2021-09-13 | 2021-09-13 | Method and system for identifying open set of modulation signals of deep neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111070105.0A CN114004250A (en) | 2021-09-13 | 2021-09-13 | Method and system for identifying open set of modulation signals of deep neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114004250A true CN114004250A (en) | 2022-02-01 |
Family
ID=79921273
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111070105.0A Pending CN114004250A (en) | 2021-09-13 | 2021-09-13 | Method and system for identifying open set of modulation signals of deep neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114004250A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN114997248A (en) * | 2022-07-29 | 2022-09-02 | 杭州电子科技大学 | Model and method for identifying open set interference based on prototype learning
CN115913850A (en) * | 2022-11-18 | 2023-04-04 | 中国电子科技集团公司第十研究所 | Open set modulation identification method based on residual error network
CN115913850B (en) * | 2022-11-18 | 2024-04-05 | 中国电子科技集团公司第十研究所 | Open set modulation identification method based on residual error network
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109274621B (en) | Communication protocol signal identification method based on depth residual error network | |
CN114004250A (en) | Method and system for identifying open set of modulation signals of deep neural network | |
CN111475797A (en) | Method, device and equipment for generating confrontation image and readable storage medium | |
CN114595732B (en) | Radar radiation source sorting method based on depth clustering | |
CN115455471A (en) | Federal recommendation method, device, equipment and storage medium for improving privacy and robustness | |
CN110738242A (en) | Bayes structure learning method and device for deep neural networks | |
CN107832789B (en) | Feature weighting K nearest neighbor fault diagnosis method based on average influence value data transformation | |
CN112329837B (en) | Countermeasure sample detection method and device, electronic equipment and medium | |
US11700156B1 (en) | Intelligent data and knowledge-driven method for modulation recognition | |
CN112861927B (en) | Signal modulation classification method based on self-adaptive feature extraction and fusion | |
CN114531729B (en) | Positioning method, system, storage medium and device based on channel state information | |
Safarinejadian et al. | Distributed unsupervised Gaussian mixture learning for density estimation in sensor networks | |
CN114818864A (en) | Gesture recognition method based on small samples | |
Varughese et al. | Accelerating assessments of optical components using machine learning: TDECQ as demonstrated example | |
Tembine | Mean field stochastic games: Convergence, Q/H-learning and optimality | |
KR102110316B1 (en) | Method and device for variational interference using neural network | |
CN115859048A (en) | Noise processing method and device for partial discharge signal | |
Kalade et al. | Using sequence to sequence learning for digital bpsk and qpsk demodulation | |
CN113869227B (en) | Signal modulation mode identification method, device, equipment and readable storage medium | |
CN115795005A (en) | Session recommendation method and device integrating contrast learning denoising optimization | |
CN113111720B (en) | Electromagnetic modulation signal denoising method and system based on deep learning | |
US20230394304A1 (en) | Method and Apparatus for Neural Network Based on Energy-Based Latent Variable Models | |
CN115640845A (en) | Method for generating few-category samples of neural network of graph based on generation of confrontation network | |
CN114764593A (en) | Model training method, model training device and electronic equipment | |
CN115952466A (en) | Communication radiation source cross-mode identification method based on multi-mode information fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||