CN112132266A - Signal modulation identification system and modulation identification method based on convolution cycle network - Google Patents


Info

Publication number
CN112132266A
CN112132266A (application CN202011011055.4A)
Authority
CN
China
Prior art keywords: network, time, layer, modulation, convolutional
Prior art date
Legal status (an assumption, not a legal conclusion): Pending
Application number
CN202011011055.4A
Other languages
Chinese (zh)
Inventor
王艺敏
苏洋
周华
徐智勇
蒲涛
沈荟萍
汪井源
李建华
Current Assignee: Army Engineering University of PLA
Original Assignee: Army Engineering University of PLA
Application filed by Army Engineering University of PLA filed Critical Army Engineering University of PLA
Priority to CN202011011055.4A priority Critical patent/CN112132266A/en
Publication of CN112132266A publication Critical patent/CN112132266A/en


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/24 — Classification techniques
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 — Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/044 — Recurrent networks, e.g. Hopfield networks
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 27/00 — Modulated-carrier systems
    • H04L 27/0012 — Arrangements for identifying the type of modulation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 — Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/08 — Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Digital Transmission Methods That Use Modulated Carrier Waves (AREA)

Abstract

A signal modulation identification system and a modulation identification method based on a convolution cycle network are disclosed, comprising an automatic modulation identification method for digital signals: the raw I/Q-component data of a received digital signal is taken as input; a deep convolutional network is designed to extract features, expanding the I/Q dimension to enrich the features and compressing the time dimension to reduce the time cost of classification; a recurrent network structure is introduced along the time dimension to extract timing information; and modulation identification is completed using the output of a fully connected layer. On the noisy public data set RML2016.10b, the proposed method achieves better results than existing methods under low signal-to-noise-ratio conditions.

Description

Signal modulation identification system and modulation identification method based on convolution cycle network
Technical Field
The invention relates to the technical field of digital communication signal modulation, and in particular to a signal modulation identification system and modulation identification method based on a convolution cycle network, specifically an automatic modulation identification system and method for digital signals.
Background
Digital signals have many advantages. First, they have strong anti-interference ability and can be used both in communication technology and in information processing technology. Moreover, automatic modulation identification of digital signals is one of the key technologies in fields such as non-cooperative communication and information monitoring, and is an important component of software-defined radio. At present there are three approaches to automatic modulation identification of noisy digital signals: statistical hypothesis testing, traditional pattern recognition, and deep learning methods.
Statistical hypothesis testing takes statistics of the signal such as the mean, variance and covariance as random variables, and introduces probability distributions and hypothesis tests to identify digital signals. However, these variables are difficult to estimate accurately in non-cooperative communication, resulting in poor identification accuracy and robustness.
Traditional pattern recognition generalizes hypothesis testing to conventional classifiers, whose classification capability is limited and which place high demands on the provided features. The recognition performance depends on the types of signals to be recognized, and problems remain such as few recognizable types and poor robustness at low signal-to-noise ratio.
Recognition methods based on deep learning greatly reduce, or even eliminate, the need for hand-crafted feature extraction: the raw signal is taken directly as input, and the self-learning capability of a deep neural network realizes automatic feature extraction and modulation recognition. However, the design and optimization of deep neural networks currently lacks a complete theoretical system and relies heavily on experience. To promote deep learning for automatic modulation recognition of digital signals and to evaluate its performance uniformly, Timothy O'Shea et al. developed data sets with different signal-to-noise ratios, designed different network structures, trained directly on 11 signal types without signal preprocessing, and completed recognition of the modulation classes. Although this approach removes the manual preprocessing step and greatly reduces complexity, its identification performance is currently not ideal at medium and low signal-to-noise ratio.
Disclosure of Invention
To solve these problems, the invention provides a signal modulation identification system and a modulation identification method based on a convolution cycle network, which effectively avoid the prior-art defects of poor signal identification accuracy and robustness, few recognizable types, and poor identification performance at medium and low signal-to-noise ratio.
To overcome the defects of the prior art, the invention provides a signal modulation identification system and a modulation identification method based on a convolution cycle network, specifically as follows:
a signal modulation identification system based on a convolution cycle network comprises a deep convolutional network;
the deep convolutional network comprises 3 convolutional layers, a bidirectional long short-term memory (BLSTM) model, an attention neural network, a fully connected layer and an output layer;
the 3 convolutional layers, the BLSTM model, the attention neural network, the fully connected layer and the output layer are cascaded in sequence via communication connections.
A modulation identification method of the signal modulation identification system based on a convolution cycle network comprises the following steps:
step 1, taking the raw I-component and Q-component data of the received digital signal as input; designing a deep convolutional network to extract features from the input, expanding the I and Q dimensions to enrich the features while compressing the time dimension to reduce the time cost of classification;
step 2, introducing a multilayer recurrent network structure along the time dimension to extract timing information, i.e. a vector sequence, the recurrent structure being a bidirectional long short-term memory model with long-range context modeling capability;
step 3, applying an attention mechanism, via an attention neural network layer, to the timing information output by the bidirectional long short-term memory model BLSTM, adaptively weighting the vector sequence to obtain a single output vector;
step 4, passing this vector through a cascaded fully connected layer and output layer to output the probability of each modulation type, completing modulation identification;
step 5, determining the initialization parameters of the deep convolutional network, and building the deep network structure by calling existing network layer functions in the Keras deep learning framework; training the network on a training set, with an early-stopping strategy to prevent overfitting;
and step 6, after training is finished, verifying the training effect on the test data set, completing automatic signal modulation identification.
Taking the raw I- and Q-component data of the received digital signal as input in step 1 comprises:
first representing the received digital signal as a two-dimensional 2 x 128 array (2 rows, 128 columns), where 2 corresponds to the two channels (an I channel storing the I component of the digital signal and a Q channel storing the Q component) and 128 is the number of sampling points stored in each channel;
Designing the deep convolutional network to extract features from the input data in step 1 comprises:
designing the deep convolutional network with 3 convolutional layers, each having 128 convolution kernels of size (2, 3), stride (1, 2), and a ReLU activation function;
extracting features by convolving the input data through the 3 convolutional layers;
Expanding the I and Q dimensions to enrich the features and compressing the time dimension to reduce classification time cost in step 1 comprises:
stretching the features obtained by the convolution operations into vectors while retaining the time dimension.
The bidirectional long short-term memory model in step 2 is a bidirectional LSTM network layer containing 128 LSTM units.
The bidirectional long short-term memory model in step 2 extends the Long Short-Term Memory (LSTM) model in both the forward and backward directions; the LSTM has a recurrent structure and memory cells, giving it the ability to extract long-range timing information. The basic structure of the LSTM is given by formulas (1)-(6):

c̃_t = tanh(W_xc x_t + W_hc h_{t-1} + b_c) (1)

i_t = σ(W_xi x_t + W_hi h_{t-1} + b_i) (2)

f_t = σ(W_xf x_t + W_hf h_{t-1} + W_cf c_{t-1} + b_f) (3)

c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t (4)

o_t = σ(W_xo x_t + W_ho h_{t-1} + W_co c_{t-1} + b_o) (5)

h_t = o_t ⊙ tanh(c_t) (6)

where x_t is the vector at time t of the convolutional network's output sequence from step 1, h_t is the hidden unit vector of the LSTM at time t, c̃_t is an intermediate variable, c_t is the cell state vector specific to the LSTM, W_ij are the respective weights and b_j the respective biases, the subscripts i, f and o denote the input gate, forget gate and output gate respectively, σ is the activation function (taken as the Sigmoid function), and ⊙ is element-wise multiplication. As formulas (1)-(6) show, the function of the LSTM is to map the input vector sequence x_t, via the cell state vector c_t, to the hidden vector h_t.
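A single LSTM step following equations (1)-(6) can be sketched in numpy. This is a toy-sized illustration, not the patent's trained network; the dictionary-based weight layout and the tiny dimensions (input 8, hidden 4) are assumptions for readability, and the peephole weights W_cf, W_co are taken as element-wise vectors:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step implementing equations (1)-(6)."""
    c_tilde = np.tanh(W['xc'] @ x_t + W['hc'] @ h_prev + b['c'])                    # (1)
    i_t = sigmoid(W['xi'] @ x_t + W['hi'] @ h_prev + b['i'])                        # (2)
    f_t = sigmoid(W['xf'] @ x_t + W['hf'] @ h_prev + W['cf'] * c_prev + b['f'])     # (3)
    c_t = f_t * c_prev + i_t * c_tilde                                              # (4)
    o_t = sigmoid(W['xo'] @ x_t + W['ho'] @ h_prev + W['co'] * c_prev + b['o'])     # (5)
    h_t = o_t * np.tanh(c_t)                                                        # (6)
    return h_t, c_t

rng = np.random.default_rng(0)
d, n = 8, 4  # toy input and hidden sizes (assumed for illustration)
W = {k: rng.standard_normal((n, d)) * 0.1 for k in ('xc', 'xi', 'xf', 'xo')}
W.update({k: rng.standard_normal((n, n)) * 0.1 for k in ('hc', 'hi', 'hf', 'ho')})
W.update({k: rng.standard_normal(n) * 0.1 for k in ('cf', 'co')})  # peephole vectors
b = {k: np.zeros(n) for k in 'cifo'}

h, c = np.zeros(n), np.zeros(n)
for _ in range(5):  # run a short sequence
    h, c = lstm_step(rng.standard_normal(d), h, c, W, b)
print(h.shape)
```

Because h_t = o_t ⊙ tanh(c_t) with o_t in (0, 1), every component of the hidden vector stays strictly inside (-1, 1), regardless of sequence length.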
The attention neural network (Attention Network) layer in step 3 converts the vector sequence output by the bidirectional long short-term memory model BLSTM into a single vector.
Step 3 comprises:
for the input vector sequence h_t, computing the weights α_t by formulas (7)-(9) and weighting all vectors in the sequence to obtain a single output vector z. The intermediate score e_t is obtained from h_t through an attention scoring network a(·), whose structure takes the form of a multilayer feedforward neural network:

e_t = a(h_t) (7)

α_t = exp(e_t) / Σ_τ exp(e_τ) (8)

z = Σ_t α_t h_t (9)
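The adaptive weighting of equations (7)-(9) can be sketched directly. The one-layer scorer below is a placeholder for the multilayer feedforward network the text mentions (its exact architecture is not specified in the patent, so the scorer here is an assumption):

```python
import numpy as np

def attention(H, score_net):
    """Collapse a sequence of hidden vectors H (T, n) into one vector z,
    following equations (7)-(9). score_net maps each h_t to a scalar e_t."""
    e = np.array([score_net(h) for h in H])        # (7) scores e_t
    a = np.exp(e - e.max())
    a /= a.sum()                                   # (8) softmax weights alpha_t
    z = (a[:, None] * H).sum(axis=0)               # (9) weighted sum
    return z, a

rng = np.random.default_rng(1)
H = rng.standard_normal((16, 4))   # toy BLSTM output: 16 steps, 4-dim vectors

# Toy one-layer scorer standing in for the "multilayer feedforward network"
v = rng.standard_normal(4)
z, a = attention(H, lambda h: float(np.tanh(h) @ v))
print(z.shape)
```

The weights α_t sum to 1 by construction, so z is a convex combination of the hidden vectors, with time steps the scorer deems informative contributing more.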
the full connection layer in the step 4 is a full connection layer containing 128 units, and the activation function of the full connection layer is a ReLU function;
the activation function of the output layer in step 4 is a Softmax function, and the number of cells of the output layer is a modulation class number, which here can be 11.
Step 4 comprises:
outputting the probability of modulation type j via formula (10), and completing modulation identification by taking the class label corresponding to the largest of the K probabilities;

P(y = j | z) = exp(w_j · z + b_j) / Σ_{k=1}^{K} exp(w_k · z + b_k) (10)

where K is a positive integer (the number of modulation classes), and w_j, b_j are the output-layer weights and bias for class j.
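The softmax classification of equation (10) and the argmax decision can be sketched as follows, with toy random weights (the trained weights are of course what the patent's network learns):

```python
import numpy as np

def class_probs(z, W, b):
    """Equation (10): softmax over K modulation classes for feature vector z."""
    logits = W @ z + b
    p = np.exp(logits - logits.max())  # subtract max for numerical stability
    return p / p.sum()

rng = np.random.default_rng(2)
K, n = 11, 4   # 11 modulation classes; toy 4-dim feature vector
p = class_probs(rng.standard_normal(n), rng.standard_normal((K, n)), np.zeros(K))
pred = int(np.argmax(p))  # class label with the largest probability
print(pred, round(float(p.sum()), 6))
```

Subtracting the maximum logit before exponentiating leaves the probabilities unchanged but avoids overflow, a standard implementation detail not spelled out in the formula.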
The invention has the beneficial effects that:
the invention improves the robustness of system identification and the distinctiveness of extracted features; the representative features can be directly extracted from the original signals, so that modulation recognition can be completed without expert experience, and the usability of the system is improved.
Drawings
Fig. 1 is a flowchart of a method for identifying digital signal automatic modulation based on a convolutional loop network according to an embodiment of the present invention.
FIG. 2 is a diagram of an attention mechanism neural network employed by an embodiment of the invention.
Fig. 3 is a graph of digital modulation identification performance provided by an embodiment of the present invention.
Fig. 4 is a digital modulation characteristic visualization effect diagram provided by the embodiment of the invention.
Detailed Description
2018: the signal modulation recognition is improved by designing a convolutional neural network and a residual error network structure. Sharan Ramjee et al, 2019, analyzed the effects of convolution structures, recurrent neural network structures, and residual network structures by comparison in the publication of Fast Deep Learning for Automatic Modulation Classification. The result is taken as a baseline system, and the method further improves the modulation identification mode by designing the convolution cyclic neural network capable of reflecting the characteristics of two channels, time sequence and the like of the modulation signal. The automatic digital signal modulation and identification method based on the convolution cycle network has the advantages that: the influence of channel effect in a statistical hypothesis test method and the subjectivity of decision criterion selection are overcome; the noise robustness of the traditional pattern recognition method is improved, and particularly the recognition effect under the medium-low signal-to-noise ratio is improved; the design and optimization method of the network structure in deep learning is improved, so that the network structure can reflect the essential characteristics of signals better, and the superiority of the network structure is verified on the public noisy data set.
The invention will be further described with reference to the following figures and examples.
The signal modulation identification system based on a convolution cycle network comprises a deep convolutional network;
the deep convolutional network comprises 3 convolutional layers, a bidirectional long short-term memory (BLSTM) model, an attention neural network, a fully connected layer and an output layer;
the 3 convolutional layers, the BLSTM model, the attention neural network, the fully connected layer and the output layer are cascaded in sequence via communication connections.
The modulation identification method of the signal modulation identification system based on a convolution cycle network comprises the following steps:
step 1, taking the raw I-component and Q-component data of the received digital signal as input; designing a deep convolutional network to extract features from the input, expanding the I and Q dimensions to enrich the features while compressing the time dimension to reduce the time cost of classification;
step 2, introducing a multilayer recurrent network structure along the time dimension to extract timing information (a vector sequence), the recurrent structure being a Bidirectional Long Short-Term Memory (BLSTM) model with long-range context modeling capability. The BLSTM extends the Long Short-Term Memory (LSTM) model in both the forward and backward directions; the LSTM has a recurrent structure and memory cells, giving it the ability to extract long-range timing information. The basic structure of the LSTM is given by formulas (1)-(6):

c̃_t = tanh(W_xc x_t + W_hc h_{t-1} + b_c) (1)

i_t = σ(W_xi x_t + W_hi h_{t-1} + b_i) (2)

f_t = σ(W_xf x_t + W_hf h_{t-1} + W_cf c_{t-1} + b_f) (3)

c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t (4)

o_t = σ(W_xo x_t + W_ho h_{t-1} + W_co c_{t-1} + b_o) (5)

h_t = o_t ⊙ tanh(c_t) (6)

where x_t is the vector at time t of the convolutional network's output sequence from step 1, h_t is the hidden unit vector of the LSTM at time t, c̃_t is an intermediate variable, c_t is the cell state vector specific to the LSTM, W_ij are the weights by which the respective quantities are multiplied and b_j the respective biases, the subscripts i, f and o denote the input gate, forget gate and output gate, σ is the activation function (generally the Sigmoid function), and ⊙ is element-wise multiplication. As formulas (1)-(6) show, the LSTM maps the input vector sequence x_t, via the cell state vector c_t, to the hidden vector h_t, obtaining a more abstract representation h_t of x_t for subsequent processing while preserving the timing structure;
step 3, applying an attention mechanism, via an attention neural network layer, to the timing information output by the bidirectional long short-term memory model BLSTM, adaptively weighting the vector sequence to obtain a single output vector. For the input vector sequence h_t, the weights α_t are computed by formulas (7)-(9) and all vectors in the sequence are weighted to produce the single output vector z. The intermediate score e_t is obtained from h_t through an attention scoring network a(·), which can take the form of a general multilayer feedforward neural network:

e_t = a(h_t) (7)

α_t = exp(e_t) / Σ_τ exp(e_τ) (8)

z = Σ_t α_t h_t (9)
step 4, passing the vector through a cascaded fully connected layer and output layer, outputting the probability of modulation type j via formula (10), and completing modulation identification by taking the class label corresponding to the largest of the K probabilities;

P(y = j | z) = exp(w_j · z + b_j) / Σ_{k=1}^{K} exp(w_k · z + b_k) (10)
step 5, determining initialization parameters of the deep convolutional network, and calling the existing network layer functions by combining a Keras deep learning framework to build a deep network structure; network training is carried out by utilizing a training set, and an Early-stop strategy is adopted to prevent an overfitting phenomenon;
the network initialization parameter adopts an Xavier algorithm, and the method determines the magnitude of the initial value of the layer parameter according to the input and output node number and the activation function type of each layer of the deep convolutional network according to a formula (11) and a formula (12):
Figure BDA0002697565990000101
Figure BDA0002697565990000102
in the formula (11) and the formula (12), uniformity represents a Uniform random value, faninAnd fanoutThe number of input and output nodes of the layer is respectively; and initializes the network using the ReLU activation function using an initialization formula of the tanh type.
The deep network structure, shown in fig. 1, comprises in order (as indicated by the arrows): a two-dimensional convolutional layer, a reshape layer, a bidirectional LSTM layer, an attention mechanism layer, a fully connected layer and an output layer;
the data set is the prior-art RML2016.10b from https://www.deepsig.io/datasets, divided into training, validation and test sets by random sampling in a 4:1:5 ratio; that is, only half of the data is used to train the model;
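The 4:1:5 random split can be sketched over sample indices. The total example count below (1,200,000 for RML2016.10b) is an assumption stated for illustration; only the ratio comes from the text:

```python
import numpy as np

def split_indices(n, ratios=(4, 1, 5), seed=0):
    """Random train/val/test split of n sample indices in the given ratio."""
    idx = np.random.default_rng(seed).permutation(n)
    total = sum(ratios)
    n_tr = n * ratios[0] // total
    n_va = n * ratios[1] // total
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

# RML2016.10b size of 1.2M examples is an assumption for this sketch
tr, va, te = split_indices(1_200_000)
print(len(tr), len(va), len(te))
```

Shuffling before slicing ensures each split draws uniformly across all modulation types and signal-to-noise ratios present in the data set.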
during deep learning training, the result of each iteration is evaluated on the validation set, and an Early Stopping strategy halts the training iterations in time to prevent overfitting.
Step 6: after network training is finished, the training effect is verified on the test set, completing automatic signal modulation recognition. Concretely, test-set signals are input to the deep convolutional network shown in fig. 1, inference is performed with the network parameters obtained by training, and the output layer with its Softmax function yields the probability of each modulation mode for each signal segment; the class corresponding to the largest probability is the predicted modulation mode of that segment. Finally, comparing the sample predictions with the true modulation classes at each signal-to-noise ratio yields, by statistics, the identification accuracy at each signal-to-noise ratio.
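The per-signal-to-noise-ratio accuracy statistic described above can be sketched with toy labels (the arrays below are illustrative placeholders, not results from the patent's experiments):

```python
import numpy as np

# Toy per-sample SNR tags, true classes, and predicted classes
snrs = np.array([-10, -10, 0, 0, 10, 10])
true = np.array([1, 2, 3, 3, 4, 5])
pred = np.array([1, 0, 3, 3, 4, 4])

# Group samples by SNR and compute accuracy within each group
acc_by_snr = {int(s): float((pred[snrs == s] == true[snrs == s]).mean())
              for s in np.unique(snrs)}
print(acc_by_snr)
```

This is exactly the statistic plotted in FIG. 3: one accuracy value per signal-to-noise-ratio condition.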
Taking the raw I- and Q-component data of the received digital signal as input in step 1 comprises:
first representing the received digital signal as a two-dimensional 2 x 128 array (2 rows, 128 columns), where 2 corresponds to the two channels (an I channel storing the I component of the digital signal and a Q channel storing the Q component) and 128 is the number of sampling points stored in each channel;
Designing the deep convolutional network to extract features from the input data in step 1 comprises:
designing the deep convolutional network with 3 convolutional layers, each having 128 convolution kernels of size (2, 3), stride (1, 2), and a ReLU activation function;
extracting features by convolving the input data through the 3 convolutional layers;
Expanding the I and Q dimensions to enrich the features and compressing the time dimension to reduce classification time cost in step 1 comprises:
stretching the features obtained by the convolution operations into vectors while retaining the time dimension.
The bidirectional long short-term memory model in step 2 is a bidirectional LSTM (BLSTM) network layer containing 128 LSTM units.
The attention neural network layer in step 3 converts the vector sequence output by the bidirectional long short-term memory model BLSTM into a single vector, adopting the structure shown in fig. 2 and the quantitative relationships described in equations (7)-(9).
The fully connected layer in step 4 contains 128 units with a ReLU activation function;
the output layer in step 4 uses a Softmax activation function, and its number of units equals the number of modulation classes, here 11.
Finally, on the RML2016.10b data set, with an Intel i7 CPU and 8 GB of memory, 150 training iterations under the above steps and parameter configuration yield the modulation identification accuracy at each signal-to-noise ratio shown in FIG. 3; the features of the fully connected layer, after t-SNE dimensionality reduction, are visualized in FIG. 4. Comparing FIG. 3 with the latest prior-art methods shows that the method of the invention performs excellently at medium and low signal-to-noise ratio; FIG. 4 shows that the extracted signal features separate most samples of the 11 classes, with each cluster corresponding to essentially one class, which vividly explains the good results shown in FIG. 3.
The present invention has been described in an illustrative manner through the embodiments; those skilled in the art should understand that the disclosure is not limited to the embodiments described above, and various changes, modifications and substitutions may be made without departing from the scope of the invention.

Claims (6)

1. A signal modulation identification system based on a convolution cycle network, characterized by comprising a deep convolutional network, wherein the deep convolutional network comprises 3 convolutional layers, a bidirectional long short-term memory model, an attention neural network, a fully connected layer and an output layer;
the 3 convolutional layers, the bidirectional long short-term memory model, the attention neural network, the fully connected layer and the output layer are cascaded in sequence via communication connections.
2. A modulation identification method of a signal modulation identification system based on a convolution cycle network, characterized by comprising the following steps:
step 1, taking the raw I-component and Q-component data of the received digital signal as input; designing a deep convolutional network to extract features from the input, expanding the I and Q dimensions to enrich the features while compressing the time dimension to reduce the time cost of classification;
step 2, introducing a multilayer recurrent network structure along the time dimension to extract timing information, i.e. a vector sequence, the recurrent structure being a bidirectional long short-term memory model with long-range context modeling capability;
step 3, applying an attention mechanism, via an attention neural network layer, to the timing information output by the bidirectional long short-term memory model BLSTM, adaptively weighting the vector sequence to obtain a single output vector;
step 4, passing this vector through a cascaded fully connected layer and output layer to output the probability of each modulation type, completing modulation identification;
step 5, determining the initialization parameters of the deep convolutional network, and building the deep network structure by calling existing network layer functions in the Keras deep learning framework; training the network on a training set, with an early-stopping strategy to prevent overfitting;
and step 6, after training is finished, verifying the training effect on the test data set, completing automatic signal modulation identification.
3. The modulation identification method of the signal modulation identification system based on the convolutional recurrent network as claimed in claim 2, wherein taking the raw data of the I component and the Q component of the received digital signal as input data in step 1 comprises:
first, representing the received digital signal as a two-dimensional array of 2 x 128, namely 2 rows and 128 columns, wherein 2 denotes two channels, an I channel storing the I component of the digital signal and a Q channel storing the Q component, and 128 denotes the number of sampling points of the digital signal stored in each channel;
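A minimal NumPy sketch of this input format, assuming a hypothetical complex baseband signal of 128 samples:

```python
import numpy as np

# Hypothetical complex baseband signal with 128 sampling points.
rng = np.random.default_rng(0)
samples = rng.normal(size=128) + 1j * rng.normal(size=128)

# Row 0 is the I channel (real part), row 1 the Q channel (imaginary part).
iq = np.stack([samples.real, samples.imag])
print(iq.shape)  # (2, 128)
```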
the designing of the deep convolutional network to extract features from the input data in step 1 comprises:
designing the deep convolutional network to include 3 convolutional layers, each convolutional layer having 128 convolution kernels of size (2, 3), a stride of (1, 2), and a ReLU activation function;
extracting features by performing convolution operations on the input data through the 3 convolutional layers;
in step 1, expanding the dimensionality of the I component and the Q component to enrich the features and compressing the time dimension to reduce the time cost of classification comprises:
stretching the features obtained by the convolution operations into vectors while retaining the time dimension.
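A sketch of such a feature extractor in Keras (the framework named in step 5). The padding mode is an assumption, since the claim does not state it; 'same' padding is used here so that the 2-row I/Q axis survives all three layers, while the stride of (1, 2) compresses the 128-sample time axis to 16 steps:

```python
from tensorflow.keras import layers, models

# 2 x 128 I/Q input with a trailing channel axis for Conv2D.
inputs = layers.Input(shape=(2, 128, 1))
x = inputs
for _ in range(3):  # 3 convolutional layers: 128 kernels of (2, 3), stride (1, 2)
    x = layers.Conv2D(128, (2, 3), strides=(1, 2),
                      padding='same', activation='relu')(x)
# Time axis after 3 strided layers: 128 -> 64 -> 32 -> 16 steps.
# Keep the time dimension and stretch the rest into one vector per step.
x = layers.Permute((2, 1, 3))(x)        # (16, 2, 128): time-major
seq = layers.Reshape((16, 2 * 128))(x)  # sequence of 16 feature vectors
extractor = models.Model(inputs, seq)
print(extractor.output_shape)  # (None, 16, 256)
```

The resulting (16, 256) sequence is what the bidirectional LSTM of step 2 would consume.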
4. The modulation identification method of the signal modulation identification system based on the convolutional recurrent network as claimed in claim 2, wherein the bidirectional long short-term memory model in step 2 is a bidirectional long short-term memory network layer containing 128 long short-term memory cells.
5. The modulation identification method of the signal modulation identification system based on the convolutional recurrent network as claimed in claim 2, wherein the Attention neural network layer in step 3 is used to convert the vector sequence output by the bidirectional long short-term memory model BLSTM into a single vector.
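The adaptive weighting of steps 2-3 can be sketched in NumPy. The scoring function below (a tanh nonlinearity followed by a dot product with a learnable vector w) is one common Attention variant and an assumption, since the claim does not fix the exact form:

```python
import numpy as np

def attention_pool(H, w):
    """Collapse a (T, D) sequence H into a single D-vector by adaptive weighting."""
    scores = np.tanh(H) @ w                        # one score per time step, shape (T,)
    scores -= scores.max()                         # numerical stability for softmax
    alpha = np.exp(scores) / np.exp(scores).sum()  # attention weights, sum to 1
    return alpha @ H                               # weighted sum over time, shape (D,)

rng = np.random.default_rng(0)
H = rng.normal(size=(16, 256))  # e.g. 16 BLSTM output steps (2 x 128 units)
w = rng.normal(size=256)        # learnable scoring vector (hypothetical values)
v = attention_pool(H, w)
print(v.shape)  # (256,)
```

Time steps with larger scores contribute more to the single output vector, which is then passed to the fully connected layer of step 4.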
6. The modulation identification method of the signal modulation identification system based on the convolutional recurrent network as claimed in claim 2, wherein the fully connected layer in step 4 contains 128 units with a ReLU activation function;
the activation function of the output layer in step 4 is a Softmax function, and the number of units of the output layer equals the number of modulation classes, which here may be 11.
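A NumPy sketch of this output stage: a 128-unit fully connected layer with ReLU followed by a Softmax over 11 modulation classes (all weights below are random placeholders, not trained values):

```python
import numpy as np

def classify(v, W1, b1, W2, b2):
    """Dense(ReLU) -> Dense(Softmax): one probability per modulation class."""
    h = np.maximum(0.0, v @ W1 + b1)  # 128-unit fully connected layer, ReLU
    logits = h @ W2 + b2              # one logit per modulation class
    logits -= logits.max()            # numerical stability for softmax
    p = np.exp(logits)
    return p / p.sum()

rng = np.random.default_rng(1)
v = rng.normal(size=256)  # single vector from the Attention layer
W1, b1 = rng.normal(size=(256, 128)), np.zeros(128)
W2, b2 = rng.normal(size=(128, 11)), np.zeros(11)  # 11 modulation classes
p = classify(v, W1, b1, W2, b2)
print(p.shape)  # (11,)
```

The predicted modulation type is the class with the largest probability, e.g. `p.argmax()`.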
CN202011011055.4A 2020-09-23 2020-09-23 Signal modulation identification system and modulation identification method based on convolution cycle network Pending CN112132266A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011011055.4A CN112132266A (en) 2020-09-23 2020-09-23 Signal modulation identification system and modulation identification method based on convolution cycle network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011011055.4A CN112132266A (en) 2020-09-23 2020-09-23 Signal modulation identification system and modulation identification method based on convolution cycle network

Publications (1)

Publication Number Publication Date
CN112132266A true CN112132266A (en) 2020-12-25

Family

ID=73839226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011011055.4A Pending CN112132266A (en) 2020-09-23 2020-09-23 Signal modulation identification system and modulation identification method based on convolution cycle network

Country Status (1)

Country Link
CN (1) CN112132266A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668507A * 2020-12-31 2021-04-16 Nanjing University of Information Science and Technology Sea clutter prediction method and system based on hybrid neural network and attention mechanism
CN113114599A * 2021-03-12 2021-07-13 University of Electronic Science and Technology of China Modulation identification method based on lightweight neural network
CN113298031A * 2021-06-16 2021-08-24 National University of Defense Technology Signal modulation identification method considering signal physical and time sequence characteristics and application
CN113406588A * 2021-05-14 2021-09-17 Beijing Institute of Technology Joint modulation type identification and parameter estimation method for cognitive radar signals
CN113486724A * 2021-06-10 2021-10-08 Chongqing University of Posts and Telecommunications Modulation identification model based on CNN-LSTM multi-tributary structure and multiple signal representations
CN113947151A * 2021-10-20 2022-01-18 Jiaxing University Automatic modulation and identification method for wireless communication signals in offshore complex environment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107961007A * 2018-01-05 2018-04-27 Chongqing University of Posts and Telecommunications Electroencephalogram recognition method combining convolutional neural network and long short-term memory network
CN109890043A * 2019-02-28 2019-06-14 Zhejiang University of Technology Wireless signal noise-reduction method based on generative adversarial network
CN110515456A * 2019-08-14 2019-11-29 Southeast University EEG signal emotion discrimination method and device based on attention mechanism
CN110598677A * 2019-10-08 2019-12-20 University of Electronic Science and Technology of China Space-time multi-channel deep learning system for automatic modulation recognition
KR20200001866A * 2018-06-28 2020-01-07 Agency for Defense Development Method and apparatus for signal classification
CN111510408A * 2020-04-14 2020-08-07 Beijing University of Posts and Telecommunications Signal modulation mode identification method and device, electronic equipment and storage medium
CN114465855A * 2022-01-17 2022-05-10 Wuhan University of Technology Attention mechanism and multi-feature fusion based automatic modulation recognition method


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
WEI, SHUNJUN ET AL.: "Intra-pulse modulation radar signal recognition based on CLDN network", IET RADAR, SONAR & NAVIGATION, vol. 14, no. 06, pages 803 - 810, XP006090459, DOI: 10.1049/iet-rsn.2019.0436 *
QIAO NIDAN: "The Convergence of Audio Music and Computers: Audio and Music Technology", 31 January 2020, Shanghai: Shanghai Scientific & Technical Publishers, pages: 234 - 236 *
WANG CHUN: "Research on Digital Signal Modulation Recognition Based on Convolutional Neural Network", China Masters' Theses Full-text Database (Information Science and Technology), no. 02, pages 136 - 240 *
WENG JIANXIN ET AL.: "Modulation Recognition Algorithm Using Parallel CNN-LSTM", Journal of Signal Processing, vol. 35, no. 05, pages 870 - 876 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668507A * 2020-12-31 2021-04-16 Nanjing University of Information Science and Technology Sea clutter prediction method and system based on hybrid neural network and attention mechanism
CN113114599A * 2021-03-12 2021-07-13 University of Electronic Science and Technology of China Modulation identification method based on lightweight neural network
CN113114599B * 2021-03-12 2022-01-11 University of Electronic Science and Technology of China Modulation identification method based on lightweight neural network
CN113406588A * 2021-05-14 2021-09-17 Beijing Institute of Technology Joint modulation type identification and parameter estimation method for cognitive radar signals
CN113486724A * 2021-06-10 2021-10-08 Chongqing University of Posts and Telecommunications Modulation identification model based on CNN-LSTM multi-tributary structure and multiple signal representations
CN113298031A * 2021-06-16 2021-08-24 National University of Defense Technology Signal modulation identification method considering signal physical and time sequence characteristics and application
CN113947151A * 2021-10-20 2022-01-18 Jiaxing University Automatic modulation and identification method for wireless communication signals in offshore complex environment
CN113947151B * 2021-10-20 2024-05-24 Jiaxing University Automatic modulation and identification method for wireless communication signals in marine complex environment

Similar Documents

Publication Publication Date Title
CN112132266A (en) Signal modulation identification system and modulation identification method based on convolution cycle network
CN112784881B (en) Network abnormal flow detection method, model and system
CN110163261B (en) Unbalanced data classification model training method, device, equipment and storage medium
CN111861013B (en) Power load prediction method and device
CN112910811B (en) Blind modulation identification method and device under unknown noise level condition based on joint learning
CN113591728A (en) Electric energy quality disturbance classification method based on integrated deep learning
CN112241724A (en) Automatic identification method and system based on double-path convolution long-term and short-term neural network
CN111123894B (en) Chemical process fault diagnosis method based on combination of LSTM and MLP
JPH0744514A (en) Learning data contracting method for neural network
CN112232577A (en) Power load probability prediction system and method for multi-core intelligent meter
CN112307927A (en) BP network-based identification research for MPSK signals in non-cooperative communication
CN116346639A (en) Network traffic prediction method, system, medium, equipment and terminal
CN115659254A (en) Power quality disturbance analysis method for power distribution network with bimodal feature fusion
CN115511162A (en) Short-term power load prediction method based on CVMD-GRU-DenseNet hybrid model
CN111371611A (en) Weighted network community discovery method and device based on deep learning
CN113239809B (en) Underwater sound target identification method based on multi-scale sparse SRU classification model
CN117851863A (en) Feature index selection method for microservice anomaly detection
CN117435909A (en) Non-invasive load decomposition method based on transfer learning and multidimensional feature extraction model
CN111797979A (en) Vibration transmission system based on LSTM model
CN110288002B (en) Image classification method based on sparse orthogonal neural network
CN115348215B (en) Encryption network traffic classification method based on space-time attention mechanism
CN116055270A (en) Modulation recognition model, training method thereof and signal modulation processing method
CN115169740A (en) Sequence prediction method and system of pooled echo state network based on compressed sensing
Huang et al. Neural fault analysis for sat-based atpg
CN113435321A (en) Method, system and equipment for evaluating state of main shaft bearing and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 88 Houbaoying Road, Qinhuai District, Nanjing, Jiangsu Province, 210007

Applicant after: Army Engineering University of PLA

Address before: Box 12-1, No. 42, Luoyu East Road, Wuhan, Hubei Province, 430075

Applicant before: Army Engineering University of PLA