CN115982613A - Signal modulation identification system and method based on improved convolutional neural network - Google Patents
- Publication number
- CN115982613A (Application CN202211054696.7A)
- Authority
- CN
- China
- Prior art keywords
- neural network
- layer
- module
- signal
- modulation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a signal modulation recognition system and method based on an improved convolutional neural network, relating to the technical field of signal modulation recognition and comprising the following steps: generating different modulation signals from a baseband signal by a signal sample generating unit as a data set; building a neural network model from a two-layer convolutional neural network module, a multi-scale pyramid pooling module, a mixed cascade attention mechanism module and a full connection layer module; training the neural network model with the data set; and inputting the modulation signal to be detected into the trained neural network model to identify its modulation. The modulation recognition algorithm based on the improved convolutional neural network combines the multi-scale pyramid module with the mixed cascade attention mechanism module, optimizes the network structure, and extracts deep features of the signal, so that the network can be trained and learned more effectively to obtain better results, and the generalization of the model to noise is improved.
Description
Technical Field
The invention relates to the technical field of signal modulation identification, in particular to a signal modulation identification system and method based on an improved convolutional neural network.
Background
Automatic modulation identification plays an important role in modern wireless communication and has applications in both civil and military fields. In a real environment, interference from non-cooperative communication and background noise may blur certain characteristics of the received signal, thereby affecting the recognition result. Automatic modulation recognition is a processing step performed before demodulating the received signal: it can effectively process signals received in non-cooperative communication and classify their modulation types to facilitate subsequent demodulation, and it is widely applied in fields such as radio signal monitoring, electronic countermeasures and intelligent communication.
Recognition methods based on hypothesis testing use probability theory and hypothesis testing to identify the signal modulation mode by comparing a statistic against a decision threshold. Such methods require a large amount of a priori information, such as the mean and variance of the modulated signal, which is difficult to obtain accurately in non-cooperative communication. Furthermore, because noise has a large influence, the recognition accuracy is low at low signal-to-noise ratios.
Modulation recognition methods based on feature extraction extract the most representative and discriminative features of signals of different modulation types in the time or frequency domain and compare these features with ideal values, so that signals of different modulation types can be recognized accurately.
Modulation recognition methods based on deep learning generally comprise direct and indirect recognition methods. The direct method feeds the baseband signal directly into a neural network for training and completes recognition and classification; the indirect method first converts the signal into another transform-domain representation through preprocessing and then trains on that representation to realize classification. These methods greatly reduce the difficulty of feature extraction, or even remove the need for it: the direct method can input the original signal into the network model for training, testing and recognition without preprocessing or expert feature-extraction steps, which greatly simplifies the signal modulation recognition process. However, current deep-learning-based modulation recognition algorithms have poor generalization and unsatisfactory recognition accuracy at low signal-to-noise ratios.
Disclosure of Invention
The embodiment of the invention provides a signal modulation identification system and method based on an improved convolutional neural network, which can solve the problems in the prior art.
The invention provides a signal modulation identification system based on an improved convolutional neural network, which comprises a signal sample set generation unit and a neural network unit;
the signal sample set generating unit is used for generating different modulation signals from baseband signals and inputting the modulation signals to the neural network unit;
the neural network unit is used for identifying and classifying the modulation signals and comprises:
the two-layer convolutional neural network module is used for carrying out primary feature extraction on the modulation signal;
the multi-scale pyramid pooling module is used for carrying out deep feature extraction on the modulation signal;
the mixed cascade attention mechanism module is used for carrying out weight optimization on the extracted features;
and the full connection layer module is used for integrating the features.
Preferably, each convolutional neural network module comprises a convolutional layer, a BN layer, an activation layer and a max pooling layer.
Preferably, the multi-scale pyramid pooling module is composed of three pyramid pooling modules; the output feature map F_x of the convolution block in each pyramid pooling module is combined with the pyramid pooling feature, and the calculation formula is as follows:
constructing an upsampling layer by bilinear interpolation to obtain a feature map of the same size as F_x, and concatenating the outputs of the three pyramid pooling modules.
Preferably, the hybrid cascade attention module comprises a squeeze-and-excitation block SEB and a scaled dot-product attention module SDPA.
Preferably, the squeeze-and-excitation block SEB includes a squeeze mapping function and an excitation mapping function, and the squeeze mapping function formula is as follows:
in the formula, F_sq(·) denotes the squeeze mapping, H × W denotes the size of the input feature layer, and x_n ∈ R^(H×W) is the nth channel of the input feature layer;
the excitation mapping function is formulated as follows:
in the formula, W 1 And W 2 Representing a weight parameter, wherein r represents a dimensionality reduction coefficient, sigma is a ReLU function, and g is a Sigmoid function;
multiplying the weight vectors of different channels with the input feature space X to carry out feature channel weighting, obtaining an output feature space X̃; the calculation formula is as follows:
in the formula, x̃_n and s_n represent the nth channel of the output feature space X̃ and the corresponding channel weight, respectively.
Preferably, the scaled dot-product attention module SDPA contains a feature space transformation, and the calculation formula is as follows:
wherein X = [x_1, x_2, ..., x_n] ∈ R^(H×W×C) is the input feature space, C′ = C/h, h represents the dimensionality reduction coefficient, C represents the number of feature channels, and Q, K and V represent different feature spaces;
an attention matrix B ∈ R^(N×N) is obtained from the feature spaces Q and K through matrix multiplication, and the calculation formula is as follows:
B = Q^T K
obtaining an output characteristic space A through matrix multiplication of B and V, wherein the calculation formula is as follows:
A=B·V。
Preferably, a signal modulation identification method based on the improved convolutional neural network comprises the following steps:
generating different modulation signals as a data set by a signal sample generating unit from the baseband signal;
building a neural network model by utilizing a two-layer convolutional neural network module, a multi-scale pyramid pooling module, a mixed cascade attention mechanism module and a full connection layer module;
training the neural network model through a data set;
and inputting the modulation signal to be detected into the trained neural network model, and identifying the modulation signal.
Preferably, the generating, by the signal sample generating unit, the different modulation signals as the data sets from the baseband signals comprises the following steps:
carrying out information source modulation on the baseband signal to obtain a signal original sample;
sequentially passing the original signal samples through an additive white Gaussian noise channel and a Rician multipath fading channel, and applying a clock offset to obtain interfered original signal samples;
the raw signal samples are taken as a data set.
Preferably, the method for building the neural network model by using the two-layer convolutional neural network module, the multi-scale pyramid pooling module, the hybrid cascade attention mechanism module and the full connection layer module comprises the following steps:
taking two layers of convolutional neural networks as the first two layers of networks;
taking the multi-scale pyramid pooling module as the third to fifth layer networks;
taking the mixed cascade attention mechanism module as a sixth layer network;
taking the full connection layer module as a seventh layer network;
and connecting the networks in each layer in sequence to obtain a neural network model.
Preferably, training the neural network model through a data set comprises the following steps:
inputting a data set into a first two-layer network, and performing primary feature extraction on the data set;
inputting the data set subjected to preliminary feature extraction into the third to fifth layer networks, and performing deep feature extraction on the data set;
inputting the data set subjected to deep feature extraction into a sixth layer network, and performing weight optimization on the extracted features;
and inputting the optimized features into a seventh layer network, and integrating the features.
Compared with the prior art, the invention has the beneficial effects that:
the modulation recognition algorithm based on the improved convolutional neural network is designed by combining a multi-scale pyramid module and a mixed cascade attention mechanism module, optimizes a network structure, extracts deep features of signals, enables the network to be trained and learned better to obtain a better result, and improves the generalization of the model to noise.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of the algorithm of the present invention;
FIG. 2 is a schematic diagram of a multi-scale pyramid pooling module of the present invention;
FIG. 3 is a diagram of a HCAM module of the present invention;
FIG. 4 is a diagram of an SEB module according to the present invention;
FIG. 5 is a simulation of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-5, the invention provides a signal modulation identification system and method based on an improved convolutional neural network, and the signal modulation identification system comprises a signal sample generation unit and a neural network unit. The signal sample generating unit is used for generating different modulation signals by modulating the sequence onto a baseband carrier. The neural network unit is used for identifying and classifying the modulation signals and comprises a two-layer convolutional neural network module, a multi-scale pyramid pooling module, a mixed cascade attention mechanism module and a full connection layer module. The two-layer convolutional neural network module is used for carrying out preliminary feature extraction on the modulation signals, the multi-scale pyramid pooling module is used for carrying out deep feature extraction on the modulation signals, the mixed cascade attention mechanism module is used for carrying out weight optimization on the extracted features, and the full connection layer module is used for integrating the features.
Each convolutional neural network module comprises a convolutional layer, a BN layer, an activation layer and a maximum pooling layer.
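For illustration, the following is a minimal sketch of one such convolutional block in PyTorch (convolution, BN layer, ReLU activation and max pooling). The kernel size, channel counts, pooling stride and the 2 × 128 IQ frame shape are illustrative assumptions rather than values taken from the patent:

```python
# A minimal sketch of one convolutional block: convolution -> batch
# normalization -> ReLU activation -> max pooling.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel=(1, 3)):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=kernel, padding=(0, 1))
        self.bn = nn.BatchNorm2d(out_ch)       # BN layer between convolution and activation
        self.act = nn.ReLU(inplace=True)       # activation layer
        self.pool = nn.MaxPool2d(kernel_size=(1, 2), stride=(1, 2))  # max pooling layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(self.act(self.bn(self.conv(x))))

# Example: a batch of 8 IQ frames treated as 2 x 128 "images" with one channel.
x = torch.randn(8, 1, 2, 128)
y = ConvBlock(1, 16)(x)   # -> shape (8, 16, 2, 64)
```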
Referring to fig. 2, the multi-scale pyramid pooling module consists of three pyramid pooling modules: except for the first block, the last convolution layer of each convolution block is followed by three pyramid-level modules that integrate feature maps at lower resolutions. First, the output feature map F_x of each block x is combined with the pyramid pooling feature, as in the following formula:

F̃_x = Cat(F_x, P_x)

wherein P_x denotes the pyramid pooling feature of block x and Cat(·) denotes the concatenation operation. Then, an upsampling layer is constructed by bilinear interpolation to obtain feature maps of the same size as F_x, and the outputs of the three pyramid pooling modules are concatenated to obtain the final multi-scale pyramid pooling feature. Finally, two convolution layers of size 1 × 32 × 192 and 1 × 12, respectively, are connected to generate the final extracted features.
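A hedged sketch of a single pyramid pooling step, under the assumption that each pyramid level is an adaptive average pooling followed by bilinear upsampling back to the size of F_x and concatenation with F_x; the pyramid bin sizes (1, 2, 4) are illustrative assumptions, not values stated in the text:

```python
# One pyramid pooling module: pool F_x at several levels, upsample by
# bilinear interpolation, and concatenate with F_x.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    def __init__(self, bins=(1, 2, 4)):
        super().__init__()
        self.bins = bins

    def forward(self, fx: torch.Tensor) -> torch.Tensor:
        h, w = fx.shape[-2:]
        feats = [fx]
        for b in self.bins:
            pooled = F.adaptive_avg_pool2d(fx, output_size=(1, b))   # pyramid level
            up = F.interpolate(pooled, size=(h, w), mode="bilinear",
                               align_corners=False)                   # upsampling layer
            feats.append(up)
        return torch.cat(feats, dim=1)    # cascade F_x with its pyramid features

# Concatenating the outputs of three such modules (one per convolution block)
# would then give the final multi-scale pyramid pooling feature.
```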
Referring to fig. 3, the hybrid cascaded attention mechanism module HCAM comprises a squeeze-and-excitation block SEB and a scaled dot-product attention module SDPA. It extracts the global dependency relationship between the input and output data through nonlinear transformation and can obtain the internally correlated information of the input feature map from the spatial and channel dimensions, thereby extracting useful feature information that is beneficial to modulation identification.
Suppose X = [x_1, x_2, ..., x_n] ∈ R^(H×W×C) is the input feature space, where H × W denotes the size of the input feature layer, C denotes the number of feature channels, and x_n ∈ R^(H×W) is the nth channel of the input feature layer. The SEB of the HCAM contains squeeze and excitation mapping functions. As shown in FIG. 4, F_sq(·) denotes the squeeze mapping, which uses global average pooling to encode the spatial features of each channel as a global spatial feature. The input features are mapped to obtain the global features of the feature space, and the calculation process is as in the formula:

z_n = F_sq(x_n) = (1/(H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} x_n(i, j)
then, the SEB adaptively learns the nonlinear relationship between channels through an excitation mapping function, which is calculated by the following formula:
wherein W 1 And W 2 And representing a weight parameter, wherein r represents a dimensionality reduction coefficient, sigma is a ReLU function, and g is a Sigmoid function.
Two fully connected (FC) layers are introduced into the SEB module: the first FC layer reduces the dimensionality and the second FC layer restores the original feature dimension. The weight vectors of the different channels are then multiplied with the input feature space X to perform feature channel weighting (F_scale), obtaining the output feature space X̃, where x̃_n and s_n denote the nth channel of the output feature space X̃ and its channel weight, respectively. The calculation formula is as follows:

x̃_n = F_scale(x_n, s_n) = s_n · x_n
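The squeeze-and-excitation block described above can be sketched as follows; the reduction coefficient r = 4 is an illustrative assumption:

```python
# Squeeze-and-excitation block: global average pooling (squeeze), two FC
# layers with ReLU and Sigmoid (excitation), then per-channel reweighting.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // r)   # first FC layer: dimensionality reduction
        self.fc2 = nn.Linear(channels // r, channels)   # second FC layer: restore dimension
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        z = x.mean(dim=(2, 3))                                 # squeeze: global average pooling F_sq
        s = self.sigmoid(self.fc2(self.relu(self.fc1(z))))     # excitation: s = g(W2 * sigma(W1 z))
        return x * s.view(b, c, 1, 1)                          # F_scale: channel-wise weighting
```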
for the SDPA module in HCAM, the input feature space X is converted into three different feature spaces Q, K and V, and the corresponding calculation process is as follows:
wherein C′ = C/h and h represents the dimensionality reduction coefficient. An attention matrix B ∈ R^(N×N) is obtained from the feature spaces Q and K through matrix multiplication, and the calculation formula is as follows:

B = Q^T K (8)
and then, performing normalization processing on each line of the attention matrix B by using a Softmax function, wherein the calculation process is as follows:
then, obtaining an output characteristic space A through matrix multiplication of B and V, wherein the specific expression is as follows:
A=B·V (10)
the HCAM can extract features from the feature channel and the space dimension to obtain key informationAnd A, and obtaining an output characteristic->
The invention also provides a signal modulation identification method based on the improved convolutional neural network, which comprises the following steps:
the first step is as follows: the baseband signal is generated by a signal sample generation unit into different modulation signals as data sets.
Source modulation is carried out on the baseband signal to obtain original signal samples in the form a + bi.
Passing the signal through an AWGN channel adds additive white Gaussian noise to the signal, with a signal-to-noise ratio ranging from −18 dB to 18 dB.
The signal then passes through a Rician multipath fading channel, in which the signal is a superposition of a complex Gaussian signal and a direct component (i.e., a sine wave plus a narrowband Gaussian process), and the probability density function of its envelope follows a Rician distribution:

f(z) = (z/σ²) · exp(−(z² + A²)/(2σ²)) · I_0(A·z/σ²),  z ≥ 0 (11)

wherein z is the envelope of the sine (cosine) signal plus the narrowband Gaussian random signal, the parameter A is the peak amplitude of the main signal, σ² is the power of the multipath signal component, and I_0(·) is the modified zeroth-order Bessel function of the first kind. The Rician factor K of the channel is the ratio of the power of the main signal to the variance of the multipath component, i.e.:

K = A²/(2σ²) (12)
applying clock skew to the signals, the clock skew being caused by differences in the positions of clock edges due to differences in the driving and loading of the paths taken by the clock source to reach different registers, wherein the clock skew factor C has the formula:
wherein Δ_clock is the clock offset. Likewise, the frequency offset f_o of each frame and the sample rate offset SFO are determined by the clock offset factor, the sample rate f_s and the center frequency f_c as follows:

f_o = −(C − 1) · f_c (14)

SFO = C × f_s (15)
the modulated signal processed as described above is taken as a data set.
The second step: building a neural network model by utilizing a two-layer convolutional neural network module, a multi-scale pyramid pooling module, a mixed cascade attention mechanism module and a full connection layer module; and taking two layers of convolutional neural networks as the first two layers of networks, taking the multi-scale pyramid pooling module as a third-fifth layer network, taking the mixed cascade attention mechanism module as a sixth layer network, taking the full-connection layer module as a seventh layer network, and sequentially connecting the layers of networks to obtain a neural network model.
The third step: the neural network model is trained through the data set. The method specifically comprises the following steps:
inputting the data set into a first convolution layer, and performing inner convolution calculation, wherein the inner convolution formula is as follows:
where A and B are assumed to be matrices of size M N and M N, respectively. Wherein M is more than or equal to M, and N is more than or equal to N.
In the model, the calculation formula is as follows:

x_1_c = w_1_c * x_i_q + b_1_c (17)

wherein x_i_q denotes the IQ samples of the input original signal, w_1_c and b_1_c are the weight and the bias of the first convolutional layer, respectively, and * denotes the inner convolution.
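A small NumPy illustration of the inner (valid-region) convolution assumed above: with A of size M × N and a kernel B of size m × n (M ≥ m, N ≥ n), the output has size (M − m + 1) × (N − n + 1). Whether the kernel is flipped (true convolution) or not (correlation) is not specified in the text; this sketch uses the unflipped form common in CNN implementations:

```python
import numpy as np

def inner_conv(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    M, N = a.shape
    m, n = b.shape
    out = np.zeros((M - m + 1, N - n + 1))
    for i in range(M - m + 1):
        for j in range(N - n + 1):
            out[i, j] = np.sum(a[i:i + m, j:j + n] * b)   # slide B inside A, no padding
    return out

a = np.arange(12, dtype=float).reshape(3, 4)   # M x N = 3 x 4
b = np.ones((2, 2))                            # m x n = 2 x 2
print(inner_conv(a, b))                        # (3-2+1) x (4-2+1) = 2 x 3 result
```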
The processed samples are batch-normalized by a BN layer, which is arranged between the convolutional layer and the activation layer and transforms the data to zero mean and unit variance. The normalization process of the first BN layer is expressed as the following equation:

x_1_b = c_1_b · x_1_c + k_1_b (18)

wherein x_1_c is the convolutional layer output, and c_1_b and k_1_b are learnable reconstruction parameters associated with the variance and mean of x_1_c, respectively, so that the network can learn to restore the feature distribution to be learned by the original network and keep the data distribution of each layer consistent.
The processed feature map then passes through an activation function layer, where an activation function adds nonlinear factors to the extracted features; the ReLU activation function σ(·) is selected to process x_1_b as follows:

x_1_r = σ(x_1_b) (19)
the ReLU activation function formula is:
ReLU(x)=max(0,x) (20)
The features then enter a pooling layer with a stride of 2 and a pooling block of size n × n, where n is the size of the pooling kernel. The pooling process of the first pooling layer can be represented as:

x_1_p = maxdown_{n×n}(x_1_r) (21)
the steps performed by the first convolutional layer include four steps of convolution, batch normalization, activation and pooling, and the first two layers have the same structure. And carrying out primary feature extraction on the data set.
The third to fifth layers enter the multi-scale pyramid pooling module, with the formula:

x_l_p = MSPP_{m×m}(x_l_r) (22)

wherein l is 3, 4, 5 and m × m is the size of the pooling block; the multi-scale pyramid pooling yields the output x_5_p.
After the last convolutional layer, an average pooling layer with a pooling block size of 1 × 32 is used; with input x_L_b, the output x_L_p can be expressed as:

x_L_p = avgdown_{1×32}(x_L_b) (23)

The resulting output is of size 1 × 1 × 96, where the average pooling over a 1 × 32 block is defined as:

avgdown_{1×32}(x) = (1/32) Σ_{j=1}^{32} x_j (24)
and carrying out deep feature extraction on the data set from the third layer to the fifth layer.
The output is then input into the mixed cascade attention mechanism to extract the key features and information, and the output x_L_A is obtained as shown in the following formula:

x_L_A = Attention(x_L_b) (25)
x_L_A is then input to a fully connected layer, which maps the distributed features to the sample label space and integrates the features. The softmax function is used as the activation function for the multi-classification task; the number of neurons equals the number of labels, and the model distinguishes 12 types of modulation signals. The predicted label y_s output by this layer is:

y_s = ρ(w_s · x_L_A + b_s) (26)

wherein w_s and b_s are the learnable weight and bias of the fully connected layer, and ρ(·) is the softmax activation function:

ρ(z_i) = exp(z_i) / Σ_{j=1}^{12} exp(z_j) (27)
After the network training is finished, the modulation signal to be detected is input into the trained neural network model to complete automatic signal modulation identification.
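A hedged sketch of the training and identification steps: the optimizer, learning rate, batch size, number of epochs and the toy random data set are illustrative assumptions standing in for the generated data set and the signal under test:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in data set: 256 random "IQ frames" of shape (1, 2, 128) with labels in 0..11.
frames = torch.randn(256, 1, 2, 128)
labels = torch.randint(0, 12, (256,))
train_loader = DataLoader(TensorDataset(frames, labels), batch_size=32, shuffle=True)

model = ModulationClassifier(num_classes=12)                 # from the assembly sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)    # assumed optimizer / learning rate
criterion = nn.CrossEntropyLoss()                            # applies log-softmax internally

for epoch in range(20):                                      # assumed number of epochs
    for iq_batch, batch_labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(iq_batch), batch_labels)
        loss.backward()
        optimizer.step()

# Identification of a modulation signal to be detected.
model.eval()
with torch.no_grad():
    x_unknown = torch.randn(1, 1, 2, 128)             # stand-in for the signal under test
    probs = torch.softmax(model(x_unknown), dim=1)    # softmax gives class probabilities
    predicted_modulation = probs.argmax(dim=1)
```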
Referring to fig. 5, the present invention also performs modulation mode recognition training and testing on 12 digital and analog modulation signals generated under different signal-to-noise ratios according to a real communication environment. The method is compared in simulation with an existing general convolutional neural network model, an SVM algorithm and KNN. The simulation results show that the model achieves more than 60% recognition accuracy at −18 dB and reaches 90.96% at 18 dB, and its recognition rate is higher than that of the other algorithms over the whole range, verifying the effectiveness and high recognition rate of the algorithm.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. A signal modulation identification system based on an improved convolutional neural network is characterized by comprising a signal sample set generation unit and a neural network unit;
the signal sample set generating unit is used for generating different modulation signals from baseband signals and inputting the modulation signals to the neural network unit;
the neural network unit is used for identifying and classifying the modulation signals and comprises:
the two-layer convolutional neural network module is used for carrying out primary feature extraction on the modulation signal;
the multi-scale pyramid pooling module is used for carrying out deep feature extraction on the modulation signal;
the mixed cascade attention mechanism module is used for carrying out weight optimization on the extracted features;
and the full connection layer module is used for integrating the features.
2. The system of claim 1, wherein each convolutional neural network module comprises a convolutional layer, a BN layer, an activation layer and a max-pooling layer.
3. The system for signal modulation recognition based on improved convolutional neural network of claim 1, wherein the multi-scale pyramid pooling module is composed of three pyramid pooling modules, and the output feature map F_x of the convolution block in each pyramid pooling module is combined with the pyramid pooling feature; the calculation formula is as follows:
constructing an upsampling layer by bilinear interpolation to obtain a feature map of the same size as F_x, and concatenating the outputs of the three pyramid pooling modules.
4. The system for signal modulation recognition based on the improved convolutional neural network as claimed in claim 1, wherein said hybrid cascade attention module comprises a squeeze-and-excitation block SEB and a scaled dot-product attention module SDPA.
5. The system for identifying signal modulation based on the improved convolutional neural network as claimed in claim 4, wherein said squeeze-and-excitation block SEB comprises a squeeze mapping function and an excitation mapping function, and the formula of the squeeze mapping function is as follows:
in the formula, F_sq(·) denotes the squeeze mapping, H × W denotes the size of the input feature layer, and x_n ∈ R^(H×W) is the nth channel of the input feature layer;
the excitation mapping function formula is as follows:
in the formula, W 1 And W 2 Representing a weight parameter, wherein r represents a dimensionality reduction coefficient, sigma is a ReLU function, and g is a Sigmoid function;
multiplying the weight vectors of different channels with the input feature space X to carry out feature channel weighting, obtaining an output feature space X̃; the calculation formula is as follows:
6. The system of claim 4, wherein the SDPA comprises a feature space transformation formula, which is calculated as follows:
wherein X = [x_1, x_2, ..., x_n] ∈ R^(H×W×C) is the input feature space, C′ = C/h, h denotes the dimensionality reduction coefficient, C denotes the number of feature channels, and Q, K and V represent different feature spaces;
an attention matrix B ∈ R^(N×N) is obtained from the feature spaces Q and K through matrix multiplication, and the calculation formula is as follows:
B = Q^T K
obtaining an output characteristic space A through matrix multiplication of B and V, wherein the calculation formula is as follows:
A=B·V。
7. An identification method of the signal modulation identification system based on the improved convolutional neural network as claimed in any one of claims 1 to 6, comprising the following steps:
generating different modulation signals as a data set by a signal sample generating unit from the baseband signal;
building a neural network model by utilizing a two-layer convolutional neural network module, a multi-scale pyramid pooling module, a mixed cascade attention mechanism module and a full connection layer module;
training the neural network model through a data set;
and inputting the modulation signal to be detected into the trained neural network model, and identifying the modulation signal.
8. The signal modulation identification method based on the improved convolutional neural network as claimed in claim 7, wherein the step of generating different modulation signals as data sets from the baseband signal by the signal sample generating unit comprises the following steps:
carrying out information source modulation on the baseband signal to obtain a signal original sample;
sequentially passing the original signal samples through an additive white Gaussian noise channel and a Rician multipath fading channel, and applying a clock offset to obtain interfered original signal samples;
the raw signal samples are taken as a data set.
9. The signal modulation identification method based on the improved convolutional neural network as claimed in claim 7, wherein the step of building the neural network model by using the two-layer convolutional neural network module, the multi-scale pyramid pooling module, the hybrid cascade attention mechanism module and the full connection layer module comprises the following steps:
taking two layers of convolutional neural networks as the first two layers of networks;
taking the multi-scale pyramid pooling module as the third to fifth layer networks;
taking the mixed cascade attention mechanism module as a sixth layer network;
taking the full connection layer module as a seventh layer network;
and connecting the networks in each layer in sequence to obtain a neural network model.
10. The signal modulation identification method based on the improved convolutional neural network as claimed in claim 7, wherein training the neural network model through the data set comprises the following steps:
inputting a data set into a first two-layer network, and performing primary feature extraction on the data set;
inputting the data set subjected to preliminary feature extraction into the third to fifth layer networks, and performing deep feature extraction on the data set;
inputting the data set subjected to deep feature extraction into a sixth-layer network, and performing weight optimization on the extracted features;
and inputting the optimized features into a seventh layer network, and integrating the features.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211054696.7A CN115982613A (en) | 2022-08-31 | 2022-08-31 | Signal modulation identification system and method based on improved convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115982613A true CN115982613A (en) | 2023-04-18 |
Family
ID=85972635
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117294322A (en) * | 2023-11-24 | 2023-12-26 | 北京雷格讯电子股份有限公司 | Microwave transmission system and transmission method |
CN117294322B (en) * | 2023-11-24 | 2024-02-09 | 北京雷格讯电子股份有限公司 | Microwave transmission system and transmission method |
CN117614467A (en) * | 2024-01-17 | 2024-02-27 | 青岛科技大学 | Underwater sound signal intelligent receiving method based on noise reduction neural network |
CN117614467B (en) * | 2024-01-17 | 2024-05-07 | 青岛科技大学 | Underwater sound signal intelligent receiving method based on noise reduction neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||