CN116896492A - Modulation and coding joint identification method and system based on multichannel attention network - Google Patents


Info

Publication number
CN116896492A
CN116896492A (Application CN202310704966.2A)
Authority
CN
China
Prior art keywords
modulation
signal
convolution
channel
attention network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310704966.2A
Other languages
Chinese (zh)
Inventor
吴迪
易冬
王世举
刘凯越
张靖志
王书
卢万杰
胡涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Information Engineering University of PLA Strategic Support Force
Original Assignee
Information Engineering University of PLA Strategic Support Force
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Information Engineering University of PLA Strategic Support Force filed Critical Information Engineering University of PLA Strategic Support Force
Priority to CN202310704966.2A
Publication of CN116896492A
Legal status: Pending

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L27/00 — Modulated-carrier systems
    • H04L27/0012 — Arrangements for identifying the type of modulation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G06N3/045 — Combinations of networks
    • G06N3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N3/08 — Learning methods
    • Y02D30/70 — Reducing energy consumption in wireless communication networks


Abstract

The invention relates to the technical field of joint signal modulation and coding identification, and in particular to a modulation and coding joint identification method and system based on a multichannel attention network. A randomly generated radio information sequence is used to simulate the coding and modulation of an information sequence in a communication environment, generating digital modulation signals and analog modulation signals from which signal sample data are constructed. A multichannel convolution gating deep attention network model for joint modulation and coding identification is constructed, then trained and optimized with the signal sample data. The signal to be identified is input into the trained and optimized model, which outputs the modulation-coding type of the signal. The invention improves the efficiency of joint modulation and coding identification, realizes automatic joint identification of signal modulation and coding, and is convenient to deploy and implement.

Description

Modulation and coding joint identification method and system based on multichannel attention network
Technical Field
The invention relates to the technical field of signal modulation and coding joint identification, in particular to a modulation and coding joint identification method and system based on a multichannel attention network.
Background
With the development of communication technology, variable coding and modulation (VCM) and adaptive coding and modulation (ACM) technologies have emerged to increase communication capacity and are now widely used in communications. Adaptive coding and modulation can flexibly select the coding and modulation scheme according to changes in channel transmission conditions, greatly improving spectrum utilization; in interactive point-to-point applications, adopting ACM can increase satellite communication capacity by 100%-200%. With the continuous advance of modern informatization and digitalization, satellite communication networks are easily threatened by third-party detection, interference and attack. To improve the communication security of satellite communication networks, the modulation and coding schemes of communication signals can be identified through non-cooperative interception, so that relevant signal parameters can be further obtained, providing decision guidance for perfecting satellite communication management, enhancing inter-system coordination and optimizing anti-interference performance.
Current joint modulation and coding identification of signals can be divided into traditional hierarchical identification algorithms and deep-learning-based joint identification algorithms. Traditional algorithms work hierarchically: first the modulation of the signal is identified, then the signal is demodulated, and finally coding type identification and parameter estimation are performed on the demodulated information stream, thereby achieving joint modulation and coding identification. With hierarchical identification, coding identification performance depends to some extent on modulation identification performance and demodulation algorithm error; when modulation identification performance degrades or demodulation error grows, coding identification performance necessarily degrades. Deep-learning-based joint identification mostly uses intermediate-frequency signal waveforms as network inputs, but intermediate-frequency signals are less favorable for feature extraction than baseband quadrature in-phase (I/Q) signals: the carrier in an intermediate-frequency signal occupies most of the signal energy, which makes the joint modulation-coding constraint relationship in the signal harder to mine, a concern the baseband I/Q signal does not have. Meanwhile, current deep network structures for joint identification are not strongly targeted and their ability to extract joint modulation-coding features is insufficient, so identification rate and robustness are not high.
Disclosure of Invention
Therefore, the invention provides a modulation and coding joint identification method and system based on a multichannel attention network, solving the problems of unreasonable signal input form and unsatisfactory identification rate and robustness caused by poor deep network structural design in existing joint modulation and coding identification.
According to the design scheme provided by the invention, a modulation and coding joint identification method based on a multichannel attention network is provided, which comprises the following steps:
simulating a communication environment using the randomly generated radio information sequences to code the modulated information sequences and generating digital modulated signals and analog modulated signals;
the method comprises the steps of utilizing a digital modulation signal and an analog modulation signal to construct signal sample data, and dividing the signal sample data into a training sample set, a verification sample set and a test sample set according to a preset proportion;
constructing a multichannel convolution gating depth attention network model for modulation and coding joint identification, and training and optimizing the multichannel convolution gating depth attention network model by using a training sample set, a verification sample set and a test sample set;
inputting the signal to be identified into a multi-channel convolution gating depth attention network model after training optimization, and obtaining the modulation coding type of the signal to be identified by using the multi-channel convolution gating depth attention network model after training optimization.
As the modulation and coding joint identification method based on the multichannel attention network of the invention, further, simulating the communication environment by using a randomly generated radio information sequence to code and modulate the information sequence and generate digital modulation signals and analog modulation signals comprises the following steps:
firstly, a 01 bit radio information sequence is randomly generated by utilizing a random function;
then, the communication environment is simulated, noise interference is added in the code modulation and sampling process of the radio information sequence to obtain an IQ sampling sequence of the modulated signal, wherein the noise interference comprises: additive gaussian noise and rayleigh fading channel model effects;
then, for the IQ sample sequence, a plurality of digital modulation signals combined by each channel coding and modulation scheme are generated according to the digital modulation signal code rate, and an analog modulation signal without channel coding is generated.
As the modulation and coding joint identification method based on the multichannel attention network, the digital modulation signals further comprise 28 digital modulation signals formed by combining four channel codes (LDPC, RS, TCM and TPC codes) with seven modulation modes (2FSK, GMSK, BPSK, QPSK, 8PSK, 8APSK and 16APSK); the analog modulation signals comprise AM and FM signals without channel coding.
As the modulation and coding joint identification method based on the multichannel attention network, the invention further utilizes the digital modulation signals and analog modulation signals to construct signal sample data and divides the signal sample data into a training sample set, a verification sample set and a test sample set according to a preset proportion, comprising the following steps: for the information points of each modulation signal, continuously collecting m information points to form a signal sample, with n information points as the sampling interval; collecting M signal samples for each modulation signal, and building a signal sample data set from the signal samples of all modulation signals; and extracting sample data of each type of modulation-coded signal from the signal sample data set according to a preset proportion to form the training sample set, verification sample set and test sample set, wherein m, n and M are preset thresholds.
As the modulation and coding joint identification method based on the multichannel attention network, the invention further constructs a multichannel convolution gating depth attention network model which comprises the following steps: the device comprises a multi-channel convolution module for extracting a feature map from input data through multi-channel convolution operation, a dimension-increasing convolution module for carrying out dimension-increasing convolution on the extracted feature map to extract target shapes and details in the feature map, a gating circulation classification module for capturing deep connection dimension information and classifying output modulation categories, and an extrusion excitation module for adaptively recalibrating channel weights by utilizing interdependencies among channel weight vectors to explicitly model channels.
As the modulation and coding joint identification method based on the multichannel attention network of the invention, further, the multichannel convolution module extracts a feature map from input data through multichannel convolution operation, and the method comprises the following steps: and respectively convolving the I channel signal, the Q channel signal and the I/Q signal in the input data by using the three convolution layers, and splicing the outputs of the three convolution layers according to the input dimension so as to obtain a characteristic diagram of the input data.
As the modulation and coding joint identification method based on the multichannel attention network, the invention further comprises the following steps that the dimension-lifting convolution module carries out dimension-lifting convolution on the extracted feature map: and gradually up-sampling the input low-resolution feature map by using three serially connected convolution units in sequence to output a high-resolution feature map, wherein the convolution units comprise a one-dimensional convolution layer, a random inactivation layer, a batch normalization layer and a ReLU activation function.
As the modulation and coding joint identification method based on the multichannel attention network, the invention further comprises the gating cycle classification module capturing deep-connection dimension information and classifying and outputting modulation categories, comprising: using a gated recurrent unit (GRU) that combines the current input with the hidden state passed from the previous node to produce the current hidden node output and the hidden state passed to the next node, thereby capturing deep-connection dimension information of the feature map through gated state updates; and classifying the feature maps with the fully connected output layer to obtain the corresponding signal modulation categories.
As the modulation and coding joint identification method based on the multichannel attention network, further, the process by which the extrusion excitation module recalibrates the sequence feature channel weight vector according to the degree of contribution of each feature to the classification task, generating a new feature vector, is expressed as: x̃_i = s_i · x_i, where x̃_i is the i-th element of the new feature vector X̃, s_i is the i-th element of the channel weight vector S of the input feature vector X, x_i is the i-th element of X, and S = σ(W2 δ(W1 Z)), where Z is the channel statistics vector generated by the global average pooling operation on the output feature vector of the previous network layer, δ is the ReLU function, σ is the Sigmoid activation function, W1 is the weight parameter of the first fully connected layer in the extrusion excitation module, and W2 is the weight parameter of the second fully connected layer in the extrusion excitation module.
Further, the invention also provides a modulation and coding joint identification system based on the multichannel attention network, which comprises the following components: the system comprises a data simulation module, a data processing module, a model construction module and a target identification module, wherein,
the data simulation module is used for simulating the communication environment by using a randomly generated radio information sequence to code and modulate the information sequence and generate digital modulation signals and analog modulation signals;
the data processing module is used for utilizing the digital modulation signals and the analog modulation signals to construct signal sample data and dividing the signal sample data into a training sample set, a verification sample set and a test sample set according to a preset proportion;
the model construction module is used for constructing a multichannel convolution gating depth attention network model for modulation and coding joint identification and training and optimizing the multichannel convolution gating depth attention network model by utilizing a training sample set, a verification sample set and a test sample set;
the target recognition module is used for inputting the signals to be recognized into the multi-channel convolution gating depth attention network model after training optimization, and obtaining the modulation coding types of the signals to be recognized by utilizing the multi-channel convolution gating depth attention network model after training optimization.
The invention has the beneficial effects that:
the invention solves the problems of unreasonable signal input form and unsatisfactory recognition rate and robustness caused by poor design of a depth network structure in the conventional modulation coding combined recognition, and realizes the signal modulation coding combined automatic recognition task by designing a multichannel convolution gating depth attention network model and performing training and optimizing by using analog digital signals and analog signal sample data so as to recognize the modulation type of a target signal by using a trained network model structure, thereby being convenient for deployment and implementation.
Description of the drawings:
FIG. 1 is a schematic diagram of modulation and coding joint identification principle based on a multichannel attention network in an embodiment;
FIG. 2 is a schematic diagram of a multi-channel convolution gating deep attention network model in an embodiment;
FIG. 3 is a schematic diagram of a multi-channel convolution gating deep attention network model construction flow in an embodiment;
FIG. 4 is a schematic representation of a change in validation loss during network training in an embodiment;
fig. 5 is a schematic diagram of recognition rate results of modulation and coding joint recognition in the embodiment.
Detailed description of embodiments:
the present invention will be described in further detail with reference to the drawings and the technical scheme, in order to make the objects, technical schemes and advantages of the present invention more apparent.
In view of the unreasonable signal input form and poor deep network structural design of existing joint modulation and coding identification, which lead to unsatisfactory identification rate and robustness, the embodiment of the invention provides a modulation and coding joint identification method based on a multichannel attention network, comprising the following steps:
s101, using the random generated radio information sequence to simulate the communication environment code modulation information sequence and generating a digital modulation signal and an analog modulation signal.
Specifically, it can be designed to include the following:
firstly, a 01 bit radio information sequence is randomly generated by utilizing a random function;
then, the communication environment is simulated, noise interference is added in the code modulation and sampling process of the radio information sequence to obtain an IQ sampling sequence of the modulated signal, wherein the noise interference comprises: additive gaussian noise and rayleigh fading channel model effects;
then, for the IQ sample sequence, a plurality of digital modulation signals combined by each channel coding and modulation scheme are generated according to the digital modulation signal code rate, and an analog modulation signal without channel coding is generated.
Randomly generating the radio information sequence with a random function to produce a 0/1 bit sequence ensures the randomness of the information content and removes any influence of the signal content on joint modulation and coding identification. The coding and modulation of the information sequence in a real communication environment is simulated and sampled to obtain the quadrature in-phase (IQ) sequence, with the effects of additive Gaussian noise and a Rayleigh fading channel model added during coding, modulation and sampling to approximate the real environment. In the simulation process, the parameters may be set as: modulation rate 2 MBaud/s, sampling rate 10 MHz, and signal-to-noise ratio from -8 dB to 20 dB at 2 dB intervals. Finally, the IQ sampling sequence of the modulated signal is obtained, i.e. 5 sample points per symbol.
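As a toy illustration of the symbol-to-sample relationship and additive Gaussian noise described above, the generation step can be sketched in numpy (the function name and the BPSK mapping are illustrative assumptions, and the Rayleigh fading channel is omitted for brevity):

```python
import numpy as np

def make_iq_samples(n_bits=1024, sps=5, snr_db=10, seed=0):
    """Toy baseband I/Q generation with additive Gaussian noise.
    sps=5 mirrors the 5 sample points per symbol above; BPSK is used
    as a placeholder for the various modulation modes."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)            # random 0/1 information sequence
    symbols = 2.0 * bits - 1.0                   # BPSK mapping: 0 -> -1, 1 -> +1
    iq = np.repeat(symbols.astype(complex), sps) # 5 sample points per symbol
    sig_power = np.mean(np.abs(iq) ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(iq.size)
                                        + 1j * rng.standard_normal(iq.size))
    return iq + noise

samples = make_iq_samples()
print(samples.shape)  # (5120,) -> 1024 symbols x 5 sample points per symbol
```

A real simulator would additionally apply the channel coding and the Rayleigh fading channel before sampling.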
The digital modulation signals generated by simulation comprise 28 combined modulation-coding signals, formed by combining 4 channel codes (LDPC code with code rate 1/2, RS code with code rate 56/64, TCM code with code rate 2/3, TPC code with code rate 676/1024) with 7 modulation modes (2FSK, GMSK, BPSK, QPSK, 8PSK, 8APSK and 16APSK). The analog modulation signals are AM and FM signals, respectively, without channel coding.
S102, constructing signal sample data by using the digital modulation signals and the analog modulation signals, and dividing the signal sample data into a training sample set, a verification sample set and a test sample set according to a preset proportion.
Specifically, it can be designed to include the following: for information points of each modulation signal, continuously collecting m information points to form a signal sample by taking n information points as sampling intervals; each modulation signal collects M signal samples, and a signal sample data set is built according to the signal samples of all modulation signals; and extracting sample data in each type of modulation coded signals from the signal sample data set according to a preset proportion to form a training sample set, a verification sample set and a test sample set, wherein M, n and M are preset thresholds.
In practical application, 10 information points can be used as the sampling interval, with 1024 information points continuously collected each time to form a signal sample. For each modulation-coding signal, signals are collected from -8 dB to 20 dB at 2 dB intervals, with 1000 samples collected at each signal-to-noise ratio, i.e. 15000 samples per modulation-coding signal; all signals together form a signal sample set of 450000 samples. From each type of modulation-coding signal, 60% of the 1000 samples at each signal-to-noise ratio are randomly extracted into the training set, 20% of the remaining 40% are then randomly extracted into the verification sample set, and the final 20% are added to the test sample set, increasing the randomness of the sample data and improving the training quality of the network model.
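The per-SNR 60%/20%/20% extraction described above can be sketched as follows (illustrative numpy code for one SNR bucket; function and variable names are assumptions, not from the patent):

```python
import numpy as np

def split_per_snr(samples_per_snr=1000, ratios=(0.6, 0.2, 0.2), seed=0):
    """Randomly split the sample indices of one SNR bucket into
    train/validation/test index arrays in the given proportions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(samples_per_snr)       # shuffle for randomness
    n_train = int(ratios[0] * samples_per_snr)
    n_val = int(ratios[1] * samples_per_snr)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

train, val, test = split_per_snr()
print(len(train), len(val), len(test))  # 600 200 200
```

Repeating this over all 30 modulation-coding types and all 15 SNR points yields the 270000/90000/90000 overall split implied by the 60/20/20 proportions.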
S103, constructing a multichannel convolution gating depth attention network model for modulation and coding joint identification, and training and optimizing the multichannel convolution gating depth attention network model by using a training sample set, a verification sample set and a test sample set.
Specifically, the structure of the constructed multichannel convolution gating deep attention network model is shown in fig. 2, and comprises: the device comprises a multi-channel convolution module for extracting a feature map from input data through multi-channel convolution operation, a dimension-increasing convolution module for carrying out dimension-increasing convolution on the extracted feature map to extract target shapes and details in the feature map, a gating circulation classification module for capturing deep connection dimension information and classifying output modulation categories, and an extrusion excitation module for adaptively recalibrating channel weights by utilizing interdependencies among channel weight vectors to explicitly model channels.
The multi-channel convolution module extracts a feature map from input data through multi-channel convolution operation, and the feature map comprises: and respectively convolving the I channel signal, the Q channel signal and the I/Q signal in the input data by using the three convolution layers, and splicing the outputs of the three convolution layers according to the input dimension so as to obtain a characteristic diagram of the input data.
The 3 one-dimensional convolution layers (ConvI, ConvQ, ConvI/Q) may each be followed by a ReLU activation function. The connection structure of the multichannel convolution module can be described as: I channel of the normalized I/Q signal → ConvI → Concat; Q channel of the normalized I/Q signal → ConvQ → Concat; normalized I/Q signal → ConvI/Q → Concat; Concat → SE block1.
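A rough numpy stand-in for this three-branch convolve-and-concatenate structure is sketched below (kernel values are random placeholders, the combined I/Q branch is reduced to a 1-D toy input, and the kernel counts 8/8/16 follow the parameter list given later; a real model would use trained network layers):

```python
import numpy as np

def conv1d(x, kernels):
    """'Same'-padded 1D convolution of signal x with each kernel,
    stacked along a channel axis (toy stand-in for ConvI/ConvQ/ConvI-Q)."""
    return np.stack([np.convolve(x, k, mode="same") for k in kernels])

rng = np.random.default_rng(0)
i_sig, q_sig = rng.standard_normal(1024), rng.standard_normal(1024)
iq = np.stack([i_sig, q_sig]).mean(axis=0)   # toy 1-D combined I/Q input

feat_i = conv1d(i_sig, rng.standard_normal((8, 3)))    # ConvI: 8 kernels
feat_q = conv1d(q_sig, rng.standard_normal((8, 3)))    # ConvQ: 8 kernels
feat_iq = conv1d(iq, rng.standard_normal((16, 3)))     # ConvI/Q: 16 kernels
feature_map = np.concatenate([feat_i, feat_q, feat_iq], axis=0)
print(feature_map.shape)  # (32, 1024) -> 8 + 8 + 16 channels
```

The concatenated 32-channel feature map is what SE block1 would then reweight channel by channel.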
The up-dimension convolution module performs up-dimension convolution on the extracted feature map as follows: three serially connected convolution units progressively upsample the input low-resolution feature map in sequence to output a high-resolution feature map, where each convolution unit comprises a one-dimensional convolution layer, a random inactivation (dropout) layer, a batch normalization layer and a ReLU activation function.
The connection structure of the up-dimension convolution module can be described as: SE block1 → convolution unit 1 → convolution unit 2 → convolution unit 3 → SE block2, where SE block1 and SE block2 are two extrusion excitation modules.
The gating cycle classification module captures deep-connection dimension information and classifies and outputs modulation categories as follows: a gated recurrent unit (GRU) combines the current input with the hidden state passed from the previous node to produce the current hidden node output and the hidden state passed to the next node, capturing deep-connection dimension information of the feature map through gated state updates; the feature maps are then classified by the fully connected output layer to obtain the corresponding signal modulation categories.
The gating cycle classification module can be formed by sequentially connecting a gated recurrent unit (GRU1), a random inactivation (dropout) layer, a gated recurrent unit (GRU2) and a fully connected output layer, where the activation function of the fully connected output layer is Softmax.
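The gate-update behaviour described above can be sketched as a single numpy GRU step (these are the generic textbook GRU equations; weight names and sizes are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: combine current input x with the hidden state
    h_prev passed from the previous node, returning the new hidden state."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h_prev)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate hidden state
    return (1 - z) * h_prev + z * h_cand           # gated state update

rng = np.random.default_rng(0)
d_in, d_h = 4, 8                                   # toy sizes (the model uses 256 hidden units)
ws = [rng.standard_normal((d_h, d)) for d in (d_in, d_h, d_in, d_h, d_in, d_h)]
h = gru_step(rng.standard_normal(d_in), np.zeros(d_h), *ws)
print(h.shape)  # (8,)
```

Iterating this step over the feature-map sequence is what lets GRU1/GRU2 accumulate the deep-connection dimension information before the Softmax output layer classifies it.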
For the output feature vector X ∈ R^(L×C) of a layer of the neural network, the extrusion excitation module generates a channel statistics vector Z ∈ R^(1×C) through the global average pooling operation F_sq(·), where L is the length of the feature vector and C is the number of channels of the feature vector; the i-th element z_i of Z is the average of the i-th channel of X, i.e. z_i = F_sq(x_i) = (1/L) Σ_{j=1}^{L} x_i(j).
Z is then transformed by the excitation operation F_ex(·, W) shown below to generate the weight vector S for the channels of the sequence feature X, where δ is the ReLU function and σ is the Sigmoid activation function. Each value in the weight vector represents the importance, learned by the SE block, of the corresponding channel of X for the classification task; a larger value indicates that the channel is more useful for the classification task.
S = F_ex(Z, W) = σ(g(Z, W)) = σ(W2 δ(W1 Z))
After obtaining the weight vector S of the channels of the sequence feature X, S is used to recalibrate the channel feature weights of X through multiplication, generating a new feature vector X̃ whose i-th element is x̃_i = s_i · x_i, where S = [s1, s2, ..., sC]. Compared with X, the weight of each channel in X̃ is recalibrated according to its degree of contribution to the classification task, so that X̃ is more targeted toward the final classification task.
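A minimal numpy sketch of this squeeze-excitation recalibration follows (the shapes and the r=4 reduction mirror the description above; the weights here are random placeholders rather than trained parameters):

```python
import numpy as np

def se_block(X, W1, W2):
    """Squeeze-excitation recalibration for X of shape (L, C):
    global average pooling -> sigmoid(W2 @ relu(W1 @ z)) -> channel reweighting."""
    z = X.mean(axis=0)                            # squeeze: channel statistics Z, shape (C,)
    s = 1.0 / (1.0 + np.exp(-(W2 @ np.maximum(W1 @ z, 0.0))))  # excitation: S = sigma(W2 d(W1 Z))
    return X * s                                  # recalibrate: x~_i = s_i * x_i per channel

rng = np.random.default_rng(0)
L, C, r = 32, 16, 4                               # r=4 reduction, as in SE block1/SE block2
X = rng.standard_normal((L, C))
W1 = rng.standard_normal((C // r, C))             # first fully connected layer (squeeze to C/r)
W2 = rng.standard_normal((C, C // r))             # second fully connected layer (restore to C)
X_new = se_block(X, W1, W2)
print(X_new.shape)  # (32, 16)
```

Because every s_i lies in (0, 1), each channel of X is scaled down in proportion to its learned importance, which is exactly the recalibration the formula expresses.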
In a specific application, parameters of each module in the network model can be designed as follows:
the number of convolution kernels of the one-dimensional convolution layer ConvI is 8, and the convolution kernel size is 3.
The number of convolution kernels of the one-dimensional convolution layer ConvQ is 8, and the convolution kernel size is 3.
The number of convolution kernels of the one-dimensional convolution layer ConvI/Q is 16, and the convolution kernel size is 3.
R=4 is set in SE block1.
The number of convolution kernels of the one-dimensional convolution layer in the convolution unit 1 is 160, the convolution kernel size is 7, and the convolution step size is 4.
The number of convolution kernels of the one-dimensional convolution layer in the convolution unit 2 is 200, the convolution kernel size is 7, and the convolution step size is 4.
The number of convolution kernels of the one-dimensional convolution layer in the convolution unit 3 is 256, the convolution kernel size is 5, and the convolution step size is 2.
R=4 is set in SE block2.
The number of hidden units of GRU1 is 256, returning the entire output sequence.
The number of hidden units of GRU2 is 256, returning only the last output of the output sequence.
The fully connected output layer has 30 units, corresponding to the number of output categories.
When the training set is used to train the multichannel convolution gating deep attention network model, an Adam optimizer can be selected to optimize the network, with the initial learning rate set to 0.001, 512 samples per training batch, and a maximum of 100 training epochs over the whole training set. The model is validated on the validation set after each epoch and, with the validation loss as the reference, training is stopped when the validation loss has not decreased for 20 consecutive epochs.
The training samples and verification samples are input into the multichannel convolution gating deep attention network model to train it; when the maximum number of training epochs is reached or the early-stopping condition is met, training of the neural network is complete and the trained multichannel convolution gating deep attention network model is obtained. The test sample set can then be input into the trained network to obtain recognition results, which are compared with the true categories to evaluate the recognition accuracy of the network model.
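The training configuration above (Adam at learning rate 0.001, batch size 512, at most 100 epochs, early stopping on validation loss with patience 20) can be sketched in Keras as follows. The toy model and random data below stand in for the real network and sample sets, and the epoch count is shortened so the sketch runs quickly:

```python
# Training-loop sketch for the configuration described in the text.
# The stated values (epochs=100) are noted in comments; toy data is used here.
import numpy as np
import tensorflow as tf

x_train = np.random.randn(512, 128, 2).astype("float32")
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 30, 512), 30)
x_val = np.random.randn(128, 128, 2).astype("float32")
y_val = tf.keras.utils.to_categorical(np.random.randint(0, 30, 128), 30)

model = tf.keras.Sequential([          # placeholder for the real network
    tf.keras.layers.Input((128, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(30, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Stop when validation loss has not improved for 20 consecutive epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=20,
                                              restore_best_weights=True)
history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                    batch_size=512, epochs=5,  # 100 in the text; shortened here
                    callbacks=[early_stop], verbose=0)
```

`restore_best_weights=True` returns the weights from the epoch with the lowest validation loss rather than the last epoch, which matches the intent of validating after every round.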
S104, inputting the signal to be identified into the trained and optimized multichannel convolution gating deep attention network model, and obtaining the modulation coding type of the signal to be identified by using the trained and optimized multichannel convolution gating deep attention network model.
Further, based on the above method, an embodiment of the present invention further provides a modulation and coding joint identification system based on a multichannel attention network, comprising: a data simulation module, a data processing module, a model construction module and a target identification module, wherein,
the data simulation module is used for simulating a communication environment using randomly generated radio information sequences, coding and modulating the information sequences, and generating digital modulation signals and analog modulation signals;
the data processing module is used for utilizing the digital modulation signals and the analog modulation signals to construct signal sample data and dividing the signal sample data into a training sample set, a verification sample set and a test sample set according to a preset proportion;
the model construction module is used for constructing a multichannel convolution gating depth attention network model for modulation and coding joint identification and training and optimizing the multichannel convolution gating depth attention network model by utilizing a training sample set, a verification sample set and a test sample set;
the target recognition module is used for inputting the signals to be recognized into the multi-channel convolution gating depth attention network model after training optimization, and obtaining the modulation coding types of the signals to be recognized by utilizing the multi-channel convolution gating depth attention network model after training optimization.
To verify the validity of this scheme, further explanation is given below in connection with experimental data:
The simulation experiments were implemented on an NVIDIA Quadro RTX 6000 with Keras 2.6.0 and TensorFlow-GPU 2.4.0, covering generation of the modulated signals and training of the multichannel convolution gating deep attention network in this embodiment. The experiments were completed according to steps (1) to (5) shown in FIG. 3, yielding the validation-loss trend during training of the multichannel convolution gating deep attention network and the recognition rate of the modulation and coding joint recognition method based on it.
FIG. 4 illustrates the validation-loss variation during network training; the curve shows the validation loss gradually decreasing, converging and stabilizing, indicating that the training effect of the simulation experiment improves as training proceeds. As can be seen from the recognition-rate results of the simulation experiment in FIG. 5, the recognition rate gradually increases and stabilizes as the signal-to-noise ratio increases, with a peak recognition rate of 93.7%. The model's inference speed was recorded as 0.024 milliseconds per sample.
The above experimental data show that this scheme can complete the automatic modulation and coding joint identification task, is effective and feasible, and is applicable to practical scenarios such as signal reconnaissance and/or defense and electronic countermeasures.
Unless specifically stated otherwise, the relative arrangement of the components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present invention.
In the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and identical or similar parts of the embodiments may be referred to one another. Since the system disclosed in an embodiment corresponds to the method disclosed therein, its description is relatively brief, and the relevant points can be found in the description of the method.
The elements and method steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or a combination thereof, and the elements and steps of the examples have been generally described in terms of functionality in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Those of ordinary skill in the art may implement the described functionality using different methods for each particular application, but such implementation is not considered to be beyond the scope of the present invention.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the above methods may be performed by a program that instructs associated hardware, and that the program may be stored on a computer readable storage medium, such as: read-only memory, magnetic or optical disk, etc. Alternatively, all or part of the steps of the above embodiments may be implemented using one or more integrated circuits, and accordingly, each module/unit in the above embodiments may be implemented in hardware or may be implemented in a software functional module. The present invention is not limited to any specific form of combination of hardware and software.
Finally, it should be noted that the above examples are only specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions. Although the invention has been described in detail with reference to the foregoing examples, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features equivalently substituted, and that such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A modulation and coding joint identification method based on a multichannel attention network, comprising:
simulating a communication environment using randomly generated radio information sequences, coding and modulating the information sequences, and generating digital modulation signals and analog modulation signals;
the method comprises the steps of utilizing a digital modulation signal and an analog modulation signal to construct signal sample data, and dividing the signal sample data into a training sample set, a verification sample set and a test sample set according to a preset proportion;
constructing a multichannel convolution gating depth attention network model for modulation and coding joint identification, and training and optimizing the multichannel convolution gating depth attention network model by using a training sample set, a verification sample set and a test sample set;
inputting the signal to be identified into a multi-channel convolution gating depth attention network model after training optimization, and obtaining the modulation coding type of the signal to be identified by using the multi-channel convolution gating depth attention network model after training optimization.
2. The modulation and coding joint recognition method based on a multichannel attention network according to claim 1, wherein simulating a communication environment using randomly generated radio information sequences, coding and modulating the information sequences, and generating digital modulation signals and analog modulation signals comprises:
firstly, randomly generating a binary (0/1) radio information sequence using a random function;
then, simulating the communication environment and adding noise interference during the coding, modulation and sampling of the radio information sequence to obtain an IQ sampling sequence of the modulated signal, wherein the noise interference comprises: additive Gaussian noise and Rayleigh fading channel model effects;
finally, for the IQ sampling sequence, generating, according to the digital modulation signal code rate, a plurality of digital modulation signals combining each channel coding and modulation scheme, together with analog modulation signals without channel coding.
3. The modulation and coding joint recognition method based on the multichannel attention network according to claim 1 or 2, wherein the digital modulation signals comprise 28 digital modulation signals formed by combining four channel codes of an LDPC code, an RS code, a TCM code and a TPC code and seven modulation modes of 2FSK, GMSK, BPSK, QPSK, 8PSK, 8APSK and 16 APSK; the analog modulated signal comprises AM and FM signals without channel coding.
4. The modulation and coding joint recognition method based on a multi-channel attention network according to claim 1, wherein the constructing signal sample data using the digital modulation signal and the analog modulation signal and dividing the signal sample data into a training sample set, a verification sample set and a test sample set according to a preset ratio comprises: for information points of each modulation signal, continuously collecting m information points to form a signal sample by taking n information points as sampling intervals; each modulation signal collects M signal samples, and a signal sample data set is built according to the signal samples of all modulation signals; and extracting sample data in each type of modulation coded signals from the signal sample data set according to a preset proportion to form a training sample set, a verification sample set and a test sample set, wherein M, n and M are preset thresholds.
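The sample-construction rule in claim 4 (starting every n information points, collect m consecutive points as one sample, gathering M samples per modulated signal) can be sketched in NumPy as follows; the values of m, n and M here are illustrative, since the claim leaves them as preset thresholds:

```python
# Sketch of the sample-construction rule: M windows of m points, stride n.
import numpy as np

def make_samples(iq, m, n, M):
    """iq: (2, L) I/Q stream; returns (M, 2, m) signal samples."""
    # requires L >= (M - 1) * n + m so every window fits in the stream
    return np.stack([iq[:, k * n : k * n + m] for k in range(M)])

iq = np.random.default_rng(1).standard_normal((2, 10000))  # toy I/Q stream
samples = make_samples(iq, m=128, n=64, M=100)
print(samples.shape)   # (100, 2, 128)
```

With n < m, as here, consecutive samples overlap, which increases the number of training samples that can be drawn from a signal of fixed length.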
5. The modulation and coding joint recognition method based on the multichannel attention network according to claim 1, wherein the constructed multichannel convolution gating depth attention network model comprises: the device comprises a multi-channel convolution module for extracting a feature map from input data through multi-channel convolution operation, a dimension-increasing convolution module for carrying out dimension-increasing convolution on the extracted feature map to extract target shapes and details in the feature map, a gating circulation classification module for capturing deep connection dimension information and classifying output modulation categories, and an extrusion excitation module for adaptively recalibrating channel weights by utilizing interdependencies among channel weight vectors to explicitly model channels.
6. The modulation and coding joint recognition method based on the multichannel attention network according to claim 5, wherein the multichannel convolution module extracts a feature map from input data through a multichannel convolution operation, and the method comprises the following steps: and respectively convolving the I channel signal, the Q channel signal and the I/Q signal in the input data by using the three convolution layers, and splicing the outputs of the three convolution layers according to the input dimension so as to obtain a characteristic diagram of the input data.
7. The modulation and coding joint recognition method based on the multichannel attention network according to claim 5, wherein the up-dimension convolution module performs up-dimension convolution on the extracted feature map, and the method comprises: and gradually up-sampling the input low-resolution feature map by using three serially connected convolution units in sequence to output a high-resolution feature map, wherein the convolution units comprise a one-dimensional convolution layer, a random inactivation layer, a batch normalization layer and a ReLU activation function.
8. The modulation and coding joint recognition method based on the multichannel attention network according to claim 5, wherein the gating loop classification module captures deep connection dimension information and classifies output modulation classes, comprising: using a gated recurrent unit (GRU) to combine the current input with the hidden state passed from the previous node, outputting the current hidden node's output and passing a hidden state to the next node, so as to capture the deep connection dimension information of the feature map through gated state updates; and classifying the feature map using a fully connected output layer to obtain the corresponding signal modulation category.
9. The modulation and coding joint recognition method based on the multichannel attention network according to claim 5, wherein the squeeze excitation module explicitly models the channels using the interdependencies between channel weight vectors so as to adaptively recalibrate the channel weights, the process being expressed as: x̃_i = s_i·x_i, wherein x̃_i is the i-th element of the new feature vector X̃, s_i is the i-th element of the channel weight vector S of the input feature vector X, x_i is the i-th element of X, and S = σ(W_2·δ(W_1·Z)), where Z is the channel statistics vector generated from the output feature vector of the previous network layer by a global average pooling operation, δ is the ReLU function, σ is the Sigmoid activation function, W_1 is the weight parameter of the first fully connected layer in the squeeze excitation module, and W_2 is the weight parameter of the second fully connected layer in the squeeze excitation module.
10. A modulation and coding joint recognition system based on a multichannel attention network, comprising: a data simulation module, a data processing module, a model construction module and a target identification module, wherein,
the data simulation module is used for simulating a communication environment using randomly generated radio information sequences, coding and modulating the information sequences, and generating digital modulation signals and analog modulation signals;
the data processing module is used for utilizing the digital modulation signals and the analog modulation signals to construct signal sample data and dividing the signal sample data into a training sample set, a verification sample set and a test sample set according to a preset proportion;
the model construction module is used for constructing a multichannel convolution gating depth attention network model for modulation and coding joint identification and training and optimizing the multichannel convolution gating depth attention network model by utilizing a training sample set, a verification sample set and a test sample set;
the target recognition module is used for inputting the signals to be recognized into the multi-channel convolution gating depth attention network model after training optimization, and obtaining the modulation coding types of the signals to be recognized by utilizing the multi-channel convolution gating depth attention network model after training optimization.
CN202310704966.2A 2023-06-14 2023-06-14 Modulation and coding joint identification method and system based on multichannel attention network Pending CN116896492A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310704966.2A CN116896492A (en) 2023-06-14 2023-06-14 Modulation and coding joint identification method and system based on multichannel attention network

Publications (1)

Publication Number Publication Date
CN116896492A true CN116896492A (en) 2023-10-17

Family

ID=88310055

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310704966.2A Pending CN116896492A (en) 2023-06-14 2023-06-14 Modulation and coding joint identification method and system based on multichannel attention network

Country Status (1)

Country Link
CN (1) CN116896492A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117375770A (en) * 2023-11-08 2024-01-09 中国人民解放军军事科学院系统工程研究院 Channel code identification method and device
CN117375770B (en) * 2023-11-08 2024-03-22 中国人民解放军军事科学院系统工程研究院 Channel code identification method and device

Similar Documents

Publication Publication Date Title
CN110855591B (en) QAM and PSK signal intra-class modulation classification method based on convolutional neural network structure
CN113630130B (en) End-to-end digital communication demodulation method
Zhang et al. A data preprocessing method for automatic modulation classification based on CNN
CN112347871B (en) Interference signal modulation identification method for communication carrier monitoring system
CN116896492A (en) Modulation and coding joint identification method and system based on multichannel attention network
CN113542171B (en) Modulation pattern recognition method and system based on CNN and combined high-order spectrum image
CN113298031B (en) Signal modulation identification method and system considering signal physical and time sequence characteristics
CN113205140B (en) Semi-supervised specific radiation source individual identification method based on generative countermeasure network
Smith et al. A communication channel density estimating generative adversarial network
CN114422311A (en) Signal modulation identification method and system combining deep neural network and expert prior characteristics
Almohamad et al. Dual-determination of modulation types and signal-to-noise ratios using 2D-ASIQH features for next generation of wireless communication systems
CN114615118A (en) Modulation identification method based on multi-terminal convolution neural network
CN116132235B (en) Continuous phase modulation signal demodulation method based on deep learning
CN116319210A (en) Signal lightweight automatic modulation recognition method and system based on deep learning
Sang et al. Deep learning based predictive power allocation for V2X communication
Kalade et al. Using sequence to sequence learning for digital bpsk and qpsk demodulation
CN113869227B (en) Signal modulation mode identification method, device, equipment and readable storage medium
CN116405158A (en) End-to-end communication system based on deep learning under non-Gaussian noise
Kumar et al. Hardware implementation of automatic modulation classification with deep learning
Hu et al. Signal Classification in Real-time Based on SDR using Convolutional Neural Network
Zou et al. [Retracted] Automatic Modulation and Recognition of Robot Communication Signal Based on Deep Learning Neural Network
Tunze et al. Multi-shuffled convolutional blocks for low-complex modulation recognition
CN117614467B (en) Underwater sound signal intelligent receiving method based on noise reduction neural network
CN117978588A (en) End-to-end system for optimizing pilot frequency position and channel estimation based on deep learning
CN115277324B (en) FSK signal identification method based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination