CN114548201B - Automatic modulation identification method and device for wireless signal, storage medium and equipment


Info

Publication number
CN114548201B
CN114548201B
Authority
CN
China
Prior art keywords
layer
node
result
feature map
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111348246.4A
Other languages
Chinese (zh)
Other versions
CN114548201A (en)
Inventor
段瑞枫
李欣泽
张海燕
赵元琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Forestry University
Original Assignee
Beijing Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Forestry University
Priority to CN202111348246.4A
Publication of CN114548201A
Application granted
Publication of CN114548201B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The application discloses an automatic modulation identification method, device, storage medium and equipment for wireless signals, belonging to the technical field of communication. The method comprises the following steps: inputting a sample set into a created neural network model, wherein each sample in the sample set comprises a wireless signal, a signal-to-noise ratio, channel information and an actual modulation type; for each sample, performing feature extraction and dimension reduction with the first convolution layer; generating feature maps of different resolutions from the resulting first feature map with a dense skip-connection mechanism and fusing them, then multiplicatively weighting the fused feature map on each channel with a squeeze-excitation mechanism to obtain a second feature map recalibrated in the channel dimension; reducing the dimension of the second feature map with the second convolution layer; classifying the resulting third feature map with a fully connected layer to obtain a predicted modulation type; and adjusting the model parameters according to the predicted and actual modulation types. The method and device can obtain accurate signal phase and amplitude information.

Description

Automatic modulation identification method and device for wireless signal, storage medium and equipment
Technical Field
The embodiments of the application relate to the technical field of communication, and in particular to an automatic modulation identification method, device, storage medium and equipment for wireless signals.
Background
With the development of 5G (5th Generation, the fifth-generation mobile communication system) and Internet of Things technology, spectrum resources are increasingly strained, wireless communication channels are becoming more complex, and modulation schemes are increasingly diverse, so that signal parameter estimation before information recovery has become an essential component. Automatic modulation identification is a technique for obtaining the modulation scheme and parameters of a received unknown wireless signal by automatic processing. Traditional automatic modulation identification of wireless signals mainly comprises likelihood-ratio-based hypothesis testing and feature-extraction-based pattern recognition, but both have limitations and cannot adapt to the rapid change, diverse modulation types, lightweight deployment and fast response that characterize modern wireless communication.
Cognitive radio can improve spectrum utilization: once automatic modulation identification is accomplished, the spectrum can be shared between primary and secondary users, which makes channel conditions more complex and modulation types more numerous, posing new challenges for automatic modulation identification. With the rapid improvement of computer hardware and the arrival of the big-data era, deep learning has surged in popularity, and the task of automatic modulation identification of wireless signals aligns well with its strengths. Deep-learning-based automatic modulation identification effectively reduces computational complexity, improves model generalization, enhances robustness, and allows rapid deployment of adaptive modules; it outperforms traditional methods to a certain extent, better fits practical application environments, and is the major trend in current automatic modulation identification technology. It is therefore necessary to find a deep-learning-based automatic modulation identification method for wireless signals under mixed noise.
Disclosure of Invention
The embodiments of the application provide an automatic modulation identification method, device, storage medium and equipment for wireless signals. Through a neural network model with a dense skip-connection mechanism and a squeeze-excitation module, features at low resolution can be obtained, the information loss caused by downsampling is mitigated, and more accurate signal phase and amplitude information is obtained. The technical scheme is as follows:
in one aspect, a method for automatic modulation identification of a wireless signal is provided, the method comprising:
acquiring a sample set, wherein samples in the sample set are training samples or testing samples, and each sample comprises a wireless signal, a signal-to-noise ratio, channel information and an actual modulation type;
inputting the sample set into a created neural network model, wherein the neural network model comprises a first convolution layer, a core layer, a second convolution layer and a fully connected layer which are sequentially connected, and the core layer comprises a dense skip-connection mechanism and a squeeze-excitation mechanism;
for each sample, performing feature extraction and dimension reduction on the sample by using the first convolution layer to obtain a first feature map; generating feature maps with different resolutions from the first feature map by using the dense skip-connection mechanism and fusing them, and multiplicatively weighting the fused feature map on each channel by using the squeeze-excitation mechanism to obtain a second feature map recalibrated in the channel dimension; reducing the dimension of the second feature map by using the second convolution layer to obtain a third feature map; classifying the third feature map by using the fully connected layer to obtain a predicted modulation type of the wireless signal; and adjusting the model parameters of the neural network model according to the predicted modulation type and the actual modulation type to obtain the trained neural network model.
In a possible implementation manner, if the core layer comprises coding nodes, intermediate nodes and decoding nodes, then generating and fusing feature maps with different resolutions from the first feature map by using the dense skip-connection mechanism, and multiplicatively weighting the fused feature map on each channel by using the squeeze-excitation mechanism to obtain a second feature map recalibrated in the channel dimension, comprises:
downsampling the first feature map at different depths by using the coding nodes of different layers;
utilizing an intermediate node to fuse the downsampling result of the coding node of the same layer with the upsampling result of the coding node or intermediate node of the layer below; or utilizing an intermediate node to fuse the downsampling result of the coding node of the same layer, the fusion result of the preceding intermediate node of the same layer, and the upsampling result of the intermediate node of the layer below;
and utilizing a decoding node to perform upsampling, squeeze-excitation and channel alignment on the downsampling result of the coding node of the same layer; or utilizing a decoding node to perform upsampling, feature fusion, squeeze-excitation and channel alignment on the downsampling result of the coding node of the same layer, the fusion result of the preceding intermediate node of the same layer, and the fusion result of the decoding node of the layer below.
In a possible implementation manner, when the core layer is a three-layer structure comprising three coding nodes, three intermediate nodes and three decoding nodes, generating and fusing feature maps with different resolutions from the first feature map by using the dense skip-connection mechanism, and multiplicatively weighting the fused feature map on each channel by using the squeeze-excitation mechanism to obtain a second feature map recalibrated in the channel dimension, comprises:
the coding node of the first layer downsamples the first feature map and sends the first downsampling result to the coding node of the second layer, the first intermediate node of the first layer, the second intermediate node of the first layer and the decoding node of the first layer; the coding node of the second layer downsamples the first downsampling result again, sends the second downsampling result to the coding node of the third layer, the intermediate node of the second layer and the decoding node of the second layer, and sends the upsampled second downsampling result to the first intermediate node of the first layer; the coding node of the third layer downsamples the second downsampling result again, sends the third downsampling result to the decoding node of the third layer, and sends the upsampled third downsampling result to the intermediate node of the second layer;
the intermediate node of the second layer fuses the second downsampling result with the upsampled third downsampling result, upsamples the fusion result and sends it to the second intermediate node of the first layer; the intermediate node of the second layer also sends the fusion of the second downsampling result and the upsampled third downsampling result to the decoding node of the second layer; the first intermediate node of the first layer fuses the first downsampling result with the upsampled second downsampling result and sends the result to the second intermediate node of the first layer; the second intermediate node of the first layer fuses the first downsampling result, the fusion result sent by the first intermediate node of the first layer and the upsampling result sent by the intermediate node of the second layer, and sends the result to the decoding node of the first layer;
the decoding node of the third layer sends the upsampled third downsampling result to the decoding node of the second layer; the decoding node of the second layer fuses the second downsampling result, the fusion result sent by the intermediate node of the second layer and the upsampling result sent by the decoding node of the third layer, processes the result with the squeeze-excitation mechanism and sends it to the decoding node of the first layer; and the decoding node of the first layer fuses the first downsampling result, the fusion results sent by the first and second intermediate nodes of the first layer and the upsampling result sent by the decoding node of the second layer, processes the result with the squeeze-excitation mechanism and sends it to the second convolution layer.
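The three-layer wiring described above can be sketched structurally as follows. This is a hedged NumPy illustration only: downsampling and upsampling are stand-ins (stride-2 subsampling and nearest-neighbour repeat), "fusion" is channel-wise concatenation, and the per-node convolutions, squeeze-excitation and channel alignment are omitted; all shapes are illustrative, not patent values.

```python
import numpy as np

def down(x):      # stand-in for downsampling: halve the length
    return x[:, ::2]

def up(x):        # stand-in for upsampling: double the length
    return np.repeat(x, 2, axis=1)

def fuse(*maps):  # feature fusion as channel-wise concatenation
    return np.concatenate(maps, axis=0)

x = np.random.default_rng(1).standard_normal((4, 64))  # first feature map

e1 = down(x)                      # coding node, layer 1
e2 = down(e1)                     # coding node, layer 2
e3 = down(e2)                     # coding node, layer 3

m11 = fuse(e1, up(e2))            # first intermediate node, layer 1
m2  = fuse(e2, up(e3))            # intermediate node, layer 2
m12 = fuse(e1, m11, up(m2))       # second intermediate node, layer 1

d3 = e3                           # decoding node, layer 3
d2 = fuse(e2, m2, up(d3))         # decoding node, layer 2 (then SE + alignment)
d1 = fuse(e1, m11, m12, up(d2))   # decoding node, layer 1 (then SE + alignment)
assert d1.shape[1] == e1.shape[1] # all layer-1 fusions share layer-1 resolution
```

Tracing the shapes shows how every fusion happens at matched resolution, which is the point of the dense skip connections.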
In a possible implementation manner, the squeeze-excitation mechanism comprises a global pooling layer, a first fully connected layer, a ReLU function, a second fully connected layer, a Sigmoid activation function and a Scale function. Multiplicatively weighting the fused feature map on each channel by using the squeeze-excitation mechanism to obtain a second feature map recalibrated in the channel dimension then comprises:
reducing the dimension of the fused feature map by using the global pooling layer;
forming a bottleneck with the first fully connected layer, the ReLU function and the second fully connected layer, and exciting the dimension-reduced feature map through the bottleneck to obtain a value for each channel;
normalizing the value of each channel by using the Sigmoid activation function;
and multiplicatively weighting the fused feature map by the normalized value of the corresponding channel using the Scale function to obtain the second feature map.
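The squeeze-excitation steps above can be sketched in a few lines of NumPy for a one-dimensional feature map of shape (channels, length). The weight shapes and the reduction ratio r are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Hedged sketch of squeeze-excitation: global pooling, FC + ReLU, FC +
# Sigmoid, then per-channel multiplicative scaling.
def squeeze_excite(x, w1, w2):
    s = x.mean(axis=1)                    # squeeze: global average pooling -> (c,)
    z = np.maximum(0.0, w1 @ s)           # first FC + ReLU (bottleneck of c/r units)
    e = 1.0 / (1.0 + np.exp(-(w2 @ z)))   # second FC + Sigmoid -> per-channel weights
    return x * e[:, None]                 # Scale: multiplicative per-channel weighting

rng = np.random.default_rng(0)
c, r, length = 8, 2, 16
x = rng.standard_normal((c, length))      # fused feature map
w1 = rng.standard_normal((c // r, c))     # first fully connected layer
w2 = rng.standard_normal((c, c // r))     # second fully connected layer
y = squeeze_excite(x, w1, w2)             # feature map recalibrated per channel
assert y.shape == x.shape
```

Because the Sigmoid output lies in (0, 1), each channel is attenuated in proportion to its learned importance, which is the recalibration the text describes.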
In one possible implementation, the sample set is the radio machine-learning dataset, version 2018.10A, which contains 24 modulation types and 26 signal-to-noise-ratio values; at each signal-to-noise-ratio value, each modulation type contains 4096 pieces of two-channel IQ data, and each piece of data contains 2 × 1024 samples, wherein the 26 signal-to-noise-ratio values are drawn from the interval [-20 dB, 30 dB] at 2 dB intervals.
In one aspect, there is provided an automatic modulation identification method for wireless signals, which is used in the neural network model as described above, and includes:
acquiring input data, wherein the input data comprises a wireless signal to be identified, a signal-to-noise ratio of the wireless signal and channel information;
performing feature extraction and dimension reduction on the input data by using the first convolution layer to obtain a first feature map;
generating feature maps with different resolutions from the first feature map by using the dense skip-connection mechanism and fusing them, and multiplicatively weighting the fused feature map on each channel by using the squeeze-excitation mechanism to obtain a second feature map recalibrated in the channel dimension;
reducing the dimension of the second feature map by using the second convolution layer to obtain a third feature map;
and classifying the third feature map by using the fully connected layer to obtain the modulation type of the wireless signal.
In one aspect, an apparatus for identifying automatic modulation of a wireless signal is provided, the apparatus including:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a sample set, samples in the sample set are training samples or testing samples, and each sample comprises a wireless signal, a signal-to-noise ratio, channel information and an actual modulation type;
an input module, configured to input the sample set into a created neural network model, wherein the neural network model comprises a first convolution layer, a core layer, a second convolution layer and a fully connected layer which are sequentially connected, and the core layer comprises a dense skip-connection mechanism and a squeeze-excitation mechanism;
the training module is used for performing feature extraction and dimension reduction on each sample by using the first convolution layer to obtain a first feature map; generating feature maps with different resolutions for the first feature map by using the dense jump-connection mechanism, fusing, and performing multiplication weighting on the fused feature maps on each channel by using the compression excitation mechanism to obtain a second feature map calibrated on the channel dimension; reducing the dimension of the second feature map by using the second convolution layer to obtain a third feature map; classifying the third feature map by using the full connection layer to obtain a predicted modulation type of the wireless signal; and adjusting the model parameters of the neural network model according to the predicted modulation type and the actual modulation type to obtain the trained neural network model.
In one aspect, an apparatus for identifying an automatic modulation of a wireless signal is provided, which is used in a neural network model as described above, and includes:
the second acquisition module is used for acquiring input data, wherein the input data comprises a wireless signal to be identified, a signal-to-noise ratio of the wireless signal and channel information;
the identification module is used for performing feature extraction and dimension reduction on the input data by using the first convolution layer to obtain a first feature map;
the identification module is further configured to generate feature maps with different resolutions for the first feature map by using the dense jump-join mechanism, perform fusion, and perform multiplication weighting on the fused feature maps on each channel by using the compression excitation mechanism to obtain a second feature map calibrated on a channel dimension;
the identification module is further configured to perform dimension reduction on the second feature map by using the second convolution layer to obtain a third feature map;
the identification module is further configured to classify the third feature map by using the full connection layer to obtain a modulation type of the wireless signal.
In one aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the automatic modulation recognition method for wireless signals as described above.
In one aspect, a computer device is provided, which includes a processor and a memory, wherein at least one instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the automatic modulation recognition method for wireless signals as described above.
The technical scheme provided by the embodiments of the application has at least the following beneficial effects:
in the continuous convolution process of the traditional convolution method, the receiving domain is increased, the resolution is reduced, the feature details are partially lost, and the model precision is lost. The dense hop connection mechanism provided by the application can enable information with low resolution and high resolution to be shared, fused and extracted, and reduces information loss caused by down-sampling; introducing a redesigned compression excitation mechanism, deploying the compression excitation mechanism to a core layer decoding node part, recalibrating the weight of each channel through a self-adaptive attention mechanism, forcing a network to learn the importance of each channel from a feature diagram, and completing the recalibration of the original feature on the channel dimension by carrying out multiplication weighting on the feature diagram on each channel; meanwhile, a cross entropy loss function is adopted and combined with a SoftMax function classifier, so that the difference between the estimated value and the true value is better measured, a more accurate signal value is reconstructed, and the accuracy of model training is improved.
Because the sample set is RML2018.10A (the 2018.10A version of the radio machine-learning dataset), the model achieves its highest classification accuracy on the common Over-the-Air dataset, especially for high-order modulated signals, outperforming results on other signal datasets in the signal-to-noise ratio (SNR) range of -10 dB to 20 dB.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a flowchart of a method for automatic modulation identification of a wireless signal according to an embodiment of the present application;
FIG. 2 is a flow chart of a neural network model provided by an embodiment of the present application;
FIG. 3 is a flow chart of a core layer structure provided by one embodiment of the present application;
fig. 4 is a structural flowchart of the dense skip-connection mechanism provided by an embodiment of the present application;
FIG. 5 is a structural flowchart of the squeeze-excitation mechanism provided by one embodiment of the present application;
fig. 6 is a flowchart of a method for automatic modulation identification of a wireless signal according to another embodiment of the present application;
fig. 7 is a block diagram illustrating an apparatus for automatically identifying modulation of a wireless signal according to still another embodiment of the present application;
fig. 8 is a block diagram of an apparatus for automatic modulation recognition of a wireless signal according to still another embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for automatic modulation recognition of a wireless signal according to an embodiment of the present application is shown, where the method for automatic modulation recognition of a wireless signal can be applied to a computer device.
The automatic modulation identification method of the wireless signal can comprise the following steps:
step 101, a sample set is obtained, wherein samples in the sample set are training samples or testing samples, and each sample comprises a wireless signal, a signal-to-noise ratio, channel information and an actual modulation type.
The sample set is RML2018.10A (the 2018.10A version of the radio machine-learning dataset), also known as OTA (Over the Air); it can be used as channel input data to simulate a wireless communication environment.
The dataset contains 24 modulation types, which can be divided into digital and analog modulation. The digital modulations comprise: OOK (on-off keying), 4ASK (amplitude shift keying), 8ASK, BPSK (binary phase shift keying), QPSK (quadrature phase shift keying), OQPSK (offset quadrature phase shift keying), 8PSK (8-ary phase shift keying), 16PSK, 32PSK, 16APSK (amplitude and phase shift keying), 32APSK, 64APSK, 128APSK, 16QAM (quadrature amplitude modulation), 32QAM, 64QAM, 128QAM, 256QAM and GMSK (Gaussian minimum shift keying). The analog modulations comprise FM (frequency modulation), AM-SSB-WC (single-sideband amplitude modulation with carrier), AM-SSB-SC (single-sideband amplitude modulation with suppressed carrier), AM-DSB-WC (double-sideband amplitude modulation with carrier) and AM-DSB-SC (double-sideband amplitude modulation with suppressed carrier).
The dataset contains 26 SNR values, drawn from the range [-20 dB, 30 dB] at 2 dB intervals. At each SNR value, each modulation type contains 4096 pieces of two-channel IQ data, and each piece of data contains 2 × 1024 samples.
In this embodiment, the data in the dataset may be split into a training set and a test set at a ratio of 8:2, so that the training set contains 3300 pieces of data and the test set contains 796 pieces. The training samples in the training set are used to train the neural network model, and the test samples in the test set are used to test whether the neural network model meets the requirements. Whether a training sample or a test sample, each sample contains the wireless signal, the signal-to-noise ratio, the channel information and the actual modulation type.
It should be noted that data with too low or too high a signal-to-noise ratio disturbs the training of the neural network model and reduces its generalization ability, thereby degrading its classification performance; it is therefore preferable to train the neural network model with data from -10 dB to 20 dB, and then apply the trained model to data over the full signal-to-noise-ratio range.
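The dataset bookkeeping described above can be checked with a few lines of Python: the 26 SNR values, the -10 dB to 20 dB training subset, and the total frame count implied by the stated sizes (the constant names are illustrative, not keys from the actual dataset files).

```python
# 26 SNR values drawn from [-20 dB, 30 dB] at 2 dB intervals.
snr_values = list(range(-20, 31, 2))

# The -10 dB..20 dB subset suggested for training.
train_snrs = [s for s in snr_values if -10 <= s <= 20]

NUM_MODULATIONS = 24
FRAMES_PER_SNR = 4096   # two-channel IQ frames per modulation, per SNR
total_frames = NUM_MODULATIONS * len(snr_values) * FRAMES_PER_SNR

print(len(snr_values), len(train_snrs), total_frames)  # 26 16 2555904
```

So the full dataset holds 2,555,904 frames of shape 2 × 1024, of which the frames at 16 of the 26 SNR values would be used for training under this scheme.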
Step 102, inputting the sample set into a created neural network model, wherein the neural network model comprises a first convolution layer, a core layer, a second convolution layer and a fully connected layer which are sequentially connected, and the core layer comprises a dense skip-connection mechanism and a squeeze-excitation mechanism.
In this embodiment, a neural network model may first be constructed and the sample set then input into it for training and testing; the learning rate is adjusted over multiple iterations until the network model is stable, after which the optimal model parameters are selected and the neural network model is saved.
Because the processing is similar to that of RGB (red, green, blue) three-channel images in computer vision, and the IQ data is regarded as two-channel one-dimensional data of length 1024, this embodiment uses one-dimensional convolutions in the neural network model instead of the two-dimensional convolutions widely used in other deep-learning methods. The neural network model in this embodiment comprises a first convolution layer, a core layer, a second convolution layer and a fully connected layer which are sequentially connected, and the core layer comprises a dense skip-connection mechanism and a squeeze-excitation mechanism, as shown in fig. 2. The first convolution layer performs feature extraction and dimension reduction on the data; the core layer is the key part of the neural network model and comprises the dense skip-connection mechanism and the squeeze-excitation mechanism; the second convolution layer reduces the dimension of the data again; and the fully connected layer classifies the data.
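The four-stage pipeline described above can be sketched end to end with simple NumPy stand-ins, just to trace the shapes from the 2 × 1024 IQ input to the 24 class scores. All intermediate channel counts and the stride-2 "convolutions" are illustrative assumptions, not the patent's actual layer parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

x = rng.standard_normal((2, 1024))        # two-channel one-dimensional IQ input

def conv_reduce(x, out_ch):
    # Stand-in for a learned conv layer: channel mixing, stride-2
    # downsampling, and a ReLU. Real layers are 1-D convolutions.
    w = rng.standard_normal((out_ch, x.shape[0]))
    return np.maximum(0.0, w @ x[:, ::2])

f1 = conv_reduce(x, 16)                   # first convolution layer -> (16, 512)
core = f1                                 # core layer placeholder (keeps resolution)
f3 = conv_reduce(core, 8)                 # second convolution layer -> (8, 256)

w_fc = rng.standard_normal((24, f3.size)) # fully connected layer over 24 classes
scores = w_fc @ f3.ravel()
pred = int(np.argmax(scores))             # index of the predicted modulation type
assert f1.shape == (16, 512) and f3.shape == (8, 256) and 0 <= pred < 24
```

The trace makes the division of labour explicit: both convolution layers halve the temporal resolution, and only the final fully connected layer collapses the feature map into per-class scores.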
Step 103, for each sample, performing feature extraction and dimension reduction on the sample by using the first convolution layer to obtain a first feature map; generating feature maps with different resolutions from the first feature map by using the dense skip-connection mechanism and fusing them, and multiplicatively weighting the fused feature map on each channel by using the squeeze-excitation mechanism to obtain a second feature map recalibrated in the channel dimension; reducing the dimension of the second feature map by using the second convolution layer to obtain a third feature map; classifying the third feature map by using the fully connected layer to obtain a predicted modulation type of the wireless signal; and adjusting the model parameters of the neural network model according to the predicted modulation type and the actual modulation type to obtain the trained neural network model.
The first convolution layer and the second convolution layer are similar, shaped like the residual block in a residual neural network: each consists of two one-dimensional convolutions with a 1×3 kernel, two one-dimensional normalization functions and a rectified linear unit (ReLU). On this basis, three different convolution modules, Conv1, Conv2 and Conv3, are designed for different purposes, with kernels of 1×1, 3×1 and 3×2 respectively; the three kernels are used for channel alignment, feature-information extraction and dimension reduction.
The formula of the activation function, the rectified linear unit (ReLU), is as follows:

y = ReLU(z[i]) = max(0, z[i]) = { 0, z[i] < 0; z[i], z[i] ≥ 0 }   (1)

where z[i] represents the feature map value and y represents the activated value, which is written back into z[i].
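A minimal numerical sketch of formula (1), assuming nothing beyond NumPy, helps check the piecewise definition:

```python
import numpy as np

def relu(z):
    """Element-wise rectified linear unit: max(0, z[i]), per formula (1)."""
    return np.maximum(0.0, z)

z = np.array([-2.0, -0.5, 0.0, 0.5, 3.0])
y = relu(z)   # negative entries are zeroed, non-negative entries pass through
```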
In this embodiment, the first convolution layer may be used to perform channel alignment, feature extraction, and dimension reduction on the sample, and the obtained first feature map may be output to the core layer.
The core layer fuses high-resolution and low-resolution feature maps. Specifically, the core layer may include coding nodes, intermediate nodes, and decoding nodes. The first feature map is down-sampled at different depths by the coding nodes of the different layers; that is, each layer's coding node performs a convolution operation according to its depth. An intermediate node fuses the down-sampling result of the coding node of its own layer with the up-sampling result of a coding node or intermediate node of the layer below; or it fuses the down-sampling result of the coding node of its own layer, the fusion result of the previous intermediate node of the same layer, and the up-sampling result of an intermediate node of the layer below, using a 1×1 convolution kernel for channel alignment. A decoding node performs up-sampling, compression excitation, and channel alignment on the down-sampling result of the coding node of its own layer; or it performs up-sampling, feature fusion, compression excitation, and channel alignment on the down-sampling result of the coding node of its own layer, the fusion result of the previous intermediate node of the same layer, and the fusion result of the decoding node of the layer below, using a 1×1 convolution kernel for channel alignment. This feature-fusion scheme preserves information at both high and low resolutions more fully and reduces the information loss caused by down-sampling.
In practical applications, the number of layers of the dense jump connection mechanism may be set as required. In this embodiment the core layer has a three-layer structure, taken here as an example, and includes three coding nodes, three intermediate nodes, and three decoding nodes. The first layer comprises, in sequence, a coding node, two intermediate nodes, and a decoding node; the second layer comprises a coding node, an intermediate node, and a decoding node; and the third layer comprises a coding node and a decoding node, as shown in fig. 3. The interactions between these nodes are explained below.
The coding node of the first layer down-samples the first feature map and sends the first down-sampling result to the coding node of the second layer, the first intermediate node of the first layer, the second intermediate node of the first layer, and the decoding node of the first layer. The coding node of the second layer down-samples the first down-sampling result again, sends the second down-sampling result to the coding node of the third layer, the intermediate node of the second layer, and the decoding node of the second layer, and sends the up-sampled second down-sampling result to the first intermediate node of the first layer. The coding node of the third layer down-samples the second down-sampling result again, sends the third down-sampling result to the decoding node of the third layer, and sends the up-sampled third down-sampling result to the intermediate node of the second layer. Specifically, the coding nodes perform two successive factor-2 down-samplings on the first feature map using one-dimensional convolutions with 1×3 kernels, obtaining 2 feature maps in turn, and double the number of channels during each down-sampling to preserve the information content of the data.
The intermediate node of the second layer performs feature fusion on the second down-sampling result and the up-sampled third down-sampling result, up-samples the fusion, and sends it to the second intermediate node of the first layer; the intermediate node of the second layer also performs feature fusion on the second down-sampling result and the up-sampled third down-sampling result and sends it to the decoding node of the second layer. The first intermediate node of the first layer performs feature fusion on the first down-sampling result and the up-sampled second down-sampling result and sends it to the second intermediate node of the first layer; the second intermediate node of the first layer performs feature fusion on the first down-sampling result, the fusion result sent by the first intermediate node of the first layer, and the up-sampled result sent by the intermediate node of the second layer, and sends it to the decoding node of the first layer. Specifically, an intermediate node up-samples by average interpolation with a factor of 2, enlarging the feature map for the subsequent feature fusion. The intermediate node of the second layer is obtained by channel-aligning, via a one-dimensional convolution, the up-sampled output of the third layer's coding node with the output of the second layer's coding node and then performing feature fusion. The first intermediate node of the first layer fuses the up-sampled output of the second layer's coding node with the features of the first layer's coding node, and the second intermediate node fuses the up-sampled output of the second layer's intermediate node with the features of the first intermediate node and the coding node of the first layer.
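The intermediate-node operations (2× average-interpolation up-sampling, 1×1 channel alignment, and fusion by concatenation) can be sketched as follows. This is an illustrative NumPy approximation; the shapes and random weights are assumptions:

```python
import numpy as np

def upsample2(x):
    """Factor-2 up-sampling of a (C, L) map by average interpolation."""
    out = np.repeat(x, 2, axis=1).astype(float)
    # replace every second sample with the average of its two neighbours
    out[:, 1:-1:2] = 0.5 * (x[:, :-1] + x[:, 1:])
    return out

def align_channels(x, w):
    """1x1 one-dimensional convolution = per-position channel mixing;
    w has shape (C_out, C_in)."""
    return w @ x

def fuse(*maps):
    """Feature fusion by channel-wise concatenation of equally long maps."""
    return np.concatenate(maps, axis=0)

rng = np.random.default_rng(1)
low = rng.standard_normal((64, 256))      # lower-layer node: more channels, shorter
same = rng.standard_normal((32, 512))     # same-layer coding node output
up = upsample2(low)                       # (64, 512)
aligned = align_channels(up, rng.standard_normal((32, 64)) * 0.1)  # (32, 512)
fused = fuse(same, aligned)               # (64, 512)
```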
The decoding node of the third layer performs compression excitation and channel alignment on the up-sampled third down-sampling result and sends the result to the decoding node of the second layer. The decoding node of the second layer performs feature fusion on the second down-sampling result, the fusion result sent by the intermediate node of the second layer, and the up-sampled result sent by the decoding node of the third layer, processes the fused map through the compression excitation mechanism, channel-aligns it, and sends it to the decoding node of the first layer. The decoding node of the first layer performs feature fusion on the first down-sampling result, the fusion results sent by the first and second intermediate nodes of the first layer, and the up-sampled result sent by the decoding node of the second layer, processes the fused map through the compression excitation mechanism, channel-aligns it, and sends it to the second convolutional layer. Specifically, a compression excitation mechanism is introduced at the decoding nodes. The decoding node of the third layer channel-aligns, with a one-dimensional convolution, the result obtained by 2× up-sampling the output of the third layer's coding node. The decoding node of the second layer takes the result of the second layer's coding node after a 1×3 one-dimensional convolution, the result of the second layer's intermediate node after a 1×3 one-dimensional convolution, and the result of the third layer's decoding node after a 1×3 one-dimensional convolution and 2× up-sampling; these three are channel-aligned by a 1×1 convolution and fused, the channel importance is learned through the compression excitation mechanism, and the 64 channels are assigned different weights.
The decoding node of the first layer takes the result of the first layer's coding node after a 1×3 one-dimensional convolution, the results of the two intermediate nodes after 1×3 one-dimensional convolutions, and the result of the second layer's decoding node after a 1×3 one-dimensional convolution and 2× up-sampling; it channel-aligns these with a 1×1 convolution, performs feature fusion, learns channel importance through the compression excitation mechanism, and assigns different weights to the 32 channels. The decoding node of the first layer then outputs the result of the core layer to the second convolutional layer.
As shown in fig. 4, let the coding nodes be X^(i,0), the intermediate nodes be X^(i,j) with 0 < j < n, and the decoding nodes be X^(i,n). The transformations between the nodes are then given by:

X^(i,0) = E(X^(i-1,0))

X^(i,j) = H(C(U(X^(i+1,j-1)), [X^(i,k)]_{k=0}^{j-1}))

X^(i,n) = D(H(C(U(X^(i+1,n)), [X^(i,k)]_{k=0}^{n-1})))
where X^(i,0) represents a node in the first column of fig. 4, i.e., a coding node; E(·) represents the down-sampling operation of each layer; and n represents the layer depth minus 1. In the first layer (i = 0), the first feature map output by the first convolutional layer is down-sampled as the input of that layer; for 0 < i < n, the non-first coding nodes perform the down-sampling operation; and node X^(3,0) performs no further down-sampling. Through these operations the feature information of the first feature map is extracted, and the information of the current node is stored during down-sampling to facilitate the subsequent feature fusion.
X^(i,j) represents a node in the second column of fig. 4, i.e., an intermediate node. U(·) represents a 2× up-sampling operation, so that the size of the feature map output by the node matches that of the nodes in the previous column; C(·) concatenates the node of the previous operation with the first n−1−j columns of the row in which the node sits; and H(·) performs the feature-fusion operation by way of channel alignment. The information of the current node is also stored in this class of node to facilitate the subsequent concatenation and up-sampling. This step mainly preserves the information of the various feature maps produced during sampling and retains more complete information than a network without it.
X^(i,n) represents a node in the third column of fig. 4, i.e., a decoding node; U(·) and C(·) operate as they do for the layer containing X^(i,j). D(·) indicates that a compression excitation mechanism is added after the up-sampling and concatenation operations to learn the importance of each channel; the channels are multiplicatively weighted by their learned importance, the result enters the node of the layer above, and the operation repeats until the output is produced.
In this embodiment, the compression excitation mechanism includes a global pooling layer, a first fully-connected layer, a ReLU function, a second fully-connected layer, a Sigmoid activation function, and a Scale scaling function, where the Scale scaling function is formed by a 1×1 one-dimensional convolution and a ReLU function, as shown in fig. 5. Specifically, the global pooling layer reduces the dimension of the fused feature map; the first fully-connected layer, the ReLU function, and the second fully-connected layer form a bottleneck, which excites the dimension-reduced feature map to obtain a value for each channel; the Sigmoid activation function normalizes the value of each channel; and the Scale scaling function multiplicatively weights the fused feature map by the normalized value of the corresponding channel to obtain the second feature map.
The formula of the improved compression excitation mechanism in this embodiment is as follows:

x̃_c = F_scale(v_c, w_c) = w_c · v_c

where x̃_c denotes channel c of the final output second feature map carrying channel importance, F_scale denotes the scaling function, v_c denotes the original feature map on channel c, and w_c denotes the weight of the channel after squeeze excitation. The output channel count of the first fully-connected layer is C/r; the two fully-connected layers and the ReLU function form a bottleneck structure, which then excites the feature map, forcing the network to learn the importance of each channel from the feature map. The Sigmoid activation function yields normalized values in [0, 1]. Finally, the feature map is multiplicatively weighted on each channel, completing the recalibration of the original features in the channel dimension. That is, the fused result is input into the compression excitation mechanism, each one-dimensional feature channel is converted into a value representing the global distribution of responses on that channel, the channels are then multiplicatively weighted, and a second feature map carrying channel importance is output to the next node.
In this embodiment, a loss function of the neural network model may also be defined; the model is evaluated with a combination of the cross-entropy error and the softmax function, with the specific formula:

L = − Σ_{i=1}^{m} t[i] · log(p[i])

where m is the number of labels, t[i] is the one-hot encoding of the actual label, and p[i] is the predicted probability of label i, given by:

p[i] = e^{z[i]} / Σ_{j=1}^{m} e^{z[j]}

where z[i] represents the feature mapping after activation by the ReLU function.
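The softmax and cross-entropy formulas above can be checked numerically; with uniform logits over m = 24 labels the loss equals ln(24):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())               # shift for numerical stability
    return e / e.sum()

def cross_entropy(t, p):
    """t: one-hot actual label, p: predicted probabilities."""
    return -np.sum(t * np.log(p))

m = 24                                    # number of modulation labels
z = np.zeros(m)                           # uniform logits as a sanity check
p = softmax(z)
t = np.eye(m)[0]                          # one-hot label
loss = cross_entropy(t, p)                # equals ln(24) for uniform p
```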
When the neural network model is trained, the values of the trainable parameters can be updated with the adaptive moment estimation (Adam) optimization algorithm and a back-propagation mechanism according to the loss value, realizing the learning optimization process of the model. Specifically, the model can be optimized with an initial learning rate of 0.001 over 60 iterations in total, with the learning rate multiplied by 0.1 every 20 iterations. Adam converges quickly and requires little memory when training complex neural network models, so it suits large data sets and high-dimensional spaces and is well suited to the model of this embodiment, which is trained on a large data set.
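The described schedule (initial rate 0.001, multiplied by 0.1 every 20 of 60 iterations) can be sketched as a step-decay function; the function name is illustrative:

```python
def learning_rate(iteration, base_lr=0.001, decay=0.1, step=20):
    """Step decay: multiply the learning rate by `decay` every `step` iterations."""
    return base_lr * decay ** (iteration // step)

schedule = [learning_rate(i) for i in range(60)]  # rates for all 60 iterations
```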
During testing, the trained neural network model is retrieved and the model parameters saved by the training module are loaded into the network; the loss value and accuracy are verified at signal-to-noise ratios from −10 dB to 20 dB to judge the performance of the model, and then the modulation mode at each signal-to-noise ratio is identified and the loss value and accuracy are output.
It should be added that, compared with the current mainstream method, the automatic modulation identification method provided by this embodiment has certain advantages, which are specifically as follows:
1. The generalization ability of the method is stronger: the high-resolution neural network with the compression excitation mechanism maintains its classification precision across the 24 modulation classes, and compared with other adaptive-modulation-coding neural network models it classifies high-order modulation methods better.
2. The core layer of the neural network model adopts a dense jump connection mechanism that fully extracts the modulation information in the IQ data: deep, coarse-grained, low-resolution feature maps are effectively fused with shallow, fine-grained, high-resolution feature maps, reducing the information loss caused during down-sampling, and the processed features are aligned by a 1×1 one-dimensional convolution to achieve feature fusion.
3. A compression excitation mechanism is introduced into the automatic modulation recognition method for the first time and improved: the feature map is multiplicatively weighted on each channel, completing the recalibration of the original features in the channel dimension. By re-correcting the original features in the channel dimension, the compression excitation mechanism filters for important channels, so the channel information serves as a trainable part that improves the performance of the neural network model.
In summary, in the conventional convolution approach the receptive field grows and the resolution falls during successive convolutions, so feature details are partially lost and model accuracy suffers. The automatic modulation identification method for wireless signals provided by the embodiment of the present application addresses this: the dense jump connection mechanism lets low-resolution and high-resolution information be shared, fused, and extracted, reducing the information loss caused by down-sampling; a redesigned compression excitation mechanism, deployed at the decoding nodes of the core layer, recalibrates the weight of each channel through an adaptive attention mechanism, forcing the network to learn the importance of each channel from the feature map, and completes the recalibration of the original features in the channel dimension by multiplicatively weighting the feature map on each channel; meanwhile, a cross-entropy loss function combined with a SoftMax classifier better measures the difference between the estimated value and the true value, reconstructs a more accurate signal value, and improves the accuracy of model training.
Since the sample set is RML2018.10A (radio deep learning data set, version 2018.10a), the method's classification accuracy on the public Over-the-Air data set is the highest, especially for high-order modulation signals, over the signal-to-noise ratio (SNR) range of −10 dB to 20 dB.
Referring to fig. 6, a flowchart of an automatic modulation recognition method for a wireless signal according to an embodiment of the present application is shown, where the automatic modulation recognition method for a wireless signal can be applied to a computer device.
The automatic modulation identification method of the wireless signal can comprise the following steps:
step 601, obtaining input data, where the input data includes a wireless signal to be identified, a signal-to-noise ratio of the wireless signal, and channel information.
Step 602, performing feature extraction and dimension reduction on the input data by using the first convolution layer to obtain a first feature map.
And 603, generating feature maps with different resolutions for the first feature map by using a dense jump connection mechanism, fusing, and performing multiplication weighting on the fused feature maps on each channel by using a compression excitation mechanism to obtain a second feature map calibrated on the channel dimension.
And step 604, performing dimension reduction on the second feature map by using the second convolution layer to obtain a third feature map.
And 605, classifying the third feature map by using the full connection layer to obtain the modulation type of the wireless signal.
The process of processing input data by the neural network model is the same as the process of processing samples by the neural network model, and is described in detail in the foregoing, and the processing process is not described herein again.
In summary, in the conventional convolution approach the receptive field grows and the resolution falls during successive convolutions, so feature details are partially lost and model accuracy suffers. The automatic modulation identification method for wireless signals provided by this embodiment addresses this: the dense jump connection mechanism lets low-resolution and high-resolution information be shared, fused, and extracted, reducing the information loss caused by down-sampling; a redesigned compression excitation mechanism, deployed at the decoding nodes of the core layer, recalibrates the weight of each channel through an adaptive attention mechanism, forcing the network to learn the importance of each channel from the feature map, and completes the recalibration of the original features in the channel dimension by multiplicatively weighting the feature map on each channel; meanwhile, a cross-entropy loss function combined with a SoftMax classifier better measures the difference between the estimated value and the true value, reconstructs a more accurate signal value, and improves the accuracy of model training.
Referring to fig. 7, a block diagram of an automatic modulation recognition apparatus for wireless signals according to an embodiment of the present application is shown, where the automatic modulation recognition apparatus for wireless signals can be applied to a computer device. The automatic modulation recognition device of the wireless signal can comprise:
a first obtaining module 710, configured to obtain a sample set, where a sample in the sample set is a training sample or a test sample, and each sample includes a wireless signal, a signal-to-noise ratio, channel information, and an actual modulation type;
an input module 720, configured to input the sample set into a created neural network model, where the neural network model includes a first convolutional layer, a core layer, a second convolutional layer, and a fully-connected layer that are connected in sequence, and the core layer includes a dense jump connection mechanism and a compression excitation mechanism;
the training module 730 is configured to perform feature extraction and dimension reduction on each sample by using the first convolution layer to obtain a first feature map; generating feature maps with different resolutions for the first feature map by using a dense jump connection mechanism, fusing, and performing multiplication weighting on the fused feature maps on each channel by using a compression excitation mechanism to obtain a second feature map calibrated on the channel dimension; performing dimension reduction on the second characteristic diagram by using the second convolution layer to obtain a third characteristic diagram; classifying the third characteristic diagram by using a full connection layer to obtain a predicted modulation type of the wireless signal; and adjusting the model parameters of the neural network model according to the predicted modulation type and the actual modulation type to obtain the trained neural network model.
In a possible implementation manner, the core layer includes an encoding node, an intermediate node, and a decoding node, and the training module 730 is further configured to:
carrying out down-sampling of different depths on the first feature map by using coding nodes of different layers;
utilizing the intermediate node to fuse the down-sampling result of the coding node of the layer with the up-sampling result of the coding node or the intermediate node of the lower layer; or, the down-sampling result of the coding node of the layer, the fusion result of the previous intermediate node of the layer and the up-sampling result of the intermediate node of the lower layer are fused by using the intermediate node;
and performing up-sampling, compression excitation and channel alignment on the down-sampling result of the coding node of the layer by using the decoding node, or performing up-sampling, feature fusion, compression excitation and channel alignment on the down-sampling result of the coding node of the layer, the fusion result of the previous intermediate node of the layer and the fusion result of the decoding node of the lower layer by using the decoding node.
In a possible implementation manner, when the core layer has a three-layer structure and includes three coding nodes, three intermediate nodes, and three decoding nodes, the training module 730 is further configured to:
the coding node of the first layer carries out down-sampling on the first characteristic graph, and a first down-sampling result is respectively sent to the coding node of the second layer, the first middle node of the first layer, the second middle node of the first layer and the decoding node of the first layer; the coding node of the second layer performs down sampling on the first down sampling result again, sends the second down sampling result to the coding node of the third layer, the middle node of the second layer and the decoding node of the second layer respectively, and sends the up sampling result of the second down sampling result to the first middle node of the first layer; the coding node of the third layer performs down sampling on the second down sampling result again, sends the third down sampling result to the decoding node of the third layer, and sends the up sampling result of the third down sampling result to the middle node of the second layer;
the intermediate node of the second layer performs feature fusion and upsampling on the second downsampling result and the upsampling result of the third downsampling result, and then sends the result to the second intermediate node of the first layer; the intermediate node of the second layer performs characteristic fusion on the up-sampling result of the second down-sampling result and the up-sampling result of the third down-sampling result and then sends the result to the decoding node of the second layer; the first intermediate node of the first layer performs feature fusion on the up-sampling results of the first down-sampling result and the second down-sampling result and then sends the feature fusion to the second intermediate node of the first layer; after the second intermediate node of the first layer performs characteristic fusion on the first down-sampling result, the fusion result sent by the first intermediate node of the first layer and the up-sampling result sent by the intermediate node of the second layer, the first down-sampling result, the fusion result and the up-sampling result are sent to the decoding node of the first layer;
and the decoding node of the first layer performs characteristic fusion on the first downsampling result, the fusion result sent by the first middle node and the second middle node of the first layer and the upsampling result sent by the decoding node of the second layer, and sends the feature fusion result to the decoding node of the first layer after the feature fusion of the first downsampling result, the fusion result sent by the first middle node and the second middle node of the first layer and the upsampling result sent by the decoding node of the second layer.
In a possible implementation manner, the compression excitation mechanism includes a global pooling layer, a first fully-connected layer, a ReLU function, a second fully-connected layer, a Sigmoid activation function, and a Scale scaling function, and the training module 730 is further configured to:
reducing the dimension of the fused feature map by using a global pooling layer;
forming a bottleneck by using the first full connection layer, the ReLU function and the second full connection layer, and exciting the dimensionality reduced feature map by using the bottleneck to obtain a numerical value of each channel;
normalizing the value of each channel by using a Sigmoid activation function;
and carrying out multiplication weighting on the fused feature map and the numerical value after the corresponding channel normalization by using a Scale scaling function to obtain a second feature map.
In one possible implementation, the sample set is the radio deep learning data set, version 2018.10a, which contains 24 modulation types and 26 signal-to-noise values; each modulation type contains 4096 frames of two-channel IQ data at each signal-to-noise value, and each frame contains 2×1024 sample points, where the 26 signal-to-noise values are taken from the [−20 dB, 30 dB] interval at 2 dB intervals.
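A little bookkeeping makes the data-set dimensions above concrete (the variable names are illustrative):

```python
# Bookkeeping for the RML2018.10A-style sample set described above:
# 24 modulation types x 26 SNR values x 4096 frames,
# each frame being two-channel IQ data of length 1024.
num_types, num_snrs, frames_each = 24, 26, 4096
snrs = list(range(-20, 31, 2))            # 2 dB steps over [-20 dB, 30 dB]
total_frames = num_types * num_snrs * frames_each
samples_per_frame = 2 * 1024              # I and Q channels, 1024 points each
```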
In summary, in the conventional convolution approach the receptive field grows and the resolution falls during successive convolutions, so feature details are partially lost and model accuracy suffers. The automatic modulation identification apparatus for wireless signals provided by the embodiment of the present application addresses this: the dense jump connection mechanism lets low-resolution and high-resolution information be shared, fused, and extracted, reducing the information loss caused by down-sampling; a redesigned compression excitation mechanism, deployed at the decoding nodes of the core layer, recalibrates the weight of each channel through an adaptive attention mechanism, forcing the network to learn the importance of each channel from the feature map, and completes the recalibration of the original features in the channel dimension by multiplicatively weighting the feature map on each channel; meanwhile, a cross-entropy loss function combined with a SoftMax classifier better measures the difference between the estimated value and the true value, reconstructs a more accurate signal value, and improves the accuracy of model training.
Since the sample set is RML2018.10A (radio deep learning data set, version 2018.10a), the apparatus's classification accuracy on the public Over-the-Air data set is the highest, especially for high-order modulation signals, over the signal-to-noise ratio (SNR) range of −10 dB to 20 dB.
Referring to fig. 8, a block diagram of an automatic modulation recognition apparatus for wireless signals according to an embodiment of the present application is shown, where the automatic modulation recognition apparatus for wireless signals can be applied to a computer device. The automatic modulation recognition device of the wireless signal can comprise:
a second obtaining module 810, configured to obtain input data, where the input data includes a wireless signal to be identified, a signal-to-noise ratio of the wireless signal, and channel information;
the identification module 820 is configured to perform feature extraction and dimension reduction on input data by using the first convolution layer to obtain a first feature map;
the identification module 820 is further configured to generate feature maps with different resolutions for the first feature map by using a dense jump connection mechanism, perform fusion, and perform multiplication weighting on the fused feature map on each channel by using a compression excitation mechanism to obtain a second feature map calibrated on a channel dimension;
the identifying module 820 is further configured to perform dimension reduction on the second feature map by using the second convolution layer to obtain a third feature map;
the identifying module 820 is further configured to classify the third feature map by using the full link layer to obtain a modulation type of the wireless signal.
In summary, in the conventional convolution approach the receptive field grows and the resolution falls during successive convolutions, so feature details are partially lost and model accuracy suffers. The automatic modulation identification apparatus for wireless signals provided by this embodiment addresses this: the dense jump connection mechanism lets low-resolution and high-resolution information be shared, fused, and extracted, reducing the information loss caused by down-sampling; a redesigned compression excitation mechanism, deployed at the decoding nodes of the core layer, recalibrates the weight of each channel through an adaptive attention mechanism, forcing the network to learn the importance of each channel from the feature map, and completes the recalibration of the original features in the channel dimension by multiplicatively weighting the feature map on each channel; meanwhile, a cross-entropy loss function combined with a SoftMax classifier better measures the difference between the estimated value and the true value, reconstructs a more accurate signal value, and improves the accuracy of model training.
One embodiment of the present application provides a computer-readable storage medium having at least one instruction stored therein, the at least one instruction being loaded and executed by a processor to implement the method for automatic modulation recognition of wireless signals as described above.
One embodiment of the present application provides a computer device comprising a processor and a memory, wherein the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the automatic modulation recognition method for wireless signals as described above.
It should be noted that the division into functional modules in the above embodiment is merely illustrative; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the automatic modulation identification apparatus for wireless signals may be divided into different functional modules to complete all or part of the functions described above. In addition, the automatic modulation identification apparatus and the automatic modulation identification method for wireless signals provided in the foregoing embodiments belong to the same concept; their specific implementation processes are detailed in the method embodiments and are not repeated here.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is not intended to limit the embodiments of the present application, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the embodiments of the present application should be included in the scope of the embodiments of the present application.

Claims (9)

1. A method for automatic modulation identification of a wireless signal, the method comprising:
acquiring a sample set, wherein samples in the sample set are training samples, and each sample comprises a wireless signal, a signal-to-noise ratio, channel information and an actual modulation type;
inputting the sample set into a created neural network model, wherein the neural network model comprises a first convolution layer, a core layer, a second convolution layer and a fully connected layer which are sequentially connected, and the core layer comprises a dense skip-connection mechanism and a squeeze-excitation mechanism;
for each sample, performing feature extraction and dimension reduction on the sample by using the first convolution layer to obtain a first feature map; generating feature maps of different resolutions from the first feature map by using the dense skip-connection mechanism and fusing them; performing multiplicative weighting on the fused feature map on each channel by using the squeeze-excitation mechanism to obtain a second feature map recalibrated in the channel dimension; performing dimension reduction on the second feature map by using the second convolution layer to obtain a third feature map; classifying the third feature map by using the fully connected layer to obtain a predicted modulation type of the wireless signal; and adjusting model parameters of the neural network model according to the predicted modulation type and the actual modulation type to obtain a trained neural network model;
wherein the core layer has a three-layer structure, the first layer comprises a coding node, intermediate nodes and a decoding node, the second layer comprises a coding node, an intermediate node and a decoding node, and the third layer comprises a coding node and a decoding node; and the generating feature maps of different resolutions from the first feature map by using the dense skip-connection mechanism and fusing them, and the performing multiplicative weighting on the fused feature map on each channel by using the squeeze-excitation mechanism to obtain a second feature map recalibrated in the channel dimension, comprise:
downsampling the first feature map at different depths by using the coding nodes of different layers;
for a first intermediate node in a first layer, utilizing the intermediate node to fuse the down-sampling result of the coding node of the layer and the up-sampling result of the coding node of a lower layer; for a second intermediate node in the first layer, fusing a down-sampling result of the coding node of the layer, a fusion result of a previous intermediate node of the layer and an up-sampling result of an intermediate node of a lower layer by using the intermediate node; for an intermediate node in a second layer, fusing a down-sampling result of a coding node of the layer and an up-sampling result of a coding node of a lower layer by using the intermediate node;
for the decoding node in the third layer, performing up-sampling, squeeze-excitation and channel alignment on the down-sampling result of the coding node of that layer by using the decoding node; and for a decoding node in the first layer or the second layer, performing up-sampling, feature fusion, squeeze-excitation and channel alignment on the down-sampling result of the coding node of that layer, the fusion result of the preceding intermediate node of that layer and the fusion result of the decoding node of the lower layer by using the decoding node.
2. The method according to claim 1, wherein, when the core layer has a three-layer structure and comprises three coding nodes, three intermediate nodes and three decoding nodes, the generating feature maps of different resolutions from the first feature map by using the dense skip-connection mechanism and fusing them, and the performing multiplicative weighting on the fused feature map on each channel by using the squeeze-excitation mechanism to obtain a second feature map recalibrated in the channel dimension, comprise:
the coding node of the first layer carries out down-sampling on the first characteristic graph, and a first down-sampling result is respectively sent to the coding node of the second layer, the first middle node of the first layer, the second middle node of the first layer and the decoding node of the first layer; the coding node of the second layer performs downsampling on the first downsampling result again, sends the second downsampling result to the coding node of the third layer, the middle node of the second layer and the decoding node of the second layer respectively, and sends the upsampling result of the second downsampling result to the first middle node of the first layer; the coding node of the third layer performs down-sampling on the second down-sampling result again, sends the third down-sampling result to the decoding node of the third layer, and sends the up-sampling result of the third down-sampling result to the middle node of the second layer;
the intermediate node of the second layer performs feature fusion and upsampling on the second downsampling result and the upsampling result of the third downsampling result, and then sends the result to the second intermediate node of the first layer; the intermediate node of the second layer performs feature fusion on the second down-sampling result and the up-sampling result of the third down-sampling result, and then sends the result to the decoding node of the second layer; the first intermediate node of the first layer performs feature fusion on the first down-sampling result and the up-sampling result of the second down-sampling result, and then sends the feature fusion result to the second intermediate node of the first layer; the second intermediate node of the first layer performs feature fusion on the first downsampling result, the fusion result sent by the first intermediate node of the first layer and the upsampling result sent by the intermediate node of the second layer, and then sends the result to the decoding node of the first layer;
and the decoding node of the first layer performs feature fusion on the first down-sampling result, the fusion results sent by the first and second intermediate nodes of the first layer, and the up-sampling result sent by the decoding node of the second layer, so as to obtain the second feature map.
3. The method according to claim 1, wherein the squeeze-excitation mechanism comprises a global pooling layer, a first fully connected layer, a ReLU function, a second fully connected layer, a Sigmoid activation function and a Scale scaling function, and the performing multiplicative weighting on the fused feature map on each channel by using the squeeze-excitation mechanism to obtain a second feature map recalibrated in the channel dimension comprises:
reducing the dimension of the fused feature map by utilizing the global pooling layer;
forming a bottleneck by using the first full connection layer, the ReLU function and the second full connection layer, and exciting the dimensionality-reduced feature map by using the bottleneck to obtain a numerical value of each channel;
normalizing the value of each channel by using the Sigmoid activation function;
and multiplicatively weighting, by using the Scale scaling function, the fused feature map with the normalized value of the corresponding channel to obtain the second feature map.
4. The method of claim 1, wherein the sample set is the radio deep learning dataset RadioML 2018.01A, the dataset comprises 24 modulation types and 26 signal-to-noise-ratio values, each modulation type comprises 4096 frames of two-channel IQ data at each signal-to-noise ratio, and each frame comprises 2 × 1024 samples, wherein the 26 signal-to-noise-ratio values are taken from the interval [-20 dB, 30 dB] at 2 dB spacing.
5. A method for automatic modulation recognition of wireless signals, for use in a neural network model trained according to any one of claims 1 to 4, the method comprising:
acquiring input data, wherein the input data comprises a wireless signal to be identified, a signal-to-noise ratio of the wireless signal and channel information;
performing feature extraction and dimension reduction on the input data by using the first convolution layer to obtain a first feature map;
generating feature maps of different resolutions from the first feature map by using the dense skip-connection mechanism and fusing them, and performing multiplicative weighting on the fused feature map on each channel by using the squeeze-excitation mechanism to obtain a second feature map recalibrated in the channel dimension;
reducing the dimension of the second feature map by using the second convolution layer to obtain a third feature map;
and classifying the third feature map by using the fully connected layer to obtain the modulation type of the wireless signal.
6. An apparatus for automatic modulation recognition of a wireless signal, the apparatus comprising:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a sample set, samples in the sample set are training samples, and each sample comprises a wireless signal, a signal-to-noise ratio, channel information and an actual modulation type;
the input module is used for inputting the sample set into a created neural network model, wherein the neural network model comprises a first convolution layer, a core layer, a second convolution layer and a fully connected layer which are sequentially connected, and the core layer comprises a dense skip-connection mechanism and a squeeze-excitation mechanism;
the training module is used for, for each sample, performing feature extraction and dimension reduction on the sample by using the first convolution layer to obtain a first feature map; generating feature maps of different resolutions from the first feature map by using the dense skip-connection mechanism and fusing them; performing multiplicative weighting on the fused feature map on each channel by using the squeeze-excitation mechanism to obtain a second feature map recalibrated in the channel dimension; performing dimension reduction on the second feature map by using the second convolution layer to obtain a third feature map; classifying the third feature map by using the fully connected layer to obtain a predicted modulation type of the wireless signal; and adjusting model parameters of the neural network model according to the predicted modulation type and the actual modulation type to obtain a trained neural network model;
wherein the core layer has a three-layer structure, the first layer comprises a coding node, intermediate nodes and a decoding node, the second layer comprises a coding node, an intermediate node and a decoding node, and the third layer comprises a coding node and a decoding node, and the training module is further used for:
downsampling the first feature map at different depths by using the coding nodes of different layers;
for a first intermediate node in a first layer, utilizing the intermediate node to fuse the down-sampling result of the coding node of the layer and the up-sampling result of the coding node of a lower layer; for a second intermediate node in the first layer, fusing a down-sampling result of the coding node of the layer, a fusion result of a previous intermediate node of the layer and an up-sampling result of an intermediate node of a lower layer by using the intermediate node; for an intermediate node in a second layer, fusing a down-sampling result of a coding node of the layer and an up-sampling result of a coding node of a lower layer by using the intermediate node;
for the decoding node in the third layer, performing up-sampling, squeeze-excitation and channel alignment on the down-sampling result of the coding node of that layer by using the decoding node; and for a decoding node in the first layer or the second layer, performing up-sampling, feature fusion, squeeze-excitation and channel alignment on the down-sampling result of the coding node of that layer, the fusion result of the preceding intermediate node of that layer and the fusion result of the decoding node of the lower layer by using the decoding node.
7. An apparatus for automatic modulation recognition of wireless signals, for use in a neural network model trained according to any one of claims 1 to 4, the apparatus comprising:
the second acquisition module is used for acquiring input data, wherein the input data comprises a wireless signal to be identified, a signal-to-noise ratio of the wireless signal and channel information;
the identification module is used for performing feature extraction and dimension reduction on the input data by using the first convolution layer to obtain a first feature map;
the identification module is further configured to generate feature maps of different resolutions from the first feature map by using the dense skip-connection mechanism and fuse them, and to perform multiplicative weighting on the fused feature map on each channel by using the squeeze-excitation mechanism to obtain a second feature map recalibrated in the channel dimension;
the identification module is further configured to perform dimension reduction on the second feature map by using the second convolution layer to obtain a third feature map;
the identification module is further configured to classify the third feature map by using the fully connected layer to obtain the modulation type of the wireless signal.
8. A computer-readable storage medium, wherein at least one instruction is stored in the storage medium, and the at least one instruction is loaded and executed by a processor to implement the method for automatic modulation recognition of a wireless signal according to any one of claims 1 to 4, or the at least one instruction is loaded and executed by a processor to implement the method for automatic modulation recognition of a wireless signal according to claim 5.
9. A computer device comprising a processor and a memory, wherein the memory has stored therein at least one instruction, which is loaded and executed by the processor to implement the method for automatic modulation recognition of a wireless signal according to any one of claims 1 to 4, or which is loaded and executed by the processor to implement the method for automatic modulation recognition of a wireless signal according to claim 5.
CN202111348246.4A 2021-11-15 2021-11-15 Automatic modulation identification method and device for wireless signal, storage medium and equipment Active CN114548201B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111348246.4A CN114548201B (en) 2021-11-15 2021-11-15 Automatic modulation identification method and device for wireless signal, storage medium and equipment


Publications (2)

Publication Number Publication Date
CN114548201A CN114548201A (en) 2022-05-27
CN114548201B true CN114548201B (en) 2023-04-07

Family

ID=81668654


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115442192B (en) * 2022-07-22 2024-02-27 西安电子科技大学 Communication signal automatic modulation recognition method and device based on active learning

Citations (2)

Publication number Priority date Publication date Assignee Title
CN111490853A (en) * 2020-04-15 2020-08-04 成都海擎科技有限公司 Channel coding parameter identification method based on deep convolutional neural network
CN112767251A (en) * 2021-01-20 2021-05-07 重庆邮电大学 Image super-resolution method based on multi-scale detail feature fusion neural network

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
EP1600351B1 (en) * 2004-04-01 2007-01-10 Heuristics GmbH Method and system for detecting defects and hazardous conditions in passing rail vehicles
CN108282263B (en) * 2017-12-15 2019-11-26 西安电子科技大学 Coded modulation joint recognition methods based on one-dimensional depth residual error light weight network
CN108875787B (en) * 2018-05-23 2020-07-14 北京市商汤科技开发有限公司 Image recognition method and device, computer equipment and storage medium
CN112836569B (en) * 2020-12-15 2023-01-03 泰山学院 Underwater acoustic communication signal identification method, system and equipment based on sequence convolution network




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant