CN113542171A - Modulation pattern recognition method and system based on CNN and combined high-order spectral image - Google Patents

Modulation pattern recognition method and system based on CNN and combined high-order spectral image

Info

Publication number
CN113542171A
Authority
CN
China
Prior art keywords
layer
neural network
signal
network model
modulation pattern
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110782131.XA
Other languages
Chinese (zh)
Other versions
CN113542171B (en)
Inventor
李肯立
叶文华
周旭
刘楚波
陈岑
肖国庆
阳王东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University
Priority to CN202110782131.XA
Publication of CN113542171A
Application granted
Publication of CN113542171B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L27/00Modulated-carrier systems
    • H04L27/0012Modulated-carrier systems arrangements for identifying the type of modulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent


Abstract

The invention discloses a modulation pattern recognition method based on a CNN and a combination of multiple high-order spectral feature images, which comprises the following steps: receiving a radio frequency signal from a signal source; performing analog-to-digital conversion on the radio frequency signal to obtain a digital signal; sequentially performing digital down-conversion and filtering on the digital signal to obtain I/Q data; preprocessing the I/Q data to obtain a combined high-order spectral image; acquiring the signal-to-noise ratio of the I/Q data and judging whether it is greater than or equal to a preset threshold value; and, if so, inputting the combined high-order spectral image into a trained first convolutional neural network model to obtain a modulation pattern recognition result, otherwise inputting it into a trained second convolutional neural network model. Since the high-order spectral characteristics of signals differ greatly under different signal-to-noise ratios, the method trains two convolutional neural network models, which improves robustness, and can identify the modulation patterns of BPSK, QPSK, 8PSK, 16APSK, 32APSK, 16QAM, 32QAM and other signals.

Description

Modulation pattern recognition method and system based on CNN and combined high-order spectral image
Technical Field
The invention belongs to the technical field of wireless communication and machine learning, and particularly relates to a modulation pattern recognition method and system based on a CNN (Convolutional Neural Network, CNN for short) and a combined high-order spectral image.
Background
In recent years, with the progress of communication technology and signal processing technology, the types of radio signals have become more and more numerous, and modulation patterns have become more and more complex. Complex radio signals are difficult to handle using conventional modulation pattern recognition methods, whose accuracy relies heavily on manual extraction of signal features. Moreover, with the wide application of mobile communication technology, the communication environment is filled with various radio signals, noises and interferences, which presents many technical difficulties for the recognition of modulation patterns.
The automatic identification of the modulation pattern at the present stage is mainly divided into two categories: an identification method based on a decision theory and an identification method based on deep learning.
For the recognition method based on decision theory, the problem of automatic recognition of the modulation pattern is treated as a composite hypothesis testing problem based on likelihood function tests; the decision criterion is simple and a high accuracy can be obtained. For the recognition method based on deep learning, deep features are extracted through a multi-hidden-layer artificial neural network to obtain the deep intrinsic information of the data.
However, the two above-mentioned recognition methods have several non-negligible drawbacks:
firstly, the recognition method based on decision theory requires sufficient prior knowledge, and there is a high delay in the classification process;
secondly, for the recognition method based on decision theory, when a new signal appears, complicated judgment conditions and threshold selection need to be reconsidered, and the algorithm changes greatly;
thirdly, the recognition method based on deep learning needs high-resolution images, which causes a large calculation amount and leads to a slow recognition speed and a high system delay;
fourthly, the recognition method based on deep learning partly depends on the constellation diagram of the signal, and the recognition rate of the modulation pattern is low when the signal has a frequency deviation, which prevents its application in the non-cooperative communication field.
Disclosure of Invention
In view of the above drawbacks and needs of the prior art, the present invention provides a modulation pattern recognition method and system based on a CNN and combined high-order spectral images. The method aims to solve the following technical problems: the existing recognition method based on decision theory needs sufficient prior knowledge and has a high delay in the classification process, and when a new signal appears, complicated judgment conditions and threshold selection must be reconsidered, which greatly changes the algorithm; the existing recognition method based on deep learning needs high-resolution images, which causes a large calculation amount, a slow recognition speed and a high system delay, and because it partly depends on the constellation diagram of the signal, the recognition rate of the modulation pattern is low when the signal has a frequency deviation, so that the method cannot be applied in the non-cooperative communication field.
To achieve the above object, according to one aspect of the present invention, there is provided a modulation pattern recognition method based on CNN and combining multiple higher order spectral feature images, comprising the steps of:
(1) receiving a radio frequency signal from a signal source, performing analog-to-digital conversion on the radio frequency signal to obtain a digital signal, and sequentially performing digital down-conversion and filtering processing on the digital signal to obtain I/Q data;
(2) preprocessing the I/Q data obtained in the step (1) to obtain a combined high-order spectral image;
(3) acquiring the signal-to-noise ratio of the I/Q data obtained in the step (1), judging whether the signal-to-noise ratio is greater than or equal to a preset threshold value, if so, entering the step (4), otherwise, entering the step (5);
(4) inputting the combined high-order spectrum image obtained in the step (2) into a trained first convolution neural network model to obtain a modulation pattern recognition result, and ending the process;
(5) inputting the combined high-order spectrum image obtained in the step (2) into a trained second convolutional neural network model to obtain a modulation pattern recognition result, and ending the process.
Preferably, the digital down-conversion processing in step (1) adopts the following formula:
S_o(x) = S_i(x) · e^(-i·2π·f_c·t), with t = x / f_s

wherein S_i(x) represents the input signal, S_o(x) represents the output signal, i represents the imaginary unit, f_c is the carrier frequency, f_s is the sampling rate, and t is time;
the filtering process in step (1) uses a finite impulse response (FIR) filter algorithm.
Preferably, step (2) comprises the sub-steps of:
(2-1) carrying out code element synchronization processing on the I/Q data obtained in the step (1) to obtain an I/Q data set with a single code rate;
(2-2) calculating m groups of high-order spectrums S1, S2, ..., Sm corresponding to the I/Q data set with the single code rate obtained in the step (2-1), where m denotes the number of integers in a randomly generated, incrementally arranged integer array E = {a1, a2, ..., am}, and Sm = 10 · log10(abs(fft(f^am))), where f represents the I/Q data set of the single code rate obtained in the step (2-1), abs represents the absolute value, fft represents the fast Fourier transform, and the number of fft points is 2048;
(2-3) obtaining a combined high-order spectral image according to the m groups of high-order spectrums S1, S2, ..., Sm obtained in the step (2-2);
preferably, step (2-1) first of all is to compare the I/Q data obtained in step (1) with each other according to a ratio of 1:6: end-5, 2:6: end-4, 3:6: end-3, 4:6: end-2, 5:6: end-1, 6:6: end is divided into 6 paths, amplitude accumulation is carried out on the 6 paths of data respectively to obtain an amplitude accumulated value corresponding to each path, one path of I/Q data with the maximum amplitude accumulated value is taken as an I/Q data set with a single code rate, the number of the data in the data set is 2048, and end represents the last number in the I/Q data obtained in the step (1);
and the step (2-3) removes 124 data sampling points before and after each group of high-order spectrums, connects the front m/2 groups of high-order spectrums end to end to form one row and the rear m/2 groups end to end to form another row, so as to obtain a matrix whose rows have a data length of (2048 - 124 × 2) × m/2, and draws the matrix as a 64 × 64 combined high-order spectral image.
Preferably, the first convolutional neural network model and the second convolutional neural network model have the same structure, and the specific structures of the two models are as follows:
layer 1 is a convolutional layer that receives an input 64 × 64 × 3 image, where the two-dimensional convolution kernels are 3 × 3, 32 in total, with a step size of 1 and zero padding of 1, and the layer output matrix size is 64 × 64 × 32;
layer 2 is an activation layer, the activation function is ReLU, and the output matrix size of this layer is 64 × 64 × 32;
layer 3 is a convolutional layer that receives an input 64 × 64 × 32 matrix, where the two-dimensional convolution kernels are 3 × 3, 32 in total, with a step size of 1 and no padding, and the layer output matrix size is 62 × 62 × 32;
layer 4 is an activation layer, the activation function is ReLU, and the output matrix size of this layer is 62 × 62 × 32;
layer 5 is a pooling layer using 2 × 2 maximum pooling with length and width step sizes of 2, and the output matrix of the layer is 31 × 31 × 32;
layer 6 is a dropout layer with a rate of 0.25, and the layer output is 31 × 31 × 32;
layer 7 is a convolutional layer that receives an input 31 × 31 × 32 matrix, where the two-dimensional convolution kernels are 3 × 3, 64 in total, with a step size of 1 and zero padding of 1, and the layer output matrix size is 31 × 31 × 64;
layer 8 is an activation layer, the activation function is ReLU, and the output matrix size of this layer is 31 × 31 × 64;
layer 9 is a convolutional layer that receives an input 31 × 31 × 64 matrix, where the two-dimensional convolution kernels are 3 × 3, 64 in total, with a step size of 1 and no padding, and the layer output matrix size is 29 × 29 × 64;
layer 10 is an activation layer, the activation function is ReLU, and the output matrix size of this layer is 29 × 29 × 64;
layer 11 is a pooling layer using 2 × 2 maximum pooling with length and width step sizes of 2, and the output matrix of the layer is 14 × 14 × 64;
layer 12 is a dropout layer with a rate of 0.25, and the output matrix of the layer is 14 × 14 × 64;
layer 13 is a flatten layer that receives an input 14 × 14 × 64 matrix, and this layer has 12544 output nodes;
the 14 th layer is a full connection layer, and the number of output nodes of the layer is 512;
the 15 th layer is an active layer, the active function is a ReLU, and the number of output nodes of the layer is 512;
the 16th layer is a dropout layer, the rate is set to 0.5, and the number of output nodes of the layer is 512;
the 17th layer is a fully connected layer, and the number of output nodes of the layer is p, wherein p represents the number of modulation patterns;
the 18th layer is an activation layer, the activation function is softmax, and the number of output nodes of the layer is p.
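The layer sizes listed above follow from standard convolution and pooling arithmetic. As an editorial illustration (not part of the patent text), the short sketch below checks the chain of spatial sizes, assuming zero padding of 1 where stated for a convolutional layer and no padding otherwise.

```python
def conv_out(size, kernel=3, stride=1, pad=0):
    """Spatial output size of a 2-D convolution."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial output size of a max-pooling layer."""
    return (size - kernel) // stride + 1

s = 64                      # input image: 64 x 64 x 3
s = conv_out(s, pad=1)      # layer 1: padded conv    -> 64
s = conv_out(s)             # layer 3: unpadded conv  -> 62
s = pool_out(s)             # layer 5: 2x2 max pool   -> 31
s = conv_out(s, pad=1)      # layer 7: padded conv    -> 31
s = conv_out(s)             # layer 9: unpadded conv  -> 29
s = pool_out(s)             # layer 11: 2x2 max pool  -> 14
flat = s * s * 64           # layer 13: flatten
```

The final flattened size of 14 × 14 × 64 = 12544 matches the number of output nodes stated for the flatten layer.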
Preferably, the first convolutional neural network model and the second convolutional neural network model are obtained by training through the following steps:
(a1) using random numbers as a signal source, generating modulation signals corresponding to a plurality of modulation patterns, generating, for the modulation signal corresponding to each modulation pattern, signal samples at signal-to-noise ratios from 1dB to 24dB at 1dB intervals, and carrying out filter shaping on the signal samples to obtain a plurality of signal samples corresponding to each modulation pattern, all of which form the I/Q data set corresponding to that modulation pattern;
(a2) preprocessing the I/Q data set corresponding to each modulation pattern obtained in step (a1) to obtain a plurality of combined high-order spectral images corresponding to each modulation pattern;
(a3) dividing the plurality of combined high-order spectral images corresponding to each modulation pattern obtained in the step (a2) according to whether the signal-to-noise ratio of the corresponding signal samples falls in the range of 1dB to 13dB or 12dB to 24dB, so as to obtain a first combined high-order spectral image set (corresponding to the low signal-to-noise ratio range) and a second combined high-order spectral image set (corresponding to the high signal-to-noise ratio range);
(a4) dividing the combined high-order spectral images in the first combined high-order spectral image set obtained in the step (a3) into training set images and test set images at a ratio of 7:3, further dividing the test set images into training test set images and verification set images at a ratio of 5:5, and inputting the training set images into the first convolutional neural network model;
(a5) updating and optimizing the weight parameters and the bias parameters of each layer in the first convolutional neural network model by using a back propagation algorithm to obtain an updated first convolutional neural network model;
(a6) iteratively training the first convolution neural network model updated in the step (a5) until the loss function of the first convolution neural network model reaches the minimum, thereby obtaining a first convolution neural network model which is preliminarily trained;
(a7) performing iterative verification on the preliminarily trained first convolutional neural network model by using the training test set images obtained in the step (a4) until the obtained classification precision reaches the optimum, so as to obtain the trained first convolutional neural network model.
(a8) dividing the combined high-order spectral images in the second combined high-order spectral image set obtained in the step (a3) into training set images and test set images at a ratio of 7:3, further dividing the test set images into training test set images and verification set images at a ratio of 5:5, and inputting the training set images into the second convolutional neural network model;
(a9) updating and optimizing the weight parameter and the bias parameter of each layer in the second convolutional neural network model by using a back propagation algorithm to obtain an updated second convolutional neural network model;
(a10) iteratively training the second convolutional neural network model updated in the step (a9) until the loss function of the second convolutional neural network model reaches the minimum, thereby obtaining a preliminarily trained second convolutional neural network model;
(a11) performing iterative verification on the preliminarily trained second convolutional neural network model by using the training test set images obtained in the step (a8) until the obtained classification accuracy reaches the optimum, so as to obtain the trained second convolutional neural network model.
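The 7:3 and 5:5 splits described in the training steps above can be sketched as follows. This is an editorial illustration, not part of the patent text; in particular, the random shuffling is an assumption, since the patent does not state how images are assigned to the splits.

```python
import numpy as np

def split_dataset(images, seed=0):
    """7:3 train/test split, then split the test part 5:5 into a
    training-test set and a verification set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))      # shuffle sample indices
    n_train = int(0.7 * len(images))
    train = idx[:n_train]                   # 70% for training
    rest = idx[n_train:]                    # remaining 30%
    half = len(rest) // 2
    return train, rest[:half], rest[half:]  # train, train-test, verification

train, test, val = split_dataset(list(range(1000)))
```

With 1000 images this yields 700 training, 150 training-test and 150 verification indices.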
Preferably, in each iteration of the step (a7), the verification set images obtained in the step (a4) are also used to verify the iteratively trained first convolutional neural network model; if, after the number of epochs exceeds half of the preset value, the difference between the classification precision obtained on the verification set and the classification precision obtained by iterative verification on the training test set images is greater than a preset threshold, overfitting exists, and the process returns to the step (a5);
likewise, in each iteration of the step (a11), the verification set images obtained in the step (a8) are used to verify the iteratively trained second convolutional neural network model; if the difference between the classification precision obtained on the verification set and the classification precision obtained by iterative verification on the training test set images is greater than a preset threshold, overfitting exists, and the process returns to the step (a9).
Preferably, the initial values of the weight parameters in the steps (a5) and (a9) are random values drawn from a truncated normal distribution with a standard deviation of 0.1, the initial value of the bias parameters is set to 0, and the back propagation algorithm uses the adaptive moment estimation (ADAM) function as the optimizer, with the learning rate α set to 0.001, the exponential decay rate β1 for the momentum estimate set to 0.9, and the exponential decay rate β2 for the variance estimate set to 0.999.
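For illustration, a single ADAM update with the stated hyper-parameters (α = 0.001, β1 = 0.9, β2 = 0.999) can be sketched as follows; the function name and the small ε term are conventional choices, not taken from the patent.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update of weights w given gradient grad at step t >= 1."""
    m = b1 * m + (1 - b1) * grad            # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

w = np.array([1.0]); m = np.zeros(1); v = np.zeros(1)
w, m, v = adam_step(w, np.array([0.5]), m, v, t=1)
```

On the first step the bias-corrected update is essentially lr × sign(grad), so the weight moves from 1.0 to about 0.999.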
Preferably, the loss function L of the first and second convolutional neural networks is a cross-entropy loss function with regularization:

L = -(1/N) · Σ_{i=1..N} Σ_{k=1..P} y_{i,k} · log(t_{i,k}) + λ · Σ_{i,k} W_{i,k}²

wherein N represents the total number of training set images, t_{i,k} represents the prediction result of the i-th training set image of the k-th class after being input into the convolutional neural network, y_{i,k} represents the real result corresponding to the i-th training set image of the k-th class, k ∈ [1, P], i ∈ [1, N], λ represents the degree of regularization and is taken as 0.007, and W_{i,k} represents the weight parameters applied to the i-th training set image of the k-th class when it is input into the convolutional neural network, which change as the convolutional neural network is trained.
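A small numerical sketch of the cross-entropy loss with regularization described above follows; the exact form of the regularization term (a standard L2 penalty on the weights, scaled by λ = 0.007) is an assumption, and the tiny epsilon guards against log(0).

```python
import numpy as np

def loss(y_true, y_pred, weights, lam=0.007):
    """Mean cross-entropy over N samples and P classes plus an L2 penalty."""
    n = y_true.shape[0]
    ce = -np.sum(y_true * np.log(y_pred + 1e-12)) / n   # cross-entropy term
    return ce + lam * np.sum(weights ** 2)              # regularization term

y_true = np.eye(3)               # 3 one-hot labels, one per class
y_pred = np.full((3, 3), 1 / 3)  # uniform (maximally uncertain) predictions
w = np.zeros(4)                  # zero weights -> no penalty
L = loss(y_true, y_pred, w)
```

With uniform predictions over 3 classes and zero weights, L equals ln 3 ≈ 1.0986, the entropy of a blind guess.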
According to another aspect of the present invention, there is provided a modulation pattern recognition system based on CNN and combining multiple higher order spectral feature images, comprising:
the first module is used for receiving a radio frequency signal from a signal source, performing analog-to-digital conversion on the radio frequency signal to obtain a digital signal, and sequentially performing digital down-conversion and filtering processing on the digital signal to obtain I/Q data;
the second module is used for preprocessing the I/Q data obtained by the first module to obtain a combined high-order spectral image;
the third module is used for acquiring the signal-to-noise ratio of the I/Q data obtained by the first module and judging whether the signal-to-noise ratio is greater than or equal to a preset threshold value, if so, the fourth module is started, and if not, the fifth module is started;
the fourth module is used for inputting the combined high-order spectrum image obtained by the second module into the trained first convolution neural network model to obtain a modulation pattern recognition result, and the process is ended;
and the fifth module is used for inputting the combined high-order spectrum image obtained by the second module into the trained second convolutional neural network model to obtain a modulation pattern recognition result, and the process is ended.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
(1) since the step (4) or the step (5) is adopted, the modulation pattern recognition result can be obtained simply by inputting the combined high-order spectrum image into the trained first or second convolutional neural network model; the calculation process is simple and clear, a large amount of prior signal-processing knowledge is not required, a GPU is used to accelerate the calculation, and the delay is short, so that the technical problems that the existing recognition method based on decision theory needs sufficient prior knowledge and has a high delay in the classification process can be solved;
(2) since the training process of the steps (a1) to (a11) is adopted, when new signal modulation patterns appear, the problem can be solved simply by retraining the models without changing the algorithm, so that the technical problem that the algorithm of the existing recognition method based on decision theory changes greatly when new signals appear, due to the need to reconsider complicated judgment conditions and threshold selection, can be solved;
(3) since the step (2) is adopted, the generated 64 × 64 small-pixel combined high-order spectrum image is input into the trained convolutional neural network and the calculation speed is high, so that the technical problems of large calculation amount, slow recognition speed and high system delay, caused by the fact that the existing recognition method based on deep learning needs high-resolution images, can be solved;
(4) since the step (2) is adopted, the position and amplitude of each high-order spectral peak of the signal are used as the characteristic parameters for identifying the signal modulation pattern, and these parameters distinguish different modulation patterns well; when the signal has a frequency deviation in practical applications, only the peak positions of the high-order spectrum deviate somewhat from the center position, the presentation of the high-order spectral peak characteristics is not affected, and a good modulation pattern recognition rate can still be obtained, so that the technical problem that the existing recognition method based on deep learning partly depends on the constellation diagram of the signal, has a low modulation pattern recognition rate when the signal has a frequency deviation, and therefore cannot be applied in the non-cooperative communication field, can be solved.
Drawings
FIG. 1 is a flow chart of a modulation pattern recognition method based on CNN and combined high-order spectral images according to the present invention;
FIG. 2 is a network architecture diagram of a first/second convolutional neural network of the present invention;
FIG. 3 is a schematic diagram of a training process for a first convolutional neural network and a second convolutional neural network of the present invention;
FIG. 4 is a graph of the low SNR model classification accuracy of the present invention;
FIG. 5 is a graph of the low SNR loss function of the present invention;
fig. 6 is a graph comparing the performance of the present invention and the prior art methods in terms of modulation pattern recognition accuracy at low signal-to-noise ratios.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention provides a modulation pattern recognition method and system based on a CNN and combined high-order spectral images. The method first preprocesses the signal (digital down-conversion, filtering and code element synchronization), calculates the 2nd, 4th, 6th and 8th power high-order spectrums by using the fast Fourier transform, combines the high-order spectrum data into a matrix, draws the matrix as an image, and inputs the image into a trained convolutional neural network, which automatically extracts the various characteristics of the combined high-order spectrum and outputs the classification and recognition results of the modulation patterns. Since the high-order spectral characteristics of signals differ greatly under different signal-to-noise ratios, the method trains two models, which improves robustness; it can identify the modulation patterns of signals such as BPSK, QPSK, 8PSK, 16APSK, 32APSK, 16QAM and 32QAM.
The basic idea of the invention is to calculate the 2nd, 4th, 6th and 8th power high-order spectrums by using the fast Fourier transform, combine the multiple high-order spectrums into one image, and use the image as the input of a convolutional neural network for training; by optimizing the network structure, the CNN is fully utilized to automatically extract the various characteristics of the multiple spectrums, thereby completing the classification and recognition of modulation patterns. On the other hand, since the high-order spectral characteristics of signals differ greatly under different signal-to-noise ratios, the method trains two models, which improves robustness. The accuracy of modulation pattern recognition is thus improved from two different angles.
As shown in fig. 1, the present invention provides a modulation pattern recognition method based on CNN and combined multiple high-order spectral feature images, comprising the following steps:
(1) receiving a radio frequency signal from a signal source, performing analog-to-digital conversion on the radio frequency signal to obtain a digital signal, and sequentially performing digital down-conversion and filtering processing on the digital signal to obtain I/Q data;
specifically, the Signal source may be a SMU200A type Vector Signal Generator (Vector Signal Generator) manufactured by ROHDE & SCHWARZ.
The digital down-conversion processing in this step adopts the following formula:
S_o(x) = S_i(x) · e^(-i·2π·f_c·t), with t = x / f_s

wherein S_i(x) represents the input signal, S_o(x) represents the output signal, i represents the imaginary unit, f_c is the carrier frequency, f_s is the sampling rate, and t is time;
specifically, the filtering process is a Finite Impulse Response (FIR) algorithm.
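As an editorial illustration of the down-conversion and filtering chain described above, here is a minimal NumPy sketch; the windowed-sinc low-pass design, tap count and cutoff are illustrative assumptions, not the patent's FIR design.

```python
import numpy as np

def digital_down_convert(s, fc, fs):
    """Mix the digitized signal to baseband:
    S_o(x) = S_i(x) * exp(-1j * 2*pi * fc * x / fs), i.e. t = x / fs."""
    x = np.arange(len(s))
    return s * np.exp(-1j * 2 * np.pi * fc * x / fs)

def fir_lowpass(iq, num_taps=63, cutoff=0.25):
    """Windowed-sinc FIR low-pass; cutoff is a fraction of the sample rate."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(2 * cutoff * n) * np.hamming(num_taps)
    h /= h.sum()                              # unit DC gain
    return np.convolve(iq, h, mode="same")

fs, fc = 100e3, 20e3
t = np.arange(2048) / fs
rf = np.cos(2 * np.pi * fc * t)               # unmodulated carrier as a toy input
iq = fir_lowpass(digital_down_convert(rf, fc, fs))
```

After mixing, the carrier becomes a DC term of 0.5 plus an image at twice the carrier frequency, which the low-pass filter removes.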
(2) Preprocessing the I/Q data obtained in the step (1) to obtain a combined high-order spectral image;
specifically, this step includes the following substeps:
(2-1) carrying out code element synchronization processing on the I/Q data obtained in the step (1) to obtain an I/Q data set with a single code rate;
specifically, the step first divides the I/Q data obtained in the step (1) into 6 paths according to the index sequences 1:6:end-5, 2:6:end-4, 3:6:end-3, 4:6:end-2, 5:6:end-1 and 6:6:end (where end represents the last index, end-1 the second-to-last, and so on), then performs amplitude accumulation on the 6 paths of data respectively to obtain an amplitude accumulated value corresponding to each path, and takes the path of I/Q data with the maximum amplitude accumulated value as the I/Q data set with a single code rate, the number of data points in the data set being 2048.
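The 6-path selection can be sketched in Python as follows; mapping the MATLAB-style slice a:6:end to iq[a-1::6] is an assumption about the notation, and the toy input is illustrative.

```python
import numpy as np

def select_single_rate(iq):
    """Split samples into 6 decimated phases (MATLAB a:6:end -> iq[a-1::6])
    and keep the phase with the largest accumulated amplitude."""
    phases = [iq[k::6] for k in range(6)]
    sums = [np.abs(p).sum() for p in phases]
    return phases[int(np.argmax(sums))]

rng = np.random.default_rng(0)
# toy oversampled signal whose symbol peaks all land on the third phase
sig = np.zeros(6 * 2048)
sig[2::6] = rng.choice([-1.0, 1.0], size=2048)
best = select_single_rate(sig)
```

The selected path carries one sample per symbol, here 2048 unit-amplitude values.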
(2-2) calculating m groups of high-order spectrums S1, S2, ..., Sm corresponding to the I/Q data set with the single code rate obtained in the step (2-1);
Specifically, m represents the number of integers in a randomly generated, incrementally arranged integer array E = {a1, a2, ..., am};
and Sm = 10 · log10(abs(fft(f^am)))
wherein f represents the I/Q data set of the single code rate obtained in the step (2-1), abs represents the absolute value, fft represents the fast Fourier transform, and the number of fft points is 2048;
specifically, the parameter n may be 2, 4, 6, 8, 12, 16, 24, or 32, and it should be understood that the present invention is not limited to the value of n, and any other value is included in the scope of the present invention.
This step calculates Sn by using the fast Fourier transform, and the formula is as follows:
Sn = 10 · log10(abs(fft(f^n)))
For example, if m is 8, then E is {2, 4, 6, 8, 12, 16, 24, 32}.
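As an illustration, the higher-order spectrum computation can be sketched in Python; reading fft(f^am) as the FFT of the signal raised to the am-th power is an interpretation of the notation, and the small epsilon guarding the logarithm is an addition for numerical safety.

```python
import numpy as np

def higher_order_spectrum(f, n, nfft=2048):
    """S_n = 10*log10(|FFT(f^n)|): raise the baseband signal to the n-th
    power, take a 2048-point FFT, and convert magnitude to dB."""
    return 10 * np.log10(np.abs(np.fft.fft(f ** n, nfft)) + 1e-12)

E = [2, 4, 6, 8, 12, 16, 24, 32]          # m = 8, as in the example
x = np.arange(2048)
f = np.exp(1j * 2 * np.pi * 0.125 * x)    # toy tone; a symbol stream in practice
spectra = [higher_order_spectrum(f, n) for n in E]
```

Raising the tone at normalized frequency 0.125 to the 2nd power moves its spectral peak to frequency 0.25 (FFT bin 512), and the 4th power moves it to bin 1024, which is the peak-position behavior the classifier exploits.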
(2-3) obtaining a combined high-order spectral image according to the m groups of high-order spectrums S1, S2, ..., Sm obtained in the step (2-2);
specifically, this step removes 124 data sampling points before and after each group of high-order spectrums, connects the front m/2 groups of high-order spectrums end to end to form one row and the rear m/2 groups end to end to form another row, so as to obtain a matrix whose rows have a data length of (2048 - 124 × 2) × m/2, and draws the matrix as a 64 × 64 combined high-order spectral image;
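The trimming and concatenation can be sketched as follows; the final resampling of the two-row matrix to a 64 × 64 grid is an illustrative assumption, since the patent draws the matrix as a 64 × 64 image rather than prescribing a resampling method.

```python
import numpy as np

def combined_image(spectra, trim=124, size=64):
    """Trim `trim` samples from each end of every spectrum, concatenate the
    first half of the groups into one row and the second half into another,
    then sample the 2-row matrix down to a size x size grid."""
    trimmed = [s[trim:-trim] for s in spectra]   # 2048 -> 1800 points each
    m = len(trimmed)
    row1 = np.concatenate(trimmed[: m // 2])
    row2 = np.concatenate(trimmed[m // 2:])
    mat = np.stack([row1, row2])                 # shape (2, 1800 * m/2)
    rows = np.linspace(0, mat.shape[0] - 1, size).round().astype(int)
    cols = np.linspace(0, mat.shape[1] - 1, size).round().astype(int)
    return mat[np.ix_(rows, cols)]               # nearest-neighbor sampling

spectra = [np.arange(2048, dtype=float) for _ in range(8)]
img = combined_image(spectra)
```

With m = 8 each row holds (2048 − 248) × 4 = 7200 values before sampling, and the result is a 64 × 64 array suitable as CNN input.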
The advantage of the steps (2-1) to (2-3) is that the positions and amplitudes of the high-order spectral peaks of the signal are used as the characteristic parameters for identifying the signal modulation pattern, and these characteristics distinguish different modulation patterns well. When the signal has a frequency deviation, the peak positions of the high-order spectrums only deviate from the center position to a certain extent, the presentation of the high-order spectral peak characteristics is not affected, and the CNN can still automatically extract the characteristics of different modulation patterns; this is clearly superior to methods using a constellation diagram as the classification feature.
(3) Acquiring the signal-to-noise ratio of the I/Q data obtained in the step (1), judging whether the signal-to-noise ratio is greater than or equal to a preset threshold value, if so, entering the step (4), otherwise, entering the step (5);
specifically, the preset threshold value ranges from 10dB to 15dB, and is preferably 13 dB.
(4) Inputting the combined high-order spectrum image obtained in the step (2) into a trained first convolution neural network model to obtain a modulation pattern recognition result, and ending the process;
(5) inputting the combined high-order spectrum image obtained in the step (2) into a trained second convolutional neural network model to obtain a modulation pattern recognition result, and ending the process.
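Steps (3) to (5) amount to a threshold dispatch between the two trained models; a sketch with stand-in classifiers (the names are illustrative, not from the patent):

```python
def select_model(snr_db, low_model, high_model, threshold_db=13):
    """Route to the model trained on the matching SNR range; the preset
    threshold is preferably 13 dB per the text."""
    return high_model if snr_db >= threshold_db else low_model

# Stand-ins for the two trained CNNs.
low_cnn = lambda image: "low-SNR model result"
high_cnn = lambda image: "high-SNR model result"

print(select_model(8, low_cnn, high_cnn)(None))   # handled by the low-SNR model
print(select_model(20, low_cnn, high_cnn)(None))  # handled by the high-SNR model
```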
Specifically, the modulation patterns include BPSK, QPSK, 8PSK, 16APSK, 32APSK, 16QAM, 32QAM, and the like.
As shown in fig. 2, the first convolutional neural network model and the second convolutional neural network model have the same structure, which is as follows:
layer 1 is a convolutional layer that receives an input 64 x 64 x 3 image and applies 32 two-dimensional convolution kernels of size 3 x 3 with a step size of 1 and zero padding of 1, and the layer output size is 64 x 64 x 32;
layer 2 is an activation layer with the ReLU activation function, and the layer output size is 64 x 64 x 32;
layer 3 is a convolutional layer that receives a 64 x 64 x 32 input and applies 32 two-dimensional convolution kernels of size 3 x 3 with a step size of 1 and no padding, and the layer output size is 62 x 62 x 32;
layer 4 is an activation layer with the ReLU activation function, and the layer output size is 62 x 62 x 32;
layer 5 is a pooling layer using 2 x 2 maximum pooling with a step size of 2 in both length and width, and the layer output size is 31 x 31 x 32;
layer 6 is a dropout layer with the ratio set to 0.25, and the layer output size is 31 x 31 x 32;
layer 7 is a convolutional layer that receives a 31 x 31 x 32 input and applies 64 two-dimensional convolution kernels of size 3 x 3 with a step size of 1 and zero padding of 1, and the layer output size is 31 x 31 x 64;
layer 8 is an activation layer with the ReLU activation function, and the layer output size is 31 x 31 x 64;
layer 9 is a convolutional layer that receives a 31 x 31 x 64 input and applies 64 two-dimensional convolution kernels of size 3 x 3 with a step size of 1 and no padding, and the layer output size is 29 x 29 x 64;
layer 10 is an activation layer with the ReLU activation function, and the layer output size is 29 x 29 x 64;
layer 11 is a pooling layer using 2 x 2 maximum pooling with a step size of 2 in both length and width, and the layer output size is 14 x 14 x 64;
layer 12 is a dropout layer with the ratio set to 0.25, and the layer output size is 14 x 14 x 64;
layer 13 is a flatten layer that receives a 14 x 14 x 64 input, and this layer has 12544 output nodes;
layer 14 is a fully connected layer with 512 output nodes;
layer 15 is an activation layer with the ReLU activation function and 512 output nodes;
layer 16 is a dropout layer with the ratio set to 0.5 and 512 output nodes;
layer 17 is a fully connected layer with p output nodes (p represents the number of modulation patterns and is a natural number between 2 and 32);
layer 18 is an activation layer with the softmax activation function and p output nodes;
the final output is the probabilities of the p modulation patterns.
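The layer sizes above can be checked with the standard convolution/pooling size formulas; a short trace (not code from the patent) confirms the 12544-node flatten:

```python
def conv_out(size, kernel=3, stride=1, pad=0):
    """Spatial size after a convolution: floor((size + 2*pad - kernel)/stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, window=2, stride=2):
    """Spatial size after max pooling."""
    return (size - window) // stride + 1

s = 64
s = conv_out(s, pad=1)  # layer 1: zero-padded conv -> 64
s = conv_out(s)         # layer 3: unpadded conv    -> 62
s = pool_out(s)         # layer 5: 2x2 max pool     -> 31
s = conv_out(s, pad=1)  # layer 7                   -> 31
s = conv_out(s)         # layer 9                   -> 29
s = pool_out(s)         # layer 11                  -> 14
flat = s * s * 64       # layer 13 flatten
print(s, flat)  # 14 12544
```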
The first convolution neural network model and the second convolution neural network model are obtained by training through the following steps:
(a1) using random numbers as the information source, generating modulation signals corresponding to a plurality of modulation patterns (such as BPSK, QPSK, 8PSK, 16APSK, 32APSK, 16QAM and 32QAM); for the modulation signal corresponding to each modulation pattern, generating signal samples at signal-to-noise ratios from 1 dB to 24 dB at 1 dB intervals, and performing filter shaping processing on the signal samples to obtain a plurality of signal samples corresponding to each modulation pattern, all of which form the I/Q data set corresponding to that modulation pattern;
specifically, in this step, the modulation signals corresponding to modulation patterns such as BPSK, QPSK, 8PSK, 16APSK, 32APSK, 16QAM and 32QAM are generated using the functions pskmod, qammod and apskmod in matlab software (version R2018b).
In this step, an additive white Gaussian noise (AWGN) model is used to generate signal samples with different signal-to-noise ratios.
In this step, a root-raised-cosine finite impulse response (FIR) filter is used to implement the shaping filtering.
The present invention has been simulated and tested with 7 modulation patterns (BPSK, QPSK, 8PSK, 16APSK, 32APSK, 16QAM and 32QAM); it should be understood that the present invention is not limited to the above modulation patterns, and any other modulation pattern falls within its protection scope.
In this embodiment, 200 signal samples are generated for each SNR, so 24 × 200 samples are generated for each modulation pattern;
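The sample-generation loop of step (a1) can be sketched in numpy (the patent uses matlab's pskmod and an AWGN model; the QPSK mapper and noise model below are illustrative equivalents, with shortened 256-sample signals to keep the sketch light):

```python
import numpy as np

rng = np.random.default_rng(1)

def qpsk_symbols(n, rng):
    """Random unit-power QPSK symbols, a stand-in for matlab's pskmod."""
    bits = rng.integers(0, 4, n)
    return np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

def add_awgn(sig, snr_db, rng):
    """Add complex white Gaussian noise for a target SNR in dB."""
    p_sig = np.mean(np.abs(sig) ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)
    noise = np.sqrt(p_noise / 2) * (rng.standard_normal(sig.shape)
                                    + 1j * rng.standard_normal(sig.shape))
    return sig + noise

# 200 samples at each of the 24 SNRs (1 dB..24 dB) -> 24*200 per pattern.
dataset = [add_awgn(qpsk_symbols(256, rng), snr, rng)
           for snr in range(1, 25) for _ in range(200)]
print(len(dataset))  # 4800
```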
(a2) preprocessing the I/Q data set corresponding to each modulation pattern obtained in step (a1) to obtain a plurality of combined high-order spectral images corresponding to each modulation pattern;
specifically, the process of this step is completely the same as the process of step (2) above, and is not described herein again.
(a3) Dividing the plurality of combined high-order spectral images corresponding to each modulation pattern obtained in step (a2) according to the signal-to-noise ratio range of the corresponding signal samples, 1 dB to 13 dB and 12 dB to 24 dB, to obtain a first combined high-order spectral image set (corresponding to the low signal-to-noise-ratio range) and a second combined high-order spectral image set (corresponding to the high signal-to-noise-ratio range), as shown in fig. 3.
The step (a3) has the advantage that dividing the data by high and low signal-to-noise-ratio ranges better highlights the characteristics of each high-order spectral line at different signal-to-noise ratios, so the CNN can better automatically extract the features of different modulation patterns under different signal-to-noise ratios.
(a4) Dividing the combined high-order spectral images in the first combined high-order spectral image set obtained in step (a3) into training set images and test set images at a ratio of 7:3, further dividing the test set images into training-test set images and verification set images at a ratio of 5:5, and inputting the training set images into the first convolutional neural network model;
(a5) updating and optimizing the weight parameters and the bias parameters of each layer in the first convolutional neural network model by using a back propagation algorithm to obtain an updated first convolutional neural network model;
(a6) iteratively training the first convolution neural network model updated in the step (a5) until the loss function of the first convolution neural network model reaches the minimum, thereby obtaining a first convolution neural network model which is preliminarily trained;
(a7) performing iterative verification on the preliminarily trained first convolutional neural network model using the training-test set images obtained in step (a4) until the resulting classification accuracy reaches the optimum, thereby obtaining the trained first convolutional neural network model.
In each iteration of this step, the verification set images obtained in step (a4) are used to verify the iteratively trained first convolutional neural network model; if, after the number of epochs exceeds half of a preset value (set to 40 in the present invention), the difference between the classification accuracy obtained on the verification set and the classification accuracy obtained from iterative verification on the training-test set images is greater than a preset threshold (whose value ranges from 0.05 to 0.2, preferably 0.1), overfitting is indicated, and the process returns to step (a5);
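The overfitting check described here reduces to a simple predicate; a sketch (the function name and argument layout are illustrative):

```python
def overfit_detected(epoch, train_acc, val_acc, total_epochs=40, threshold=0.1):
    """Past half the epoch budget (40 in the text), a validation accuracy more
    than `threshold` (0.05-0.2, preferably 0.1) below the training-test
    accuracy signals overfitting, triggering a return to step (a5)."""
    return epoch > total_epochs // 2 and (train_acc - val_acc) > threshold

print(overfit_detected(30, 0.95, 0.80))  # True: gap 0.15 > 0.1 after epoch 20
print(overfit_detected(10, 0.95, 0.80))  # False: still in the first half
```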
(a8) dividing the combined high-order spectral images in the second combined high-order spectral image set obtained in step (a3) into training set images and test set images at a ratio of 7:3, further dividing the test set images into training-test set images and verification set images at a ratio of 5:5, and inputting the training set images into the second convolutional neural network model;
(a9) updating and optimizing the weight parameter and the bias parameter of each layer in the second convolutional neural network model by using a back propagation algorithm to obtain an updated second convolutional neural network model;
(a10) iteratively training the second convolutional neural network model updated in the step (a9) until the loss function of the second convolutional neural network model reaches the minimum, thereby obtaining a preliminarily trained second convolutional neural network model;
(a11) performing iterative verification on the preliminarily trained second convolutional neural network model using the training-test set images obtained in step (a8) until the resulting classification accuracy reaches the optimum, thereby obtaining the trained second convolutional neural network model.
In each iteration of this step, the verification set images obtained in step (a8) are used to verify the iteratively trained second convolutional neural network model; if the difference between the classification accuracy obtained on the verification set and the classification accuracy obtained from iterative verification on the training-test set images is greater than a preset threshold (whose value ranges from 0.05 to 0.2, preferably 0.1), overfitting is indicated, and the process returns to step (a9). Specifically, the initial values of the weight parameters in steps (a5) and (a9) are random values drawn from a truncated normal distribution with a standard deviation of 0.1, the initial value of the bias parameters is set to 0, and the back-propagation algorithm uses the adaptive moment estimation (Adam) function as the optimizer, with the learning rate α set to 0.001, the exponential decay rate β1 for the momentum estimate set to 0.9, and the exponential decay rate β2 for the variance estimate set to 0.999.
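The initialization described here can be sketched in numpy; the two-standard-deviation truncation is the common convention and is an assumption, since the patent only gives the standard deviation:

```python
import numpy as np

def truncated_normal(shape, std=0.1, rng=None):
    """Draw weights from N(0, std^2), redrawing any sample that falls more
    than two standard deviations from the mean (assumed truncation rule)."""
    rng = rng or np.random.default_rng(0)
    w = rng.standard_normal(shape) * std
    while True:
        bad = np.abs(w) > 2 * std
        if not bad.any():
            return w
        w[bad] = rng.standard_normal(int(bad.sum())) * std

w = truncated_normal((3, 3, 3, 32))        # e.g. the first conv layer's kernels
b = np.zeros(32)                           # biases initialized to 0
adam = {"lr": 0.001, "beta1": 0.9, "beta2": 0.999}  # Adam settings from the text
print(w.shape, b.sum())
```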
Specifically, the loss function L of the first and second convolutional neural networks is a cross-entropy loss function and is equal to:
L = -(1/N) * Σ_{i=1}^{N} Σ_{k=1}^{P} y_{i,k} * log(t_{i,k}) + λ * Σ_{i=1}^{N} Σ_{k=1}^{P} W_{i,k}^2
wherein N represents the total number of training set images, t_{i,k} represents the prediction result after the ith training set image of the kth class is input into the convolutional neural network, y_{i,k} represents the true result corresponding to the ith training set image of the kth class, k ∈ [1, P], i ∈ [1, N], λ represents the degree of regularization and takes the value 0.007, and W_{i,k} represents the weight parameter applied to the ith training set image of the kth class when it is input into the convolutional neural network, which changes as the convolutional neural network is trained.
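A numpy sketch of this loss; the exact form of the regularization term is reconstructed from the symbol definitions above and is therefore an assumption:

```python
import numpy as np

def loss(t, y, weights, lam=0.007):
    """Cross-entropy over N images and P classes plus an L2 penalty with
    regularization degree lam = 0.007; t holds predicted probabilities,
    y the one-hot true labels."""
    n = t.shape[0]
    cross_entropy = -np.sum(y * np.log(t + 1e-12)) / n   # epsilon avoids log(0)
    l2 = lam * sum(np.sum(wt ** 2) for wt in weights)
    return cross_entropy + l2

t = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1]])  # predictions for 2 images, 3 classes
y = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])  # one-hot true labels
print(round(loss(t, y, [np.ones((2, 2))]), 4))  # 0.3179
```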
The classification accuracy curve at low signal-to-noise ratio (1 dB to 13 dB) obtained by training in the present invention is shown in fig. 4 and the loss function curve in fig. 5; from these curves, the final verification set accuracy is 93%, and the final verification set accuracy at high signal-to-noise ratio (12 dB to 24 dB) is 99%.
Results of the experiment
In the verification set, signal samples with signal-to-noise ratios from 1 dB to 10 dB are taken for verification; fig. 6 compares the modulation pattern recognition accuracy of the present method with that of prior-art methods under low signal-to-noise-ratio conditions.
Specifically, the first prior-art method is from Xiong Zha, Hua Peng, Xin Qin, Guang Li and Shan Yang, "A Deep Learning Framework for Signal Detection and Modulation Classification", Sensors 2019, 19, 4042.
The second prior-art method is from Fan Meng, Peng Chen, Lenan Wu and Xianbin Wang, "Automatic Modulation Classification: A Deep Learning Enabled Approach", IEEE Transactions on Vehicular Technology, Vol. 67, No. 11, November 2018.
In practical application, the signal source is set to signal-to-noise ratios from 1 dB to 22 dB at 3 dB intervals with a code rate of 1 MBd, and all signals are collected in turn by an FPGA processing module and identified by the low signal-to-noise-ratio model. Table 1 gives the recognition accuracy for the actual signal modulation patterns; it is very close to that of the simulated signals, which shows that the method has good generalization capability and robustness.
TABLE 1 actual signal modulation Pattern recognition accuracy
The above experiments show that the method achieves good recognition accuracy, has been verified on the recognition task for actual signals, and has high practical value.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A modulation pattern recognition method based on CNN and combination of multiple high-order spectral feature images is characterized by comprising the following steps:
(1) receiving a radio frequency signal from a signal source, performing analog-to-digital conversion on the radio frequency signal to obtain a digital signal, and sequentially performing digital down-conversion and filtering processing on the digital signal to obtain I/Q data;
(2) preprocessing the I/Q data obtained in the step (1) to obtain a combined high-order spectral image;
(3) acquiring the signal-to-noise ratio of the I/Q data obtained in the step (1), judging whether the signal-to-noise ratio is greater than or equal to a preset threshold value, if so, entering the step (4), otherwise, entering the step (5);
(4) inputting the combined high-order spectrum image obtained in the step (2) into a trained first convolution neural network model to obtain a modulation pattern recognition result, and ending the process;
(5) inputting the combined high-order spectrum image obtained in the step (2) into a trained second convolutional neural network model to obtain a modulation pattern recognition result, and ending the process.
2. The CNN-based modulation pattern recognition method for combining multiple higher-order spectral feature images according to claim 1,
the digital down-conversion treatment in the step (1) adopts the following formula:
S_o(x) = S_i(x) * e^(-i*2π*f_c*t), where t = x / f_s
wherein S_i(x) represents the input signal, S_o(x) represents the output signal, i represents the imaginary unit, f_c is the carrier frequency, f_s is the sampling rate, and t is time;
the filtering process in step (1) uses a finite impulse response (FIR) filter algorithm.
3. The CNN-based modulation pattern recognition method combining multiple higher order spectral feature images according to claim 1 or 2, wherein the step (2) comprises the sub-steps of:
(2-1) carrying out code element synchronization processing on the I/Q data obtained in the step (1) to obtain an I/Q data set with a single code rate;
(2-2) calculating the m groups of high-order spectra S1, S2, ..., Sm corresponding to the single-code-rate I/Q data set obtained in the step (2-1), where m represents the number of integers in a randomly generated integer array E arranged in increasing order, E = {a1, a2, ..., am}, and Sk = 10*log10(abs(fft(f^ak))) for each exponent ak, wherein f represents the single-code-rate I/Q data set obtained in the step (2-1), abs represents the absolute value, fft represents the fast Fourier transform, and the number of fft points is 2048;
(2-3) acquiring a combined high-order spectral image from the m groups of high-order spectra S1, S2, ..., Sm obtained in the step (2-2).
4. The CNN-based modulation pattern recognition method combining multiple higher-order spectral feature images according to any one of claims 1 to 3,
step (2-1) first divides the I/Q data obtained in step (1) into 6 paths according to the index slices 1:6:end-5, 2:6:end-4, 3:6:end-3, 4:6:end-2, 5:6:end-1 and 6:6:end, performs amplitude accumulation on each of the 6 paths of data to obtain the amplitude accumulated value corresponding to each path, and takes the path of I/Q data with the maximum amplitude accumulated value as the single-code-rate I/Q data set, the number of data in the set being 2048, wherein end represents the index of the last number in the I/Q data obtained in step (1);
and in step (2-3), 124 data sampling points are removed from the front and the back of each group of high-order spectra, the front m/2 groups of high-order spectra are connected end to end to form one row with a data length of (2048-248)*m/2, and the remaining m/2 groups are connected in the same manner, the result forming the combined high-order spectral image.
5. The modulation pattern recognition method based on CNN and combined multiple higher-order spectral feature images according to claim 1, wherein the first convolutional neural network model and the second convolutional neural network model have the same structure, and the specific structures of the two are as follows:
layer 1 is a convolutional layer that receives an input 64 x 64 x 3 image and applies 32 two-dimensional convolution kernels of size 3 x 3 with a step size of 1 and zero padding of 1, and the layer output size is 64 x 64 x 32;
layer 2 is an activation layer with the ReLU activation function, and the layer output size is 64 x 64 x 32;
layer 3 is a convolutional layer that receives a 64 x 64 x 32 input and applies 32 two-dimensional convolution kernels of size 3 x 3 with a step size of 1 and no padding, and the layer output size is 62 x 62 x 32;
layer 4 is an activation layer with the ReLU activation function, and the layer output size is 62 x 62 x 32;
layer 5 is a pooling layer using 2 x 2 maximum pooling with a step size of 2 in both length and width, and the layer output size is 31 x 31 x 32;
layer 6 is a dropout layer with the ratio set to 0.25, and the layer output size is 31 x 31 x 32;
layer 7 is a convolutional layer that receives a 31 x 31 x 32 input and applies 64 two-dimensional convolution kernels of size 3 x 3 with a step size of 1 and zero padding of 1, and the layer output size is 31 x 31 x 64;
layer 8 is an activation layer with the ReLU activation function, and the layer output size is 31 x 31 x 64;
layer 9 is a convolutional layer that receives a 31 x 31 x 64 input and applies 64 two-dimensional convolution kernels of size 3 x 3 with a step size of 1 and no padding, and the layer output size is 29 x 29 x 64;
layer 10 is an activation layer with the ReLU activation function, and the layer output size is 29 x 29 x 64;
layer 11 is a pooling layer using 2 x 2 maximum pooling with a step size of 2 in both length and width, and the layer output size is 14 x 14 x 64;
layer 12 is a dropout layer with the ratio set to 0.25, and the layer output size is 14 x 14 x 64;
layer 13 is a flatten layer that receives a 14 x 14 x 64 input, and this layer has 12544 output nodes;
layer 14 is a fully connected layer with 512 output nodes;
layer 15 is an activation layer with the ReLU activation function and 512 output nodes;
layer 16 is a dropout layer with the ratio set to 0.5 and 512 output nodes;
layer 17 is a fully connected layer with p output nodes, wherein p represents the number of modulation patterns;
layer 18 is an activation layer with the softmax activation function and p output nodes.
6. The modulation pattern recognition method based on CNN and combined multiple higher-order spectral feature images according to claim 1, wherein the first convolutional neural network model and the second convolutional neural network model are obtained by training through the following steps:
(a1) using a random number as a signal source, using a plurality of modulation patterns to generate modulation signals corresponding to the modulation patterns, generating signal samples under different signal-to-noise ratios for the modulation signals corresponding to each modulation pattern according to the signal-to-noise ratio from 1dB to 24dB and taking 1dB as an interval, and carrying out filter forming processing on the signal samples to obtain a plurality of signal samples corresponding to each modulation pattern, wherein all the signal samples form an I/Q data set corresponding to the modulation pattern;
(a2) preprocessing the I/Q data set corresponding to each modulation pattern obtained in step (a1) to obtain a plurality of combined high-order spectral images corresponding to each modulation pattern;
(a3) dividing the plurality of combined high-order spectral images corresponding to each modulation pattern obtained in step (a2) according to the signal-to-noise ratio ranges of 1 dB to 13 dB and 12 dB to 24 dB of the signal samples corresponding to the combined high-order spectral images, to obtain a first combined high-order spectral image set (corresponding to the low signal-to-noise-ratio range) and a second combined high-order spectral image set (corresponding to the high signal-to-noise-ratio range);
(a4) dividing the combined high-order spectral images in the first combined high-order spectral image set obtained in step (a3) into training set images and test set images at a ratio of 7:3, further dividing the test set images into training-test set images and verification set images at a ratio of 5:5, and inputting the training set images into the first convolutional neural network model;
(a5) updating and optimizing the weight parameters and the bias parameters of each layer in the first convolutional neural network model by using a back propagation algorithm to obtain an updated first convolutional neural network model;
(a6) iteratively training the first convolution neural network model updated in the step (a5) until the loss function of the first convolution neural network model reaches the minimum, thereby obtaining a first convolution neural network model which is preliminarily trained;
(a7) performing iterative verification on the preliminarily trained first convolutional neural network model using the training-test set images obtained in step (a4) until the resulting classification accuracy reaches the optimum, thereby obtaining the trained first convolutional neural network model;
(a8) dividing the combined high-order spectral images in the second combined high-order spectral image set obtained in step (a3) into training set images and test set images at a ratio of 7:3, further dividing the test set images into training-test set images and verification set images at a ratio of 5:5, and inputting the training set images into the second convolutional neural network model;
(a9) updating and optimizing the weight parameter and the bias parameter of each layer in the second convolutional neural network model by using a back propagation algorithm to obtain an updated second convolutional neural network model;
(a10) iteratively training the second convolutional neural network model updated in the step (a9) until the loss function of the second convolutional neural network model reaches the minimum, thereby obtaining a preliminarily trained second convolutional neural network model;
(a11) performing iterative verification on the preliminarily trained second convolutional neural network model using the training-test set images obtained in step (a8) until the resulting classification accuracy reaches the optimum, thereby obtaining the trained second convolutional neural network model.
7. The CNN-based modulation pattern recognition method for combining multiple higher-order spectral feature images according to claim 6,
in each iteration of step (a7), the verification set images obtained in step (a4) are used to verify the iteratively trained first convolutional neural network model; if, after the number of epochs exceeds half of the preset value, the difference between the classification accuracy obtained on the verification set and the classification accuracy obtained from iterative verification on the training-test set images is greater than a preset threshold, overfitting is indicated, and the process returns to step (a5);
and in each iteration of step (a11), the verification set images obtained in step (a8) are used to verify the iteratively trained second convolutional neural network model; if the difference between the classification accuracy obtained on the verification set and the classification accuracy obtained from iterative verification on the training-test set images is greater than the preset threshold, overfitting is indicated, and the process returns to step (a9).
8. The CNN-based modulation pattern recognition method combining multiple higher-order spectral feature images according to claim 6, wherein the initial values of the weight parameters in steps (a5) and (a9) are random values drawn from a truncated normal distribution with a standard deviation of 0.1, the initial value of the bias parameters is set to 0, and the back-propagation algorithm uses the adaptive moment estimation (Adam) function as the optimizer, with the learning rate α set to 0.001, the exponential decay rate β1 for the momentum estimate set to 0.9, and the exponential decay rate β2 for the variance estimate set to 0.999.
9. The CNN-based modulation pattern recognition method for combining multiple higher-order spectral feature images according to claim 6, wherein the loss functions L of the first and second convolutional neural networks are cross-entropy loss functions and are equal to:
L = -(1/N) * Σ_{i=1}^{N} Σ_{k=1}^{P} y_{i,k} * log(t_{i,k}) + λ * Σ_{i=1}^{N} Σ_{k=1}^{P} W_{i,k}^2
wherein N represents the total number of training set images, t_{i,k} represents the prediction result after the ith training set image of the kth class is input into the convolutional neural network, y_{i,k} represents the true result corresponding to the ith training set image of the kth class, k ∈ [1, P], i ∈ [1, N], λ represents the degree of regularization and takes the value 0.007, and W_{i,k} represents the weight parameter applied to the ith training set image of the kth class when it is input into the convolutional neural network, which changes as the convolutional neural network is trained.
10. A modulation pattern recognition system based on CNN and combination of multiple high-order spectral feature images is characterized by comprising:
the first module is used for receiving a radio frequency signal from a signal source, performing analog-to-digital conversion on the radio frequency signal to obtain a digital signal, and sequentially performing digital down-conversion and filtering processing on the digital signal to obtain I/Q data;
the second module is used for preprocessing the I/Q data obtained by the first module to obtain a combined high-order spectral image;
the third module is used for acquiring the signal-to-noise ratio of the I/Q data obtained by the first module and judging whether the signal-to-noise ratio is greater than or equal to a preset threshold value, if so, the fourth module is started, and if not, the fifth module is started;
the fourth module is used for inputting the combined high-order spectrum image obtained by the second module into the trained first convolution neural network model to obtain a modulation pattern recognition result, and the process is ended;
and the fifth module is used for inputting the combined high-order spectrum image obtained by the second module into the trained second convolutional neural network model to obtain a modulation pattern recognition result, and the process is ended.
CN202110782131.XA 2021-07-12 2021-07-12 Modulation pattern recognition method and system based on CNN and combined high-order spectrum image Active CN113542171B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110782131.XA CN113542171B (en) 2021-07-12 2021-07-12 Modulation pattern recognition method and system based on CNN and combined high-order spectrum image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110782131.XA CN113542171B (en) 2021-07-12 2021-07-12 Modulation pattern recognition method and system based on CNN and combined high-order spectrum image

Publications (2)

Publication Number Publication Date
CN113542171A true CN113542171A (en) 2021-10-22
CN113542171B CN113542171B (en) 2022-06-21

Family

ID=78098426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110782131.XA Active CN113542171B (en) 2021-07-12 2021-07-12 Modulation pattern recognition method and system based on CNN and combined high-order spectrum image

Country Status (1)

Country Link
CN (1) CN113542171B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114422310A (en) * 2022-01-21 2022-04-29 山东大学 Digital orthogonal modulation signal identification method based on joint distribution matrix and multi-input neural network
CN114615118A (en) * 2022-03-14 2022-06-10 中国人民解放军国防科技大学 Modulation identification method based on multi-terminal convolution neural network
CN114795258A (en) * 2022-04-18 2022-07-29 浙江大学 Child hip joint dysplasia diagnosis system
CN115314348A (en) * 2022-08-03 2022-11-08 电信科学技术第五研究所有限公司 Convolutional neural network-based QAM signal modulation identification method

Citations (8)

Publication number Priority date Publication date Assignee Title
CN106682574A (en) * 2016-11-18 2017-05-17 Harbin Engineering University One-dimensional deep convolutional network method for underwater multi-target recognition
CN107194404A (en) * 2017-04-13 2017-09-22 Harbin Engineering University Underwater target feature extraction method based on convolutional neural networks
CN109802905A (en) * 2018-12-27 2019-05-24 Xidian University Automatic modulation recognition method for digital signals based on CNN convolutional neural networks
CN110327055A (en) * 2019-07-29 2019-10-15 Guilin University of Electronic Technology Classification method for heart impact (ballistocardiogram) signals based on higher-order spectra and convolutional neural networks
CN110855591A (en) * 2019-12-09 2020-02-28 Shandong University Intra-class modulation classification method for QAM and PSK signals based on a convolutional neural network structure
CN111259798A (en) * 2020-01-16 2020-06-09 Xidian University Modulated signal identification method based on deep learning
CN111310700A (en) * 2020-02-27 2020-06-19 University of Electronic Science and Technology of China Intermediate frequency sampling sequence processing method for radiation source fingerprint feature identification
US20210081630A1 (en) * 2019-09-13 2021-03-18 Tektronix, Inc. Combined higher order statistics and artificial intelligence signal analysis

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MOEIN MIRMOHAMMADSADEGHI: "Modulation classification using convolutional neural networks and spatial transformer networks", 2017 51st Asilomar Conference on Signals, Systems, and Computers *
ZAN YIN: "The Performance Analysis of Signal Recognition Using Attention Based CNN Method", IEEE Reliability Society Section *
ZHANG HAO: "Research on key technologies of communication signal recognition based on deep learning", China Masters' Theses Full-text Database (Information Science and Technology) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114422310A (en) * 2022-01-21 2022-04-29 Shandong University Digital quadrature modulation signal identification method based on a joint distribution matrix and multi-input neural network
CN114422310B (en) * 2022-01-21 2023-12-22 Shandong University Digital quadrature modulation signal identification method based on a joint distribution matrix and multi-input neural network
CN114615118A (en) * 2022-03-14 2022-06-10 National University of Defense Technology Modulation identification method based on multi-terminal convolutional neural networks
CN114615118B (en) * 2022-03-14 2023-09-22 National University of Defense Technology Modulation identification method based on multi-terminal convolutional neural networks
CN114795258A (en) * 2022-04-18 2022-07-29 Zhejiang University Diagnosis system for developmental dysplasia of the hip in children
CN115314348A (en) * 2022-08-03 2022-11-08 The Fifth Research Institute of Telecommunications Technology Co., Ltd. QAM signal modulation identification method based on convolutional neural networks
CN115314348B (en) * 2022-08-03 2023-10-24 The Fifth Research Institute of Telecommunications Technology Co., Ltd. QAM signal modulation identification method based on convolutional neural networks

Also Published As

Publication number Publication date
CN113542171B (en) 2022-06-21

Similar Documents

Publication Publication Date Title
CN113542171B (en) Modulation pattern recognition method and system based on CNN and combined high-order spectrum image
CN108234370B (en) Communication signal modulation mode identification method based on convolutional neural network
Güner et al. Automatic digital modulation classification using extreme learning machine with local binary pattern histogram features
CN109890043B (en) Wireless signal noise reduction method based on generative countermeasure network
CN109672639B (en) Signal demodulation method based on machine learning
CN113094993B (en) Modulation signal denoising method based on self-coding neural network
CN110706181A (en) Image denoising method and system based on multi-scale expansion convolution residual error network
CN112380939B (en) Deep learning signal enhancement method based on generation countermeasure network
CN112733811B (en) Method for identifying underwater sound signal modulation modes based on improved dense neural network
CN113837959B (en) Image denoising model training method, image denoising method and system
CN109543643A (en) Carrier signal detection method based on one-dimensional full convolutional neural networks
CN114492522B (en) Automatic modulation classification method based on improved stacked hourglass neural network
CN112466320A (en) Underwater acoustic signal noise reduction method based on generation countermeasure network
CN114422311A (en) Signal modulation identification method and system combining deep neural network and expert prior characteristics
CN112737992A (en) Underwater sound signal modulation mode self-adaptive in-class identification method
CN116257752A (en) Signal modulation pattern recognition method
CN110927750A (en) Low-orbit satellite Doppler frequency offset capturing method based on lattice filtering Burg spectrum estimation algorithm
CN113962260A (en) Radar signal intelligent sorting method based on denoising depth residual error network
Juan-ping et al. Automatic modulation recognition of digital communication signals
CN109167744B (en) Phase noise joint estimation method
Limin et al. Low probability of intercept radar signal recognition based on the improved AlexNet model
CN116319210A (en) Signal lightweight automatic modulation recognition method and system based on deep learning
CN116896492A (en) Modulation and coding joint identification method and system based on multichannel attention network
CN115378776A (en) MFSK modulation identification method based on cyclic spectrum parameters
CN115913849A (en) Electromagnetic signal identification method based on one-dimensional complex value residual error network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant