CN109597043B - Radar signal identification method based on quantum particle swarm convolutional neural network - Google Patents


Info

Publication number
CN109597043B
CN109597043B (application CN201811366323.7A)
Authority
CN
China
Prior art keywords
convolutional neural
neural network
layer
output
particle swarm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811366323.7A
Other languages
Chinese (zh)
Other versions
CN109597043A (en)
Inventor
田雨波
赵毅
范箫鸿
夏俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University of Science and Technology
Original Assignee
Jiangsu University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University of Science and Technology
Priority to CN201811366323.7A
Publication of CN109597043A
Application granted
Publication of CN109597043B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/411 Identification of targets based on measurements of radar reflectivity
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses a radar signal identification method based on a quantum particle swarm convolutional neural network, which comprises the following steps: 1) training the convolutional neural network: radar signals with different modulation modes are collected, and the time-domain data are converted into frequency-domain data to obtain frequency-domain feature data sequences as training samples; the training samples are fed into a convolutional neural network for forward computation, and a quantum particle swarm algorithm is used to adjust the weights and thresholds of the convolutional neural network, yielding a trained convolutional neural network; 2) radar signal identification based on the trained convolutional neural network: the acquired radar signals to be identified are converted from the time domain to the frequency domain, the obtained frequency-domain data are fed into the convolutional neural network trained in step 1), and the modulation mode of the radar signal is output. Simulation experiments show that the method improves the recognition accuracy and efficiency for radar radiation source signals and provides a good solution to the radar radiation source recognition problem in increasingly complex electromagnetic environments.

Description

Radar signal identification method based on quantum particle swarm convolutional neural network
Technical Field
The invention relates to a radar signal identification method, in particular to a method for identifying different modulation modes of radar signals based on a quantum particle swarm algorithm and a convolutional neural network.
Background
Radar signal recognition is a key processing step in electronic intelligence (ELINT), electronic support measures (ESM) and radar warning receiver (RWR) systems. With the rapid development of modern electronic information technology, however, the electromagnetic environment is becoming increasingly dense and complex, new-system radars keep emerging, and radar signal waveforms are increasingly complicated, so traditional radar signal recognition methods can no longer meet the requirements of modern electronic warfare.
To address these problems, many researchers have proposed identification methods based on principal component analysis, support vector machines, spectral correlation, frequency-domain analysis, time-frequency analysis and the like. However, these methods cannot fully cope with the changeable, complex and dense environments and signal waveforms faced in current radar signal identification. They spend a great deal of time on signal feature extraction, the extraction results are often poor and the extracted features are not representative, and their recognition accuracy is unsatisfactory at low signal-to-noise ratio. Neural network (NN) techniques can avoid these drawbacks of traditional identification methods, but the core training algorithm of a traditional neural network is usually the error back-propagation (BP) algorithm, which easily traps the network in local optima, converges slowly, cannot guarantee generalization ability, and uses a complex computational model.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a method that combines a quantum-behaved particle swarm optimization (QPSO) algorithm with a convolutional neural network (CNN) model to identify radar signals of different modulation systems; the method can identify the radar signal modulation mode rapidly and accurately and has strong generalization ability.
The technical scheme is as follows: the invention discloses a radar signal identification method based on a quantum particle swarm convolutional neural network, which comprises the following steps:
1) Training convolutional neural networks: collecting radar signals with different modulation modes, and converting time domain data into frequency domain data to obtain a frequency domain characteristic data sequence as a training sample; sending the training sample into a convolutional neural network for forward calculation, using a quantum particle swarm algorithm to adjust the weight and the threshold of the convolutional neural network, and stopping iterative training when the iteration stopping condition is met to obtain a trained convolutional neural network;
2) Radar signal identification is carried out based on the trained convolutional neural network: performing time-frequency conversion on the acquired radar signals to be identified; and (3) sending the obtained frequency domain data into the convolutional neural network trained in the step 1), classifying the characteristics, and outputting a modulation mode of the radar signal.
Preferably, the step 1) and the step 2) transform the time domain data into the frequency domain data using a fast fourier transform.
Preferably, after the frequency domain data is obtained in the step 1), the method further includes denoising and normalizing the data, where the denoising formula is as follows:

[denoising formula provided as an image in the original publication]

wherein Y_t(n) represents the data set before denoising after the fast Fourier transform, A_f represents the mean value of the radar signal frequency-domain sequence after the fast Fourier transform, and Y_d(n) represents the denoised frequency-domain feature data sequence.
Preferably, the convolutional neural network in the step 1) is a one-dimensional seven-layer convolutional neural network model, and the basic structure of the convolutional neural network model comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a full-connection layer and an output layer which are sequentially connected.
The forward algorithm of the convolution layer is as follows:

y_n(L) = f( Σ_m Σ_Z x_m(Z)·ω_m(Z)n(L) + b_n )

In the above formula, y_n(L) represents the output value of the L-th neuron of output feature plane n in the convolution layer; x_m(Z) represents the Z-th neuron of input feature plane m; ω_m(Z)n(L) represents the weight from the Z-th neuron of input feature plane m to the L-th neuron of output feature plane n; b_n represents the threshold of the output feature plane, where the same output feature plane shares one threshold; and f(·) is a nonlinear activation function.
The forward algorithm of the pooling layer is as follows:

y_n(K) = f( x_n(p) )

In the above formula, y_n(K) represents the K-th neuron of the n-th output plane of the pooling layer, x_n(p) represents the p neurons of the n-th input plane in the corresponding pooling window, and f(·) is the pooling function.
Preferably, the process of using the quantum particle swarm algorithm to adjust the weights and thresholds of the convolutional neural network is as follows: a vector encoding strategy is adopted, each particle is encoded as a vector, and each vector represents the set of weights and thresholds that the convolutional neural network needs to train and adjust; the specific encoding mode is as follows:

particle_i = [ ω^1_k(l), …, ω^n_k(l), …, b_1, … ],  i = 1, 2, …, M

In the above formula, i represents the particle number, i = 1, 2, …, M, and ω^n_k(l) represents the convolution kernel weight from the k-th input neuron to the l-th output neuron of the n-th convolution layer. A single particle is encoded as a 1×N vector, where the dimension N equals the number of parameters of the whole convolutional neural network that need to be trained and optimized. Finally, the size M of the whole particle population is determined, and the whole particle population is encoded into the N×M matrix

X = [ particle_1^T, particle_2^T, …, particle_M^T ].

After the encoding strategy and the population matrix are determined, the particles can be mapped correspondingly to the weights and thresholds of all layers of the convolutional neural network.
Preferably, the quantum particle swarm algorithm is of the form:

p_id(t) = φ·pbest_id(t) + (1-φ)·gbest_d(t)   (1)

mbest = (1/M)·Σ_{i=1}^{M} pbest_i(t)   (2)

g(t) = p_g(t)/p_i(t)   (3)

k(t) = (k_0 - k_m)·exp(-(t/T)^2·g(t)) + k_m   (4)

x_id(t+1) = φ·δ_d + (1-φ)·gbest_d(t) ± k(t)·|mbest - x_id(t)|·ln(1/u)   (5)

In formula (1), p_id(t) is a random position of particle i between pbest_id and gbest; in formula (2), mbest represents the center point of the current optimal positions of all individuals; in formula (3), p_g(t) represents the optimal position of the population and p_i(t) represents the individual optimal position; in formula (4), k_0 represents the set initial value of the contraction-expansion coefficient, k_m represents its set final value, t represents the current iteration number, and T represents the maximum iteration number; in formula (5), x_id(t+1) is the position of the next generation of particle i, δ_d = X_k - X_j (i ≠ j ≠ k), gbest_d(t) is the global optimum of all particle positions of the current generation, x_id(t) is the current-generation position of particle i, and φ and u are random numbers uniformly distributed in (0, 1).
Preferably, the iteration stop condition in the step 1) includes the error of the convolutional neural network being smaller than a predetermined threshold value or a predetermined number of iterations being reached, wherein the error of the convolutional neural network is represented by the mean square error, and the formula is as follows:

Net.E = (1/N)·Σ_{i=1}^{N} Σ_{j=1}^{H} (d_ji - y_ji)^2

wherein N is the number of training samples input to the convolutional neural network, H is the number of output-layer neurons of the convolutional neural network, d_ji is the expected output of the j-th output node of the i-th sample, and y_ji is the actual network output of the j-th output node of the i-th sample.
The beneficial effects are as follows:
1. A method that combines the quantum particle swarm algorithm with a convolutional neural network to extract features of radar signal modulation modes is proposed for the first time. It can be applied to radar signal modulation recognition in complex electromagnetic environments; compared with traditional recognition approaches it reduces feature extraction time, improves the signal feature extraction effect, accelerates the convergence of the convolutional neural network, and offers a higher recognition rate, a more accurate recognition model and stronger generalization ability.
2. The invention further improves the traditional quantum particle swarm algorithm by introducing a differential evolution term and improving the update formula of the contraction-expansion coefficient, the most important quantity in the position formula. The improved algorithm converges faster, is less prone to falling into local optima and premature convergence, and optimizes more effectively than the traditional method.
Drawings
FIG. 1 is a flow chart of a method for identifying radar signals based on a quantum particle swarm convolutional neural network;
FIG. 2 is a diagram of training sample characteristics simulating seven common radar signal modulation modes in a battlefield according to an embodiment of the present invention;
fig. 3 is a block diagram of an improved quantum particle swarm convolutional neural network according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further described below with reference to the accompanying drawings.
Fig. 1 shows a flow chart of a radar signal identification method based on a quantum particle swarm convolutional neural network according to the invention, and as shown in the figure, the method mainly comprises two stages: training a convolutional neural network and identifying radar signals using the trained convolutional neural network, wherein the training of the convolutional neural network comprises: collecting radar signals with different modulation modes, and converting time domain data into frequency domain data to obtain a frequency domain characteristic data sequence as a training sample; and sending the training sample into the convolutional neural network for forward calculation, using a quantum particle swarm algorithm to adjust the weight and the threshold of the convolutional neural network, and stopping iterative training of the neural network when a preset iteration stopping condition is met, so as to obtain the trained convolutional neural network. In the radar signal recognition stage based on the trained convolutional neural network: performing time-frequency conversion and corresponding processing on the acquired radar signals to be identified; the obtained frequency domain data sequence is sent into a trained convolutional neural network to classify the characteristics, and a modulation mode of a radar signal is output. Since the main inventive concept of the present invention is included in the training process, the training process is described in detail with reference to examples.
(1) Acquisition of sample signals
Radar signals with different modulation modes are collected. The modulation modes currently common on the battlefield are the digital modulations BPSK, BFSK, QPSK and QFSK and the analog modulations LFM, NLFM and CW. The sample signals collected in this embodiment cover these seven modes, and the radar radiation source signal parameters are set as follows: the sampling frequency is 1 GHz and the number of sampling points is 500; the carrier frequencies of BPSK and QPSK are set to 100 MHz, BPSK uses a 13-bit Barker code, and QPSK uses a 16-bit Frank code; the four carrier frequencies of QFSK are 200 MHz, 400 MHz, 600 MHz and 800 MHz; the carrier frequencies of LFM, NLFM and CW are set to 100 MHz, and the LFM frequency offset is set to 40 MHz. It should be understood that the application of the method is not limited to these seven radar signal systems; the method is equally applicable when the radar employs signals with other modulation modes.
Fig. 2 is a diagram of radar signal characteristics of seven common modulation modes simulated by MATLAB software, and a time domain data set of seven modulation modes is obtained according to set parameters and a formula of each signal modulation mode. Wherein (a) - (g) correspond to QPSK, QFSK, NLFM, LFM, CW, BPSK, BFSK, respectively.
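For illustration, the following Python sketch shows how two of the seven waveforms could be generated with the stated parameters (1 GHz sampling, 500 points, 100 MHz carrier, 13-bit Barker code, 40 MHz LFM frequency offset). The chip timing of the BPSK pulse and the function names are illustrative assumptions; the embodiment itself generates the signals in MATLAB.

import numpy as np

FS = 1e9          # sampling frequency: 1 GHz, per the embodiment's settings
N_SAMPLES = 500   # number of sampling points per pulse

def lfm_signal(f0=100e6, bandwidth=40e6):
    """Linear frequency-modulated pulse: 100 MHz carrier, 40 MHz frequency offset."""
    t = np.arange(N_SAMPLES) / FS
    duration = N_SAMPLES / FS
    k = bandwidth / duration                        # chirp rate
    return np.cos(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

def bpsk_signal(f0=100e6):
    """BPSK pulse keyed by the 13-bit Barker code on a 100 MHz carrier."""
    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
    # Spread the 13 chips evenly over the 500 samples (illustrative chip timing).
    chips = barker13[(np.arange(N_SAMPLES) * len(barker13)) // N_SAMPLES]
    t = np.arange(N_SAMPLES) / FS
    return chips * np.cos(2 * np.pi * f0 * t)

if __name__ == "__main__":
    print(lfm_signal().shape, bpsk_signal().shape)   # (500,) (500,)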
(2) Preprocessing of sample signals
Transforming the obtained time domain radar radiation source signal into a frequency domain through Fast Fourier Transform (FFT), and denoising the generated frequency domain data set, wherein the denoising formula is as follows:
[denoising formula provided as an image in the original publication]

wherein Y_d(n) represents the denoised data set, Y_t(n) represents the data set before denoising after the FFT, and A_f represents the mean value of the radar signal frequency-domain sequence after the FFT. The denoised frequency-domain feature data sequence Y_d(n) is finally normalized, and the category of the modulation mode to which the feature sequence belongs is labeled; the result is used as a training and testing data sample.
(3) Convolutional neural network setup and training
The convolutional neural network model is the most widely used model in deep learning; it can establish a nonlinear complex mapping relation between the training set input X and the training set output y, and gives a predicted value for a test sample X' according to this mapping relation. The invention constructs a one-dimensional convolutional neural network model with seven network layers, in order: input layer, first convolution layer, first pooling layer, second convolution layer, second pooling layer, full-connection layer and output layer; the basic network structure is shown in Fig. 3. A 500×1 feature vector sequence is input. The first convolution layer performs convolution with 4 convolution kernels of size 25×1 to obtain 4 feature vectors of 476×1, which are then reduced in dimension by the first pooling layer with a 2×1 filter to obtain 4 feature vectors of 238×1. The second convolution layer applies a locally connected operation with 8 convolution kernels of size 25×1 to obtain 2 feature vectors of 214×1. Finally, the second pooling layer with a 2×1 filter yields 2 feature vectors of 107×1, which are integrated by the full-connection layer: the two 107×1 feature vectors are combined into one 214-dimensional feature vector and input to the output layer. The label category of the input sample feature vector is obtained through this computation.
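A minimal PyTorch sketch of this seven-layer structure is given below. The text quotes both 8 convolution kernels and 2 output feature vectors for the second convolution layer; the sketch uses 2 output channels so that the flattened size matches the stated 214, and 7 output classes follow the embodiment. This is an illustrative reading, not the patent's reference implementation.

import torch
import torch.nn as nn

class RadarCNN(nn.Module):
    """Seven-layer 1D CNN: input, conv, pool, conv, pool, full-connection, output."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=25),   # 500 -> 476, 4 feature maps
            nn.Sigmoid(),                      # Sigmoid activation per the embodiment
            nn.MaxPool1d(2),                   # 476 -> 238
            nn.Conv1d(4, 2, kernel_size=25),   # 238 -> 214, 2 feature maps (see note above)
            nn.Sigmoid(),
            nn.MaxPool1d(2),                   # 214 -> 107
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                      # combine the 2 x 107 maps into one 214-vector
            nn.Linear(214, 214),               # full-connection layer with ReLU activation
            nn.ReLU(),
            nn.Linear(214, n_classes),         # output layer: one score per modulation class
        )

    def forward(self, x):                      # x: (batch, 1, 500)
        return self.classifier(self.features(x))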
A batch of samples is input to the input layer of the convolutional neural network and propagated to the output layer through layer-by-layer forward computation, wherein the forward algorithm of the convolution layer is as follows:

y_n(L) = f( Σ_m Σ_Z x_m(Z)·ω_m(Z)n(L) + b_n )

In the above formula, y_n(L) represents the output value of the L-th neuron of output feature plane n in the convolution layer; x_m(Z) represents the Z-th neuron of input feature plane m; b_n represents the threshold of the output feature plane, where the same output feature plane shares one threshold; ω_m(Z)n(L) represents the weight from the Z-th neuron of input feature plane m to the L-th neuron of output feature plane n; and f(·) is a nonlinear activation function, for which the Sigmoid function is used in the embodiment.
The forward algorithm of the pooling layer is as follows:

y_n(K) = f( x_n(p) )

In the above formula, y_n(K) represents the K-th neuron of the n-th output plane of the pooling layer, x_n(p) represents the p neurons of the n-th input plane in the corresponding pooling window, and f(·) depends on the pooling scheme chosen by the network; the maximum function is employed in the embodiment.
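Written out directly, the convolution and pooling forward computations described above look roughly like the following NumPy sketch. It reads the weight ω_m(Z)n(L) as a shared sliding-window kernel (the usual CNN interpretation) and uses max pooling; both readings are assumptions where the patent's formulas are shown only as images.

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def conv1d_forward(x, w, b, f=sigmoid):
    """Convolution-layer forward pass: y[n, L] = f( sum_m sum_z w[n, m, z] * x[m, L+z] + b[n] ).
    x: (M, len_in) input feature planes; w: (N_out, M, K) kernels; b: (N_out,) thresholds."""
    n_out, n_in, k = w.shape
    len_out = x.shape[1] - k + 1
    y = np.zeros((n_out, len_out))
    for n in range(n_out):
        for L in range(len_out):
            y[n, L] = f(np.sum(w[n] * x[:, L:L + k]) + b[n])
    return y

def maxpool1d_forward(x, size=2):
    """Pooling-layer forward pass: each output neuron is the maximum of its window."""
    len_out = x.shape[1] // size
    return x[:, :len_out * size].reshape(x.shape[0], len_out, size).max(axis=2)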
The calculation of the full-connection layer is the same as the full-connection layer algorithm of a traditional neural network, and its activation function is the ReLU function. The learning rate of the neural network is set to 0.01, the maximum number of iterations is set to 1000, and the number of samples per training batch is 50. After data preprocessing, the training sample set is input into the whole convolutional neural network, and the weights and thresholds of the network are optimized and adjusted by continuously iterating the forward computation and applying the quantum particle swarm algorithm. When the set number of training iterations is reached or the error of the whole network reaches the expected value, training of the whole convolutional neural network is complete.
Quantum-behaved particle swarm optimization (QPSO) is inspired by quantum mechanics and the standard particle swarm optimization (PSO) algorithm. QPSO treats the particles as having quantum behavior: the position vector and the velocity vector cannot both be determined exactly at the same time, so QPSO uses no velocity vector; the state of a particle is described by a wave function, and the position equation is obtained by solving the Schrödinger equation and by Monte Carlo random simulation. The improvement of QPSO made in the invention and the execution process of the improved algorithm are described below.
1) Initializing position information for vector-encoded populations
The invention improves the standard quantum particle swarm algorithm by introducing a differential evolution operator and improving the update formula of the contraction-expansion coefficient, the most important quantity in the position formula. On the one hand, in the later stage of the search the swarm moves within a small region, loses diversity and falls into local optima, which motivates the differential evolution operator; on the other hand, the contraction-expansion coefficient strongly affects the convergence speed and accuracy of the algorithm, and the traditional update formula adapts poorly to the different stages of the algorithm, so the update formula is improved. The specific update formulas are as follows:
p_id(t) = φ·pbest_id(t) + (1-φ)·gbest_d(t)   (1)

mbest = (1/M)·Σ_{i=1}^{M} pbest_i(t)   (2)

g(t) = p_g(t)/avgp_i(t)   (3)

k(t) = (k_0 - k_m)·exp(-(t/T)^2·g(t)) + k_m   (4)

x_id(t+1) = φ·δ_d + (1-φ)·gbest_d(t) ± k(t)·|mbest - x_id(t)|·ln(1/u)   (5)

In formula (1), p_id(t) is a random position of particle i between pbest_id and gbest, where pbest represents the individual optimal position and gbest represents the group optimal position; in formula (2), mbest represents the center point of the current optimal positions of all individuals; in formula (3), p_g(t) represents the global optimum of the current particles, avgp_i(t) represents the average of all the particles' individual extrema, and g(t), the ratio of the two, is used in formula (4); in formula (4), k(t) is the contraction-expansion coefficient, k_0 represents its set initial value, k_m represents its set final value, t represents the current iteration number, and T represents the maximum iteration number; in formula (5), x_id(t+1) is the position of the next generation of particle i, δ_d = X_k - X_j (i ≠ j ≠ k), where k and j denote two positions randomly selected from the population that differ from the current position i, gbest_d(t) is the global optimum of all particle positions of the current generation, x_id(t) is the current-generation position of particle i, and φ and u are random numbers uniformly distributed in (0, 1).
The initial parameters of the improved quantum-behaved particle swarm optimization (IQPSO) algorithm are then set, namely the initial contraction-expansion coefficient value k_0, the final contraction-expansion coefficient value k_m and the maximum number of iterations T.
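One generation of the improved position update of equations (1)-(5) can be sketched in Python as follows. Interpreting p_g(t) and avgp_i(t) in equation (3) as fitness values (the global best fitness and the mean of the individual best fitnesses), and drawing φ, u and the ± sign per dimension, are assumptions made for this sketch.

import numpy as np

def iqpso_update(X, pbest, gbest, pbest_fit, t, T, k0=1.0, km=0.5, rng=np.random):
    """One generation of the improved position update, equations (1)-(5).
    X: (M, N) particle positions; pbest: (M, N) individual best positions;
    gbest: (N,) global best position; pbest_fit: (M,) fitness of each pbest (MSE, minimized)."""
    M, N = X.shape
    mbest = pbest.mean(axis=0)                      # eq. (2): mean of all individual bests
    g = pbest_fit.min() / pbest_fit.mean()          # eq. (3): global optimum / average individual extremum (assumed fitness reading)
    k = (k0 - km) * np.exp(-(t / T) ** 2 * g) + km  # eq. (4): contraction-expansion coefficient
    X_new = np.empty_like(X)
    for i in range(M):
        # Differential-evolution term delta_d = X_k - X_j with j, k random and distinct from i.
        j, kk = rng.choice([idx for idx in range(M) if idx != i], size=2, replace=False)
        delta = X[kk] - X[j]
        phi, u = rng.rand(N), rng.rand(N)
        sign = np.where(rng.rand(N) < 0.5, 1.0, -1.0)
        # eq. (5): next-generation position of particle i.
        X_new[i] = phi * delta + (1 - phi) * gbest + sign * k * np.abs(mbest - X[i]) * np.log(1.0 / u)
    return X_new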
After the formulas and initial parameters of the IQPSO algorithm are set, the encoding mode of the improved quantum particle swarm particles is determined; common encoding modes are matrix encoding and vector encoding. In the embodiment, a vector encoding strategy is used: each particle is encoded as a vector, and each vector represents the set of weights and thresholds that the convolutional neural network needs to train and adjust. The specific encoding mode is as follows:

particle_i = [ ω^1_k(l), …, ω^n_k(l), …, b_1, … ],  i = 1, 2, …, M

In the above formula, i represents the particle number and ω^n_k(l) represents the convolution kernel weight from the k-th input neuron to the l-th output neuron of the n-th convolution layer. A single particle is encoded as a 1×N vector, where the dimension N equals the number of parameters of the whole convolutional neural network that need to be trained and optimized. Finally, the size M of the whole particle population is determined, and the whole particle population is encoded into the N×M matrix

X = [ particle_1^T, particle_2^T, …, particle_M^T ].
2) The particles are decoded in sequence and assigned to the corresponding parameters of each layer
After the coding strategy and the population matrix are determined, the particles can be mapped to the weight and the threshold value of each layer of the convolutional neural network correspondingly by positioning the parameters of different network layers at different positions.
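The mapping between particles and network parameters can be sketched as a simple flatten/unflatten pair; it assumes the RadarCNN sketch given earlier, and the initialization range of the population matrix is illustrative.

import numpy as np
import torch

def encode_particle(model):
    """Flatten all trainable weights and thresholds of the network into one 1 x N vector."""
    return np.concatenate([p.detach().cpu().numpy().ravel() for p in model.parameters()])

def decode_particle(model, particle):
    """Assign successive slices of a particle vector back to the matching layer parameters."""
    offset = 0
    with torch.no_grad():
        for p in model.parameters():
            n = p.numel()
            p.copy_(torch.from_numpy(particle[offset:offset + n]).view_as(p).float())
            offset += n
    return model

# Population matrix: M particles of dimension N (all CNN weights and thresholds).
# model = RadarCNN(); N = encode_particle(model).size
# population = np.random.uniform(-0.5, 0.5, size=(20, N))   # illustrative size and range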
3) Calculating the mean square error of the assigned network and using the mean square error as the fitness
Then, the Mean Square Error (MSE) of the convolutional neural network is calculated, and the mean square error formula is as follows:
Net.E = (1/N)·Σ_{i=1}^{N} Σ_{j=1}^{H} (d_ji - y_ji)^2

wherein N is the number of preprocessed radar feature-sequence samples input to the convolutional neural network, H is the number of output-layer neurons of the convolutional neural network, i.e. the number of classification categories, d_ji is the expected output class value of the j-th output node for the i-th sample, and y_ji is the actual network output class value of the j-th output node for the i-th sample.
The mean square error Net.E of the network is taken as the fitness function of the particle swarm. The individual extrema and the global extremum of the particles are updated according to the fitness, the position information of the particles is then updated according to the IQPSO update formulas, and the group of particles that minimizes the mean square error of the whole convolutional neural network is sought as the final weights and thresholds of the convolutional neural network model, so that the model is accurate and has strong generalization ability.
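Tying the earlier sketches together, a minimal QPSO-driven training loop could look like the following; the population size, initialization range and stop threshold are illustrative values, not the patent's settings.

import numpy as np
import torch

def fitness_mse(model, X_batch, D_batch):
    """Mean square error Net.E between expected outputs D and actual network outputs."""
    with torch.no_grad():
        Y = model(X_batch)                              # (n_samples, H) actual outputs
    return torch.mean((D_batch - Y) ** 2).item()

def train_qpso_cnn(model, X_batch, D_batch, M=20, T=1000, target_error=1e-3):
    """QPSO-driven training: evaluate fitness, keep pbest/gbest, move the particles."""
    N = encode_particle(model).size
    population = np.random.uniform(-0.5, 0.5, size=(M, N))
    pbest = population.copy()
    pbest_fit = np.array([fitness_mse(decode_particle(model, p), X_batch, D_batch)
                          for p in population])
    for t in range(T):
        gbest = pbest[pbest_fit.argmin()]
        if pbest_fit.min() < target_error:              # iteration-stop condition
            break
        population = iqpso_update(population, pbest, gbest, pbest_fit, t, T)
        for i, p in enumerate(population):
            f = fitness_mse(decode_particle(model, p), X_batch, D_batch)
            if f < pbest_fit[i]:                        # keep the better individual extremum
                pbest_fit[i], pbest[i] = f, p
    return decode_particle(model, pbest[pbest_fit.argmin()])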
(4) Detecting reliability of training model
The training samples are input into the convolutional neural network model trained in step (3) to obtain the classification and identification output, which is compared with the expected classification of the training samples. If the error is smaller than the precision requirement, an accurate convolutional neural network model is considered to have been obtained; if the error is greater than the precision requirement, the optimal particles and the accurate solution are added to the original training samples so as to update and improve the precision of the convolutional neural network model until the requirement is met.
(5) Testing training model accuracy
The test samples are input into the trained convolutional neural network model and compared with their expected values; whether the difference between the network's test output classification and the expected classification lies within the error range is judged, and the mean square error of the network and the recognition classification accuracy of the method are calculated. The modulation recognition accuracy for the seven radar signals in the final test experiment is shown in the following table:
[table of recognition accuracies for the seven modulation types, reproduced as an image in the original publication]
the experimental result shows that even under the condition of low signal-to-noise ratio, the accuracy of the identification of various signal modulation modes is over 93 percent on average, and under the condition of high signal-to-noise ratio, the characteristics of various modulation signals are obviously different, so that the network can accurately distinguish and identify various modulation modes. Overall, the method for combining the autonomous improved quantum particle swarm algorithm and the convolutional neural network model can improve the model accuracy, prevent the model from sinking into local optimum, strengthen the generalization capability of the network, and achieve good recognition effect on the radar signal radiation source recognition problem under the increasingly complex electromagnetic environment.

Claims (7)

1. The radar signal identification method based on the quantum particle swarm convolutional neural network is characterized by comprising the following steps of:
1) Training convolutional neural networks: collecting radar signals with different modulation modes, and converting time domain data into frequency domain data to obtain a frequency domain characteristic data sequence as a training sample; sending the training sample into a convolutional neural network for forward calculation, using a quantum particle swarm algorithm to adjust the weight and the threshold of the convolutional neural network, and stopping iterative training when the iteration stopping condition is met to obtain a trained convolutional neural network;
2) Radar signal identification is carried out based on the trained convolutional neural network: performing time-frequency conversion on the acquired radar signals to be identified; the obtained frequency domain data is sent into the convolutional neural network trained in the step 1), the characteristics are classified, and the modulation mode of the radar signal is output;
the quantum particle swarm algorithm in the step 1) is as follows:
p_id(t) = φ·pbest_id(t) + (1-φ)·gbest_d(t)   (1)

mbest = (1/M)·Σ_{i=1}^{M} pbest_i(t)   (2)

g(t) = p_g(t)/p_i(t)   (3)

k(t) = (k_0 - k_m)·exp(-(t/T)^2·g(t)) + k_m   (4)

x_id(t+1) = φ·δ_d + (1-φ)·gbest_d(t) ± k(t)·|mbest - x_id(t)|·ln(1/u)   (5)

in formula (1), p_id(t) is a random position of particle i between pbest_id and gbest; in formula (2), mbest represents the center point of the current optimal positions of all individuals; in formula (3), p_g(t) represents the optimal position of the population and p_i(t) represents the individual optimal position; in formula (4), k_0 represents the set initial value of the contraction-expansion coefficient, k_m represents its set final value, t represents the current iteration number, and T represents the maximum iteration number; in formula (5), x_id(t+1) is the position of the next generation of particle i, δ_d = X_k - X_j (i ≠ j ≠ k), gbest_d(t) is the global optimum of all particle positions of the current generation, x_id(t) is the current-generation position of particle i, and φ and u are random numbers uniformly distributed in (0, 1);
the process of using the quantum particle swarm algorithm to adjust the weight and threshold of the convolutional neural network is as follows: each particle is encoded into a vector by adopting a vector encoding strategy, and each vector represents a set of weights or thresholds required to be trained and adjusted by a convolutional neural network, and the specific encoding mode is as follows:
particle_i = [ ω^1_k(l), …, ω^n_k(l), …, b_1, … ],  i = 1, 2, …, M

in the above formula, i represents the particle number, i = 1, 2, …, M, and ω^n_k(l) represents the convolution kernel weight from the k-th input neuron to the l-th output neuron of the n-th convolution layer; a single particle is encoded as a 1×N vector, where the dimension N equals the number of parameters of the whole convolutional neural network that need to be trained and optimized; finally, the size M of the whole particle population is determined, and the whole particle population is encoded into the N×M matrix

X = [ particle_1^T, particle_2^T, …, particle_M^T ]

after the encoding strategy and the population matrix are determined, the particles can be mapped correspondingly to the weights and thresholds of all layers of the convolutional neural network.
2. The method for identifying radar signals based on the quantum particle swarm convolutional neural network according to claim 1, wherein the steps 1) and 2) use fast fourier transform to transform time domain data into frequency domain data.
3. The method for identifying radar signals based on the quantum particle swarm convolutional neural network according to claim 2, wherein after the frequency domain data is obtained in the step 1), the method further comprises denoising and normalizing the data, wherein a denoising formula is as follows:
[denoising formula provided as an image in the original publication]

wherein Y_t(n) represents the data set before denoising after the fast Fourier transform, A_f represents the mean value of the radar signal frequency-domain sequence after the fast Fourier transform, and Y_d(n) represents the denoised frequency-domain feature data sequence.
4. The radar signal identification method based on the quantum particle swarm convolutional neural network according to claim 1, wherein the convolutional neural network in the step 1) is a one-dimensional seven-layer convolutional neural network model, and the basic structure of the convolutional neural network model comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a full-connection layer and an output layer which are sequentially connected, and when in use, a batch of samples are input to the input layer of the convolutional neural network, and are transmitted to the output layer through layer-by-layer forward computation.
5. The method for identifying radar signals based on the quantum particle swarm convolutional neural network according to claim 4, wherein the forward algorithm of the convolutional layer in the step 1) is as follows:
y_n(L) = f( Σ_m Σ_Z x_m(Z)·ω_m(Z)n(L) + b_n )

in the above formula, y_n(L) represents the output value of the L-th neuron of output feature plane n in the convolution layer; x_m(Z) represents the Z-th neuron of input feature plane m; ω_m(Z)n(L) represents the weight from the Z-th neuron of input feature plane m to the L-th neuron of output feature plane n; b_n represents the threshold of the output feature plane, where the same output feature plane shares one threshold; and f(·) is a nonlinear activation function.
6. The method for identifying radar signals based on the quantum particle swarm convolutional neural network according to claim 4, wherein the forward algorithm of the pooling layer in step 1) is as follows:
y_n(K) = f( x_n(p) )

in the above formula, y_n(K) represents the K-th neuron of the n-th output plane of the pooling layer, x_n(p) represents the p neurons of the n-th input plane in the corresponding pooling window, and f(·) is the pooling function.
7. The method for identifying radar signals based on the quantum particle swarm convolutional neural network according to claim 1, wherein the iteration stop condition in the step 1) includes that the error of the convolutional neural network is smaller than a predetermined threshold value or reaches a predetermined iteration number, wherein the error of the convolutional neural network is represented by a mean square error, and the formula is as follows:
Net.E = (1/N)·Σ_{i=1}^{N} Σ_{j=1}^{H} (d_ji - y_ji)^2

wherein N is the number of training samples input to the convolutional neural network, H is the number of output-layer neurons of the convolutional neural network, d_ji is the expected output of the j-th output node of the i-th sample, and y_ji is the actual network output of the j-th output node of the i-th sample.
CN201811366323.7A 2018-11-16 2018-11-16 Radar signal identification method based on quantum particle swarm convolutional neural network Active CN109597043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811366323.7A CN109597043B (en) 2018-11-16 2018-11-16 Radar signal identification method based on quantum particle swarm convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811366323.7A CN109597043B (en) 2018-11-16 2018-11-16 Radar signal identification method based on quantum particle swarm convolutional neural network

Publications (2)

Publication Number Publication Date
CN109597043A CN109597043A (en) 2019-04-09
CN109597043B (en) 2023-05-26

Family

ID=65957653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811366323.7A Active CN109597043B (en) 2018-11-16 2018-11-16 Radar signal identification method based on quantum particle swarm convolutional neural network

Country Status (1)

Country Link
CN (1) CN109597043B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263810B (en) * 2019-05-17 2021-04-13 西北大学 LoRa signal source identification method
CN110222748B (en) * 2019-05-27 2022-12-20 西南交通大学 OFDM radar signal identification method based on 1D-CNN multi-domain feature fusion
TWI698102B (en) * 2020-01-06 2020-07-01 財團法人資訊工業策進會 Threat detection system for mobile communication system, and global device and local device thereof
CN111474955B (en) * 2020-04-22 2023-11-14 上海特金信息科技有限公司 Identification method, device and equipment for unmanned aerial vehicle graph signaling system and storage medium
CN111507299A (en) * 2020-04-24 2020-08-07 中国人民解放军海军航空大学 Method for identifying STBC (space time Block coding) signal on frequency domain by using convolutional neural network
CN111680737B (en) * 2020-06-03 2023-03-24 西安电子科技大学 Radar radiation source individual identification method under differential signal-to-noise ratio condition
CN111680666B (en) * 2020-06-30 2023-03-24 西安电子科技大学 Under-sampling frequency hopping communication signal deep learning recovery method
CN112039820B (en) * 2020-08-14 2022-06-21 哈尔滨工程大学 Communication signal modulation and identification method for quantum image group mechanism evolution BP neural network
CN112187413B (en) * 2020-08-28 2022-05-03 中国人民解放军海军航空大学航空作战勤务学院 SFBC (Small form-factor Block code) identifying method and device based on CNN-LSTM (convolutional neural network-Link State transition technology)
CN112034434B (en) * 2020-09-04 2022-05-20 中国船舶重工集团公司第七二四研究所 Radar radiation source identification method based on sparse time-frequency detection convolutional neural network
CN112134818A (en) * 2020-09-23 2020-12-25 青岛科技大学 Underwater sound signal modulation mode self-adaptive in-class identification method
CN112308194B (en) * 2020-09-24 2022-06-21 广西大学 Quantum migration parallel multilayer Monte Carlo doubly-fed fan parameter optimization method
CN113221200B (en) * 2021-04-15 2022-10-25 哈尔滨工程大学 Three-dimensional efficient random arrangement method suitable for uncertainty analysis of reactor core particle distribution
CN113159179B (en) * 2021-04-22 2023-04-18 中车株洲电力机车有限公司 Subway and subway bogie running state identification method and system
CN116738376B (en) * 2023-07-06 2024-01-05 广东筠诚建筑科技有限公司 Signal acquisition and recognition method and system based on vibration or magnetic field awakening

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0845686A2 (en) * 1996-11-29 1998-06-03 Alcatel Device and method for automatic target classification
CN107220606A (en) * 2017-05-22 2017-09-29 西安电子科技大学 The recognition methods of radar emitter signal based on one-dimensional convolutional neural networks
CN107301381A (en) * 2017-06-01 2017-10-27 西安电子科技大学昆山创新研究院 Recognition Method of Radar Emitters based on deep learning and multi-task learning strategy
CN108182450A (en) * 2017-12-25 2018-06-19 电子科技大学 A kind of airborne Ground Penetrating Radar target identification method based on depth convolutional network
CN108564006A (en) * 2018-03-26 2018-09-21 西安电子科技大学 Based on the polarization SAR terrain classification method from step study convolutional neural networks
CN108805039A (en) * 2018-04-17 2018-11-13 哈尔滨工程大学 The Modulation Identification method of combination entropy and pre-training CNN extraction time-frequency image features

Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
Convolutional neural network target recognition algorithm with PCA pre-training; Shi Hehuan et al.; Journal of Xidian University; 2015-07-27 (No. 3); full text *
An improved convolutional neural network SAR target recognition algorithm; Xu Qiang et al.; Journal of Xidian University; 2018-03-21 (No. 5); full text *
Quantum-behaved particle swarm optimization based on differential evolution operator and chaotic disturbance; Tian Yubo et al.; Journal of Jiangsu University of Science and Technology (Natural Science Edition); 2011-04 (No. 2); abstract, page 159 paragraphs 5-7, page 160 paragraphs 1-2 *
Quantum-behaved particle swarm optimization algorithm based on differential evolution operator and its application; Fang Wei et al.; Journal of System Simulation; 2008-12-20 (No. 24); full text *
Network traffic prediction based on a BP network trained by improved QPSO; Wang Peng et al.; Application Research of Computers; 2009-01-15 (No. 1); full text *
Bearing fault diagnosis method based on short-time Fourier transform and convolutional neural network; Li Heng et al.; Journal of Vibration and Shock; 2018-10-15 (No. 19); full text *
Application of a particle swarm optimization based neural network algorithm to emitter feature clustering; Sun Yufu et al.; Shipboard Electronic Countermeasure; 2010-06-25 (No. 3); full text *
Gesture recognition method based on skin color features and convolutional neural network; Yang Wenbin et al.; Journal of Chongqing Technology and Business University (Natural Science Edition); 2018-07-09 (No. 4); full text *
Research on steam turbine vibration fault diagnosis with a BP neural network optimized by quantum-behaved particle swarm optimization; Peng Shuangfei; Electronics Quality; 2010-08-20 (No. 8); abstract, page 21 paragraphs 3-6 *
Power quality disturbance identification based on an artificial neural network optimized by quantum-behaved particle swarm optimization; Yang Genghuang et al.; Proceedings of the CSEE; 2008-04-05 (No. 10); full text *

Also Published As

Publication number Publication date
CN109597043A (en) 2019-04-09

Similar Documents

Publication Publication Date Title
CN109597043B (en) Radar signal identification method based on quantum particle swarm convolutional neural network
CN107220606B (en) Radar radiation source signal identification method based on one-dimensional convolutional neural network
CN109993280B (en) Underwater sound source positioning method based on deep learning
CN110361778B (en) Seismic data reconstruction method based on generation countermeasure network
CN110222748B (en) OFDM radar signal identification method based on 1D-CNN multi-domain feature fusion
CN110133599B (en) Intelligent radar radiation source signal classification method based on long-time and short-time memory model
CN109471074B (en) Radar radiation source identification method based on singular value decomposition and one-dimensional CNN network
CN114595732B (en) Radar radiation source sorting method based on depth clustering
CN112966667B (en) Method for identifying one-dimensional distance image noise reduction convolution neural network of sea surface target
CN109766791B (en) Communication signal modulation identification method based on self-encoder
CN105117736B (en) Classification of Polarimetric SAR Image method based on sparse depth heap stack network
CN113673312B (en) Deep learning-based radar signal intra-pulse modulation identification method
CN111126134A (en) Radar radiation source deep learning identification method based on non-fingerprint signal eliminator
CN112861066B (en) Machine learning and FFT (fast Fourier transform) -based blind source separation information source number parallel estimation method
CN113759323B (en) Signal sorting method and device based on improved K-Means joint convolution self-encoder
CN112597820A (en) Target clustering method based on radar signal sorting
CN112036239A (en) Radar signal working mode identification method and system based on deep learning network
CN110933633A (en) Onboard environment indoor positioning method based on CSI fingerprint feature migration
CN110889207B (en) Deep learning-based intelligent assessment method for credibility of system combination model
Zhao et al. A novel aggregated multipath extreme gradient boosting approach for radar emitter classification
CN113111786A (en) Underwater target identification method based on small sample training image convolutional network
Bakrania et al. Using dimensionality reduction and clustering techniques to classify space plasma regimes
CN112328965A (en) Method for multi-maneuvering-signal-source DOA tracking by using acoustic vector sensor array
CN108834043B (en) Priori knowledge-based compressed sensing multi-target passive positioning method
CN116243248A (en) Multi-component interference signal identification method based on multi-label classification network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant