CN108449295A - Combined modulation recognition methods based on RBM networks and BP neural network - Google Patents

Combined modulation recognition methods based on RBM networks and BP neural network Download PDF

Info

Publication number
CN108449295A
CN108449295A (application CN201810113576.7A)
Authority
CN
China
Prior art keywords
layer
rbm
neural network
network
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810113576.7A
Other languages
Chinese (zh)
Inventor
李文刚
艾灿
王屹伟
钱天蓉
黄辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunshan Innovation Institute of Xidian University
Original Assignee
Kunshan Innovation Institute of Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunshan Innovation Institute of Xidian University filed Critical Kunshan Innovation Institute of Xidian University
Priority to CN201810113576.7A priority Critical patent/CN108449295A/en
Publication of CN108449295A publication Critical patent/CN108449295A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L27/00Modulated-carrier systems
    • H04L27/0012Modulated-carrier systems arrangements for identifying the type of modulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a combined modulation recognition method based on an RBM network and a BP neural network, including: (1) preprocessing the modulated signal; (2) extracting characteristic parameters; (3) randomly generating training samples and test samples for each class of modulation mode; (4) training the input layer and the first hidden layer of the BP network as an RBM; (5) initializing the set of RBM parameters; (6) training the RBM to obtain the parameter set and the output features of the hidden layer; (7) training the first hidden layer of the BP network and the next layer as the visible layer and hidden layer of a second RBM, with the output of the first RBM as the input of the second RBM, and repeating (5)(6)(7) until the parameter sets of all RBMs are obtained; (8) retraining the BP network until it reaches the optimal-solution state; (9) normalizing the test data, inputting it into the trained BP network, and calculating the modulation mode recognition rate. The beneficial effects of the invention are: the input dimensionality is reduced and the modulation recognition rate is improved.

Description

Combined modulation identification method based on RBM network and BP neural network
Technical Field
The invention relates to a modulation identification method, in particular to a combined modulation identification method based on an RBM (restricted Boltzmann machine) network and a BP (back propagation) neural network, and belongs to the technical field of communication.
Background
With the development of communication technology, signal modulation mode identification has come to be applied in almost all commercial and military communication fields. It plays an important role in signal authentication, interference identification, electronic countermeasures and the like, and has very broad application value and prospects.
Modulation identification refers to identifying the modulation scheme of a received signal before demodulation. Generally, there are three classes of modulation identification methods: decision tree classifiers, cluster analysis, and neural network classifiers.
Nandi and Azzouz proposed a decision tree algorithm that classifies signals based on characteristic parameters. Decision tree classification is a low-complexity and intuitive method, but it is susceptible to noise and is therefore often combined with other methods in practical applications. Cluster analysis is a multivariate statistical classification method that performs blind classification according to pattern similarity among unlabeled samples; however, it is also susceptible to noise, and the choice of extracted characteristic parameters has a great influence on recognition performance. As the most common approach, back propagation (BP) and radial basis function (RBF) neural networks, with their self-learning and generalization capability, are well suited to classification problems involving a potentially nonlinear mapping between input signals and outputs, but neural network classifiers tend to fall into local optima. In addition, they converge too slowly near the optimal solution and exhibit poor generalization capability and a low recognition rate under low signal-to-noise ratio (SNR) conditions.
The Restricted Boltzmann Machine (RBM), proposed on the basis of the Boltzmann Machine (BM) introduced by Hinton and Sejnowski, is a network with no connections within a layer, and it can fit any discrete distribution provided there are enough hidden units. After Hinton proposed the fast learning algorithm for RBMs, Contrastive Divergence (CD), in 2002, the RBM has been successfully applied to various machine learning problems such as classification, regression and dimensionality reduction.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention aims to provide a combined modulation identification method which is based on an RBM network and a BP neural network and can improve the modulation identification rate.
In order to achieve the above object, the present invention adopts the following technical solutions:
the joint modulation identification method based on the RBM network and the BP neural network is characterized by comprising the following steps of:
step 1: preprocessing a modulation signal to be classified;
step 2: extracting characteristic parameters of the preprocessed signals, wherein the characteristic parameters are characteristic parameters based on a time domain, a frequency domain and statistics;
step 3: randomly generating a training sample and a testing sample of each type of modulation mode according to the characteristic parameters extracted at Step2 to obtain a training sample set, a testing data set and a corresponding class label set;
step 4: setting relevant parameters of a BP neural network, and taking an input layer and a first hidden layer of the BP neural network as an RBM network for training;
step 5: initializing the parameter set θ = (W, a, b) of the RBM network, where W is the weight matrix between the hidden layer and the visible layer of the RBM network, a is the bias vector of the visible layer, and b is the bias vector of the hidden layer;
step 6: dividing the training samples generated in Step 3 into small batches of data, normalizing the data and then training the RBM network to obtain the parameter set θ of the RBM network and the output feature p(h_j = 1 | v) of the hidden layer, where h is the state vector of the hidden layer, h_j represents the state of the j-th neuron in the hidden layer, and v is the state vector of the visible layer;
step 7: training the first hidden layer and the next layer of the BP neural network as the visible layer and the hidden layer of a second RBM network, taking the output p(h_j = 1 | v) of the first RBM network as the input of the second RBM network, and repeating Step 5, Step 6 and Step 7 until the parameter sets θ of all RBM networks are obtained;
step 8: setting the initial parameters θ′ of the BP neural network to the parameter set θ of the trained RBM networks, retraining the BP neural network with supervision, and fine-tuning the parameters θ′ to reach the optimal solution state;
step 9: normalizing the test data, inputting the normalized test data into the trained BP neural network, and calculating the recognition rate of the modulation mode.
The joint modulation identification method based on the RBM network and the BP neural network is characterized in that, in Step1, the preprocessing includes: zero averaging and normalization.
The joint modulation identification method based on the RBM network and the BP neural network is characterized in that, in Step2, the characteristic parameters include: instantaneous amplitude characteristics, spectral characteristics of the signal, and higher order spectral characteristics of the signal.
The joint modulation identification method based on the RBM network and the BP neural network is characterized in that, in Step4, the relevant parameters include: the number of nodes of each layer of the BP neural network and the number of hidden layers.
The joint modulation identification method based on the RBM network and the BP neural network is characterized in that in Step5, a specific process of initializing a parameter set θ of the RBM network is as follows:
(5a) dividing the training samples generated in Step 3 into small batches of data containing 20 samples of each class, setting the number of visible-layer units n_v to the number of input-layer nodes of the BP neural network, and setting the number of hidden-layer units n_h to the number of nodes of the first hidden layer of the BP neural network;
(5b) initializing the weight matrix W to random numbers drawn from the normal distribution N(0, 0.01) and initializing a_i and b_j to 0; initializing the approximations of the partial derivatives of the objective function with respect to w_ij, a_i and b_j at each iteration, namely Δw_ij, Δa_i and Δb_j, all to 0; setting the weight learning rate ε_w between the visible layer and the hidden layer, the learning rate ε_a of the visible-layer biases and the learning rate ε_b of the hidden-layer biases all to 0.01; setting the momentum learning rate and the final momentum learning rate to 0.5 and 0.9, respectively; taking the weight attenuation coefficient λ as any value between 0.0001 and 0.01; and setting the parameter k in the k-step contrastive divergence algorithm to 1.
The joint modulation identification method based on the RBM network and the BP neural network is characterized in that in Step6, the specific process of training the RBM network is as follows:
(6a) when the modulated signal to be classified propagates in the forward direction, with v_i^(0) denoting the input value at the first forward pass, the output of the hidden layer of the RBM network is:
p(h_j^(0) = 1 | v^(0)) = σ(b_j + Σ_i w_ij·v_i^(0))    (1)
where
σ(x) = 1/(1 + e^(−x))    (2)
is the activation function of the RBM network, and the output probability is calculated from the input v_i^(0);
(6b) p(h_j^(0) = 1 | v^(0)) is binarized into h_j^(0): a random number between 0 and 1 is generated, and if this number is less than p(h_j^(0) = 1 | v^(0)), then h_j^(0) takes the value 1, otherwise h_j^(0) takes the value 0;
(6c) when the modulated signal to be classified propagates in the reverse direction, the calculated h_j^(0) is taken as the input, and the output of the visible layer of the RBM network is:
p(v_i^(1) = 1 | h^(0)) = σ(a_i + Σ_j w_ij·h_j^(0))    (3)
(6d) starting from the v_i^(1) calculated in the backward propagation, step (6a) and step (6b) are repeated, and p(h_j^(1) = 1 | v^(1)) and h_j^(1) after the second iteration are calculated;
(6e) the CD-k algorithm is used to perform k steps of alternating Gibbs sampling to obtain the approximations Δw_ij, Δa_i and Δb_j of the partial derivatives of the objective function with respect to w_ij, a_i and b_j at each iteration; since k = 1, they are formulated respectively as:
Δw_ij ≈ p(h_j^(0) = 1 | v^(0))·v_i^(0) − p(h_j^(1) = 1 | v^(1))·v_i^(1)    (4)
Δa_i = v_i^(0) − v_i^(1)    (5)
Δb_j = p(h_j^(0) = 1 | v^(0)) − p(h_j^(1) = 1 | v^(1))    (6)
(6f) at the (l+1)-th iteration, w_ij, a_i and b_j are updated by the gradient ascent method; the update formulas are:
w_ij^(l+1) = w_ij^(l) + Δw_ij^(l+1)    (7)
a_i^(l+1) = a_i^(l) + Δa_i^(l+1)    (8)
b_j^(l+1) = b_j^(l) + Δb_j^(l+1)    (9)
where Δw_ij^(l+1), Δa_i^(l+1) and Δb_j^(l+1) can be expressed as:
Δw_ij^(l+1) = ρ·Δw_ij^(l) + ε_w·(Δw_ij/n_block − λ·w_ij)    (10)
Δa_i^(l+1) = ρ·Δa_i^(l) + ε_a·Δa_i/n_block    (11)
Δb_j^(l+1) = ρ·Δb_j^(l) + ε_b·Δb_j/n_block    (12)
where ρ is the momentum learning rate, n_block is the number of samples in a small batch of data, Δw_ij, Δa_i and Δb_j on the right-hand side are the CD-1 approximations from (4)–(6) accumulated over the small batch, and λ·w_ij is the weight decay term.
The joint modulation identification method based on the RBM network and the BP neural network is characterized in that in Step8, the process of training the BP neural network and finely adjusting the parameter θ' to reach the optimal solution state specifically includes the following steps:
(8a) initializing the parameters θ′ of the BP neural network to the parameter set θ of the trained RBM networks, and then setting the transfer function of the BP neural network to the Sigmoid function:
f(x) = 1/(1 + e^(−x))    (13)
(8b) when the modulation signals to be classified propagate in the forward direction, the new training set pairs formed by the original training samples and their class labels are fed in at the input layer, processed layer by layer through the hidden layers, and passed on to the output layer; if the error between the actual output and the expected output of the output layer is too large, the error is propagated backward;
(8c) during backward propagation, the output error is propagated back layer by layer through the hidden layers to the input layer and distributed to all units of each layer, so that the error signal of each unit is obtained; these error signals are used to correct the weight parameters of each layer, and when the error is smaller than the minimum error, the training is completed.
The invention has the advantages that:
1. the modulation signals to be classified are preprocessed by zero-averaging (zero-centering), normalization and the like, and characteristic parameters are extracted, which reduces the input dimensionality;
2. the problems that BP neural network training gets trapped in local minima and converges too slowly near the optimal solution are effectively avoided, and the modulation recognition rate of the system is improved.
Drawings
FIG. 1 is a general flow chart of the joint modulation identification method of the present invention;
FIG. 2 is a schematic diagram of the modulation performed by the RBM network in conjunction with the BP neural network;
FIG. 3 is a flow chart of RBM network training.
Detailed Description
The invention preprocesses the modulation signals to be classified by zero-averaging, normalization and the like, and extracts characteristic parameters, which reduces the input dimensionality; it then trains the input layer, the multiple hidden layers and the output layer of the BP neural network in the manner of a multilayer RBM network to obtain initial values for the weights and bias parameters of the BP neural network, and finally trains the BP neural network to fine-tune these parameters, so as to identify the signal modulation mode.
The invention is described in detail below with reference to the figures and the embodiments.
Referring to fig. 1, the joint modulation identification method based on the RBM network and the BP neural network of the present invention is specifically implemented as follows:
step 1: and carrying out zero equalization and normalization preprocessing on the modulation signals x (n) to be classified.
The signal sequence s(n) after zero-averaging and normalization is computed from the zero-averaged sequence s_z(n), its Hilbert transform HT(s_z) and the signal length N, where s_z(n) is the zero-averaged sequence of the modulated signal x(n) to be classified and HT(s_z) denotes the Hilbert transform of s_z(n).
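As an illustration of Step 1, the following minimal Python sketch zero-averages the signal and then normalizes it. Since the patent's normalization formula is not reproduced in this text, the sketch assumes division by the mean instantaneous amplitude obtained via the Hilbert transform; this choice is an assumption of the sketch, not the patent's exact formula.

```python
# Minimal sketch of Step 1 (assumption: normalization by the mean instantaneous
# amplitude computed from the Hilbert transform).
import numpy as np
from scipy.signal import hilbert

def preprocess(x):
    """Zero-average x(n), then normalize by the mean instantaneous amplitude."""
    s_z = x - np.mean(x)                 # zero-averaged sequence s_z(n)
    amplitude = np.abs(hilbert(s_z))     # instantaneous amplitude |s_z(n) + j*HT(s_z)(n)|
    return s_z / np.mean(amplitude)      # normalized sequence s(n)
```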
Step 2: extracting characteristic parameters of the preprocessed signals, wherein the characteristic parameters mainly refer to characteristic parameters based on a time domain, a frequency domain and statistics, and the characteristic parameters comprise:
(1) the characteristic parameter based on the time domain is the instantaneous amplitude characteristic, which is used to distinguish the MASK and MQAM modulation modes;
(2) the characteristic parameter based on the frequency domain is the spectral characteristic of the signal, which is used to distinguish the MFSK modulation modes;
(3) the characteristic parameter based on statistics is the higher-order spectral characteristic of the signal, which is used to distinguish the MPSK modulation modes.
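The patent names these three feature families but not their exact formulas. Purely as an illustration, the sketch below computes one commonly used representative of each family: the spread of the centered instantaneous amplitude, the peak of its power spectrum, and the magnitude of the fourth-order cumulant C40. These specific quantities are stand-ins assumed for the sketch, not the patent's exact parameters.

```python
# Illustrative Step 2 feature extraction (representative features only).
import numpy as np
from scipy.signal import hilbert

def extract_features(s):
    z = hilbert(s)                                   # analytic signal of s(n)
    a = np.abs(z)                                    # instantaneous amplitude
    a_cn = a / np.mean(a) - 1.0                      # centered, normalized amplitude
    # Time-domain feature: spread of the instantaneous amplitude (MASK vs. MQAM).
    sigma_aa = np.std(a_cn)
    # Frequency-domain feature: peak of the power spectrum of a_cn (MFSK).
    gamma_max = np.max(np.abs(np.fft.fft(a_cn)) ** 2) / len(s)
    # Statistics-based feature: magnitude of the fourth-order cumulant C40 (MPSK).
    c20 = np.mean(z ** 2)
    c40 = np.mean(z ** 4) - 3.0 * c20 ** 2
    return np.array([sigma_aa, gamma_max, np.abs(c40)])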
Step 3: and randomly generating a training sample and a testing sample of each type of modulation mode according to the characteristic parameters extracted at Step2 to obtain a training sample set, a testing data set and a corresponding class label set.
Steps 4 to 7 show the joint process of the RBM networks and the BP neural network; refer to fig. 2.
Step 4: setting relevant parameters of the BP neural network, and taking an input layer and a first hidden layer of the BP neural network as an RBM network for training.
Step 5: initializing a set of parameters of the RBM network, θ ═ (W, a, b), where W is a weight matrix between the hidden and visible layers of the RBM network, W ═ W { (W)ij},wijIs the connection weight of the ith neuron in the hidden layer and the jth neuron in the visible layer; a is the bias vector for the visible layer, a ═ ai},aiIs made bySee bias of the ith neuron in the layer; b is the bias vector of the hidden layer, b ═ bj},bjIs the bias of the jth neuron in the hidden layer.
The specific process of initializing the parameter set theta of the RBM network is as follows:
(5a) Divide the training samples generated in Step 3 into small batches of data containing 20 samples of each class (to improve computational efficiency), set the number of visible-layer units n_v to the number of input-layer nodes of the BP neural network, and set the number of hidden-layer units n_h to the number of nodes of the first hidden layer of the BP neural network.
(5b) The weight matrix W is initialized to random numbers drawn from the normal distribution N(0, 0.01), and a_i and b_j are initialized to 0; the approximations of the partial derivatives of the objective function with respect to w_ij, a_i and b_j at each iteration are all initialized to 0, i.e. Δw_ij = 0, Δa_i = 0, Δb_j = 0. The weight learning rate ε_w between the visible layer and the hidden layer, the learning rate ε_a of the visible-layer biases and the learning rate ε_b of the hidden-layer biases are all set to 0.01, i.e. ε_w = ε_a = ε_b = 0.01; the momentum learning rate and the final momentum learning rate are set to 0.5 and 0.9, respectively; the weight attenuation coefficient λ may take any value between 0.0001 and 0.01; and the parameter k in the k-step contrastive divergence algorithm is set to 1.
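A minimal sketch of (5a)–(5b) using the values stated above is given below; the dictionary layout of the parameter and hyper-parameter containers is an implementation choice of this sketch.

```python
# Minimal sketch of steps (5a)-(5b): RBM parameter initialization.
import numpy as np

def init_rbm(n_v, n_h, lam=0.001, seed=0):
    rng = np.random.default_rng(seed)
    params = {
        "W":  rng.normal(0.0, 0.01, size=(n_v, n_h)),  # W drawn from N(0, 0.01)
        "a":  np.zeros(n_v),                           # visible biases a_i = 0
        "b":  np.zeros(n_h),                           # hidden biases b_j = 0
        "dW": np.zeros((n_v, n_h)),                    # Δw_ij = 0
        "da": np.zeros(n_v),                           # Δa_i = 0
        "db": np.zeros(n_h),                           # Δb_j = 0
    }
    hyper = {
        "eps_w": 0.01, "eps_a": 0.01, "eps_b": 0.01,   # ε_w = ε_a = ε_b = 0.01
        "rho_init": 0.5, "rho_final": 0.9,             # momentum schedule 0.5 -> 0.9
        "lam": lam,                                    # weight decay λ in [0.0001, 0.01]
        "k": 1,                                        # CD-k with k = 1
    }
    return params, hyper
```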
Step 6: dividing the training samples generated by Step3 into small batch of data, normalizing the data and then training the RBM network to obtain a parameter collection theta of the RBM network and an output characteristic p (h) of a hidden layerj1| v), wherein hjIs the state of the jth neuron in the hidden layer, hjE h, h is the state vector of the hidden layer, v is the state vector of the visible layer, v ═ vi},viIs the state of the ith neuron in the visible layer.
Referring to fig. 3, the specific process of training the RBM network is as follows:
(6a) When the modulated signal to be classified propagates in the forward direction, with v_i^(0) denoting the input value at the first forward pass, the output of the hidden layer of the RBM network is:
p(h_j^(0) = 1 | v^(0)) = σ(b_j + Σ_i w_ij·v_i^(0))    (1)
where
σ(x) = 1/(1 + e^(−x))    (2)
is the activation function of the RBM network, and the output probability is calculated from the input v_i^(0).
(6b) p(h_j^(0) = 1 | v^(0)) is binarized into h_j^(0), i.e. a random number between 0 and 1 is generated; if this number is less than p(h_j^(0) = 1 | v^(0)), then h_j^(0) takes the value 1, otherwise h_j^(0) takes the value 0.
(6c) When the modulated signal to be classified propagates in the reverse direction, the calculated h_j^(0) is taken as the input, and the output of the visible layer of the RBM network is:
p(v_i^(1) = 1 | h^(0)) = σ(a_i + Σ_j w_ij·h_j^(0))    (3)
Since the input v_i^(0) lies in [0, 1], after the output probability of the visible layer has been calculated, no binarization is needed, and the probability can be used directly as the input v_i^(1) of the next forward propagation.
(6d) Starting from the v_i^(1) calculated in the backward propagation, step (6a) and step (6b) are repeated, and p(h_j^(1) = 1 | v^(1)) and h_j^(1) after the second iteration are calculated.
(6e) The CD-k algorithm is used to perform k steps of alternating Gibbs sampling to obtain the approximations Δw_ij, Δa_i and Δb_j of the partial derivatives of the objective function with respect to w_ij, a_i and b_j at each iteration; since k = 1, they are formulated respectively as:
Δw_ij ≈ p(h_j^(0) = 1 | v^(0))·v_i^(0) − p(h_j^(1) = 1 | v^(1))·v_i^(1)    (4)
Δa_i = v_i^(0) − v_i^(1)    (5)
Δb_j = p(h_j^(0) = 1 | v^(0)) − p(h_j^(1) = 1 | v^(1))    (6)
(6f) At the (l+1)-th iteration, w_ij, a_i and b_j are updated by the gradient ascent method; the update formulas are:
w_ij^(l+1) = w_ij^(l) + Δw_ij^(l+1)    (7)
a_i^(l+1) = a_i^(l) + Δa_i^(l+1)    (8)
b_j^(l+1) = b_j^(l) + Δb_j^(l+1)    (9)
where Δw_ij^(l+1), Δa_i^(l+1) and Δb_j^(l+1) can be expressed as:
Δw_ij^(l+1) = ρ·Δw_ij^(l) + ε_w·(Δw_ij/n_block − λ·w_ij)    (10)
Δa_i^(l+1) = ρ·Δa_i^(l) + ε_a·Δa_i/n_block    (11)
Δb_j^(l+1) = ρ·Δb_j^(l) + ε_b·Δb_j/n_block    (12)
where ρ is the momentum learning rate, n_block is the number of samples in a small batch of data, Δw_ij, Δa_i and Δb_j on the right-hand side are the CD-1 approximations from (4)–(6) accumulated over the small batch, and λ·w_ij is the weight decay term.
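One full inner iteration of Step 6 can be sketched as follows. The code follows (6a)–(6f); the momentum/weight-decay update mirrors equations (10)–(12) as reconstructed above, and the dictionary layout matches the init_rbm sketch given earlier, which are assumptions of this sketch rather than the patent's exact implementation.

```python
# Sketch of one CD-1 mini-batch update for a single RBM, following (6a)-(6f).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(params, hyper, v0, rho, rng=np.random.default_rng(0)):
    """v0: mini-batch of shape (n_block, n_v) with values in [0, 1]."""
    W, a, b = params["W"], params["a"], params["b"]
    n_block = v0.shape[0]
    # (6a) forward pass: p(h_j = 1 | v^(0)), equation (1).
    ph0 = sigmoid(v0 @ W + b)
    # (6b) binarize: h_j^(0) = 1 when a uniform random number is below the probability.
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # (6c) backward pass: p(v_i = 1 | h^(0)), equation (3); used directly as v^(1).
    v1 = sigmoid(h0 @ W.T + a)
    # (6d) second forward pass: p(h_j = 1 | v^(1)).
    ph1 = sigmoid(v1 @ W + b)
    # (6e) CD-1 gradient approximations, equations (4)-(6), summed over the batch.
    dW = v0.T @ ph0 - v1.T @ ph1
    da = np.sum(v0 - v1, axis=0)
    db = np.sum(ph0 - ph1, axis=0)
    # (6f) momentum + weight-decay update, equations (7)-(12), gradient ascent.
    params["dW"] = rho * params["dW"] + hyper["eps_w"] * (dW / n_block - hyper["lam"] * W)
    params["da"] = rho * params["da"] + hyper["eps_a"] * da / n_block
    params["db"] = rho * params["db"] + hyper["eps_b"] * db / n_block
    W += params["dW"]; a += params["da"]; b += params["db"]
    return ph0   # p(h_j = 1 | v), passed to the next RBM in Step 7
```

In a full training loop the momentum ρ would be switched from the initial value 0.5 to the final value 0.9 after a few epochs, as stated in (5b).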
Step 7: Training the first hidden layer and the next layer of the BP neural network as the visible layer and the hidden layer of a second RBM network, taking the output p(h_j = 1 | v) of the first RBM network as the input of the second RBM network, and repeating Step 5, Step 6 and Step 7 until the parameter sets θ of all RBM networks are obtained.
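A sketch of the greedy layer-wise procedure of Steps 4–7 is given below. It assumes the init_rbm and cd1_update helpers from the earlier sketches and a list layer_sizes giving the BP network's layer widths from the input layer to the last hidden layer; the epoch count, batch size and momentum switch point are illustrative.

```python
# Sketch of Steps 4-7: stack RBMs along the BP network's layers and pre-train them.
import numpy as np

def pretrain_stack(x_train, layer_sizes, epochs=30, batch_size=20):
    data = x_train.copy()
    rbm_stack = []
    for n_v, n_h in zip(layer_sizes[:-1], layer_sizes[1:]):
        params, hyper = init_rbm(n_v, n_h)
        for epoch in range(epochs):
            # illustrative momentum schedule: 0.5 for the first epochs, then 0.9
            rho = hyper["rho_init"] if epoch < 5 else hyper["rho_final"]
            for start in range(0, len(data), batch_size):
                cd1_update(params, hyper, data[start:start + batch_size], rho)
        rbm_stack.append(params)
        # Output feature p(h_j = 1 | v) of this RBM becomes the next RBM's input.
        data = 1.0 / (1.0 + np.exp(-(data @ params["W"] + params["b"])))
    return rbm_stack
```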
Step 8: Setting the initial parameters θ′ of the BP neural network to the parameter set θ of the trained RBM networks, retraining the BP neural network with supervision, and fine-tuning the parameters θ′ to reach the optimal solution state; the whole process specifically comprises the following steps:
(8a) The parameters θ′ of the BP neural network are initialized to the parameter set θ of the trained RBM networks, where θ = (W, a, b); that is, the parameters θ′ of the BP neural network are initialized with the trained weight matrix W, bias vector a and bias vector b. The transfer function of the BP neural network is then set to the Sigmoid function:
f(x) = 1/(1 + e^(−x))    (13)
(8b) When the modulation signals to be classified propagate in the forward direction, the new training set pairs formed by the original training samples and their class labels are fed in at the input layer, processed layer by layer through the hidden layers, and passed on to the output layer; if the error between the actual output and the expected output of the output layer is too large, the error is propagated backward.
(8c) During backward propagation, the output error is propagated back layer by layer through the hidden layers to the input layer and distributed to all units of each layer, so that the error signal of each unit is obtained; these error signals are used to correct the weight parameters of each layer, and when the error is smaller than the minimum error, the training is completed.
Step 9: Normalizing the test data, inputting the normalized test data into the trained BP neural network, and calculating the recognition rate of the modulation mode.
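Steps 8 and 9 can be sketched as below: the pre-trained RBM stack initializes a sigmoid BP network, which is then fine-tuned by back-propagation, and the recognition rate is computed on the test set. The randomly initialized output layer (not covered by the RBM stack), the squared-error loss and the learning rate are assumptions of this sketch.

```python
# Sketch of Steps 8-9: initialize a sigmoid BP network from the RBM stack,
# fine-tune it with supervised back-propagation, and compute the recognition rate.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def build_bp(rbm_stack, n_classes, seed=0):
    rng = np.random.default_rng(seed)
    weights = [p["W"].copy() for p in rbm_stack]          # θ' initialized from θ
    biases = [p["b"].copy() for p in rbm_stack]
    weights.append(rng.normal(0.0, 0.01, size=(rbm_stack[-1]["W"].shape[1], n_classes)))
    biases.append(np.zeros(n_classes))
    return weights, biases

def forward(weights, biases, x):
    activations = [x]
    for W, b in zip(weights, biases):
        activations.append(sigmoid(activations[-1] @ W + b))
    return activations

def fine_tune(weights, biases, x, y_onehot, epochs=100, lr=0.1):
    for _ in range(epochs):
        acts = forward(weights, biases, x)
        delta = (acts[-1] - y_onehot) * acts[-1] * (1.0 - acts[-1])   # output error signal
        for layer in range(len(weights) - 1, -1, -1):
            grad_W = acts[layer].T @ delta / len(x)
            grad_b = np.mean(delta, axis=0)
            if layer > 0:   # propagate the error signal to the previous layer
                delta = (delta @ weights[layer].T) * acts[layer] * (1.0 - acts[layer])
            weights[layer] -= lr * grad_W
            biases[layer] -= lr * grad_b

def recognition_rate(weights, biases, x_test, y_test):
    pred = np.argmax(forward(weights, biases, x_test)[-1], axis=1)
    return np.mean(pred == y_test)
```

The training features passed to fine_tune are assumed to be normalized in the same way as the RBM inputs in Step 6, and y_onehot is the one-hot encoding of the class label set generated in Step 3.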
Because the trained parameters of the RBM networks are used as the initial parameters of the BP neural network, the initial parameters of the BP neural network are already close to the optimal solution. The method therefore effectively avoids problems such as BP training becoming trapped in local minima and converging too slowly near the optimal solution, and improves the modulation recognition rate of the system.
The combined modulation identification method based on the RBM network and the BP neural network can be used for identification of a modulation mode in a communication signal transmission process.
It should be noted that the above-mentioned embodiments do not limit the present invention in any way, and all technical solutions obtained by using equivalent alternatives or equivalent variations fall within the protection scope of the present invention.

Claims (7)

1. The joint modulation identification method based on the RBM network and the BP neural network is characterized by comprising the following steps of:
step 1: preprocessing a modulation signal to be classified;
step 2: extracting characteristic parameters of the preprocessed signals, wherein the characteristic parameters are characteristic parameters based on a time domain, a frequency domain and statistics;
step 3: randomly generating a training sample and a testing sample of each type of modulation mode according to the characteristic parameters extracted at Step2 to obtain a training sample set, a testing data set and a corresponding class label set;
step 4: setting relevant parameters of a BP neural network, and taking an input layer and a first hidden layer of the BP neural network as an RBM network for training;
step 5: initializing the parameter set θ = (W, a, b) of the RBM network, where W is the weight matrix between the hidden layer and the visible layer of the RBM network, a is the bias vector of the visible layer, and b is the bias vector of the hidden layer;
step 6: dividing the training samples generated in Step 3 into small batches of data, normalizing the data and then training the RBM network to obtain the parameter set θ of the RBM network and the output feature p(h_j = 1 | v) of the hidden layer, where h is the state vector of the hidden layer, h_j represents the state of the j-th neuron in the hidden layer, and v is the state vector of the visible layer;
step 7: training the first hidden layer and the next layer of the BP neural network as the visible layer and the hidden layer of a second RBM network, taking the output p(h_j = 1 | v) of the first RBM network as the input of the second RBM network, and repeating Step 5, Step 6 and Step 7 until the parameter sets θ of all RBM networks are obtained;
step 8: setting the initial parameters θ′ of the BP neural network to the parameter set θ of the trained RBM networks, retraining the BP neural network with supervision, and fine-tuning the parameters θ′ to reach the optimal solution state;
step 9: normalizing the test data, inputting the normalized test data into the trained BP neural network, and calculating the recognition rate of the modulation mode.
2. The joint modulation recognition method based on the RBM network and the BP neural network as claimed in claim 1, wherein in Step1, the preprocessing comprises: zero averaging and normalization.
3. The joint modulation recognition method based on the RBM network and the BP neural network as claimed in claim 1, wherein in Step2, the characteristic parameters comprise: instantaneous amplitude characteristics, spectral characteristics of the signal, and higher order spectral characteristics of the signal.
4. The RBM network and BP neural network-based joint modulation recognition method of claim 1, wherein at Step4, the related parameters comprise: the number of nodes of each layer of the BP neural network and the number of hidden layers.
5. The joint modulation recognition method based on the RBM network and the BP neural network as claimed in claim 1, wherein in Step5, the specific process of initializing the parameter set θ of the RBM network is as follows:
(5a) dividing the training samples generated in Step 3 into small batches of data containing 20 samples of each class, setting the number of visible-layer units n_v to the number of input-layer nodes of the BP neural network, and setting the number of hidden-layer units n_h to the number of nodes of the first hidden layer of the BP neural network;
(5b) initializing the weight matrix W to random numbers drawn from the normal distribution N(0, 0.01) and initializing a_i and b_j to 0; initializing the approximations of the partial derivatives of the objective function with respect to w_ij, a_i and b_j at each iteration, namely Δw_ij, Δa_i and Δb_j, all to 0; setting the weight learning rate ε_w between the visible layer and the hidden layer, the learning rate ε_a of the visible-layer biases and the learning rate ε_b of the hidden-layer biases all to 0.01; setting the momentum learning rate and the final momentum learning rate to 0.5 and 0.9, respectively; taking the weight attenuation coefficient λ as any value between 0.0001 and 0.01; and setting the parameter k in the k-step contrastive divergence algorithm to 1.
6. The joint modulation recognition method based on the RBM network and the BP neural network as claimed in claim 1, wherein in Step6, the specific process of training the RBM network is as follows:
(6a) when the modulated signal to be classified propagates in the forward direction, with v_i^(0) denoting the input value at the first forward pass, the output of the hidden layer of the RBM network is:
p(h_j^(0) = 1 | v^(0)) = σ(b_j + Σ_i w_ij·v_i^(0))    (1)
where
σ(x) = 1/(1 + e^(−x))    (2)
is the activation function of the RBM network, and the output probability is calculated from the input v_i^(0);
(6b) p(h_j^(0) = 1 | v^(0)) is binarized into h_j^(0): a random number between 0 and 1 is generated, and if this number is less than p(h_j^(0) = 1 | v^(0)), then h_j^(0) takes the value 1, otherwise h_j^(0) takes the value 0;
(6c) when the modulated signal to be classified propagates in the reverse direction, the calculated h_j^(0) is taken as the input, and the output of the visible layer of the RBM network is:
p(v_i^(1) = 1 | h^(0)) = σ(a_i + Σ_j w_ij·h_j^(0))    (3)
(6d) starting from the v_i^(1) calculated in the backward propagation, step (6a) and step (6b) are repeated, and p(h_j^(1) = 1 | v^(1)) and h_j^(1) after the second iteration are calculated;
(6e) the CD-k algorithm is used to perform k steps of alternating Gibbs sampling to obtain the approximations Δw_ij, Δa_i and Δb_j of the partial derivatives of the objective function with respect to w_ij, a_i and b_j at each iteration; since k = 1, they are formulated respectively as:
Δw_ij ≈ p(h_j^(0) = 1 | v^(0))·v_i^(0) − p(h_j^(1) = 1 | v^(1))·v_i^(1)    (4)
Δa_i = v_i^(0) − v_i^(1)    (5)
Δb_j = p(h_j^(0) = 1 | v^(0)) − p(h_j^(1) = 1 | v^(1))    (6)
(6f) at the (l+1)-th iteration, w_ij, a_i and b_j are updated by the gradient ascent method; the update formulas are:
w_ij^(l+1) = w_ij^(l) + Δw_ij^(l+1)    (7)
a_i^(l+1) = a_i^(l) + Δa_i^(l+1)    (8)
b_j^(l+1) = b_j^(l) + Δb_j^(l+1)    (9)
where Δw_ij^(l+1), Δa_i^(l+1) and Δb_j^(l+1) can be expressed as:
Δw_ij^(l+1) = ρ·Δw_ij^(l) + ε_w·(Δw_ij/n_block − λ·w_ij)    (10)
Δa_i^(l+1) = ρ·Δa_i^(l) + ε_a·Δa_i/n_block    (11)
Δb_j^(l+1) = ρ·Δb_j^(l) + ε_b·Δb_j/n_block    (12)
where ρ is the momentum learning rate, n_block is the number of samples in a small batch of data, Δw_ij, Δa_i and Δb_j on the right-hand side are the CD-1 approximations from (4)–(6) accumulated over the small batch, and λ·w_ij is the weight decay term.
7. The joint modulation recognition method based on the RBM network and the BP neural network as claimed in claim 1, wherein in Step8, the process of training the BP neural network and fine-tuning the parameter θ' to reach the optimal solution state specifically comprises the following steps:
(8a) initializing the parameters θ′ of the BP neural network to the parameter set θ of the trained RBM networks, and then setting the transfer function of the BP neural network to the Sigmoid function:
f(x) = 1/(1 + e^(−x))    (13)
(8b) when the modulation signals to be classified propagate in the forward direction, the new training set pairs formed by the original training samples and their class labels are fed in at the input layer, processed layer by layer through the hidden layers, and passed on to the output layer; if the error between the actual output and the expected output of the output layer is too large, the error is propagated backward;
(8c) during backward propagation, the output error is propagated back layer by layer through the hidden layers to the input layer and distributed to all units of each layer, so that the error signal of each unit is obtained; these error signals are used to correct the weight parameters of each layer, and when the error is smaller than the minimum error, the training is completed.
CN201810113576.7A 2018-02-05 2018-02-05 Combined modulation recognition methods based on RBM networks and BP neural network Pending CN108449295A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810113576.7A CN108449295A (en) 2018-02-05 2018-02-05 Combined modulation recognition methods based on RBM networks and BP neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810113576.7A CN108449295A (en) 2018-02-05 2018-02-05 Combined modulation recognition methods based on RBM networks and BP neural network

Publications (1)

Publication Number Publication Date
CN108449295A true CN108449295A (en) 2018-08-24

Family

ID=63191728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810113576.7A Pending CN108449295A (en) 2018-02-05 2018-02-05 Combined modulation recognition methods based on RBM networks and BP neural network

Country Status (1)

Country Link
CN (1) CN108449295A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021900A (en) * 2007-03-15 2007-08-22 上海交通大学 Method for making human face posture estimation utilizing dimension reduction method
US20120065976A1 (en) * 2010-09-15 2012-03-15 Microsoft Corporation Deep belief network for large vocabulary continuous speech recognition
CN103795592A (en) * 2014-01-21 2014-05-14 中国科学院信息工程研究所 Online water navy detection method and device
CN104166548A (en) * 2014-08-08 2014-11-26 同济大学 Deep learning method based on motor imagery electroencephalogram data
CN106991372A (en) * 2017-03-02 2017-07-28 北京工业大学 A kind of dynamic gesture identification method based on interacting depth learning model
CN107256393A (en) * 2017-06-05 2017-10-17 四川大学 The feature extraction and state recognition of one-dimensional physiological signal based on deep learning

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109787927A (en) * 2019-01-03 2019-05-21 荆门博谦信息科技有限公司 Modulation Identification method and apparatus based on deep learning
CN109918794A (en) * 2019-03-11 2019-06-21 哈尔滨理工大学 A kind of blade analysis method for reliability based on RBMBP extreme value response phase method
CN110224956A (en) * 2019-05-06 2019-09-10 安徽继远软件有限公司 Modulation Identification method based on interference cleaning and two stages training convolutional neural networks model
CN110120926A (en) * 2019-05-10 2019-08-13 哈尔滨工程大学 Modulation mode of communication signal recognition methods based on evolution BP neural network
CN110120926B (en) * 2019-05-10 2022-01-07 哈尔滨工程大学 Communication signal modulation mode identification method based on evolution BP neural network
CN110472501B (en) * 2019-07-10 2022-08-30 南京邮电大学 Neural network-based fingerprint sweat pore coding classification method
CN110472501A (en) * 2019-07-10 2019-11-19 南京邮电大学 A kind of fingerprint pore coding specification method neural network based
CN110536257A (en) * 2019-08-21 2019-12-03 成都电科慧安科技有限公司 A kind of indoor orientation method based on depth adaptive network
CN110536257B (en) * 2019-08-21 2022-02-08 成都电科慧安科技有限公司 Indoor positioning method based on depth adaptive network
CN112132191A (en) * 2020-09-01 2020-12-25 兰州理工大学 Intelligent evaluation and identification method for early damage state of rolling bearing
CN112115821A (en) * 2020-09-04 2020-12-22 西北工业大学 Multi-signal intelligent modulation mode identification method based on wavelet approximate coefficient entropy
CN112115821B (en) * 2020-09-04 2022-03-11 西北工业大学 Multi-signal intelligent modulation mode identification method based on wavelet approximate coefficient entropy
CN112288020A (en) * 2020-10-30 2021-01-29 江南大学 Digital modulation identification method based on discriminant limited Boltzmann machine
CN112288020B (en) * 2020-10-30 2024-07-12 南京模数智芯微电子科技有限公司 Digital modulation identification method based on discriminant type limited Boltzmann machine
CN114626635A (en) * 2022-04-02 2022-06-14 北京乐智科技有限公司 Steel logistics cost prediction method and system based on hybrid neural network
CN114626635B (en) * 2022-04-02 2024-08-06 北京乐智科技有限公司 Steel logistics cost prediction method and system based on hybrid neural network
CN117933499A (en) * 2024-03-22 2024-04-26 中国铁建电气化局集团有限公司 Invasion risk prediction method, device and storage medium for high-speed railway catenary

Similar Documents

Publication Publication Date Title
CN108449295A (en) Combined modulation recognition methods based on RBM networks and BP neural network
CN112418014B (en) Modulated signal identification method based on wavelet transformation and convolution long-term and short-term memory neural network
CN110728360B (en) Micro-energy device energy identification method based on BP neural network
Gao et al. Fusion image based radar signal feature extraction and modulation recognition
CN107607954B (en) FNN precipitation particle phase state identification method based on T-S model
CN113569742B (en) Broadband electromagnetic interference source identification method based on convolutional neural network
CN114564982B (en) Automatic identification method for radar signal modulation type
CN114881093B (en) Signal classification and identification method
CN114157539A (en) Data-aware dual-drive modulation intelligent identification method
CN112749633B (en) Separate and reconstructed individual radiation source identification method
CN109617845A (en) A kind of design and demodulation method of the wireless communication demodulator based on deep learning
CN111144500A (en) Differential privacy deep learning classification method based on analytic Gaussian mechanism
CN110601764A (en) Radio frequency modulation format identification method based on optical assistance
Huang et al. Radar waveform recognition based on multiple autocorrelation images
Ding et al. Data-and-knowledge dual-driven automatic modulation recognition for wireless communication networks
CN112347844A (en) Signal countermeasure sample detector design method based on LID
CN116471154A (en) Modulation signal identification method based on multi-domain mixed attention
CN110826425A (en) VHF/UHF frequency band radio signal modulation mode identification method based on deep neural network
Zhang et al. Heterogeneous deep model fusion for automatic modulation classification
CN113887806B (en) Long-tail cascade popularity prediction model, training method and prediction method
CN114298113A (en) Internet of things-oriented dual-path machine learning modulation mode identification method
Du et al. D-GF-CNN algorithm for modulation recognition
CN112488238B (en) Hybrid anomaly detection method based on countermeasure self-encoder
CN114970638A (en) Radar radiation source individual open set identification method and system
CN112434716B (en) Underwater target data amplification method and system based on condition countermeasure neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180824)