CN101667252B - Classification and identification method for communication signal modulating mode based on ART2A-DWNN - Google Patents

Classification and identification method for communication signal modulating mode based on ART2A-DWNN

Info

Publication number: CN101667252B (application CN2009100730588A); other version: CN101667252A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: vector, art2a, dwnn, neural network, input
Inventors: 赵雅琴, 陈淞, 任广辉, 吴芝路
Current and original assignee: Harbin Institute of Technology (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Harbin Institute of Technology; application granted; priority to CN2009100730588A
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)

Landscapes

  • Image Analysis (AREA)
Abstract

The invention relates to a classification and identification method for communication signal modulation modes based on ART2A-DWNN, belonging to the field of classification and identification of communication-signal modulation modes, and solving the problem that a single neural network takes a long time and achieves low accuracy when classifying and identifying communication signals. In the method, an ART2A-E algorithm based on the ART2A network serves as the first layer of a combined neural network, and similar modes are roughly classified by choosing a relatively small vigilance parameter; a DWNN is connected directly to the output layer of the corresponding class of the ART2A network. The Morlet mother wavelet ψ(x), which has high resolution in both the time and frequency domains, is adopted; learning uses the error back-propagation algorithm, and the synaptic weights are revised with a conjugate gradient method until the output falls within the error range. After the rough ART2A-E classification, the number of modes in each class is reduced, so the DWNN can converge quickly. The invention is used for the classification and identification of communication signals.

Description

Classification and identification method for communication signal modulation modes based on ART2A-DWNN
Technical field
The present invention relates to a method for classifying and identifying communication-signal modulation modes based on ART2A-DWNN, and belongs to the field of classification and identification of communication-signal modulation modes.
Background art
At present, methods for recognizing the modulation mode of a communication signal fall mainly into two categories: maximum-likelihood hypothesis testing based on decision theory, and statistical pattern recognition based on feature extraction. Among the latter, methods based on artificial neural networks are widely used because of their nonlinear and adaptive characteristics. The networks currently used in neural-network classifiers for communication signals mainly include the feedforward BP neural network, the radial basis function (RBF) neural network, the wavelet neural network (WNN), the support vector machine (SVM), and the adaptive resonance theory (ART) neural network. The BP network has strong nonlinear mapping capability and flexibility, but its learning time is long and it easily converges to a local minimum. The RBF network overcomes the local-convergence shortcoming of the BP network and adopts a supervised learning rule, but the input of a new modulation mode affects the modulation-mode patterns already trained. The WNN organically blends wavelet theory with neural network theory, making full use of the good local characteristics of the wavelet transform and the self-learning ability of neural networks. The SVM works well for small-sample, nonlinear, and high-dimensional pattern recognition problems. The ART network uses unsupervised learning rules, which increases the adaptability of the classifier, but its noise immunity is relatively poor. When any of these single neural networks is used to recognize multiple communication-signal modulation modes, the decision period is long and the recognition accuracy is low.
The ART neural network currently has three forms. The ART1 type handles binary signals. The ART2 type is an extension of ART1 that handles both continuous analog signals and binary signals. The ART3 type is a hierarchical search model: it is compatible with the functions of the previous two structures and expands the two-layer neural network into an arbitrary multilayer network, and because the bioelectric and chemical reaction mechanisms of biological neurons are incorporated into its neuron model, it has stronger capability and extensibility. The ART network and its algorithms are quite flexible in adapting to new input patterns while avoiding modification of the network's previous training results, thus resolving the trade-off between stability and flexibility. When the network receives an input from the environment, a pre-designed reference threshold is used to check the degree of matching between this input pattern and all pre-stored mode-class vectors in order to determine their similarity. Among all mode classes whose similarity exceeds the reference threshold, the most similar one is selected as the representative class of this pattern, and the weights associated with that class are adjusted so that inputs similar to this pattern obtain even greater similarity when matched against it later. If no similarity exceeds the reference threshold, a new mode class is created in the network, and weights linked to this mode class are established, in order to represent and store this pattern and all similar patterns input later. Within this theory, the ART2 network and its modified versions are the most widely applied.
Structure and principle of the ART2A neural network:
The structure and principle of the ART2A neural network are basically the same as those of ART2. It can classify not only bipolar or binary input patterns, but can also carry out self-organizing classification of arbitrary sequences of analog input patterns; its basic design philosophy is to adopt a competitive learning strategy and self-stabilization. The ART2/2A neural network structure is shown in Fig. 1, and Fig. 2 gives the topological structure of the j-th processing unit. The ART2/2A neural network consists of an attention subsystem and an orientation subsystem. The attention subsystem comprises the short-term memory (STM, Short-Term Memory) feature representation field F1 and the STM category representation field F2. F1 contains several stages of processing and gain control, while F2 is responsible for the competitive matching of the current pattern. F1 and F2 together contain N neurons, of which F1 has M and F2 has N-M; together they constitute an N-dimensional state vector representing the short-term memory of the network. The bottom-up and top-down connection weight vectors between F1 and F2 constitute the adaptive long-term memory (LTM, Long-Term Memory) of the network; the bottom-up weights are denoted z_ij and the top-down weights z_ji.
The extracted feature vector x is input to the F1 layer, and the input signal undergoes vector normalization, filtering, and nonlinear transformation through a positive-feedback closed loop formed by the four neurons z_i, q_i, v_i, u_i. After the inner-layer pattern u stabilizes through iteration, it is sent via the vigilance threshold ρ to the F2 layer, which selects and activates the corresponding F2 node through competition, yielding the STM pattern of the system. The output of the F2 layer is weighted by the LTM weights z_ji and fed back to the F1 layer; the feedback information, together with u, is sent to the orientation subsystem, which checks the similarity between the system's long-term memory pattern and the input pattern. If the similarity check passes, the input pattern is determined to belong to the candidate pattern of the F2 layer, and the weight learning is completed in one step by a fast learning algorithm; if the match check fails, a new output node is added to represent the new modulation type. From the above analysis, the ART2/2A neural network has two kinds of memory mechanisms, two kinds of connection weights, and two kinds of inhibition signals. The two memory mechanisms are long-term memory and short-term memory. The two kinds of connection weights are the F1 to F2 instar weights, which decide the winning F2 neuron, and the F2 to F1 outstar weights, which encode the F1 input pattern. The two inhibition signals are the inhibition signal of the F1-field neurons, which comes from the gain control subsystem, and the inhibition signal of the F2-field neurons, which comes from the orientation subsystem.
As can be seen from the ART2/2A structure, in the ART2/2A algorithm the input vector and every weight vector are normalized, so all modulus values equal 1 and the amplitude information of the vectors is lost. Since digital modulation modes are largely distinguished by the amplitude information of the feature vectors, this approach is unsuitable.
The DWNN neural network was proposed on the basis of the wavelet neural network (WNN). A standalone DWNN aggravates the oscillation of the training process as more modes are combined; its training-speed advantage is not obvious, its recognition performance declines, and its extensibility to unlearned modes is relatively poor.
Summary of the invention
The objective of the present invention is to solve the problems that a single neural network has a long decision period and low recognition accuracy in the classification and identification of communication-signal modulation modes, by providing a classification and identification method for communication-signal modulation modes based on ART2A-DWNN.
The present invention is realized on the basis of an ART2A neural network and DWNN neural networks. The ART2A neural network consists of an attention subsystem, an orientation subsystem, and a modulation-type secondary judgment module; the attention subsystem is composed of the short-term memory feature representation field F1 and the short-term memory category representation field F2, and a vigilance threshold ρ is pre-stored in the orientation subsystem. The classification and identification process for communication-signal modulation modes based on ART2A-DWNN is:
Step 1: extract a feature vector from the communication signal and take the extracted feature vector as the input vector of the ART2A neural network; use the ART2A-E algorithm to process the input vector. Let the input vector be the N-dimensional vector X(k), input to the N neurons of the short-term memory feature representation field F1 of the ART2A neural network, X(k) = [x_1(k), …, x_N(k)]^T, where k is the sequence number of the input N-dimensional vector X(k);
Let M denote the total number of neurons in the short-term memory category representation field F2, and let μ(k) denote the number of occupied neurons in F2 when the N-dimensional vector X(k) is input; no weight vector is set for unoccupied neurons;
Merge the instar and outstar connection weights between the short-term memory feature representation field F1 and the short-term memory category representation field F2 into a single-direction weight vector W_j(k) from F1 to F2, W_j(k) = [w_j1(k), …, w_jN(k)]^T, j = 1~μ(k);
Use a linear transformation to make x_i(k) satisfy 0 ≤ x_i(k) ≤ 1, i = 1~N;
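The linear transformation in step 1 can be sketched as a per-feature min-max scaling (a minimal sketch; the patent requires only that each component land in [0, 1], not this exact transform):

```python
import numpy as np

def scale_to_unit(X, lo=None, hi=None):
    """Linearly map each feature column of X into [0, 1].

    lo/hi default to the per-feature min/max of X; in practice they
    would come from the training feature set.
    """
    X = np.asarray(X, dtype=float)
    lo = X.min(axis=0) if lo is None else np.asarray(lo, dtype=float)
    hi = X.max(axis=0) if hi is None else np.asarray(hi, dtype=float)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against zero range
    return np.clip((X - lo) / span, 0.0, 1.0)
```

Because the scaling is affine and monotone per feature, it preserves the relative amplitude ordering that the ART2A-E matching relies on.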
Step 2: the N-dimensional vector X(k) competes, is matched, learns, and is clustered. When the sequence number k = 1 of the input N-dimensional vector X(k): W_j(k) = W_1(1) = X(1), μ(1) = 0, μ(2) = 1; then execute step 3;
When the sequence number k > 1, the competition and the matching-degree calculation are completed in one step. Denote the first winning label with the highest matching degree by j_1*(k); j_1*(k) is obtained by computing the Euclidean distance between X(k) and W_{j*(k)}(k). Denote the matching degree corresponding to j_1*(k) by η_1, and compare η_1 with the vigilance threshold ρ pre-stored in the orientation subsystem:
When η_1 < ρ, the pattern of the input N-dimensional vector X(k) does not conform to any pre-stored pattern; compare μ(k) with M: if μ(k) < M, open a new cluster region with label μ(k)+1 and set its weight vector W_{μ(k)+1}(k) = X(k); if μ(k) = M, the neurons of the short-term memory category representation field F2 are fully occupied, so finish the clustering and proceed to the next learning beat;
When η_1 ≥ ρ, the pattern of the input N-dimensional vector X(k) conforms to a pre-stored pattern and the matching degree is qualified; the input X(k) enters the adaptive resonance state, and the weight vectors are adjusted while the clustering is completed: if j ≠ j_1*(k), W_j(k+1) = W_j(k); if j = j_1*(k), W_j(k+1) = W_j(k) + α[X(k) - W_j(k)], where α is the preset learning rate; then execute step 3;
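One presentation of the competition, matching, and resonance rule of step 2 can be sketched as follows (a hedged sketch: the function and variable names are mine, and the matching degree is taken as 1 minus the Euclidean distance normalized by the largest possible distance for components in [0, 1]):

```python
import numpy as np

def art2ae_step(x, weights, rho, alpha, max_nodes):
    """One ART2A-E presentation: compete, match against vigilance rho,
    then either resonate (update the winner) or open a new cluster.

    x       : input feature vector, components already scaled to [0, 1]
    weights : list of committed cluster weight vectors
    Returns (winner_index_or_None, weights).
    """
    n = len(x)
    if not weights:                       # first input seeds cluster 1
        return 0, [x.copy()]
    # Competition: the winner is the closest committed node (Euclidean).
    dists = [np.linalg.norm(x - w) for w in weights]
    j = int(np.argmin(dists))
    eta1 = 1.0 - dists[j] / np.sqrt(n)    # matching degree of the winner
    if eta1 >= rho:                       # resonance: adjust the winner only
        weights[j] = weights[j] + alpha * (x - weights[j])
        return j, weights
    if len(weights) < max_nodes:          # mismatch: open a new cluster region
        weights.append(x.copy())
        return len(weights) - 1, weights
    return None, weights                  # F2 field full: defer to next beat
```

Because only the winning weight vector moves, already-learned cluster regions are not disturbed by new inputs, which is the stability property the method relies on.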
Step 3: roughly classify the vectors obtained after the clustering in step 2. Let the vector output after clustering be Y(k) = [Y_1(k), …, Y_M(k)]^T; input Y(k) to the modulation-type secondary judgment module, which judges Y(k) to be one of L classes of modulation types MT_i, L ≤ M, 1 ≤ i ≤ L;
Step 4: fine classification. Take each class of output vectors [Y_i(k), …, Y_j(k)]^T from the rough classification in step 3 as the input vector of one DWNN neural network; train the DWNN and adjust its weights. When the error between the DWNN output vector and the preset expected output vector is less than a preset threshold, the classification and identification of the communication-signal modulation mode is finished; otherwise the next round of iterative learning proceeds.
Advantage of the present invention is:
The present invention adopts the design philosophy of a competitive learning strategy and self-stabilization, and proposes the ART2A-E neural network algorithm based on the ART2A neural network; it has good adaptive learning and self-categorizing ability, and good extensibility. The ART2A-E algorithm mainly consists of three parts: preprocessing, competition and matching, and adaptive learning. First, feature vectors are extracted from the communication signal according to its modulation mode and taken as the input vectors of the ART2A neural network. Because no vector normalization is performed in the preprocessing of the input vector, the amplitude information of the vector is preserved, making the discrimination and identification of input vectors more accurate. At the same time, a modulation-type secondary judgment module is designed behind the output nodes of the ART2A-E network to roughly classify the clustered vectors, so that the samples falling in any cluster region of the same modulation type can be judged to be one modulation type. What the matching-degree calculation compares is the Euclidean distance between two vectors; that is, the matching degree can be high only when both the angle and the amplitude difference between the two vectors are small. For situations where amplitude information cannot be ignored, this is a reasonable choice. In addition, the calculation process of this algorithm is simple and its computational load is small, and it has the common characteristics of ART neural networks: training is self-organizing, with unsupervised learning ability; learned patterns are recognized quickly and stably, while new unlearned patterns can be adapted to rapidly; and dynamic input pattern samples can be clustered and recognized adaptively, so the algorithm can adapt to non-stationary environments.
The present invention uses a combined neural network to classify and identify the modulation modes of communication signals. It can recognize more kinds of combined patterns, has enhanced ability to recognize similar patterns, and has the advantage of a wide recognition range, overcoming the shortcoming that a single kind of neural network recognizes only a few kinds. It extends well to untrained modulation modes: there is no need to retrain the DWNNs with all patterns; one only needs to add a relatively simple DWNN after the first layer adaptively categorizes the new class, and train it with the untrained patterns. Experiments show that the training at that point is very fast. Therefore, the method of the present invention is adaptive and extensible, and from this angle it is superior to a BP network classifier or a DWNN classifier. The algorithm of the present invention not only has noise immunity, but also has the ability to isolate modulation modes affected by noise.
The training speed of the method of the present invention is much faster than that of a single DWNN classifier or ART2A-E classifier, and its computational load is small, which helps meet real-time requirements; from this angle too, the combined neural network classifier is superior to a single DWNN classifier or ART2A-E classifier.
Description of drawings
Fig. 1 is the network structure of the standard ART2/2A neural network; Fig. 2 is the topological structure of the j-th processing unit in the neural network shown in Fig. 1; Fig. 3 is the structural diagram of the DWNN neural network; Fig. 4 is the overall flow diagram of the present invention; Fig. 5 is a diagram of the class centroids of the ART2A-E neural network; Fig. 6 is a diagram of the region of inseparable patterns after ART2A-E recognition in Fig. 5; Fig. 7 is the map obtained by using the Sammon nonlinear algorithm to map the inseparable patterns in Fig. 6 into separable patterns.
Embodiment
Embodiment one: this embodiment is described with reference to Figs. 1 to 5. It is realized on the basis of an ART2A neural network and DWNN neural networks. The ART2A neural network consists of an attention subsystem, an orientation subsystem, and a modulation-type secondary judgment module; the attention subsystem is composed of the short-term memory feature representation field F1 and the short-term memory category representation field F2, and a vigilance threshold ρ is pre-stored in the orientation subsystem. The classification and identification process for the communication-signal modulation mode is:
Step 1: extract a feature vector from the communication signal and take the extracted feature vector as the input vector of the ART2A neural network; use the ART2A-E algorithm to process the input vector. Let the input vector be the N-dimensional vector X(k), input to the N neurons of the short-term memory feature representation field F1 of the ART2A neural network, X(k) = [x_1(k), …, x_N(k)]^T, where k is the sequence number of the input N-dimensional vector and T denotes matrix transposition;
Let M denote the total number of neurons in the short-term memory category representation field F2, and let μ(k) denote the number of occupied neurons in F2 when the N-dimensional vector X(k) is input; no weight vector is set for unoccupied neurons;
Merge the instar and outstar connection weights between the short-term memory feature representation field F1 and the short-term memory category representation field F2 into a single-direction weight vector W_j(k) from F1 to F2, W_j(k) = [w_j1(k), …, w_jN(k)]^T, j = 1~μ(k);
Use a linear transformation to make x_i(k) satisfy 0 ≤ x_i(k) ≤ 1, i = 1~N;
Step 2: the N-dimensional vector competes, is matched, learns, and is clustered. When the sequence number k = 1 of the input N-dimensional vector: W_j(k) = W_1(1) = X(1), μ(1) = 0, μ(2) = 1; then execute step 3;
When the sequence number k > 1, the competition and the matching-degree calculation are completed in one step. Denote the first winning label with the highest matching degree by j_1*(k); j_1*(k) is obtained by computing the Euclidean distance between X(k) and W_{j*(k)}(k). Denote the matching degree corresponding to j_1*(k) by η_1, and compare η_1 with the vigilance threshold ρ pre-stored in the orientation subsystem:
When η_1 < ρ, the pattern of the input N-dimensional vector does not conform to any pre-stored pattern; compare μ(k) with M: if μ(k) < M, open a new cluster region with label μ(k)+1 and set its weight vector W_{μ(k)+1}(k) = X(k); if μ(k) = M, the neurons of the short-term memory category representation field F2 are fully occupied, so finish the clustering and proceed to the next learning beat;
When η_1 ≥ ρ, the pattern of the input N-dimensional vector conforms to a pre-stored pattern and the matching degree is qualified; the input vector enters the adaptive resonance state, and the weight vectors are adjusted while the clustering is completed: if j ≠ j_1*(k), W_j(k+1) = W_j(k); if j = j_1*(k), W_j(k+1) = W_j(k) + α[X(k) - W_j(k)], where α is the preset learning rate; then execute step 3;
Step 3: roughly classify the vectors obtained after the clustering in step 2. Let the vector output after clustering be Y(k) = [Y_1(k), …, Y_M(k)]^T; input Y(k) to the modulation-type secondary judgment module, which judges Y(k) to be one of L classes of modulation types MT_i, L ≤ M, 1 ≤ i ≤ L;
Step 4: fine classification. Take each class of output vectors [Y_i(k), …, Y_j(k)]^T from the rough classification in step 3 as the input vector of one DWNN neural network; train the DWNN and adjust its weights. When the error between the DWNN output vector and the preset expected output vector is less than a preset threshold, finish the classification and identification of the communication-signal modulation mode; otherwise proceed to the next round of iterative learning.
In the preprocessing: the ART2A-E algorithm requires the input vector X(k) to satisfy 0 ≤ x_i(k) ≤ 1, i = 1~N; a linear transformation is applied to components that do not meet this requirement so that they do, and no vector normalization is performed here.
In the formula W_j(k+1) = W_j(k) + α[X(k) - W_j(k)], α is the learning rate, which decides the speed with which each weight vector of the network adapts. There is no explicit formula for determining the learning rate: choosing it too large prevents convergence, while choosing it too small makes the learning process too slow. Online learning requires a relatively high learning speed, so the learning rate should be moderate; for a non-online learning process, the learning rate need not be high.
When the ART2A-E algorithm is used to classify and identify communication signals, differences in the within-class clustering tightness of the sample feature sets of different modulation-mode signals cause one modulation type to occupy several cluster regions, i.e., several F2-field output nodes. Under a certain permissible error, some minor cluster regions can be ignored, and when the vigilance parameter ρ is chosen appropriately, the main cluster regions are stable. Therefore, a modulation-type secondary judgment module is designed behind the output nodes of ART2A-E to make the final modulation-type decision. After training, every pattern in the ART2A-E network occupies several consecutive output nodes (representing the corresponding cluster regions); the effect of the secondary modulation-type judgment is precisely to judge the samples falling in any cluster region of the same modulation type to be that modulation type, i.e., according to the training result, several related consecutive F2-field output nodes can be judged to be one modulation type. The choice of the vigilance parameter ρ significantly affects the recognition accuracy of the network: the larger ρ is, the more sensitive the network and the finer the classification, but the memory capacity of the network is rapidly exhausted in a noisy input environment; the smaller ρ is, the coarser the classification, and dissimilar signals are easily grouped into one class. ART2A-E is a self-organizing classification network in which the vigilance parameter plays an important role. With a large vigilance parameter, the ART2A-E network can distinguish patterns with similar features, but cannot recognize patterns corrupted by noise or feature distortion; with a small vigilance parameter, several similar patterns are grouped into one class, and the noise immunity is good. However, at present there is no quantitative theoretical guidance for determining the vigilance parameter, so a relatively suitable value can only be selected through experimental comparison and analysis.
All input pattern samples and synapse vectors can be normalized and mapped into the space R^n. Using Sammon's nonlinear mapping theory, the normalized vectors can be mapped from R^n to R^2, as shown in Fig. 5. This method converts high-dimensional vectors into low-dimensional vectors through a mapping while keeping the inner structure of the data almost unchanged. In Fig. 5, D_1, D_2, …, D_5 denote the five classes produced by the ART2A-E network, and m_1, m_2, …, m_5 are the centroids of the respective classes. Y denotes a test pattern, and θ denotes the angle between the test pattern and the centroid of class D_3. D_j ∈ R^n and R^n = ∪_{j=1}^{5} D_j, D_i ∩ D_j = Ø (i ≠ j). By analyzing the competitive learning mechanism of ART2A-E, it can be seen that the obtained synapse weight vector m_j is the centroid of D_j, i.e., m_j represents the decision center of class D_j.
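Sammon's mapping as used for Fig. 5 can be sketched by plain gradient descent on the Sammon stress (a minimal illustration with names of my choosing; the classic implementation uses a Newton-style update rather than a fixed step):

```python
import numpy as np

def sammon_map(X, n_iter=300, lr=0.3, seed=0):
    """Map n-dimensional points to 2-D while approximately preserving
    pairwise distances, by gradient descent on Sammon's stress
    E = (1/c) * sum (D_ij - d_ij)^2 / D_ij, with c = sum D_ij.
    """
    X = np.asarray(X, dtype=float)
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(D, 1.0)            # avoid /0; diagonal masked below
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((n, 2)) * 1e-2
    mask = ~np.eye(n, dtype=bool)
    c = D[mask].sum()
    for _ in range(n_iter):
        d = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
        np.fill_diagonal(d, 1.0)
        # gradient of the stress with respect to the 2-D positions
        coef = np.where(mask, (d - D) / (D * d), 0.0)
        grad = 2.0 / c * (coef[:, :, None] * (Y[:, None] - Y[None, :])).sum(axis=1)
        Y -= lr * grad
    return Y
```

For a point set that embeds exactly in the plane, the stress is driven toward zero, which is what makes the clusters of Fig. 7 visibly separable.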
Embodiment two: this embodiment further specifies step 2 of embodiment one. The formula for computing the first winning label j_1*(k) in step 2 is:
j_1*(k) = arg min_j ‖X(k) - W_j(k)‖, j = 1~μ(k),
where ‖X(k) - W_{j*(k)}(k)‖ denotes the Euclidean distance between X(k) and W_{j*(k)}(k), computed as:
‖X(k) - W_{j*(k)}(k)‖ = [ Σ_{i=1}^{N} ( x_i(k) - w_{j*(k)i}(k) )² ]^{1/2},
and the matching degree η_1 corresponding to the first winning label j_1*(k) is:
η_1 = 1 - ‖X(k) - W_{j*(k)}(k)‖ / √N.
Embodiment three: this embodiment further specifies step 3 of embodiment one. In step 3, the method by which the modulation-type secondary judgment module judges the clustered output vector Y(k) to be one of L classes of modulation types MT_i is: set a corresponding vigilance threshold for each of the L classes of modulation types MT_i, and according to these thresholds judge one or more consecutive output nodes [Y_i(k), …, Y_j(k)]^T to be one modulation type MT_i, where 1 ≤ i ≤ j ≤ M.
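The secondary judgment table that assigns consecutive output nodes to one modulation type can be sketched as follows (the node ranges in the test are hypothetical example data, not values from the patent):

```python
def build_node_to_type(node_ranges):
    """Secondary-judgment lookup table: map each ART2A-E output node
    index to a modulation type, given per-type inclusive ranges of
    consecutive output nodes determined during training.
    """
    table = {}
    for mtype, (lo, hi) in node_ranges.items():
        for node in range(lo, hi + 1):
            table[node] = mtype
    return table
```

At recognition time, whichever cluster node wins the competition is simply looked up in this table to obtain the coarse modulation type.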
Embodiment four: this embodiment is described with reference to Figs. 6 and 7 and further specifies step 4 of embodiment one. In step 4, the method for training each DWNN neural network and adjusting its weights is as follows. In each DWNN, the output vector corresponding to the input vector [Y_i(k), …, Y_j(k)]^T is
v_n(p) = Σ_{t=1}^{A} w_{nt} · 2^{j/2} ψ( 2^j Σ_{s=1}^{I} U_{ts} Y_s(p) - k1 ),
where v_n(p) is the output of the n-th output node of the DWNN for the p-th input vector; n ranges from 1 to B, with B the number of output-layer nodes of the DWNN; A is the number of hidden-layer nodes of the DWNN; I is the number of input-layer nodes; w_{nt} is the connection weight from the middle layer to the output layer; U_{ts} is the connection weight from the input layer to the middle layer; 2^j is the wavelet dilation factor, with range 2^{-J}~2^{J}; Y_s(p) is the s-th input value of the p-th training sample, with s ranging from 1 to I; k1 is the translation factor of the wavelet transform, with range -K to K; t is the label of the hidden-layer neuron node, ranging from 1 to (2J+1)(2K+1); j = t/(2K+1) - J; A = (2J+1)(2K+1); k1 = mod(t, 2K+1) - 1, where mod is the remainder function.
Let
d(p) = [d_1(p), …, d_B(p)]^T
be the expected output vector of the DWNN; then the output error of the DWNN is
E = (1/2) Σ_{p=1}^{P} Σ_{n=1}^{B} [ d_n(p) - v_n(p) ]²,
where P is the number of training samples.
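Under these definitions, the forward pass and the output error can be sketched as follows (variable names are mine; the Morlet constant 1.75 is an assumption, since the patent names the Morlet wavelet without fixing its constant):

```python
import numpy as np

def dwnn_forward(Y, U, w, scales, shifts):
    """Forward pass of the single-hidden-layer DWNN described above:
    v = w @ a, where hidden node t responds with
    a_t = 2^(j_t/2) * psi(2^j_t * (U[t] . Y) - k1_t),
    psi being a real Morlet wavelet cos(1.75 x) * exp(-x^2 / 2).
    """
    arg = 2.0 ** scales * (U @ Y) - shifts   # dilated, translated input
    a = 2.0 ** (scales / 2.0) * np.cos(1.75 * arg) * np.exp(-arg ** 2 / 2.0)
    return w @ a

def dwnn_error(d, v):
    """Output error E = 1/2 * sum_n (d_n - v_n)^2 for one sample."""
    return 0.5 * float(np.sum((d - v) ** 2))
```

Summing dwnn_error over all P training samples gives the total error E that the partial derivatives below are taken of.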
The partial derivative of the DWNN output error with respect to the input-layer-to-middle-layer connection weights is:
δU_{ts} = ∂E/∂U_{ts} = - Σ_{p=1}^{P} Σ_{n=1}^{B} [ d_n(p) - v_n(p) ] w_{nt} 2^{j/2} (∂ψ/∂s′) Y_s(p),  s′ = Σ_{s=1}^{I} U_{ts} Y_s(p),
The partial derivative of the DWNN output error with respect to the middle-layer-to-output-layer connection weights is:
δw_{nt} = ∂E/∂w_{nt} = - Σ_{p=1}^{P} [ d_n(p) - v_n(p) ] 2^{j/2} ψ( 2^j Σ_{s=1}^{I} U_{ts} Y_s(p) - k1 ),
The adjustment amount of the input-layer-to-middle-layer connection weights of the DWNN is:
ΔU_{ts}^{i+1} = -η ∂E/∂U_{ts}^{i} + γ ΔU_{ts}^{i},
where η is the preset learning factor and γ is the preset momentum factor.
The adjustment amount of the middle-layer-to-output-layer connection weights of the DWNN is:
Δw_{nt}^{i+1} = -η ∂E/∂w_{nt}^{i} + γ Δw_{nt}^{i}.
The input-layer-to-middle-layer connection weights of the DWNN are adjusted as:
U_{ts}^{i+1} = U_{ts}^{i} + ΔU_{ts}^{i+1},
and the middle-layer-to-output-layer connection weights of the DWNN are adjusted as:
w_{nt}^{i+1} = w_{nt}^{i} + Δw_{nt}^{i+1}.
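Both momentum updates above follow the same rule, which can be sketched as (function and variable names are mine):

```python
import numpy as np

def momentum_update(w, grad, prev_delta, eta=0.1, gamma=0.9):
    """Gradient-descent weight update with a momentum term, as used for
    the DWNN connection weights: delta = -eta * grad + gamma * prev_delta,
    then w_new = w + delta. Returns (w_new, delta).
    """
    delta = -eta * grad + gamma * prev_delta
    return w + delta, delta
```

The momentum term γΔ carries part of the previous step forward, damping the training-process oscillation mentioned earlier for a standalone DWNN.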
The DWNN algorithm:
The DWNN algorithm was proposed on the basis of the wavelet neural network. Here ψ(x) is the mother wavelet function, and a suitable mother wavelet can be selected according to the characteristics of the signal to be identified. The wavelet neural network is not only similar in structure to the BP network; its learning rules also adopt the error back-propagation algorithm.
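A real-valued Morlet mother wavelet of the form commonly used in wavelet neural networks can be sketched as (the modulation constant 1.75 is an assumption; the patent names Morlet but does not fix the constant):

```python
import numpy as np

def morlet(x):
    """Real Morlet mother wavelet: a cosine carrier under a Gaussian
    envelope, giving good localization in both time and frequency."""
    return np.cos(1.75 * x) * np.exp(-x * x / 2.0)
```

The Gaussian envelope makes ψ decay rapidly, which is what gives the DWNN hidden nodes their local response in both domains.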
The ART2A-DWNN combined neural network:
The structure of ART2A-DWNN is shown in Fig. 4. As the first layer of the combined neural network, the ART2A-E network groups similar patterns into one class by choosing a relatively small vigilance parameter value, i.e., it performs rough classification; a DWNN is connected directly to the output of the corresponding class of the ART2A network. The DWNN adopts a three-layer structure and uses the Morlet mother wavelet ψ(x), which has high resolution in both the time and frequency domains. It is a multi-input, multi-output feedforward neural network: a single-hidden-layer feedforward network is constructed with dyadic wavelet basis functions as the neuron activation functions, learning uses the error back-propagation method, and the synaptic weights are revised with a conjugate gradient method until the output falls within the error range. After the rough classification by the ART2A-E layer, the number of patterns in each class is significantly reduced, so the DWNN converges quickly and the recognition accuracy is higher.
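The two-stage recognition flow of the combined network can be sketched as follows (all names here are illustrative, not the patent's; placeholder callables stand in for the trained per-class DWNNs):

```python
import numpy as np

def classify(x, art_weights, rho, node_to_type, dwnn_by_type):
    """Two-stage ART2A-DWNN classification sketch: stage 1 picks the
    coarse class via the trained ART2A-E weight vectors and the
    secondary-judgment table, stage 2 hands x to that class's DWNN
    for the fine decision. Returns None for an unmatched pattern.
    """
    n = len(x)
    dists = [np.linalg.norm(x - w) for w in art_weights]
    j = int(np.argmin(dists))
    eta1 = 1.0 - dists[j] / np.sqrt(n)      # matching degree of winner
    if eta1 < rho:
        return None                          # unmatched: candidate new class
    coarse = node_to_type[j]                 # secondary type judgment
    return coarse, dwnn_by_type[coarse](x)   # fine classification by DWNN
```

Because an unmatched input returns None instead of being forced into an existing class, it can seed a new cluster, after which only the DWNN attached to that new class needs training.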
As shown in Figure 6, [Y_min, Y_max] is the boundary of a class; it depends on the size of the vigilance parameter of the ART2A-E. The smaller the vigilance parameter, the larger the scope of a class, i.e. the more patterns the class contains. In the second stage, the DWNN identifies the patterns within a class. Suppose, for example, that class 1 contains six patterns t1, t2, t3, ..., t6, with centroid m1. At the output layer of the DWNN, a specific output vector is assigned to each pattern, e.g. [001] represents pattern t1 and [010] represents pattern t2. Using Sammon's nonlinear mapping algorithm, as shown in Figure 7, t1, t2, t3, ..., t6 are distributed around a circular region, showing that clear decision boundaries can be found, so the DWNN network can discriminate these six patterns. Thus the ART2A-DWNN network improves the recognition capability of the classifier: it can recognize patterns that mutually overlap in feature space, which a single ART2A-E classifier or DWNN classifier cannot successfully separate.
When a new pattern is input to the ART2A-DWNN network, the ART2A neural network automatically either clusters it into an already-existing class according to its degree of similarity to the signals in each class, or treats it as a new class without disturbing the existing clusters. Therefore, only the DWNN connected to the class that receives the new pattern needs to be retrained — an important property from the standpoint of extensibility. By contrast, if only a single DWNN classifier were used, the DWNN network would have to be retrained with all patterns to be identified; the training time would be very long, and slow convergence and poor stability would result. Because the recognition performance of both the ART2A-E and the DWNN classifier degrades as the number of patterns to be identified grows, the ART2A-DWNN classifier can recognize more patterns than either a single ART2A-E classifier or a single DWNN classifier. Moreover, in the ART2A-DWNN classifier, after the input patterns pass through the first-layer ART2A-E rough classification, the number of patterns in each class is greatly reduced; the DWNN used within each class therefore needs far less training time, and since the DWNNs are mutually independent they can be trained in parallel, further saving processing time.
In addition, because Sammon's nonlinear mapping algorithm can find clear decision surfaces for the ART2A-DWNN classifier, the combination neural network does not suffer from the error curve converging to a local minimum point, thereby avoiding the local-minimum convergence problem of DWNN training with the error back-propagation algorithm alone.
The recognition method of the present invention designs the ART2A-DWNN structure around the respective characteristics of the ART2A and DWNN classifiers. With its two-layer discrimination structure, it has a very wide recognition range, a high recognition rate, strong extensibility and very strong noise immunity, and its training and recognition speeds are fast, remedying the deficiencies of a single neural-network classifier.
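The first-stage ART2A-E behaviour described above — competition by Euclidean distance, a vigilance test on the matching degree, then either resonance (adapting the winner) or the opening of a new cluster — can be sketched in a minimal form. This sketch is an assumption-laden illustration, not the patented method: it assumes inputs already normalised to [0, 1], omits the cap M on the category field, and uses the matching degree η1 = 1 − ‖X − W‖/N given in claim 2.

```python
import numpy as np

def art2ae_step(x, prototypes, rho=0.9, alpha=0.1):
    """Cluster one normalised input vector x (components in [0, 1]).

    prototypes -- list of weight vectors W_j, grown as clusters open
    rho        -- vigilance threshold
    alpha      -- learning rate for resonance updates
    Returns the cluster index assigned to x; updates prototypes in place.
    """
    if not prototypes:                          # k = 1: first input seeds W_1
        prototypes.append(x.astype(float).copy())
        return 0
    dists = [np.linalg.norm(x - w) for w in prototypes]
    j = int(np.argmin(dists))                   # winning label j1*(k)
    eta1 = 1.0 - dists[j] / x.size              # matching degree (claim 2 form)
    if eta1 >= rho:                             # resonance: adapt the winner
        prototypes[j] += alpha * (x - prototypes[j])
        return j
    prototypes.append(x.astype(float).copy())   # mismatch: open a new cluster
    return len(prototypes) - 1
```

A small vigilance ρ makes η1 pass the test more easily, so more patterns fall into each cluster — exactly the coarse grouping the first stage relies on before the per-class DWNNs refine the decision.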

Claims (3)

1. A classification and identification method for a communication signal modulation mode based on ART2A-DWNN, characterized in that the classification and identification method is realized with an ART2A neural network and a DWNN neural network; the ART2A neural network is composed of an attention subsystem, an orientation subsystem and a modulation-type secondary judging module; the attention subsystem is composed of the short-term-memory feature representation field F1 and the short-term-memory category representation field F2; a vigilance threshold ρ is pre-stored in the orientation subsystem; the classification and identification process of the communication signal modulation mode based on ART2A-DWNN is:
Step 1: extract a feature vector from the communication signal, take the extracted feature vector as the input vector of the ART2A neural network, and process the input vector with the ART2A-E algorithm. Let the input vector be the N-dimensional vector X(k), input to the N neurons of the short-term-memory feature representation field F1 of the ART2A neural network, X(k) = [x_1(k), ..., x_N(k)]^T, where k is the sequence number of the input vector X(k);
let M denote the total number of neurons in the short-term-memory category representation field F2, and let μ(k) be the number of neurons in F2 already occupied when X(k) is input; no weight vector is set for unoccupied neurons;
merge the bottom-up and top-down connection weights between F1 and F2 into the single-direction (F1 → F2) weight vector W_j(k), W_j(k) = [w_j1(k), ..., w_jN(k)]^T, j = 1 ~ μ(k);
apply a linear transformation so that each x_i(k) satisfies 0 ≤ x_i(k) ≤ 1, i = 1 ~ N;
Step 2: let X(k) compete, match and cluster. When the sequence number k = 1 of the input vector X(k), W_j(k) = W_1(1) = X(1), μ(1) = 0, μ(2) = 1; then execute Step 3;
when k > 1, the competition and the matching-degree calculation are completed in one step; the winning label with the highest matching degree is denoted j1*(k), obtained by calculating the Euclidean distance between X(k) and W_j*(k)(k); the matching degree corresponding to j1*(k) is η1; compare η1 with the vigilance threshold ρ pre-stored in the orientation subsystem:
when η1 < ρ, the pattern of X(k) does not match any pre-stored pattern; compare μ(k) with M: if μ(k) < M, open a new cluster with label μ(k)+1 and set W_{μ(k)+1}(k) = X(k); if μ(k) = M, the neurons of F2 are fully occupied, so finish clustering and proceed to the next learning beat;
when η1 ≥ ρ, the pattern of X(k) matches a pre-stored pattern, the matching degree is qualified, and X(k) enters the adaptive resonance state; the weight vectors are adjusted while clustering is completed: if j ≠ j1*(k), W_j(k+1) = W_j(k); if j = j1*(k), W_j(k+1) = W_j(k) + α[X(k) − W_j(k)], where α is the preset learning rate; then execute Step 3;
Step 3: roughly classify the vectors obtained from the clustering in Step 2. Let the vector output after clustering be Y(k) = [Y_1(k), ..., Y_M(k)]^T; input Y(k) to the modulation-type secondary judging module, which judges Y(k) as one of L classes of modulation type MT_i, L ≤ M, 1 ≤ i ≤ L;
Step 4: fine classification. Take each class of output vectors [Y_i(k), ..., Y_j(k)]^T from the rough classification in Step 3 as the input vector of a DWNN neural network; the DWNN neural network is trained and its weights are adjusted; when the error between the output vector of the DWNN neural network and the preset desired output vector is less than a preset threshold, the classification and identification of the communication signal modulation mode is finished; otherwise the next round of iterative learning proceeds.
2. The classification and identification method for the communication signal modulation mode based on ART2A-DWNN according to claim 1, characterized in that the winning label j1*(k) in said Step 2 is computed as:
j1*(k) = arg(min[‖X(k) − W_j*(k)(k)‖]), j = 1 ~ μ(k),
where ‖X(k) − W_j*(k)(k)‖ denotes the Euclidean distance between X(k) and W_j*(k)(k), computed as:
‖X(k) − W_j*(k)(k)‖ = [Σ_{i=1}^{N} (x_i(k) − w_{j*(k)i}(k))²]^{1/2},
and the matching degree η1 corresponding to the winning label j1*(k) is:
η1 = 1 − ‖X(k) − W_j*(k)(k)‖ / N.
3. The classification and identification method for the communication signal modulation mode based on ART2A-DWNN according to claim 1, characterized in that in Step 3 the modulation-type secondary judging module judges the vector Y(k) output after clustering as one of the L classes of modulation type MT_i as follows: set a corresponding vigilance threshold for each of the L classes of modulation type MT_i, and according to the vigilance threshold judge one or more continuous output nodes [Y_i(k), ..., Y_j(k)]^T as one modulation type MT_i, where 1 ≤ i ≤ j ≤ M.
CN2009100730588A 2009-10-15 2009-10-15 Classification and identification method for communication signal modulating mode based on ART2A-DWNN Expired - Fee Related CN101667252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100730588A CN101667252B (en) 2009-10-15 2009-10-15 Classification and identification method for communication signal modulating mode based on ART2A-DWNN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009100730588A CN101667252B (en) 2009-10-15 2009-10-15 Classification and identification method for communication signal modulating mode based on ART2A-DWNN

Publications (2)

Publication Number Publication Date
CN101667252A CN101667252A (en) 2010-03-10
CN101667252B true CN101667252B (en) 2011-07-20

Family

ID=41803868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100730588A Expired - Fee Related CN101667252B (en) 2009-10-15 2009-10-15 Classification and identification method for communication signal modulating mode based on ART2A-DWNN

Country Status (1)

Country Link
CN (1) CN101667252B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101893704B (en) * 2010-07-20 2012-07-25 哈尔滨工业大学 Rough set-based radar radiation source signal identification method
CN102868654B (en) * 2012-09-10 2015-02-18 电子科技大学 Method for classifying digital modulation signal in cognitive network
CN105938558B (en) * 2015-03-06 2021-02-09 松下知识产权经营株式会社 Learning method
CN108154164A (en) * 2017-11-15 2018-06-12 上海微波技术研究所(中国电子科技集团公司第五十研究所) Signal of communication modulation classification system and method based on deep learning
CN108596204B (en) * 2018-03-15 2021-11-09 西安电子科技大学 Improved SCDAE-based semi-supervised modulation mode classification model method
CN108540202B (en) * 2018-03-15 2021-01-26 西安电子科技大学 Satellite communication signal modulation mode identification method and satellite communication system
CN108764077B (en) * 2018-05-15 2021-03-19 重庆邮电大学 Digital signal modulation classification method based on convolutional neural network
CN110070171A (en) * 2019-03-29 2019-07-30 中国科学院深圳先进技术研究院 Classification method, device, terminal and readable medium neural network based
CN110048978A (en) * 2019-04-09 2019-07-23 西安电子科技大学 A kind of signal modulate method
CN110613445B (en) * 2019-09-25 2022-05-24 西安邮电大学 DWNN framework-based electrocardiosignal identification method
CN111428655A (en) * 2020-03-27 2020-07-17 厦门大学 Scalp detection method based on deep learning
CN111680601A (en) * 2020-06-01 2020-09-18 浙江工业大学 Wireless signal modulation classifier visualization method based on long-term and short-term memory network
CN112115821B (en) * 2020-09-04 2022-03-11 西北工业大学 Multi-signal intelligent modulation mode identification method based on wavelet approximate coefficient entropy
CN113762492B (en) * 2021-08-27 2024-03-01 同济大学 Image recognition system and method based on organic synaptic transistor artificial neural network
CN113792852B (en) * 2021-09-09 2024-03-19 湖南艾科诺维科技有限公司 Signal modulation mode identification system and method based on parallel neural network
CN114567512B (en) * 2022-04-26 2022-08-23 深圳市永达电子信息股份有限公司 Network intrusion detection method, device and terminal based on improved ART2

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1036171C (en) * 1994-06-17 1997-10-15 武汉中南通信技术发展公司 Method and device for monitoring and testing non-voice business in telephone network

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1036171C (en) * 1994-06-17 1997-10-15 武汉中南通信技术发展公司 Method and device for monitoring and testing non-voice business in telephone network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Anfei et al., "A new ART-network method for remote sensing image classification," Microcomputer Information, 2005, vol. 21, no. 10-1, pp. 96-97, 123. *
Wu Zhilu et al., "Digital modulation recognition based on ART2A-E neural network," Journal of Harbin University of Commerce (Natural Science Edition), 2004, vol. 20, no. 6, pp. 651-654. *

Also Published As

Publication number Publication date
CN101667252A (en) 2010-03-10

Similar Documents

Publication Publication Date Title
CN101667252B (en) Classification and identification method for communication signal modulating mode based on ART2A-DWNN
Haykin Neural networks and learning machines, 3/E
Murphey et al. Neural learning from unbalanced data
Zhao et al. Capsule networks with max-min normalization
Parvin et al. A classifier ensemble of binary classifier ensembles
Sokar et al. A generic OCR using deep siamese convolution neural networks
Khuat et al. An improved online learning algorithm for general fuzzy min-max neural network
Patel et al. Neural networks, fuzzy inference systems and adaptive-neuro fuzzy inference systems for financial decision making
Parvin et al. A scalable method for improving the performance of classifiers in multiclass applications by pairwise classifiers and GA
Kaur Implementation of backpropagation algorithm: A neural net-work approach for pattern recognition
Amasyali et al. Cline: A new decision-tree family
Xiao et al. Dynamic classifier ensemble selection based on GMDH
Parvin et al. Divide & conquer classification and optimization by genetic algorithm
Shahid et al. A new approach to image classification by convolutional neural network
Baruque et al. Hybrid classification ensemble using topology-preserving clustering
Mousavi A New Clustering Method Using Evolutionary Algorithms for Determining Initial States, and Diverse Pairwise Distances for Clustering
Guerfala et al. Data classification using logarithmic spiral method based on RBF classifiers
US20040064425A1 (en) Physics based neural network
Zhou et al. An extreme learning machine method for multi-classification with mahalanobis distance
Sapkal et al. Analysis of classification by supervised and unsupervised learning
Wesołowski et al. Time Series Classification Based on Fuzzy Cognitive Maps and Multi-Class Decomposition with Ensembling
Ozyildirim et al. Handwritten digits classification with generalized classifier neural network
Cordella et al. An adaptive reject option for LVQ classifiers
Cleofas et al. Use of ensemble based on ga for imbalance problem
Al Abdouli Handling the class imbalance problem in binary classification

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110720

Termination date: 20141015

EXPY Termination of patent right or utility model