CN108694390B - Modulation signal classification method for cuckoo-search-improved grey wolf optimization support vector machine - Google Patents

Info

Publication number: CN108694390B (application CN201810462952.3A, filed 2018-05-15; published as CN108694390A on 2018-10-23; granted 2022-06-14)
Authority: CN (China)
Inventors: 孙洪波, 杨苏娟, 郭永安, 朱洪波
Current Assignee: Nanjing University of Posts and Telecommunications
Legal status: Active (granted)

Classifications

    • G06K9/00523; G06K9/00536; G06K9/6269
    • G06N3/006: Computing arrangements based on biological models; artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]


Abstract

The invention discloses a modulated-signal classification method based on a cuckoo-search-improved grey wolf optimized support vector machine. The method selects high-order cumulants and the approximate entropy of the local mean decomposition components as the characteristic parameters of the modulated signal, and uses cuckoo search to update the wolf-pack positions a second time in order to optimize the two key parameters of the least squares support vector machine model, the penalty coefficient γ and the kernel parameter σ, thereby obtaining optimal model parameter values. The method reduces the influence of noise on the signal identification result and compensates for the under-envelope, over-envelope and boundary effects of traditional empirical mode decomposition; it also remedies the weak global search capability of grey wolf optimization and its tendency to fall into local optima when processing high-dimensional data. MATLAB simulations comparing the method with the original grey wolf optimization show that it classifies modulated signals more efficiently and accurately, so the method has good application prospects.

Description

Modulation signal classification method for cuckoo-search-improved grey wolf optimization support vector machine
Technical Field
The invention relates to the field of modulated-signal classification and the field of swarm-intelligence optimization, in particular to a modulated-signal classification method in which cuckoo search improves a grey wolf optimized support vector machine.
Background
Signal modulation identification refers to recognizing the modulation mode and the parameters of each signal in an environment containing multiple modulated signals and noise interference. In general, a receiver can only intercept signals about which no prior knowledge is available, so effective identification of a signal's modulation mode has become increasingly important.
In the published literature on modulation identification, intelligent signal identification can be roughly divided into two categories: maximum-likelihood hypothesis testing methods based on decision theory, and statistical pattern classification methods based on feature extraction. For the modulation classification problem, the former mainly observes the waveform of the signal to be classified, hypothesizes that it belongs to one of the candidate modulation modes, and performs a similarity decision against a selected threshold to determine the modulation mode; the latter first extracts the characteristic parameters of the received signal and then determines the signal type by means of a pattern recognition system. In general, a pattern recognition framework for modulated signals comprises three main modules: signal preprocessing, characteristic parameter extraction, and the classifier. The research field of this invention focuses on the classifier module, whose quality directly determines the accuracy of identification. Three classifier design structures are currently popular: recognition methods based on decision tree theory, methods based on support vector machines, and recognition methods based on neural networks.
A Support Vector Machine (SVM) is an advanced pattern recognition method based on structural risk minimization and VC (Vapnik-Chervonenkis) dimension theory. The SVM's structural risk minimization property makes it well suited to nonlinear, small-sample and high-dimensional pattern recognition problems.
The Least Squares Support Vector Machine (LSSVM) is an improvement over the standard SVM. Unlike the inequality constraints adopted by the standard SVM, the LSSVM uses equality constraints, which yield a system of linear equations; this greatly simplifies the calculation, reduces the computational cost and makes the support vector machine easier to train. The penalty coefficient and the kernel parameter of the LSSVM model are collectively called hyper-parameters. From the above analysis it follows that establishing an LSSVM estimation model reduces to the problem of selecting these hyper-parameters: improperly selected hyper-parameters lower the reliability of the prediction results. The selection of appropriate parameters is decisive for the estimation accuracy and complexity of the model.
Grey Wolf Optimization (GWO) was inspired by the social intelligence of grey wolf packs living on the Eurasian continent. GWO mainly simulates the leadership hierarchy and hunting mechanism of wolves in nature, dividing the pack into four classes by imitating the wolves' dominance hierarchy, as shown in fig. 1. α, β and δ are the three best-performing wolves in the pack (best fitness), and they guide the other wolves (ω) toward the most promising region of the search space (where the global optimum of the problem lies). Throughout the iterative search, the α, β and δ wolves predict and evaluate the possible positions of the prey; that is, the search tends to move toward the key individuals, namely those with high fitness values.
The GWO optimization process is as follows: a pack of wolves is randomly generated in the search space; during evolution, α, β and δ evaluate and locate the position of the prey (the global optimum), the other individuals in the pack use these positions as references to calculate their distances to the prey, and the pack approaches, encircles and attacks the prey from all directions until it is finally captured. This position-update scheme has significant drawbacks: the global search capability is weak, and the search falls into local optima with high probability, especially on high-dimensional data.
The Cuckoo Search (CS) concept is based on the brood parasitism of certain cuckoo species and on Lévy flights. Research shows that cuckoo search needs no repeated parameter tuning when solving optimization problems, requires few parameters to be set, and is easy to implement. The initial solutions of cuckoo search represent the eggs already in the host nests, and each new solution generated represents a position where a cuckoo lays an egg. The final implementation rests on the following three assumed rules:
firstly, each cuckoo randomly selects a host nest in which to lay, and lays only one egg at a time;
secondly, the best host nests, i.e. the highest-quality cuckoo eggs (the optimal solutions), are retained for the next iteration;
thirdly, the host bird discovers a cuckoo egg with probability Pa; if the egg is discovered, it is either abandoned (the host keeps its nest) or the host abandons the nest and builds a new one elsewhere. Pa ∈ [0, 1] is the discovery probability through which this third assumption is realized.
From these three assumptions, the cuckoo's nest-searching path and position update can be derived. Based on the above rules, when the host-nest positions are updated, cuckoo search relies on two operations, the brood-parasitic behaviour and the Lévy flight. The Lévy flight produces a heavy-tailed distribution of step lengths, so that short and long jumps both occur with appreciable probability, while the direction of movement is highly random. As a result, cuckoo search escapes the current region and jumps to other areas more easily, completing a global search.
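The patent text does not spell out how the Lévy-flight steps are drawn; a common construction in the cuckoo-search literature is Mantegna's algorithm, sketched below in Python as an assumption (the exponent beta plays the role of the Lévy index λ):

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    """One Levy-flight step via Mantegna's algorithm (heavy-tailed lengths)."""
    rng = rng if rng is not None else np.random.default_rng()
    # Scale of the Gaussian numerator in Mantegna's method
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size=dim)
    v = rng.normal(0.0, 1.0, size=dim)
    return u / np.abs(v) ** (1 / beta)  # step per dimension, occasionally very long
```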
From the above background analysis, the key to the pattern recognition and classification of modulated signals is finding a method with strong search capability and high efficiency that does not easily fall into local optima despite the high dimensionality of the data, so as to determine the hyper-parameters of the least squares support vector machine estimation model.
Disclosure of Invention
Purpose of the invention: aiming at the problems in the prior art, the invention provides a method that combines cuckoo search and grey wolf optimization to determine the hyper-parameters of the least squares support vector machine estimation model; the method achieves a good classification effect when applied to modulated signals with high-dimensional data characteristics.
Technical scheme: to achieve the purpose of the invention, the adopted technical scheme is as follows: a modulated-signal classification method based on a cuckoo-search-improved grey wolf optimized support vector machine, comprising a training stage and a testing stage, with the following steps:
the training phase comprises the steps of:
(1) randomly extract N modulated signals from M signals of the five digital modulation types BPSK, QPSK, 8PSK, 16QAM and 64QAM to form a training signal set array1, ensuring that the N signals cover all 5 types; the remaining M-N signals naturally form the test signal set array2;
(2) for each signal x_i, i = 1, ..., N, in training signal set array1, extract the characteristic parameters F1, F2 based on the high-order cumulants and the approximate entropy characteristic parameters ApEn1, ApEn2 based on the local mean decomposition; the extracted characteristic parameters form the four-dimensional feature vector of the training signal: f_i^k, k = 1, 2, 3, 4; the feature vectors of all training signals constitute the data training samples f_i, i = 1, 2, 3, ..., N;
(3) Substituting the training sample data obtained in the step (2) as one item in an equation constraint into the following least square support vector machine estimation model for training:
min J(w, e) = (1/2)·||w||^2 + (γ/2)·Σ_{i=1..N} e_i^2
so that it satisfies the equality constraint:
y_i = w^T·φ(f_i) + u + e_i,  i = 1, 2, ..., N
wherein y_i is the modulated-signal type corresponding to the ith training sample, with 1, 2, 3, 4, 5 respectively denoting the classes BPSK, QPSK, 8PSK, 16QAM and 64QAM, w is a weight vector,
φ(·)
is a nonlinear function mapping the modulated signal to a high-dimensional feature space, u denotes the bias, e_i is the error between the actual result of the ith training sample and the estimated output, and γ is the penalty coefficient;
The first part of the objective function min J(w, e_i) optimized above,
(1/2)·||w||^2,
calibrates the magnitude of the weights; the second part,
(γ/2)·Σ_{i=1..N} e_i^2,
describes the errors in the training data. The Lagrange method is used to find the optimal penalty coefficient γ and kernel parameter σ so that the objective function value reaches the minimum:
L(w, u, e; μ) = J(w, e) - Σ_{i=1..N} μ_i·[w^T·φ(f_i) + u + e_i - y_i]
wherein μ_i is the Lagrange multiplier; differentiating the above expression with respect to w, u, e_i and μ_i respectively and setting each derivative equal to 0 yields the optimality conditions of the problem:
∂L/∂w = 0 ⇒ w = Σ_{i=1..N} μ_i·φ(f_i);  ∂L/∂u = 0 ⇒ Σ_{i=1..N} μ_i = 0;
∂L/∂e_i = 0 ⇒ μ_i = γ·e_i;  ∂L/∂μ_i = 0 ⇒ w^T·φ(f_i) + u + e_i - y_i = 0
eliminating w and e_i converts the optimal-solution problem into the form of the following system of linear equations:
[ 0     1_v^T    ] [ u ]   [ 0 ]
[ 1_v   Ω + I/γ  ] [ μ ] = [ y ]
wherein y = [y_1; ...; y_N], μ = [μ_1; ...; μ_N], I is an identity matrix, 1_v = [1; ...; 1], Ω is a square matrix whose element in row m and column n is Ω_mn = K(f_m, f_n), m, n = 1, ..., N, where the introduced kernel function is:
K(f_m, f_n) = φ(f_m)^T·φ(f_n)
finally, a decision function of the modulation signal is obtained;
the testing phase comprises the following steps:
(4) extract characteristic values from the test signals in the test signal set according to step (2) to form the four-dimensional feature vectors of the test signals, constituting the data test samples;
(5) substitute the data test samples into the decision function and output the classification result of the signals.
The decision function of the modulated signal obtained in step (3) is as follows:
y(f_j) = Σ_{i=1..N} μ_i·K(f_j, f_i) + u
wherein f_j denotes a test sample composed of the feature vector of a test signal and f_i a training sample, y(f_j) represents the signal identification result, and μ_i denotes the Lagrange multiplier; the kernel function in the formula adopts the RBF kernel:
K(f_j, f_i) = exp(-||f_j - f_i||^2 / (2σ^2))
where σ denotes the kernel parameter and i is not equal to j.
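To make the training and decision steps concrete, the following Python sketch solves the linear system above for (u, μ) and evaluates the decision function (an illustration under the regression-style formulation, not the authors' MATLAB implementation; rounding the decision value to the nearest label in 1-5 is an assumed multi-class rule):

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """K(a, b) = exp(-||a - b||^2 / (2 sigma^2)) for all row pairs of A, B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(F, y, gamma, sigma):
    """Solve [[0, 1^T], [1, Omega + I/gamma]] [u; mu] = [0; y]."""
    N = len(y)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(F, F, sigma) + np.eye(N) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias u and Lagrange multipliers mu

def lssvm_predict(F_train, mu, u, F_test, sigma):
    """Decision function y(f_j) = sum_i mu_i K(f_j, f_i) + u, as class 1..5."""
    scores = rbf_kernel(F_test, F_train, sigma) @ mu + u
    return np.clip(np.rint(scores), 1, 5).astype(int)
```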
The characteristic parameters of the training signal in step (2) are selected as the high-order cumulant parameters F1, F2 and the local-mean-decomposition approximate entropy parameters ApEn1, ApEn2; the specific extraction method is as follows:
(2.1) x(t) is the modulated-signal expression, regarded as a stationary complex random process; let M_pq = E[x(t)^(p-q)·x*(t)^q] denote the pth-order mixed moment of x(t); the second-, fourth- and sixth-order cumulants are then:
C21 = M21
C40 = M40 - 3·M20^2
C63 = M63 - 6·M20·M41 - 9·M42·M21 + 18·M20^2·M21 + 12·M21^3
wherein C21, C40, C63 are the second-, fourth- and sixth-order cumulants respectively; the characteristic parameter expressions based on the high-order cumulants are:
F1 = |C40| / |C21|^2,  F2 = |C63| / |C21|^3
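A sketch of estimating these characteristic parameters from complex baseband samples, using the mixed-moment definition M_pq = E[x^(p-q)·(x*)^q] given above and the F1, F2 expressions as reconstructed above (illustrative only; expectations are replaced by sample means):

```python
import numpy as np

def mixed_moment(x, p, q):
    """Sample estimate of M_pq = E[x^(p-q) * conj(x)^q]."""
    return np.mean(x ** (p - q) * np.conj(x) ** q)

def cumulant_features(x):
    """F1, F2 from the second-, fourth- and sixth-order cumulants."""
    m20, m21 = mixed_moment(x, 2, 0), mixed_moment(x, 2, 1)
    m40, m41, m42 = (mixed_moment(x, 4, q) for q in (0, 1, 2))
    m63 = mixed_moment(x, 6, 3)
    c21 = m21
    c40 = m40 - 3 * m20 ** 2
    c63 = (m63 - 6 * m20 * m41 - 9 * m42 * m21
           + 18 * m20 ** 2 * m21 + 12 * m21 ** 3)
    return (np.abs(c40) / np.abs(c21) ** 2,   # F1
            np.abs(c63) / np.abs(c21) ** 3)   # F2
```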
(2.2) ApEn1 and ApEn2 are the approximate entropy characteristic parameters of the two local mean decomposition components; the calculation is as follows:
decomposing an original modulation signal x (t) into the sum of k PF components and 1 monotonic function by using a local mean decomposition method, namely:
x(t) = Σ_{i=1..k} PF_i(t) + h_k(t)
wherein PF_i is a local mean decomposition component of the original modulated signal and h_k(t) is a monotonic function; take the first two local mean decomposition components PF1, PF2 and compute the approximate entropy of each, the steps being as follows:
(2.2.1) regard the local mean decomposition component as a one-dimensional time series PF(i) of length s, i = 1, 2, ..., s, and form the z-dimensional vectors P_i, i = 1, 2, ..., s-z-1:
P_i = {PF(i), PF(i+1), ..., PF(i+z-1)}
(2.2.2) calculate the distance between vectors P_i and P_j, j = 1, 2, ..., s-z-1:
d(P_i, P_j) = max_k |PF(i+k) - PF(j+k)|,  k = 0, 1, ..., z-1
(2.2.3) given a threshold r, for each vector P_i count the number of distances d ≤ r and the ratio of this number to the total number of distances (s-z), denoted
C_i^z(r) = (number of j with d(P_i, P_j) ≤ r) / (s - z)
(2.2.4) take the logarithm of each C_i^z(r), then average over all i, denoting the result Φ_z(r):
Φ_z(r) = (1/(s-z)) · Σ_i ln C_i^z(r)
(2.2.5) increase z by 1 and repeat steps (2.2.1)–(2.2.4) to obtain C_i^(z+1)(r) and Φ_(z+1)(r);
(2.2.6) from Φ_z and Φ_(z+1), the expression of the approximate entropy is obtained:
ApEn(z, r) = Φ_z(r) - Φ_(z+1)(r)
Through the above steps, the approximate entropy of the first two PF components PF1, PF2 of the modulated signal is calculated separately and denoted ApEn1 and ApEn2.
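A compact Python sketch of steps (2.2.1)–(2.2.6) (illustrative: the embedding dimension z = 2 and the tolerance r = 0.2 times the series' standard deviation are conventional defaults that the patent does not fix, and the standard vector count is used):

```python
import numpy as np

def approx_entropy(pf, z=2, r=None):
    """ApEn(z, r) = Phi_z(r) - Phi_{z+1}(r) of a one-dimensional series."""
    pf = np.asarray(pf, dtype=float)
    s = len(pf)
    r = 0.2 * pf.std() if r is None else r

    def phi(m):
        # Vectors P_i = (PF(i), ..., PF(i+m-1)) and their Chebyshev distances
        P = np.array([pf[i:i + m] for i in range(s - m + 1)])
        d = np.max(np.abs(P[:, None, :] - P[None, :, :]), axis=-1)
        C = (d <= r).sum(axis=1) / float(s - m + 1)  # ratio of close vectors
        return np.mean(np.log(C))

    return phi(z) - phi(z + 1)

# ApEn1, ApEn2 come from applying this to the first two PF components:
# ApEn1, ApEn2 = approx_entropy(pf1), approx_entropy(pf2)
```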
Training the least squares support vector machine estimation model in step (3) means determining the optimal penalty coefficient γ and kernel parameter σ in the model, as follows:
(3.1) initialize the hyper-parameter-pair population: the total number of hyper-parameter pairs is Q and the space searched by the population is two-dimensional, the value of the ith hyper-parameter pair being representable as X_i = (X_i1, X_i2); set the maximum allowed number of iterations t_max and the value ranges of the penalty coefficient γ and kernel parameter σ, and randomly generate a set of initial hyper-parameter pairs in the search space;
(3.2) train the least squares support vector machine estimation model with the initial values of γ and σ, and compute the value of the objective function to be optimized in the model under the current γ, σ hyper-parameters, the objective function being:
J(w, e) = (1/2)·||w||^2 + (γ/2)·Σ_{i=1..N} e_i^2
(3.3) rank the hyper-parameter population by the obtained objective function values: the three pairs with the smallest objective function values are, in order, the three pairs with the best fitness and are named the α, β and δ hyper-parameter pairs following the grey wolf optimization convention; the remaining hyper-parameter pairs form the ω group;
(3.4) update the values of the hyper-parameter population according to the following formulas:
Dα=|C1·Xα(t)-X(t)|
Dβ=|C2·Xβ(t)-X(t)|
Dδ=|C3·Xδ(t)-X(t)|
X1=Xα(t)-A1·Dα
X2=Xβ(t)-A2·Dβ
X3=Xδ(t)-A3·Dδ
X(t+1) = (X_1 + X_2 + X_3) / 3
wherein D_α, D_β, D_δ respectively represent the distances between the current ω individual and the α, β, δ hyper-parameter pairs; t is the current iteration number; X_α(t), X_β(t), X_δ(t) are the positions of the current α, β, δ hyper-parameter pairs and X(t) the position of the current hyper-parameter pair; C_1, C_2, C_3 are swing factors determined by C_i = 2·r_1, i = 1, 2, 3, r_1 ∈ [0, 1]; A_1, A_2, A_3 are convergence factors given by A_i = 2·a·r_2 - a, i = 1, 2, 3, r_2 ∈ [0, 1], where a is an iteration factor that decreases linearly from 2 to 0 with the number of iterations; X_1, X_2, X_3 define the advance direction and step of the ω group toward the α, β, δ pairs, and X(t+1) combines them to determine the movement of the current hyper-parameter pair;
(3.5) update the parameters a, A_i, C_i;
(3.6) compute the objective function values of all hyper-parameter pairs at their new positions and, for each pair, compare the objective values before and after the position update: if the updated value is smaller than the value before the update, keep the updated pair, otherwise keep the pair from before the update; then re-rank the current population according to step (3.3);
(3.7) compare the current iteration count with the maximum allowed number of iterations; if t_max iterations have not been reached, jump to step (3.4) to continue the parameter search; otherwise training ends and the obtained α hyper-parameter pair, i.e. the optimal solution of the least squares support vector machine estimation model, is output. A minimal sketch of this search loop follows.
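The following Python sketch illustrates steps (3.1)–(3.7) (an illustration only: `objective` stands in for the LSSVM objective evaluated at a hyper-parameter pair (γ, σ), the population size and bounds are assumed placeholders, and the optional `secondary_update` hook is where the cuckoo-search step described next plugs in):

```python
import numpy as np

def gwo_search(objective, bounds, Q=20, t_max=200, rng=None, secondary_update=None):
    """Grey wolf optimization over hyper-parameter pairs (gamma, sigma).

    objective(x) -> value to minimize; bounds = [(lo1, hi1), (lo2, hi2)].
    """
    rng = rng if rng is not None else np.random.default_rng()
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(Q, 2))             # (3.1) initial pairs
    fit = np.array([objective(x) for x in X])        # (3.2) fitness
    for t in range(t_max):                           # (3.7) iteration loop
        order = np.argsort(fit)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]  # (3.3)
        a = 2.0 * (1.0 - t / t_max)                  # decreases from 2 to 0
        for i in range(Q):                           # (3.4) position update
            Xnew = np.zeros(2)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(2), rng.random(2)
                A, C = 2 * a * r2 - a, 2 * r1
                D = np.abs(C * leader - X[i])
                Xnew += leader - A * D
            Xnew = np.clip(Xnew / 3.0, lo, hi)       # X(t+1) = (X1+X2+X3)/3
            if secondary_update is not None:         # cuckoo step (3.4.1)
                Xnew = np.clip(secondary_update(Xnew, rng), lo, hi)
            fnew = objective(Xnew)                   # (3.6) greedy selection
            if fnew < fit[i]:
                X[i], fit[i] = Xnew, fnew
    best = int(np.argmin(fit))
    return X[best], fit[best]                        # alpha pair (3.7)
```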
Further, step (3.4) also comprises the following step:
re-calculate the positions of the hyper-parameter pairs after the population update according to the following formula; at the same time, draw a random number v from the uniform distribution on [0, 1]; if the random number is larger than the discovery probability P_a, update the value of the current hyper-parameter pair, otherwise do not update it;
X_i(t+1) = X_i(t) + ε ⊕ Levy(λ)
wherein i = 1, 2, 3, ..., N, N denoting the number of host nests, i.e. the total number of candidate hyper-parameter pairs,
⊕
denotes entry-wise (Hadamard) multiplication, X_i(t) is the value of the ith hyper-parameter pair at the tth iteration and X_i(t+1) its value after the current iteration; ε is a step-size factor, ε > 0, which determines the step scale and is tied to the scale of the problem (in most cases ε takes the value 1); the step length of the random flight is Levy(λ), which obeys a Lévy distribution.
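A matching sketch of this secondary update, reusing the `levy_step` helper from the background section (the discovery probability P_a = 0.5 and ε = 1 follow the embodiment below; at this point in the text they are assumptions):

```python
def cuckoo_update(x, rng, pa=0.5, eps=1.0, lam=1.5):
    """Secondary Levy-flight update of one hyper-parameter pair.

    With probability 1 - pa the GWO-updated value is kept unchanged;
    otherwise X(t+1) = X(t) + eps (*) Levy(lambda).
    """
    if rng.random() > pa:                  # v > Pa: perform the update
        return x + eps * levy_step(len(x), beta=lam, rng=rng)
    return x
```

Passing `secondary_update=cuckoo_update` to the `gwo_search` sketch above reproduces the two-stage position update described here.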
Advantageous effects: compared with the prior art, the technical scheme of the invention has the following benefits:
(1) the method selects the high-order cumulant characteristic parameters F1, F2 and the local-mean-decomposition approximate entropy parameters ApEn1, ApEn2 as the characteristic parameters of the training and test signals, which reduces the influence of noise on the signal identification result and compensates for the under-envelope, over-envelope and boundary effects of traditional empirical mode decomposition.
(2) the method adopts a swarm-intelligence algorithm, grey wolf optimization, to search for the optimal hyper-parameters of the least squares support vector machine model, overcoming the inability of the traditional least squares support vector machine to adapt its hyper-parameters.
(3) the method uses cuckoo search to perform a secondary optimization of the updated wolf-pack positions within the grey wolf optimization, enlarging the search space around the optimal solution; compared with the high computational cost and poor robustness of traditional swarm-intelligence optimization, it reduces the number of iterations, accelerates convergence and improves the recognition rate.
Drawings
Fig. 1 is a schematic diagram of the simulated wolf-pack hierarchy.
Fig. 2 is a schematic diagram of finding an optimal solution using grey wolf optimization.
Fig. 3 is a flow chart of the modulated-signal classification method of the cuckoo-search-improved grey wolf optimized support vector machine.
Fig. 4 is a flow chart of the cuckoo-search-improved grey wolf optimization of the parameters.
Fig. 5 compares the MAPE convergence curves of the grey wolf optimized least squares support vector machine (GWO-LSSVM) and the cuckoo-search-improved grey wolf optimized least squares support vector machine (CS-IGWO-LSSVM) at low signal-to-noise ratio.
Fig. 6 compares the MAPE convergence curves of GWO-LSSVM and CS-IGWO-LSSVM at high signal-to-noise ratio.
Fig. 7 compares the recognition rates of GWO-LSSVM and CS-IGWO-LSSVM under different signal-to-noise ratios.
Detailed Description
The technical scheme of the invention is further explained below in combination with the drawings and an embodiment.
As shown in fig. 3, the detailed steps of the technical solution of the invention are as follows:
(1) In this embodiment, N Monte Carlo experiments were performed; the simulated signals are the five common digitally modulated signals BPSK, QPSK, 8PSK, 16QAM and 64QAM. The simulation software environment is MATLAB R2014b; the hardware environment is a notebook with an Intel Core i5-5200U processor @ 2.20 GHz (2.19 GHz) and 8 GB of memory. The selected carrier frequency f_c is 2 kHz, the symbol rate r_s is 1000 Baud, the sampling rate f_s is 8 kHz, and the channel environment is zero-mean additive white Gaussian noise. The signal-to-noise ratio (SNR) is defined by the following equation and takes values in [-6, 12] dB.
SNR = 10·lg(E_S / n_0)
Wherein ESAnd n0Representing symbol energy and noise energy, respectively.
Randomly extract N modulated signals from the M signals of the five common digital modulation types BPSK, QPSK, 8PSK, 16QAM and 64QAM to form the training signal set array1; the remaining M-N signals naturally form the test signal set array2. Count the number of training signals extracted of each type and compute the variance ξ of the five counts: if ξ is smaller than the limit 0.5, the extracted training set is guaranteed to cover every signal type in balanced numbers; if the limit is not met, extract again at random until it is.
The following description takes M = 300 and N = 200 as an example; a sketch of the balanced split follows.
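A sketch of this balanced random split with M = 300 and N = 200 (illustrative; `labels` is assumed to hold class indices 0-4 for the five modulation types, and the strict variance limit may require many redraws):

```python
import numpy as np

def balanced_split(labels, n_train=200, xi=0.5, rng=None):
    """Randomly pick n_train indices, re-drawing until the per-class counts
    of the training set cover all 5 types with variance below the limit xi."""
    rng = rng if rng is not None else np.random.default_rng()
    m = len(labels)
    while True:
        idx = rng.permutation(m)
        train, test = idx[:n_train], idx[n_train:]
        counts = np.bincount(labels[train], minlength=5)
        if counts.all() and counts.var() < xi:
            return train, test   # array1 indices, array2 indices
```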
(2) For each signal x_i, i = 1, ..., 200, in training signal set array1, extract the high-order cumulant characteristic parameters F1, F2 and the local-mean-decomposition approximate entropy characteristic parameters ApEn1, ApEn2; the extracted characteristic parameters form the four-dimensional feature vector of the training signal, f_i^k, k = 1, 2, 3, 4, and the feature vectors of all training signals constitute the data training samples f_i, i = 1, 2, 3, ..., 200. The specific extraction method is as follows: x(t) is the modulated-signal expression, regarded as a stationary complex random process.
First, the calculation of the high-order cumulant characteristic parameters is described; the second-, fourth- and sixth-order cumulants of the modulated signal are expressed as:
C21 = M21
C40 = M40 - 3·M20^2
C63 = M63 - 6·M20·M41 - 9·M42·M21 + 18·M20^2·M21 + 12·M21^3
wherein M_pq = E[x(t)^(p-q)·x*(t)^q] is the pth-order mixed moment of x(t), and C21, C40, C63 are the second-, fourth- and sixth-order cumulants respectively. The characteristic parameters based on the high-order cumulants are expressed as:
F1 = |C40| / |C21|^2,  F2 = |C63| / |C21|^3
The theoretical values are as follows:

Characteristic parameter    BPSK    QPSK    8PSK    16QAM    64QAM
F1                           2       1       0      0.68     0.619
F2                          16       4       4      2.08     1.797
As the table shows, the two parameters respectively enable intra-class identification of the MPSK signals (BPSK, QPSK, 8PSK) and of the MQAM signals (16QAM and 64QAM).
Next, the calculation of ApEn1 and ApEn2, the approximate entropy characteristic parameters of the two local mean decomposition components, is introduced:
decomposing the original modulation signal x (t) into the sum of k PF components and 1 monotonic function according to the step of local mean decomposition, namely:
x(t) = Σ_{i=1..k} PF_i(t) + h_k(t)
wherein PF_i is a local mean decomposition component of the original modulated signal and h_k(t) is a monotonic function; take the first two local mean decomposition components PF1, PF2 and compute the approximate entropy of each, as follows:
(a) regard the local mean decomposition component PF as a one-dimensional time series {PF(i)} of length s, i = 1, 2, ..., s, and form the z-dimensional vectors P_i, i = 1, 2, ..., s-z-1:
P_i = {PF(i), PF(i+1), ..., PF(i+z-1)}
(b) calculate the distance between vectors P_i and P_j, j = 1, 2, ..., s-z-1:
d(P_i, P_j) = max_k |PF(i+k) - PF(j+k)|,  k = 0, 1, ..., z-1
(c) given a threshold r, for each vector P_i count the number of distances d ≤ r and the ratio of this number to the total number of distances (s-z), denoted
C_i^z(r) = (number of j with d(P_i, P_j) ≤ r) / (s - z)
(d) take the logarithm of each C_i^z(r), sum over all i and compute the average, denoting the result Φ_z(r):
Φ_z(r) = (1/(s-z)) · Σ_i ln C_i^z(r)
(e) increase z by 1 and repeat steps (a)–(d) to obtain C_i^(z+1)(r) and Φ_(z+1)(r).
(f) from Φ_z and Φ_(z+1) the expression of the approximate entropy is thus obtained:
ApEn(z, r) = Φ_z(r) - Φ_(z+1)(r)
Through the above steps the approximate entropy of PF1 and PF2 is obtained, denoted ApEn1 and ApEn2; together with F1, F2 these serve as the characteristic parameters for the subsequent training.
(3) Substitute the training samples f_i obtained in step (2), with the specific data of this embodiment, as one term of the equality constraint into the following least squares support vector machine estimation model for training:
min J(w, e) = (1/2)·||w||^2 + (γ/2)·Σ_{i=1..200} e_i^2
so that it satisfies the equality constraint:
y_i = w^T·φ(f_i) + u + e_i,  i = 1, 2, ..., 200
wherein y_i = 1, 2, 3, 4, 5 respectively denotes the classes BPSK, QPSK, 8PSK, 16QAM and 64QAM, w is a weight vector,
φ(·)
is a nonlinear transformation capable of mapping the input space to a high-dimensional space, u denotes the bias, e_i (i = 1, 2, 3, ..., 200) is the error between the actual result of the ith training sample and the estimated output, and γ is the penalty coefficient.
Objective function minJ (w, e) optimized by the above formulai) First part of
(1/2)·||w||^2,
calibrates the magnitude of the weights and penalizes large weights; the second part,
(γ/2)·Σ_{i=1..200} e_i^2,
describes the errors in the training data. This optimization problem is solved using the Lagrange method:
L(w, u, e; μ) = J(w, e) - Σ_{i=1..200} μ_i·[w^T·φ(f_i) + u + e_i - y_i]
wherein μ_i is the Lagrange multiplier; differentiating the above with respect to w, u, e_i and μ_i and setting each derivative equal to 0, the optimality conditions of the problem are found:
∂L/∂w = 0 ⇒ w = Σ_{i=1..200} μ_i·φ(f_i);  ∂L/∂u = 0 ⇒ Σ_{i=1..200} μ_i = 0;
∂L/∂e_i = 0 ⇒ μ_i = γ·e_i;  ∂L/∂μ_i = 0 ⇒ w^T·φ(f_i) + u + e_i - y_i = 0
eliminating w and e_i converts the optimal-solution problem into the form of the following system of linear equations:
[ 0     1_v^T    ] [ u ]   [ 0 ]
[ 1_v   Ω + I/γ  ] [ μ ] = [ y ]
wherein y = [y_1; ...; y_200], μ = [μ_1; ...; μ_200], I is an identity matrix, 1_v = [1; ...; 1], Ω is a square matrix with Ω_mn = K(f_m, f_n), m, n = 1, 2, 3, ..., 200, where the introduced kernel function is:
K(f_m, f_n) = φ(f_m)^T·φ(f_n)
K(f_m, f_n) = exp(-||f_m - f_n||^2 / (2σ^2))
The decision function of the modulated signal is finally obtained as:
y(f_j) = Σ_{i=1..200} μ_i·K(f_j, f_i) + u
wherein f_j (j = 1, 2, 3, ..., 100) denotes a test sample composed of the feature vector of the test signal, y(f_j) represents the signal identification result (1, 2, 3, 4, 5 respectively denote the classes BPSK, QPSK, 8PSK, 16QAM and 64QAM), μ_i denotes the Lagrange multiplier, and the kernel function adopts the RBF kernel:
K(f_j, f_i) = exp(-||f_j - f_i||^2 / (2σ^2))
where σ represents the kernel parameter.
Training the least squares support vector machine estimation model means determining the optimal values of the penalty coefficient γ and the kernel parameter σ in the model; the penalty parameter and the kernel parameter together constitute the hyper-parameters, and "hyper-parameter pair" below refers to (γ, σ).
The invention adopts the cuckoo-search-improved grey wolf optimization method to select the optimal hyper-parameter pair for the least squares support vector machine model. Specifically, cuckoo search is introduced for a secondary position update whenever the hyper-parameter population updates its positions, which alleviates the tendency of the original grey wolf optimization to fall into local optima in high-dimensional data environments. The two-dimensional coordinates of the output optimal α wolf are the optimal values of the hyper-parameter pair; the wolf fitness corresponds to the objective function value, subject to the corresponding constraints. The specific steps are as follows:
step 3.1: initialize the hyper-parameter-pair population: the total number of hyper-parameter pairs is 20 and the space they search is two-dimensional, the value of the ith pair being represented as X_i = (X_i1, X_i2); set the maximum allowed number of iterations t_max; the value ranges of the penalty coefficient γ and kernel parameter σ are γ ∈ [0, 100] and σ ∈ [0, 1]; randomly generate a set of initial hyper-parameter pairs in the search space;
step 3.2: train the LSSVM model with the initial value of (γ, σ) and compute the value of the objective function to be optimized in the LSSVM model under the current γ, σ hyper-parameters.
Step 3.3: rank the hyper-parameter population by the obtained objective function values: the three pairs with the smallest values are, in order, the three with the best fitness, named the α, β and δ hyper-parameter pairs per the grey wolf optimization convention; the remaining hyper-parameter pairs form the ω group.
Step 3.4: update the values of the hyper-parameter pairs according to the following formulas:
Dα=|C1·Xα(t)-X(t)|
Dβ=|C2·Xβ(t)-X(t)|
Dδ=|C3·Xδ(t)-X(t)|
X1=Xα(t)-A1·Dα
X2=Xβ(t)-A2·Dβ
X3=Xδ(t)-A3·Dδ
X(t+1) = (X_1 + X_2 + X_3) / 3
wherein D_α, D_β, D_δ respectively represent the distances between the current ω individual and the α, β, δ hyper-parameter pairs; t is the current iteration number; X_α(t), X_β(t), X_δ(t) are the positions of the current α, β, δ pairs and X(t) the position of the current pair; C_1, C_2, C_3 are swing factors, C_i = 2·r_1, i = 1, 2, 3, r_1 ∈ [0, 1]; A_1, A_2, A_3 are convergence factors, A_i = 2·a·r_2 - a, i = 1, 2, 3, r_2 ∈ [0, 1], with a an iteration factor decreasing linearly from 2 to 0 over the iterations; X_1, X_2, X_3 define the advance direction and step of the ω group toward the α, β, δ pairs, and X(t+1) combines them to determine the movement of the current hyper-parameter pair.
When the hyper-parameter population positions are updated, cuckoo search is introduced for a secondary position update, alleviating the tendency of the original grey wolf optimization to fall into local optima in high-dimensional data environments. The following additional step is therefore added within step 3.4 above:
step 3.4.1: respectively calculating the updated positions of the hyper-parameters to the population again according to the following formula, and simultaneously, obtaining the uniformly distributed random number v from the [0, 1 ]]Selecting a random number in the interval, if the random number is larger than the discovery probability PaIf the value is 0.5, the value of the current pair of hyperparameters is updated, otherwise, the value is not updated.
X_i(t+1) = X_i(t) + ε ⊕ Levy(λ)
where i = 1, 2, 3, ..., 20, the number of searchable host nests being 20 (i.e. the total number of candidate hyper-parameter pairs),
⊕
denotes entry-wise (Hadamard) multiplication, X_i(t) is the value of the ith hyper-parameter pair at the tth iteration and X_i(t+1) its value after the current iteration. ε is a step-size factor; since ε > 0 it determines the step scale, which is tied to the scale of the problem, and here ε takes the value 1. The step length of the random flight is Levy(λ), which obeys a Lévy distribution.
Step 3.5: update the parameters a, A_i, C_i.
Step 3.6: compute the objective function values of all hyper-parameter pairs at their new positions and compare, for each pair, the two objective values before and after the position update; if the updated value is smaller, keep the updated pair, otherwise keep the pair from before the update, and re-rank the current population according to step 3.3.
Step 3.7: compare the current iteration count with the maximum allowed number of iterations; if 200 iterations have not been reached, jump to step 3.4 to continue the parameter search; otherwise training ends and the obtained α hyper-parameter pair, i.e. the optimal solution of the LSSVM model, is output. The optimal penalty parameter obtained is γ = 4.9278 and the optimal kernel parameter is σ = 0.3112.
Step 4: according to step 2, extract characteristic values from the signals in test signal set array2 to form the four-dimensional feature vectors of the test signals, constituting the data test samples f_j, j = 1, 2, ..., 100.
Step 5: substitute the data test samples into the decision function and output the classification results of the signals.
In the study of this embodiment, the Mean Absolute Percentage Error (MAPE) is selected to compare and evaluate the accuracy of the modulation-classification method; it is calculated as follows:
MAPE = (1/M) · Σ_{i=1..M} |(y_i - y_p) / y_i|
where M is the total number of training-sample signals, 200, and y_i, y_p respectively represent the actual and estimated values for the ith signal. The accuracy of the method can finally be expressed as:
est_acc=100%-(MAPE×100%)
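A one-function sketch of this metric (illustrative; `y_true`, `y_pred` are assumed to be the integer class labels 1-5):

```python
import numpy as np

def est_accuracy(y_true, y_pred):
    """est_acc = 100% - MAPE*100%, MAPE = mean(|(y_i - y_p) / y_i|)."""
    y_true = np.asarray(y_true, dtype=float)
    mape = np.mean(np.abs((y_true - y_pred) / y_true))
    return 100.0 * (1.0 - mape)
```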
To better illustrate the superiority of cuckoo-search-improved grey wolf optimization for the classification of modulated signals, its classification results are compared below with those of the original grey wolf optimization:
As can be seen from the convergence curves in fig. 5, both the original grey wolf optimization (GWO in the legend) and the cuckoo-search-improved grey wolf optimization (CS-GWO in the legend) converge quickly: the cuckoo-search-improved version converges to the optimal solution at about 65 iterations, while the original converges at about 85, so the improved method is faster in the convergence process. In addition, fig. 5 makes clear that the final recognition rate of the cuckoo-search-improved grey wolf optimization is also higher: at a low signal-to-noise ratio of -3 dB, the original grey wolf optimization finally converges to 94.1085%, while the improved method reaches 96.7426% at the same signal-to-noise ratio. Hence, at low signal-to-noise ratio, the cuckoo-search-improved classification method needs fewer iterations than the original grey wolf classification method and achieves a more accurate recognition rate.
Fig. 6 is a simulation of the convergence curves of the average recognition rate of the LSSVM classification methods at a signal-to-noise ratio of 12 dB. It shows that the original grey wolf optimization classification method attains the same 100% recognition rate as the cuckoo-search-improved version; observing the number of iterations to convergence, the original grey wolf optimization converges to the optimal solution at around 90 iterations while the cuckoo-search-improved version converges at around 65. Therefore, at high signal-to-noise ratio, compared with the original grey wolf optimization the improved method is less likely to be trapped in local optima during the search.
The simulation of fig. 7 shows the recognition rates of the original grey wolf optimization classification method and of the cuckoo-search-improved classification method over the signal-to-noise-ratio range -6 dB to 12 dB. The recognition-rate curves of fig. 7 show that, across the signal-to-noise ratios of the experiment, the cuckoo-search-improved classification method performs markedly better. For signal-to-noise ratios above 3 dB the recognition rates of both methods reach their converged values: 98% before the improvement and 100% after. Under the experimental condition of signal-to-noise ratios below 3 dB, the cuckoo-search-improved classification method likewise outperforms the original grey wolf optimization classification method.
Combining these simulation results leads to the conclusion that the cuckoo-search-improved grey wolf optimized least squares support vector machine classification method converges faster, improving efficiency, while also raising the recognition rate of modulated signals; the advantage is especially evident at the practically significant low signal-to-noise ratios.
The method uses cuckoo search to update the wolf-pack positions a second time, which improves the global search capability and better optimizes the parameters of the LSSVM function estimation model. The method therefore shows good robustness in modulated-signal classification, achieves a more accurate intelligent classification effect, and has important application value in other applications as well.

Claims (5)

1. A modulated-signal classification method of a cuckoo-search-improved grey wolf optimized support vector machine, comprising a training stage and a testing stage, characterized in that:
the training phase comprises the steps of:
(1) randomly extract N modulated signals from M signals of the five digital modulation types BPSK, QPSK, 8PSK, 16QAM and 64QAM to form a training signal set array1, ensuring that the N signals cover all 5 types; the remaining M-N signals naturally form the test signal set array2;
(2) for each signal x_i in training signal set array1, extract the characteristic parameters F1, F2 based on the high-order cumulants and the approximate entropy characteristic parameters ApEn1, ApEn2 based on the local mean decomposition; the extracted characteristic parameters form the four-dimensional feature vector of the training signal: f_i^k, k = 1, 2, 3, 4; the feature vectors of all training signals constitute the data training samples f_i, i = 1, 2, 3, ..., N;
(3) Substituting the training sample data obtained in the step (2) as one item in an equation constraint into the following least square support vector machine estimation model for training:
min J(w, e) = (1/2)·||w||^2 + (γ/2)·Σ_{i=1..N} e_i^2
so that it satisfies the equality constraint:
y_i = w^T·φ(f_i) + u + e_i,  i = 1, 2, ..., N
wherein y_i is the modulated-signal type corresponding to the ith training sample, with 1, 2, 3, 4, 5 respectively denoting the classes BPSK, QPSK, 8PSK, 16QAM and 64QAM, w is a weight vector,
φ(·)
is a nonlinear function mapping the modulated signal to a high-dimensional feature space, u denotes the bias, e_i is the error between the actual result of the ith training sample and the estimated output, and γ is the penalty coefficient;
The first part of the objective function min J(w, e_i) optimized above,
(1/2)·||w||^2,
calibrates the magnitude of the weights; the second part,
(γ/2)·Σ_{i=1..N} e_i^2,
describes the errors in the training data; the Lagrange method is used to find the optimal penalty coefficient γ and the kernel parameter σ of the decision function of the modulated signal, so that the objective function value is minimized:
L(w, u, e; μ) = J(w, e) - Σ_{i=1..N} μ_i·[w^T·φ(f_i) + u + e_i - y_i]
wherein μ_i is the Lagrange multiplier; differentiating the above expression with respect to w, u, e_i and μ_i respectively and setting each derivative equal to 0 yields the optimality conditions of the problem:
∂L/∂w = 0 ⇒ w = Σ_{i=1..N} μ_i·φ(f_i);  ∂L/∂u = 0 ⇒ Σ_{i=1..N} μ_i = 0;
∂L/∂e_i = 0 ⇒ μ_i = γ·e_i;  ∂L/∂μ_i = 0 ⇒ w^T·φ(f_i) + u + e_i - y_i = 0
elimination of w and eiThe optimal solution problem will be converted to the form of the following system of linear equations:
[ 0     1_v^T    ] [ u ]   [ 0 ]
[ 1_v   Ω + I/γ  ] [ μ ] = [ y ]
wherein y = [y_1; ...; y_N], μ = [μ_1; ...; μ_N], I is an identity matrix, 1_v = [1; ...; 1], Ω is a square matrix whose element in row m and column n is Ω_mn = K(f_m, f_n), m, n = 1, ..., N, where the introduced kernel function is:
K(f_m, f_n) = φ(f_m)^T·φ(f_n)
finally, a decision function of the modulation signal is obtained;
the testing phase comprises the following steps:
(4) extract characteristic values from the test signals in the test signal set according to step (2) to form the four-dimensional feature vectors of the test signals, constituting the data test samples;
(5) substitute the data test samples into the decision function and output the classification result of the signals.
2. The method as claimed in claim 1, wherein the decision function of the modulated signal is obtained as follows:
y(f_j) = Σ_{i=1..N} μ_i·K(f_j, f_i) + u
wherein f_j denotes a test sample composed of the feature vector of a test signal and f_i a training sample, y(f_j) represents the signal identification result, and μ_i represents the Lagrange multiplier; the kernel function in the formula adopts the RBF kernel:
K(f_j, f_i) = exp(-||f_j - f_i||^2 / (2σ^2))
where σ denotes the kernel parameter and i is not equal to j.
3. The method of claim 1, characterized in that the characteristic parameters of the training signal selected in step (2) are the high-order cumulant parameters F1, F2 and the local-mean-decomposition approximate entropy parameters ApEn1, ApEn2, and the specific extraction method is as follows:
(2.1) x(t) is the modulated-signal expression, regarded as a stationary complex random process; let M_pq = E[x(t)^(p-q)·x*(t)^q] denote the pth-order mixed moment of x(t), q being the conjugate order of the stationary complex random process; the second-, fourth- and sixth-order cumulants are then:
C21 = M21
C40 = M40 - 3·M20^2
C63 = M63 - 6·M20·M41 - 9·M42·M21 + 18·M20^2·M21 + 12·M21^3
wherein C21, C40, C63 are the second-, fourth- and sixth-order cumulants respectively; the characteristic parameter expressions based on the high-order cumulants are:
F1 = |C40| / |C21|^2,  F2 = |C63| / |C21|^3
(2.2) ApEn1 and ApEn2 are the approximate entropy characteristic parameters of the two local mean decomposition components; the calculation is as follows:
decomposing an original modulation signal x (t) into the sum of k PF components and 1 monotonic function by using a local mean decomposition method, namely:
x(t) = Σ_{i=1..k} PF_i(t) + h_k(t)
wherein PF_i is a local mean decomposition component of the original modulated signal and h_k(t) is a monotonic function; take the first two local mean decomposition components PF1, PF2 and compute the approximate entropy of each, the steps being as follows:
(2.2.1) regard the local mean decomposition component as a one-dimensional time series PF(i) of length s, i = 1, 2, ..., s, and form the z-dimensional vectors P_i, i = 1, 2, ..., s-z-1:
P_i = {PF(i), PF(i+1), ..., PF(i+z-1)}
(2.2.2) calculate the distance between vectors P_i and P_j, j = 1, 2, ..., s-z-1:
d(P_i, P_j) = max_k |PF(i+k) - PF(j+k)|,  k = 0, 1, ..., z-1
(2.2.3) given a threshold r, for each vector P_i count the number of distances d ≤ r and the ratio of this number to the total number of distances (s-z), denoted
C_i^z(r) = (number of j with d(P_i, P_j) ≤ r) / (s - z)
(2.2.4) take the logarithm of each C_i^z(r), then average over all i, denoting the result Φ_z(r):
Φ_z(r) = (1/(s-z)) · Σ_i ln C_i^z(r)
(2.2.5) increase z by 1 and repeat steps (2.2.1)–(2.2.4) to obtain C_i^(z+1)(r) and Φ_(z+1)(r);
(2.2.6) from Φ_z and Φ_(z+1), the expression of the approximate entropy is obtained:
ApEn(z, r) = Φ_z(r) - Φ_(z+1)(r)
Through the above steps, the approximate entropy of the first two PF components PF1, PF2 of the modulated signal is calculated separately and denoted ApEn1 and ApEn2.
4. The method of claim 1, characterized in that in step (3) the least squares support vector machine estimation model is trained and the optimal penalty coefficient γ and kernel parameter σ in the model are determined as follows:
(3.1) initialize the hyper-parameter-pair population: the total number of hyper-parameter pairs is Q and the space searched by the population is two-dimensional, the value of the ith hyper-parameter pair being representable as X_i = (X_i1, X_i2); set the maximum allowed number of iterations t_max and the value ranges of the penalty coefficient γ and kernel parameter σ, and randomly generate a set of initial hyper-parameter pairs in the search space;
(3.2) train the least squares support vector machine estimation model with the initial values of γ and σ, and compute the value of the objective function to be optimized in the model under the current γ, σ hyper-parameters, the objective function being:
J(w, e) = (1/2)·||w||^2 + (γ/2)·Σ_{i=1..N} e_i^2
(3.3) rank the hyper-parameter population by the obtained objective function values: the three pairs with the smallest objective function values are, in order, the three pairs with the best fitness and are named the α, β and δ hyper-parameter pairs following the grey wolf optimization convention; the remaining hyper-parameter pairs form the ω group;
(3.4) update the values of the hyper-parameter population according to the following formulas:
Dα=|C1·Xα(t)-X(t)|
Dβ=|C2·Xβ(t)-X(t)|
Dδ=|C3·Xδ(t)-X(t)|
X1=Xα(t)-A1·Dα
X2=Xβ(t)-A2·Dβ
X3=Xδ(t)-A3·Dδ
X(t+1) = (X_1 + X_2 + X_3) / 3
wherein D_α, D_β, D_δ respectively represent the distances between the current ω individual and the α, β, δ hyper-parameter pairs; t is the current iteration number; X_α(t), X_β(t), X_δ(t) are the positions of the current α, β, δ hyper-parameter pairs and X(t) the position of the current hyper-parameter pair; C_1, C_2, C_3 are swing factors determined by C_i = 2·r_1, i = 1, 2, 3, r_1 ∈ [0, 1]; A_1, A_2, A_3 are convergence factors given by A_i = 2·a·r_2 - a, i = 1, 2, 3, r_2 ∈ [0, 1], where a is an iteration factor that decreases linearly from 2 to 0 with the number of iterations; X_1, X_2, X_3 define the advance direction and step of the ω group toward the α, β, δ pairs, and X(t+1) combines them to determine the movement of the current hyper-parameter pair;
(3.5) update the parameters a, A_i, C_i;
(3.6) compute the objective function values of all hyper-parameter pairs at their new positions and, for each pair, compare the objective values before and after the position update: if the updated value is smaller than the value before the update, keep the updated pair, otherwise keep the pair from before the update; then re-rank the current population according to step (3.3);
(3.7) compare the current iteration count with the maximum allowed number of iterations; if t_max iterations have not been reached, jump to step (3.4) to continue the parameter search; otherwise training ends and the obtained hyper-parameter pair, i.e. the optimal solution of the least squares support vector machine estimation model, is output.
5. The method for classifying modulated signals of the cuckoo-search-improved grey wolf optimized support vector machine according to claim 3, wherein step (3.4) further comprises the following step:
re-calculate the positions of the hyper-parameter pairs after the population update according to the following formula; at the same time, draw a random number v from the uniform distribution on [0, 1]; if the random number is larger than the discovery probability P_a, update the value of the current hyper-parameter pair, otherwise do not update it;
X_i(t+1) = X_i(t) + ε ⊕ Levy(λ)
wherein i = 1, 2, 3, ..., N, N denoting the number of host nests, i.e. the total number of candidate hyper-parameter pairs,
⊕
denotes entry-wise (Hadamard) multiplication, X_i(t) is the value of the ith hyper-parameter pair at the tth iteration and X_i(t+1) its value after the current iteration; ε is a step-size factor, and the step length of the random flight is Levy(λ), which obeys a Lévy distribution.