CN108694390B - Modulation signal classification method for cuckoo search improved grey wolf optimization support vector machine - Google Patents


Info

Publication number
CN108694390B
Authority
CN
China
Prior art keywords
parameter
hyper
training
signal
pairs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810462952.3A
Other languages
Chinese (zh)
Other versions
CN108694390A (en)
Inventor
孙洪波 (Sun Hongbo)
杨苏娟 (Yang Sujuan)
郭永安 (Guo Yong'an)
朱洪波 (Zhu Hongbo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN201810462952.3A priority Critical patent/CN108694390B/en
Publication of CN108694390A publication Critical patent/CN108694390A/en
Application granted granted Critical
Publication of CN108694390B publication Critical patent/CN108694390B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12 - Classification; Matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/004 - Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 - Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08 - Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Complex Calculations (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a modulated-signal classification method based on cuckoo search and an improved grey wolf optimizer for a support vector machine. The method selects higher-order cumulants and the approximate entropy of local mean decomposition components as the characteristic parameters of the modulated signals, and uses cuckoo search to update the positions of the wolf pack a second time in order to optimize the two key parameters of the least squares support vector machine model, the penalty coefficient γ and the kernel parameter σ, thereby obtaining the optimal parameter values. The method weakens the influence of noise on the signal recognition results, remedies the under-envelope, over-envelope and boundary-effect defects of traditional empirical mode decomposition, and effectively mitigates the weak global search ability of grey wolf optimization, which tends to fall into local optima on high-dimensional data. MATLAB simulation comparing the invention with the original grey wolf optimization shows that it classifies modulated signals more efficiently and accurately and has good application prospects.

Description

A Modulated-Signal Classification Method Based on Cuckoo Search and an Improved Grey-Wolf-Optimized Support Vector Machine

Technical Field

The invention relates to the fields of modulated-signal classification and swarm-intelligence optimization, and in particular to a modulated-signal classification method based on cuckoo search and an improved grey-wolf-optimized support vector machine.

Background

Modulation recognition refers to identifying the modulation scheme and parameters of each signal in an environment containing multiple modulated signals and noise interference. In general, the receiver can only intercept signals for which no prior knowledge is available, so effectively identifying the modulation scheme of a signal has become increasingly important.

In the published literature on modulation recognition, intelligent signal recognition falls roughly into two categories: maximum-likelihood hypothesis testing based on decision theory, and statistical pattern classification based on feature extraction. The former observes the waveform of the signal to be classified, hypothesizes that it is one of the candidate modulation schemes, and then judges similarity against a chosen decision threshold to determine the modulation scheme. The latter first extracts characteristic parameters from the received signal and then determines the signal type through a pattern recognition system. In general, the pattern recognition framework for modulated signals comprises three main modules: signal preprocessing, feature extraction and the classifier. The present invention focuses on the classifier module, whose quality directly determines the recognition accuracy. Three classifier designs are currently popular: methods based on decision-tree theory, methods based on support vector machines, and methods based on neural networks.

The support vector machine (SVM) is an advanced pattern recognition method based on structural risk minimization and VC-dimension theory. Its structural-risk-minimization property makes it well suited to nonlinear, small-sample, high-dimensional pattern recognition problems.

The least squares support vector machine (LSSVM) is an improvement on the standard SVM. Instead of the inequality constraints used by the standard SVM, LSSVM adopts equality constraints, which reduce training to solving a system of linear equations; this greatly simplifies the computation, lowers its cost, and makes the machine easier to train. The penalty coefficient and the kernel parameter of the LSSVM model are collectively called its hyper-parameters. From this analysis, building an LSSVM estimation model is essentially a hyper-parameter selection problem: poorly chosen hyper-parameters reduce the credibility of the predictions, and appropriate values are decisive for the model's estimation accuracy and complexity.

Grey wolf optimization (GWO) is inspired by the intelligent social behaviour of grey wolf packs living across Eurasia. GWO simulates the leadership hierarchy and hunting mechanism of wolves in nature, dividing the pack into four types according to the hierarchy, as shown in Figure 1. α, β and δ are the three best-performing (fittest) wolves in the pack; they guide the remaining wolves (ω) towards the most promising region of the search space (the global optimum of the problem to be solved). Throughout the iterative search, the α, β and δ wolves estimate the likely position of the prey, i.e. the search jumps, by following the trend, towards the key group of individuals with high fitness.

The GWO search proceeds as follows: a pack of grey wolves is generated at random in the search space; during evolution, α, β and δ evaluate and locate the position of the prey (the global optimum), the remaining individuals compute their distances to the prey with respect to these leaders, and the pack approaches, encircles and attacks the prey from all directions until it is captured. The defect of this position-update scheme is obvious: the global search ability is weak, and there is a high probability of falling into a local optimum, especially on high-dimensional data.

Cuckoo search (CS) is based on the brood-parasitic breeding behaviour of certain cuckoo species combined with Lévy flights. Studies have shown that cuckoo search does not require repeated parameter tuning when solving optimization problems; it has few parameters and is easy to implement. The initial solutions of cuckoo search represent the eggs already in the host nests, while each newly generated solution represents a location where a cuckoo lays an egg. The algorithm rests on the following three assumed rules:

1. A cuckoo selects a host nest at random for laying and lays only one egg at a time, so at most one new egg per nest is allowed;

2. The best host nest, together with its highest-priority cuckoo egg (the best solution), is carried over to the next iteration;

3. A host bird discovers a cuckoo egg with probability Pa; if the egg is discovered, the host either discards it and keeps the nest, or abandons the nest and builds a new one elsewhere. Pa ∈ [0, 1] is a transition probability through which this third assumption is realized.

From these three assumptions, the nest-searching path and position update of the cuckoo can be derived. When cuckoo search updates the positions of the host nests, it does so through two operations, brood parasitism and Lévy flight; long and short steps therefore occur with nearly equal probability, and the direction of movement is highly random. As a result, cuckoo search jumps easily from the current region to other regions, accomplishing a global search.
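The Lévy-flight step behind these long and short jumps can be sketched as follows. This is a minimal Python illustration using Mantegna's algorithm, a common way to draw approximately Lévy-distributed steps; the patent itself does not specify the sampling method, and λ = 1.5 is an assumed typical value.

```python
import math
import random

def levy_step(lam=1.5):
    """One Levy-flight step via Mantegna's algorithm (a common approximation;
    the patent only states that step lengths follow a Levy distribution)."""
    sigma_u = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
               / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / lam)

random.seed(0)
steps = [levy_step() for _ in range(2000)]
# Heavy-tailed behaviour: mostly short steps with occasional long jumps
short = sum(1 for s in steps if abs(s) < 1.0) / len(steps)
longest = max(abs(s) for s in steps)
print(short, longest)
```

The mix of many small steps and a few very large ones is exactly what gives cuckoo search its ability to escape the current region.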

From the background analysis above, the key to good classification results in the pattern recognition of modulated signals is finding a method with strong search ability and high efficiency, one whose result is not easily trapped in a local optimum by the high dimensionality of the data, to determine the hyper-parameters of the least squares support vector machine estimation model.

Summary of the Invention

Purpose of the invention: in view of the above problems in the prior art, the present invention proposes a method that combines cuckoo search with grey wolf optimization to determine the hyper-parameters of the least squares support vector machine estimation model, achieving good classification performance for modulated signals with high-dimensional features.

Technical scheme: to achieve the purpose of the present invention, the adopted technical scheme is a modulated-signal classification method based on cuckoo search and an improved grey-wolf-optimized support vector machine, comprising a training phase and a testing phase, with the following steps:

The training phase comprises the following steps:

(1) From a total of M digital modulation signals of the five types BPSK, QPSK, 8PSK, 16QAM and 64QAM, randomly select N signals to form the training set array1, ensuring that the N signals cover all five types; the remaining M−N signals form the test set array2;

(2) For each signal x_i, i = 1, 2, ..., N in the training set array1, extract the feature parameters F1 and F2 based on higher-order cumulants and the approximate-entropy feature parameters ApEn1 and ApEn2 based on local mean decomposition components; the extracted parameters form the four-dimensional feature vector f_i^k, k = 1, 2, 3, 4 of that training signal, and the feature vectors of all training signals constitute the training samples f_i, i = 1, 2, ..., N;

(3) Substitute the training samples obtained in step (2) into the equality constraint of the following least squares support vector machine estimation model and train:

min J(w, e_i) = (1/2)·||w||^2 + (γ/2)·Σ(i=1..N) e_i^2

subject to the equality constraints:

y_i = w^T·φ(f_i) + u + e_i,  i = 1, 2, ..., N

where y_i is the modulation type of the i-th training sample, with 1, 2, 3, 4, 5 denoting BPSK, QPSK, 8PSK, 16QAM and 64QAM respectively; w is the weight vector; φ(·) is the nonlinear function that maps the modulated signal into a high-dimensional feature space; u is the bias; e_i is the error between the actual result and the estimated output of the i-th training sample; and γ is the penalty coefficient;

The first part of the objective function min J(w, e_i), (1/2)·||w||^2, calibrates the size of the weights; the second part, (γ/2)·Σ e_i^2, describes the error on the training data. The Lagrangian method is used to find the optimal penalty coefficient γ and kernel parameter σ that minimize the objective function:

L(w, u, e_i; μ_i) = J(w, e_i) - Σ(i=1..N) μ_i·[w^T·φ(f_i) + u + e_i - y_i]

where μ_i are the Lagrange multipliers; differentiating the above expression with respect to w, u, e_i and μ_i and setting each derivative to zero yields the optimality conditions:

∂L/∂w = 0 → w = Σ(i=1..N) μ_i·φ(f_i)
∂L/∂u = 0 → Σ(i=1..N) μ_i = 0
∂L/∂e_i = 0 → μ_i = γ·e_i
∂L/∂μ_i = 0 → w^T·φ(f_i) + u + e_i - y_i = 0

Eliminating w and e_i transforms the optimization into the following system of linear equations:

[ 0     1_v^T         ] [ u ]   [ 0 ]
[ 1_v   Ω + γ^(-1)·I  ] [ μ ] = [ y ]

where y = [y_1; ...; y_N], μ = [μ_1; ...; μ_N], I is the identity matrix, 1_v = [1; ...; 1], and Ω is a square matrix whose element in row m, column n is Ω_mn = K(f_m, f_n), m, n = 1, 2, 3, ..., N, with the introduced kernel function:

K(f_m, f_n) = φ(f_m)^T·φ(f_n)

Finally the decision function of the modulated signal is obtained;
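The training computation above (building the kernel matrix Ω, solving the linear system for the bias u and the multipliers μ, then evaluating the decision function) can be sketched in a few lines of Python. The toy 1-D data and the γ, σ values below are hypothetical, and the tiny Gauss-Jordan solver merely stands in for a proper linear-algebra routine:

```python
import math

def rbf(a, b, sigma):
    """RBF kernel K(f_i, f_j) = exp(-||f_i - f_j||^2 / (2*sigma^2))."""
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (toy sizes only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def lssvm_train(F, y, gamma, sigma):
    """Solve [[0, 1^T], [1, Omega + I/gamma]] * [u; mu] = [0; y]."""
    n = len(F)
    A = [[0.0] + [1.0] * n]
    for m in range(n):
        row = [1.0] + [rbf(F[m], F[k], sigma) for k in range(n)]
        row[m + 1] += 1.0 / gamma           # Omega + gamma^-1 * I on the diagonal
        A.append(row)
    sol = solve(A, [0.0] + list(y))
    return sol[0], sol[1:]                  # bias u, multipliers mu

def lssvm_predict(F, mu, u, f, sigma):
    """Decision function y(f) = sum_i mu_i * K(f, f_i) + u."""
    return sum(m * rbf(fi, f, sigma) for m, fi in zip(mu, F)) + u

# Hypothetical toy set: two classes labelled 1 and 2
# (the patent labels BPSK..64QAM as 1..5)
F = [[0.0], [0.2], [2.0], [2.2]]
y = [1.0, 1.0, 2.0, 2.0]
u, mu = lssvm_train(F, y, gamma=10.0, sigma=1.0)
print(round(lssvm_predict(F, mu, u, [0.1], 1.0)))  # → 1
print(round(lssvm_predict(F, mu, u, [2.1], 1.0)))  # → 2
```

Note that the constraint Σμ_i = 0 from the optimality conditions is satisfied automatically by the solution of the linear system.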

The testing phase comprises the following steps:

(4) For each test signal in the test set, extract the feature values as in step (2) to form its four-dimensional feature vector, constituting the test samples;

(5) Substitute the test samples into the decision function and output the classification result of the signal.

In step (3), the decision function of the modulated signal is obtained as follows:

y(f_j) = Σ(i=1..N) μ_i·K(f_j, f_i) + u

where f_j denotes a test sample formed from the feature vector of a test signal, y(f_j) is the result of the signal recognition, and μ_i are the Lagrange multipliers; the kernel function in the formula adopts the RBF kernel:

K(f_i, f_j) = exp(-||f_i - f_j||^2 / (2σ^2))

where σ is the kernel parameter and i ≠ j.

The feature parameters of the training signal in step (2) are the higher-order cumulants F1, F2 and the approximate entropies ApEn1, ApEn2 of the local mean decomposition components, extracted as follows:

(2.1) Let x(t) be the modulated signal, regarded as a stationary complex random process. Its p-th-order mixed moment is M_pq = E[x(t)^(p−q)·x*(t)^q], and its second-, fourth- and sixth-order cumulants are:

C21 = M21

C40 = M40 - 3·M20^2

C63 = M63 - 6·M20·M41 - 9·M42·M21 + 18·M20^2·M21 + 12·M21^3

where C21, C40 and C63 are the second-, fourth- and sixth-order cumulants respectively. The feature parameters F1 and F2 are defined from these cumulants (their defining expressions appear only as an image in the original document).
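The moment and cumulant definitions of step (2.1) can be estimated from samples as in the following sketch. The BPSK-like test sequence is illustrative; F1 and F2 themselves are not computed here because their defining expressions are an image in the original.

```python
def moment(x, p, q):
    """Sample estimate of M_pq = E[ x^(p-q) * conj(x)^q ]."""
    return sum(v ** (p - q) * v.conjugate() ** q for v in x) / len(x)

def c21(x):
    return moment(x, 2, 1)                              # C21 = M21

def c40(x):
    return moment(x, 4, 0) - 3 * moment(x, 2, 0) ** 2   # C40 = M40 - 3*M20^2

# Illustrative BPSK-like symbols (+1/-1 on the real axis):
# M21 = E|x|^2 = 1, and M40 = M20 = 1, so C21 = 1 and C40 = 1 - 3 = -2
bpsk = [complex(s) for s in (1, -1, 1, 1, -1, -1, 1, -1)]
print(c21(bpsk).real)  # → 1.0
print(c40(bpsk).real)  # → -2.0
```

Cumulant values such as C40 differ markedly between constellations (e.g. BPSK vs QAM), which is why ratios of cumulants make good classification features.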

(2.2) The approximate-entropy feature parameters ApEn1 and ApEn2 of the two local mean decomposition components are computed as follows:

The original modulated signal x(t) is decomposed by local mean decomposition into the sum of k PF components and one monotonic function, i.e.:

x(t) = Σ(i=1..k) PF_i(t) + h_k(t)

where PF_i are the local mean decomposition components of the original signal and h_k(t) is a monotonic function. The first two components PF1 and PF2 are taken, and the approximate entropy of each is computed as follows:

(2.2.1) Treat a decomposition component as a one-dimensional time series PF(i), i = 1, 2, ..., s of length s, and reconstruct the z-dimensional vectors P_i, i = 1, 2, ..., s−z−1:

P_i = {PF(i), PF(i+1), ..., PF(i+z−1)}

(2.2.2) Compute the distance between vectors P_i and P_j, i, j = 1, 2, ..., s−z−1:

d(P_i, P_j) = max_k |PF(i+k) - PF(j+k)|,  k = 0, 1, ..., z−1

(2.2.3) Given a threshold r, count for each vector P_i the number of distances d ≤ r, and record the ratio of this number to the total number of distances (s−z) as C_i^z(r);

(2.2.4) Take the logarithm of C_i^z(r) and average over all i, denoting the result Φ^z(r):

Φ^z(r) = (1/(s−z−1))·Σ(i=1..s−z−1) ln C_i^z(r)

(2.2.5) Increase z by 1 and repeat steps (2.2.1)–(2.2.4) to obtain C_i^(z+1)(r) and Φ^(z+1)(r);

(2.2.6) The approximate entropy is then obtained from Φ^z and Φ^(z+1):

ApEn(z, r) = Φ^z(r) - Φ^(z+1)(r)

Through the steps above, the approximate entropies of the first two PF components PF1 and PF2 of the modulated signal are obtained and denoted ApEn1 and ApEn2.
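Steps (2.2.1)–(2.2.6) can be sketched as the following Python function. It uses the conventional s−z+1 window count (the patent's text writes s−z−1, a slightly different indexing) together with the Chebyshev distance of step (2.2.2); z = 2 and r = 0.2 are assumed illustrative values, which the patent does not fix.

```python
import math

def apen(series, z=2, r=0.2):
    """Approximate entropy following steps (2.2.1)-(2.2.6):
    ApEn(z, r) = Phi^z(r) - Phi^(z+1)(r)."""
    s = len(series)

    def phi(m):
        n = s - m + 1                        # number of m-dimensional windows
        vecs = [series[i:i + m] for i in range(n)]
        total = 0.0
        for vi in vecs:
            # d(P_i, P_j) = max_k |PF(i+k) - PF(j+k)|  (Chebyshev distance)
            count = sum(1 for vj in vecs
                        if max(abs(a - b) for a, b in zip(vi, vj)) <= r)
            total += math.log(count / n)     # ln C_i^z(r); self-match keeps count >= 1
        return total / n

    return phi(z) - phi(z + 1)

# A regular (periodic) component should score lower than an irregular one
regular = [0.0, 1.0] * 32
irregular = [math.sin(0.7 * i * i) for i in range(64)]
print(apen(regular) < apen(irregular))
```

Low approximate entropy indicates a predictable component; the spread of ApEn1, ApEn2 across modulation types is what makes them useful features.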

In step (3), the least squares support vector machine estimation model is trained to determine the optimal penalty coefficient γ and kernel parameter σ as follows:

(3.1) Initialize the population of hyper-parameter pairs: the total number of pairs is Q, the search space of the population is two-dimensional, and the value of the i-th pair in this space is X_i = (X_i1, X_i2); set the maximum allowed number of iterations t_max and the value ranges of the penalty coefficient γ and the kernel parameter σ, and randomly generate initial values for the population of hyper-parameter pairs in the search space;

(3.2) Train the least squares support vector machine estimation model with the initial values of γ and σ, and compute the value of the objective function to be optimized under the current γ, σ hyper-parameters (the objective-function expression appears only as an image in the original document);

(3.3) Rank the hyper-parameter population by the obtained objective values: the three pairs with the smallest objective values are the three fittest pairs; following the grey wolf optimization terminology, name them the α, β and δ hyper-parameter pairs in turn, the remaining pairs forming the ω group;

(3.4) Update the values of the hyper-parameter pairs according to the following formulas:

D_α = |C1·X_α(t) - X(t)|

D_β = |C2·X_β(t) - X(t)|

D_δ = |C3·X_δ(t) - X(t)|

X1 = X_α(t) - A1·D_α

X2 = X_β(t) - A2·D_β

X3 = X_δ(t) - A3·D_δ

X(t+1) = (X1 + X2 + X3)/3

where D_α, D_β and D_δ are the distances between the current ω individual and the α, β and δ hyper-parameter pairs respectively; t is the current iteration; X_α(t), X_β(t) and X_δ(t) are the positions of the current α, β and δ pairs; X(t) is the position of the current hyper-parameter pair; C1, C2, C3 are swing factors given by C_i = 2·r1, i = 1, 2, 3, r1 ∈ [0, 1]; A1, A2, A3 are convergence factors given by A_i = 2·a·r2 − a, i = 1, 2, 3, r2 ∈ [0, 1]; a is the iteration factor, decreasing linearly from 2 to 0 as the iterations proceed; X1, X2 and X3 define the direction and step length of the ω individual towards the α, β and δ pairs respectively, and X(t+1) gives the combined movement of the current hyper-parameter pair;
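One position update per the formulas above can be sketched as follows. The leader positions, dimensions and iteration budget are hypothetical, and for simplicity the leaders are held fixed rather than re-ranked each round as steps (3.3) and (3.6) prescribe.

```python
import random

def gwo_step(x, leaders, a):
    """One grey-wolf update of a hyper-parameter pair x towards the
    alpha/beta/delta leaders, per the formulas above."""
    candidates = []
    for xl in leaders:                         # X_alpha(t), X_beta(t), X_delta(t)
        new = []
        for d in range(len(x)):
            C = 2.0 * random.random()          # C_i = 2*r1
            A = 2.0 * a * random.random() - a  # A_i = 2*a*r2 - a
            D = abs(C * xl[d] - x[d])          # distance to this leader
            new.append(xl[d] - A * D)          # X_i = X_leader - A_i * D
        candidates.append(new)
    # X(t+1) = (X1 + X2 + X3) / 3
    return [sum(c[d] for c in candidates) / 3.0 for d in range(len(x))]

random.seed(1)
leaders = [[1.0, 0.5], [0.9, 0.6], [1.1, 0.4]]  # hypothetical (gamma, sigma) pairs
x = [5.0, 3.0]
t_max = 50
for t in range(t_max):
    a = 2.0 * (1 - t / t_max)                   # iteration factor: 2 -> 0 linearly
    x = gwo_step(x, leaders, a)
print([round(v, 2) for v in x])
```

As a shrinks, |A_i| < 1 and the updates contract towards the leaders, which is exactly the exploitation behaviour (and the local-optimum risk) that the cuckoo-search second update is meant to counteract.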

(3.5) Update the parameters a, A_i and C_i;

(3.6) Compute the objective value of each hyper-parameter pair at its new position and compare the two objective values before and after the update: if the updated value is smaller, keep the updated pair, otherwise keep the pair from before the update; then rank the current hyper-parameter population as in step (3.3);

(3.7) Compare the current iteration count with the maximum allowed: if t_max has not been reached, return to step (3.4) and continue the parameter search; otherwise training ends, and the resulting hyper-parameter pair is output as the optimal solution of the least squares support vector machine estimation model.

Further, step (3.4) also includes the following:

Recompute the updated positions of the hyper-parameter pairs according to the formula below, and draw a random number from the uniform distribution v ∈ [0, 1]; if the random number is greater than the discovery probability Pa, update the value of the current hyper-parameter pair, otherwise leave it unchanged;

X_i(t+1) = X_i(t) + ε ⊕ Levy(λ)

where i = 1, 2, 3, ..., N, N is the number of host nests, i.e. the total number of candidate hyper-parameter pairs; ⊕ denotes entry-wise multiplication; X_i(t) is the value of the i-th hyper-parameter pair at iteration t and X_i(t+1) its value after the current iteration; ε > 0 is the step-size factor, which determines the step length and is related to the scale of the problem, and in the great majority of cases ε = 1; the random flight step Levy(λ) follows a Lévy distribution.

Beneficial effects: compared with the prior art, the technical scheme of the present invention has the following beneficial effects:

(1) The method selects the higher-order cumulant feature parameters F1, F2 and the approximate-entropy parameters ApEn1, ApEn2 of the local mean decomposition components for the training and test signals, reducing the influence of noise on the recognition results and remedying the under-envelope, over-envelope and boundary-effect defects of traditional empirical mode decomposition.

(2) The method uses the swarm-intelligence strategy of grey wolf optimization to search for the optimal hyper-parameters of the least squares support vector machine model, overcoming the inability of the traditional LSSVM hyper-parameters to adapt.

(3) The method applies cuckoo search to the updated wolf positions in grey wolf optimization for a second round of optimization, expanding the search space for the optimal solution; compared with the high computational cost and limited robustness of traditional swarm-intelligence optimization, the invention reduces the number of iterations, accelerates convergence, and improves the recognition rate.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the simulated wolf-pack hierarchy.

Figure 2 is a schematic diagram of searching for the optimal solution with grey wolf optimization.

Figure 3 is a flow chart of the modulation signal classification method based on the cuckoo-search-improved grey wolf optimized support vector machine.

Figure 4 is a flow chart of parameter optimization by cuckoo-search-improved grey wolf optimization.

Figure 5 compares the MAPE-value convergence curves of the grey wolf optimized least-squares support vector machine (GWO-LSSVM) and the cuckoo-search-improved grey wolf optimized least-squares support vector machine (CS-IGWO-LSSVM) at low signal-to-noise ratio.

Figure 6 compares the MAPE-value convergence curves of the grey wolf optimized least-squares support vector machine (GWO-LSSVM) and the cuckoo-search-improved grey wolf optimized least-squares support vector machine (CS-IGWO-LSSVM) at high signal-to-noise ratio.

Figure 7 compares the recognition rates of GWO-LSSVM and CS-IGWO-LSSVM at different signal-to-noise ratios.

Detailed description of the embodiments

The technical solution of the invention is further described below with reference to the accompanying drawings and an embodiment.

As shown in Figure 3, the detailed steps of the technical solution are as follows:

(1) N Monte Carlo experiments were carried out in this example. The simulated signals are five commonly used digital modulations: BPSK, QPSK, 8PSK, 16QAM and 64QAM. The software environment is MATLAB R2014b; the hardware is an ASUS notebook with an Intel Core i5-5200U @ 2.20 GHz processor and 8 GB of memory. The carrier frequency fc is 2 kHz, the symbol rate rs is 1000 Baud, the sampling rate fs is 8 kHz, and the channel is zero-mean additive white Gaussian noise. The signal-to-noise ratio (SNR) is defined by the following formula and ranges over [−6, 12] dB.

SNR = 10 lg(E_S / n_0)

where E_S and n_0 denote the symbol energy and the noise energy, respectively.
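As a minimal sketch of this simulation setup (not code from the patent), the following assumes NumPy and adds zero-mean complex white Gaussian noise so that the measured SNR = 10 lg(E_S/n_0) matches a target value; the function name `add_awgn` and the BPSK-like test sequence are illustrative assumptions:

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Add zero-mean complex AWGN so that 10*lg(Es/n0) equals snr_db."""
    rng = np.random.default_rng(rng)
    es = np.mean(np.abs(signal) ** 2)        # average sample energy E_S
    n0 = es / (10 ** (snr_db / 10))          # noise power from the SNR definition
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(signal.shape)
                               + 1j * rng.standard_normal(signal.shape))
    return signal + noise

# example: a BPSK-like +/-1 sequence observed at 6 dB SNR
x = np.sign(np.random.default_rng(0).standard_normal(4096)).astype(complex)
y = add_awgn(x, 6.0, rng=1)
```

With 4096 samples, the SNR measured from `y - x` lands within a few hundredths of a dB of the target.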

N modulation signals are drawn at random from a total of M signals of the five common digital modulations (BPSK, QPSK, 8PSK, 16QAM and 64QAM) to form the training signal set array1; the remaining M − N signals naturally form the test signal set array2. The number of drawn signals of each class is counted, and the variance of the five counts is computed. If it is below the variance limit ξ = 0.5, the drawn training set is guaranteed to cover every class with balanced counts; otherwise the random draw is repeated until the limit is satisfied.

The following description takes M = 300 and N = 200 as an example.
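The balanced random draw with the variance check ξ = 0.5 of step (1) can be sketched as follows (an assumed implementation using NumPy; the helper name `draw_training_set` and the retry bound `max_tries` are not from the patent):

```python
import numpy as np

def draw_training_set(labels, n_train, var_limit=0.5, rng=None, max_tries=10000):
    """Randomly draw n_train indices, repeating until the per-class counts
    have variance below var_limit (the xi = 0.5 check in step (1))."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    rng = np.random.default_rng(rng)
    for _ in range(max_tries):
        idx = rng.choice(len(labels), size=n_train, replace=False)
        counts = np.array([(labels[idx] == c).sum() for c in classes])
        if counts.var() < var_limit:
            rest = np.setdiff1d(np.arange(len(labels)), idx)
            return idx, rest
    raise RuntimeError("variance limit not met within max_tries draws")

# M = 300 signals, 60 of each of the 5 classes; N = 200 for training
labels = np.repeat(np.arange(5), 60)
train_idx, test_idx = draw_training_set(labels, 200, rng=0)
```

Note that ξ = 0.5 is a tight limit: a single random draw of 200 from 300 rarely satisfies it, so the loop typically retries on the order of a hundred times before the per-class counts are nearly equal.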

(2) For each signal xi, i = 1, 2, ..., 200 in the training set array1, the higher-order-cumulant characteristic parameters F1, F2 and the approximate-entropy characteristic parameters ApEn1, ApEn2 of the local mean decomposition are extracted. The extracted parameters form the four-dimensional feature vector f_i^k, k = 1, 2, 3, 4 of that training signal, and the feature vectors of all training signals form the data training samples fi, i = 1, 2, ..., 200. The extraction proceeds as follows, where x(t) denotes the modulated signal, regarded as a stationary complex random process.

First, the calculation of the higher-order-cumulant characteristic parameters. The fourth- and sixth-order cumulants of the modulated signal are:

C21 = M21

C40 = M40 − 3M20²

C63 = M63 − 6M20M41 − 9M42M21 + 18M20²M21 + 12M21³

where Mpq = E[x(t)^(p−q) x*(t)^q] is the mixed moment of order p of x(t), and C21, C40, C63 are the second-, fourth- and sixth-order cumulants, respectively. The characteristic parameters based on these higher-order cumulants are:

F1 = |C40| / |C21|²,  F2 = |C63| / |C21|³

Their theoretical values are listed in the following table:

Characteristic parameter   BPSK   QPSK   8PSK   16QAM   64QAM
F1                         2      1      0      0.68    2.08
F2                         16     4      4      0.619   1.797

As the table shows, these two parameters can separate the classes within the MPSK group (BPSK, QPSK, 8PSK) and within the MQAM group (16QAM, 64QAM), respectively.
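A hedged sketch of the cumulant-feature computation (assuming the sample mixed moments M_pq = E[x^(p−q) x̄^q] and the normalized forms F1 = |C40|/|C21|², F2 = |C63|/|C21|³, which reproduce the BPSK/QPSK/8PSK entries of the table):

```python
import numpy as np

def cumulant_features(x):
    """Estimate C21, C40, C63 from sample mixed moments M_pq = E[x^(p-q) conj(x)^q],
    then form the normalized features F1, F2 (assumed normalizations, see text)."""
    def M(p, q):
        return np.mean(x ** (p - q) * np.conj(x) ** q)
    M20, M21 = M(2, 0), M(2, 1)
    C21 = M21
    C40 = M(4, 0) - 3 * M20 ** 2
    C63 = M(6, 3) - 6 * M20 * M(4, 1) - 9 * M(4, 2) * M21 \
          + 18 * M20 ** 2 * M21 + 12 * M21 ** 3
    F1 = np.abs(C40) / np.abs(C21) ** 2
    F2 = np.abs(C63) / np.abs(C21) ** 3
    return F1, F2

# noiseless BPSK symbols: the table gives F1 = 2, F2 = 16
rng = np.random.default_rng(0)
bpsk = np.sign(rng.standard_normal(20000)).astype(complex)
F1, F2 = cumulant_features(bpsk)
```

For ±1 symbols every mixed moment used here equals 1 exactly, so the estimates match the theoretical BPSK values up to floating-point error.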

Second, the calculation of the approximate-entropy characteristic parameters ApEn1 and ApEn2 of the two local-mean-decomposition components:

The original modulated signal x(t) is decomposed by the local mean decomposition procedure into the sum of k PF components and one monotonic function:

x(t) = Σ_{i=1}^{k} PF_i(t) + h_k(t)

where PF_i are the local-mean-decomposition components of the original modulated signal and h_k(t) is a monotonic function. The first two components PF1 and PF2 are taken and their approximate entropies are computed as follows:

(a) Treat a local-mean-decomposition component PF as a one-dimensional time series {PF(i), i = 1, 2, ..., s} of length s and reconstruct the z-dimensional vectors P_i, i = 1, 2, ..., s − z + 1:

P_i = {PF(i), PF(i+1), ..., PF(i+z−1)}

(b) Compute the distance between the vectors P_i and P_j, j = 1, 2, ..., s − z + 1:

d(P_i, P_j) = max_k |PF(i+k) − PF(j+k)|, k = 0, 1, ..., z−1

(c) Given a threshold r, count for each vector P_i the number N_i(r) of distances d(P_i, P_j) ≤ r, and record the ratio of this number to the total number of vectors (s − z + 1) as

C_i^z(r) = N_i(r) / (s − z + 1)

(d) Take the logarithm of each C_i^z(r), then sum over all i and average, denoted Φ^z(r):

Φ^z(r) = (1 / (s − z + 1)) Σ_{i=1}^{s−z+1} ln C_i^z(r)

(e) Increase z by 1 and repeat steps (a)-(d) to obtain C_i^{z+1}(r) and Φ^{z+1}(r).

(f) The approximate entropy is then obtained from Φ^z and Φ^{z+1}:

ApEn(z, r) = Φ^z(r) − Φ^{z+1}(r)

Through the above steps the approximate entropies of PF1 and PF2, denoted ApEn1 and ApEn2, are obtained; together with F1 and F2 they serve as the characteristic parameters for the subsequent training.
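The approximate-entropy steps (a)-(f) can be sketched as follows (standard Pincus-style indexing with s − z + 1 vectors; the default tolerance r = 0.2·std is a common choice assumed here, not specified in the text):

```python
import numpy as np

def approx_entropy(u, z=2, r=None):
    """Approximate entropy ApEn(z, r) of a 1-D series, following steps (a)-(f)."""
    u = np.asarray(u, dtype=float)
    if r is None:
        r = 0.2 * u.std()          # assumed default tolerance
    def phi(z):
        n = len(u) - z + 1
        P = np.array([u[i:i + z] for i in range(n)])           # z-dim vectors
        # Chebyshev (max-coordinate) distance between every pair of vectors
        d = np.max(np.abs(P[:, None, :] - P[None, :, :]), axis=2)
        C = (d <= r).mean(axis=1)                              # ratio of close vectors
        return np.mean(np.log(C))
    return phi(z) - phi(z + 1)

# a regular series should score lower than white noise
t = np.arange(400)
regular = np.sin(0.5 * t)
noisy = np.random.default_rng(0).standard_normal(400)
ap_reg = approx_entropy(regular)
ap_rand = approx_entropy(noisy)
```

Each vector is always within distance 0 of itself, so every C_i^z(r) is strictly positive and the logarithms stay finite.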

(3) The instance data of the training samples fi obtained in step (2) are substituted, as one term of the equality constraint, into the following least-squares support vector machine estimation model for training:

min J(w, e_i) = (1/2) wᵀw + (γ/2) Σ_{i=1}^{200} e_i²

subject to the equality constraints:

y_i = wᵀφ(f_i) + u + e_i, i = 1, 2, ..., 200

where yi, i = 1, 2, ..., 200 is the modulation type of the i-th training sample, with 1, 2, 3, 4, 5 denoting the classes BPSK, QPSK, 8PSK, 16QAM and 64QAM respectively; w is the weight vector; φ(·) is a nonlinear transformation that maps the input space to a high-dimensional feature space; u is the bias; ei (i = 1, 2, ..., 200) is the error between the actual result and the estimated output of the i-th training sample; and γ is the penalty coefficient.

The first part (1/2) wᵀw of the objective function min J(w, e_i) calibrates the weights and penalizes large ones; the second part (γ/2) Σ e_i² describes the error on the training data. The optimization problem is solved with the Lagrangian method:

L(w, u, e_i; μ_i) = J(w, e_i) − Σ_{i=1}^{200} μ_i { wᵀφ(f_i) + u + e_i − y_i }

where μi are the Lagrange multipliers. Differentiating the above with respect to w, u, ei and μi and setting each derivative to zero yields the optimality conditions of the problem:

∂L/∂w = 0 → w = Σ_{i=1}^{200} μ_i φ(f_i)
∂L/∂u = 0 → Σ_{i=1}^{200} μ_i = 0
∂L/∂e_i = 0 → μ_i = γ e_i
∂L/∂μ_i = 0 → wᵀφ(f_i) + u + e_i − y_i = 0

Eliminating w and ei converts the optimal-solution problem to the following system of linear equations:

[ 0     1_vᵀ      ] [ u ]   [ 0 ]
[ 1_v   Ω + I/γ   ] [ μ ] = [ y ]

where y = [y1; ...; y200], μ = [μ1; ...; μ200], I is the identity matrix, 1_v = [1; ...; 1], and Ω is a square matrix whose element in row m, column n is Ωmn = K(fm, fn), m, n = 1, 2, ..., 200, with the introduced kernel function

K(f_m, f_n) = φ(f_m)ᵀφ(f_n)

The decision function of the modulated signal is finally obtained as:

y(f_j) = Σ_{i=1}^{200} μ_i K(f_j, f_i) + u

where fj (j = 1, 2, ..., 100) are the test samples formed from the feature vectors of the test signals, y(fj) is the recognition result, with 1, 2, 3, 4, 5 denoting BPSK, QPSK, 8PSK, 16QAM and 64QAM respectively, and μi are the Lagrange multipliers. The RBF kernel is adopted:

K(f_j, f_i) = exp(−‖f_j − f_i‖² / (2σ²))

where σ denotes the kernel parameter.
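Solving the LS-SVM linear system above and evaluating the decision function can be sketched as below (assuming NumPy; the toy two-cluster data and all function names are illustrative stand-ins, not the patent's 4-D features):

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """K(a, b) = exp(-||a - b||^2 / (2 sigma^2)) for all row pairs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(F, y, gamma, sigma):
    """Solve [[0, 1^T], [1, Omega + I/gamma]] [u; mu] = [0; y]; return (u, mu)."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(F, F, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]

def lssvm_predict(Ftest, F, u, mu, sigma):
    """Decision function y(f) = sum_i mu_i K(f, f_i) + u."""
    return rbf_kernel(Ftest, F, sigma) @ mu + u

# toy check on two well-separated 2-D clusters with labels 1 and 2
rng = np.random.default_rng(0)
F = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([1.0] * 20 + [2.0] * 20)
u, mu = lssvm_fit(F, y, gamma=10.0, sigma=0.5)
pred = np.round(lssvm_predict(F, F, u, mu, sigma=0.5))
```

By construction the solution satisfies y_i = Σ_j μ_j K(f_i, f_j) + u + μ_i/γ (since e_i = μ_i/γ) and Σ_i μ_i = 0, which gives a direct correctness check on the solve.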

Training the least-squares support vector machine estimation model amounts to determining the optimal values of the penalty coefficient γ and the kernel parameter σ. Both are hyper-parameters; below, "hyper-parameter pair" refers to (γ, σ).

The invention uses cuckoo-search-improved grey wolf optimization to select the best hyper-parameter pair for the least-squares support vector machine model. Concretely, a cuckoo search is introduced as a second position update when the hyper-parameter-pair population updates its positions, alleviating the tendency of the original grey wolf optimization to fall into local optima on high-dimensional data sets. The two-dimensional coordinates of the output optimal α wolf are the optimal hyper-parameter pair; a wolf's fitness corresponds to the value of the objective function under the corresponding constraints. The steps are as follows:

Step 3.1: Initialize the hyper-parameter-pair population. The total number of pairs is 20; the search space is two-dimensional, the value of the i-th pair being Xi = (Xi1, Xi2); the maximum number of iterations tmax is 200; the ranges of the penalty coefficient and kernel parameter are γ ∈ [0, 100] and σ ∈ [0, 1]. Initial values of the hyper-parameter pairs are generated at random in the search space.

Step 3.2: The LSSVM model is trained from the initial values of (γ, σ), and the value of the objective function to be optimized is computed under the current hyper-parameters γ, σ.

Step 3.3: Rank the hyper-parameter population by objective value. The three pairs with the smallest objective values are, in order, the fittest; following the grey-wolf-optimization convention they are named the α, β and δ pairs, and the remaining pairs form the ω population.

Step 3.4: Update the values of the hyper-parameter pairs according to the following formulas:

Dα = |C1·Xα(t) − X(t)|

Dβ = |C2·Xβ(t) − X(t)|

Dδ = |C3·Xδ(t) − X(t)|

X1 = Xα(t) − A1·Dα

X2 = Xβ(t) − A2·Dβ

X3 = Xδ(t) − A3·Dδ

X(t+1) = (X1 + X2 + X3) / 3

where Dα, Dβ, Dδ are the distances between the current ω member and the α, β, δ pairs; t is the current iteration; Xα(t), Xβ(t), Xδ(t) are the positions of the current α, β, δ pairs and X(t) is the position of the current pair. C1, C2, C3 are swing factors given by Ci = 2r1, i = 1, 2, 3, r1 ∈ [0, 1]; A1, A2, A3 are convergence factors given by Ai = 2ar2 − a, i = 1, 2, 3, r2 ∈ [0, 1], where the iteration factor a decreases linearly from 2 to 0 over the iterations. X1, X2, X3 define the direction and step of the ω member toward the α, β, δ pairs, and X(t+1) combines them into the movement of the current pair.
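One iteration of the grey-wolf position update in step 3.4 might look like the following sketch (assuming NumPy; the toy quadratic objective stands in for the LSSVM objective, and clipping to the search bounds is an added assumption):

```python
import numpy as np

def gwo_step(X, fitness, a, lo, hi, rng):
    """One grey-wolf position update: the three fittest pairs
    (alpha, beta, delta) steer every member of the population."""
    leaders = X[np.argsort(fitness)[:3]]        # smaller objective = better
    n, dim = X.shape
    Xnew = np.empty_like(X)
    for i in range(n):
        cand = []
        for Xl in leaders:
            A = 2 * a * rng.random(dim) - a     # convergence factor A = 2*a*r2 - a
            C = 2 * rng.random(dim)             # swing factor C = 2*r1
            D = np.abs(C * Xl - X[i])
            cand.append(Xl - A * D)             # X1, X2, X3
        Xnew[i] = np.mean(cand, axis=0)         # X(t+1) = (X1 + X2 + X3) / 3
    return np.clip(Xnew, lo, hi)                # keep pairs inside the ranges

# sketch: minimize a toy objective over (gamma, sigma) in [0,100] x [0,1]
rng = np.random.default_rng(0)
lo, hi = np.array([0.0, 0.0]), np.array([100.0, 1.0])
X = lo + rng.random((20, 2)) * (hi - lo)
obj = lambda P: (P[:, 0] - 5.0) ** 2 + (P[:, 1] - 0.3) ** 2
for t in range(200):
    a = 2 * (1 - t / 200)                       # iteration factor: 2 -> 0
    X = gwo_step(X, obj(X), a, lo, hi, rng)
best = X[np.argmin(obj(X))]
```

As a shrinks the pack collapses onto the leaders, so after 200 iterations `best` sits close to the toy optimum (5, 0.3).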

A cuckoo search is introduced as a second position update of the hyper-parameter-pair population, alleviating the tendency of the original grey wolf optimization to fall into local optima on high-dimensional data sets. Step 3.4 is therefore extended with the following sub-step:

Step 3.4.1: Recompute the updated position of each hyper-parameter pair according to the formula below, and draw a random number from the uniform distribution on v ∈ [0, 1]; if it exceeds the discovery probability Pa = 0.5, the value of the current pair is updated, otherwise it is not.

X_i(t+1) = X_i(t) + ε ⊕ Levy(λ)

where i = 1, 2, 3, ..., 20 indicates that the number of searchable host nests is 20 (i.e. the total number of candidate hyper-parameter pairs); ⊕ denotes entry-wise multiplication; Xi(t) is the value of the i-th pair at iteration t and Xi(t+1) its value after the current iteration. ε is the step-size factor; since ε > 0 scales the step, it is tied to the scale of the problem, and here ε = 1. The step length of the random flight is Levy(λ), which obeys a Lévy distribution.

Step 3.5: Update the parameters a, Ai and Ci.

Step 3.6: Compute the objective value of each hyper-parameter pair at its new position and compare it with the value before the update; if the new value is smaller, keep the updated pair, otherwise keep the old one. Then rank the current population as in step 3.3.

Step 3.7: Compare the current iteration count with the maximum allowed. If it has not reached 200, return to step 3.4 and continue the parameter search; otherwise training ends and the α pair is output as the optimal solution of the LSSVM model. The optimal penalty parameter γ is 4.9278 and the optimal kernel parameter σ is 0.3112.

Step 4: For each signal in the test set array2, extract the feature values as in step 2 above to form its four-dimensional feature vector, giving the data test samples fj, j = 1, 2, ..., 100.

Step 5: Substitute the data test samples into the decision function and output the classification results of the signals.

In this embodiment, the mean absolute percentage error (MAPE) is used to compare the accuracy of the modulation-classification methods; it is computed as:

MAPE = (1/M) Σ_{i=1}^{M} |(y_i − y_p) / y_i|

where M = 200 is the total number of training signals, and yi and yp are the actual and estimated values of the i-th signal, respectively. The accuracy of the method can then be expressed as:

est_acc = 100% − (MAPE × 100%)
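The MAPE and accuracy definitions above amount to a few lines (illustrative labels; `mape` is an assumed helper name):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error over the M signals."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs((y_true - y_pred) / y_true))

# class labels 1..5; one of eight predictions is wrong
y_true = np.array([1, 2, 3, 4, 5, 1, 2, 3])
y_pred = np.array([1, 2, 3, 4, 5, 2, 2, 3])
acc = 1.0 - mape(y_true, y_pred)   # est_acc as a fraction
```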

To better show the advantage of cuckoo-search-improved grey wolf optimization for modulation-signal classification, its results are compared with those of the original grey wolf optimization below:

The convergence curves in Figure 5 show that both the original grey wolf optimization (legend GWO) and the cuckoo-search-improved grey wolf optimization (legend CS-GWO) converge quickly: the improved method converges to the optimal solution in about 65 iterations versus about 85 for the original, so the improved method is faster in the convergence process. Figure 5 also clearly shows a higher final recognition rate: at the low SNR of −3 dB the original method's accuracy curve finally converges at 94.1085%, while the improved method reaches 96.7426% at the same SNR. At low SNR, therefore, the improved classification method needs fewer iterations and achieves a more accurate recognition rate than the original.

Figure 6 is the simulated convergence curve of the average recognition rate of the compared LSSVM classification methods at an SNR of 12 dB. It shows directly that the original grey wolf optimization reaches the same 100% recognition rate as the improved method, but the original converges to the optimum at about 90 iterations while the improved method converges at about 65. At high SNR, therefore, the cuckoo-search improvement reduces the probability of the search getting trapped in a local optimum compared with the original grey wolf optimization.

Figure 7 shows the simulated recognition rates of the original grey-wolf-optimized and the cuckoo-search-improved classification methods for SNRs from −6 dB to 12 dB. The recognition-rate curves show a clear performance gain for the improved method under the experimental SNRs. For SNRs above 3 dB both methods reach their converged recognition rates: 98% before the improvement and 100% after it. For SNRs below 3 dB the improved classification method clearly outperforms the original.

From the simulation results it can be concluded that the classification method based on the cuckoo-search-improved grey wolf optimized least-squares support vector machine converges faster, is more efficient, and raises the modulation-recognition rate; the advantage is especially clear at the practically relevant low SNRs.

The invention uses cuckoo search to update the wolf-pack positions a second time, improving the global search capability and better optimizing the parameters of the LSSVM function-estimation model, so that it is robust in modulation-signal classification. The invention delivers more accurate intelligent classification and recognition and is also valuable in other applications.

Claims (5)

1. A modulation signal classification method of a cuckoo-search-improved grey wolf optimization support vector machine, comprising a training stage and a testing stage, characterized in that:
the training phase comprises the steps of:
(1) randomly extracting N modulation signals from a total of M digital modulation signals of the five types BPSK, QPSK, 8PSK, 16QAM and 64QAM to form a training signal set array1, ensuring that the N modulation signals cover all 5 digital modulation types, the remaining M−N modulation signals naturally forming a test signal set array2;
(2) for each signal xi in the training signal set array1, extracting the characteristic parameters F1, F2 based on higher-order cumulants and the approximate-entropy characteristic parameters ApEn1, ApEn2 based on the local mean decomposition, the extracted characteristic parameters forming the four-dimensional feature vector f_i^k, k = 1, 2, 3, 4 of the training signal; the feature vectors of all training signals constitute the data training samples fi, i = 1, 2, 3, ..., N;
(3) substituting the training sample data obtained in step (2) as one term of an equality constraint into the following least-squares support vector machine estimation model for training:

min J(w, e_i) = (1/2) wᵀw + (γ/2) Σ_{i=1}^{N} e_i²

so that it satisfies the equality constraints:

y_i = wᵀφ(f_i) + u + e_i, i = 1, 2, ..., N
wherein yi is the modulation signal type corresponding to the i-th training sample, 1, 2, 3, 4, 5 respectively representing the different classes BPSK, QPSK, 8PSK, 16QAM, 64QAM; w is a weight vector; φ(·) is a nonlinear function mapping the modulation signal to a high-dimensional feature space; u represents the bias; ei is the error between the actual result and the estimated output of the i-th group of training samples; and γ is a penalty coefficient;
the first part (1/2) wᵀw of the optimized objective function min J(w, e_i) is used for calibrating the magnitude of the weights, and the second part (γ/2) Σ e_i² describes the error in the training data; the Lagrange method is used to find the optimal penalty coefficient γ and the decision function y(f_j) of the modulation signal so that the value of the objective function is minimized:

L(w, u, e_i; μ_i) = J(w, e_i) − Σ_{i=1}^{N} μ_i { wᵀφ(f_i) + u + e_i − y_i }
wherein μi is the Lagrange multiplier; differentiating the above expression with respect to w, u, ei and μi and setting each derivative equal to 0, the optimality conditions of the problem are found:

∂L/∂w = 0 → w = Σ_{i=1}^{N} μ_i φ(f_i)
∂L/∂u = 0 → Σ_{i=1}^{N} μ_i = 0
∂L/∂e_i = 0 → μ_i = γ e_i
∂L/∂μ_i = 0 → wᵀφ(f_i) + u + e_i − y_i = 0
eliminating w and ei, the optimal-solution problem will be converted to the form of the following system of linear equations:

[ 0     1_vᵀ      ] [ u ]   [ 0 ]
[ 1_v   Ω + I/γ   ] [ μ ] = [ y ]
wherein y = [y1; ...; yN], μ = [μ1; ...; μN], I is an identity matrix, 1_v = [1; ...; 1], Ω is a square matrix whose m-th row, n-th column element is Ωmn = K(fm, fn), m, n = 1, 2, ..., N, where the introduced kernel function is:

K(f_m, f_n) = φ(f_m)ᵀφ(f_n)
finally, a decision function of the modulation signal is obtained;
the testing phase comprises the following steps:
(4) extracting characteristic values from the test signals in the test signal set as in step (2) to form the four-dimensional feature vectors of the test signals, constituting the data test samples;
(5) substituting the data test samples into the decision function and outputting the classification result of the signal.
2. The method as claimed in claim 1, wherein the decision function of the modulation signal is obtained as follows:

y(f_j) = Σ_{i=1}^{N} μ_i K(f_j, f_i) + u

wherein fj, fi denote test samples consisting of the feature vectors of the test signals, y(fj) represents the result of signal recognition, μi represents the Lagrange multiplier, and the kernel function in the formula adopts the RBF kernel:

K(f_j, f_i) = exp(−‖f_j − f_i‖² / (2σ²))

where σ denotes the kernel parameter and i is not equal to j.
3. The method of claim 1, wherein the higher-order cumulants F1, F2 and the approximate entropies ApEn1, ApEn2 of the local mean decomposition are selected as characteristic parameters of the training signal in step (2), the specific extraction method being as follows:
(2.1) x(t) is the modulation-signal expression, regarded as a stationary complex random process, and Mpq = E[x(t)^(p−q) x*(t)^q] is the mixed moment of order p of x(t), q being the order of the conjugate term; the second-, fourth- and sixth-order cumulant expressions are as follows:
C21 = M21
C40 = M40 − 3M20²
C63 = M63 − 6M20M41 − 9M42M21 + 18M20²M21 + 12M21³
wherein C21, C40, C63 are respectively the second-order, fourth-order and sixth-order cumulants, and the characteristic parameter expressions based on the higher-order cumulants are:

F1 = |C40| / |C21|²,  F2 = |C63| / |C21|³
(2.2) ApEn1 and ApEn2, the approximate-entropy characteristic parameters of the two local-mean-decomposition components, are calculated as follows:
the original modulation signal x(t) is decomposed by the local mean decomposition method into the sum of k PF components and 1 monotonic function, namely:

x(t) = Σ_{i=1}^{k} PF_i(t) + h_k(t)
wherein PFi is a local-mean-decomposition component of the original modulation signal and hk(t) is a monotonic function; the first two components PF1, PF2 are taken and their approximate entropies are calculated respectively, the steps being as follows:
(2.2.1) regarding the local-mean-decomposition component as a one-dimensional time series PF(i), i = 1, 2, ..., s of length s, reconstructing the z-dimensional vectors P_i, i = 1, 2, ..., s − z + 1:

P_i = {PF(i), PF(i+1), ..., PF(i+z−1)}
(2.2.2) calculating the distance between the vectors P_i and P_j, j = 1, 2, ..., s − z + 1:

d(P_i, P_j) = max_k |PF(i+k) − PF(j+k)|, k = 0, 1, ..., z−1
(2.2.3) given a threshold r, counting for each vector P_i the number of distances d ≤ r and the ratio of this number to the total number of vectors (s − z + 1), denoted

C_i^z(r) = (number of d ≤ r) / (s − z + 1)
(2.2.4) taking the logarithm of each C_i^z(r), then averaging over all i, denoted Φ^z(r):

Φ^z(r) = (1 / (s − z + 1)) Σ_{i=1}^{s−z+1} ln C_i^z(r)
(2.2.5) increasing z by 1 and repeating steps (2.2.1)-(2.2.4) to obtain C_i^{z+1}(r) and Φ^{z+1}(r);
(2.2.6) obtaining the expression of the approximate entropy from Φ^z and Φ^{z+1}:

ApEn(z, r) = Φ^z(r) − Φ^{z+1}(r)
through the above steps the approximate entropies of the first two PF components PF1, PF2 of the modulation signal can be calculated respectively, denoted ApEn1 and ApEn2.
4. The method of claim 1, wherein in step (3) the least-squares support vector machine estimation model is trained and the optimal penalty coefficient γ and kernel parameter σ in the model are determined by the following steps:
(3.1) Initialize the hyper-parameter pair population: the total number of hyper-parameter pairs is Q, and the space searched by the population is two-dimensional, the value of the i-th hyper-parameter pair in this space being Xi = (Xi1, Xi2). Given the maximum allowed number of iterations tmax and the value ranges of the penalty coefficient γ and the kernel parameter σ, randomly generate a set of initial hyper-parameter pair values in the search space;
(3.2) Train the least squares support vector machine estimation model with the initial values of γ and σ, and calculate the value of the objective function to be optimized in the model under the current γ and σ hyper-parameters; the objective function is:
[formula image FDA0003617742740000041]
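As context for step (3.2), a minimal sketch of least squares SVM training with an RBF kernel is shown below, solving the standard LS-SVM dual linear system. The function names are illustrative assumptions; the patent's own objective function image is not reproduced here.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    # Gaussian (RBF) kernel matrix with kernel parameter sigma
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma, sigma):
    # Solve the standard LS-SVM linear system:
    # [0      1^T        ] [b]     [0]
    # [1  K + I / gamma  ] [alpha] = [y]
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma   # gamma is the penalty coefficient
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]              # bias b, dual coefficients alpha

def lssvm_predict(X_train, alpha, b, sigma, X_new):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

The quality of the fit depends directly on the pair (γ, σ), which is why the surrounding steps search over these two hyper-parameters.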
(3.3) Grade the hyper-parameter population according to the obtained objective function values: the three pairs of hyper-parameters with the smallest objective values have the best fitness and, following the grey wolf optimizer definition, are named the α, β and δ hyper-parameter pairs in turn; the remaining hyper-parameter pairs form the ω group;
(3.4) Update the values of the hyper-parameter population according to the following formulas:
Dα=|C1·Xα(t)-X(t)|
Dβ=|C2·Xβ(t)-X(t)|
Dδ=|C3·Xδ(t)-X(t)|
X1=Xα(t)-A1·Dα
X2=Xβ(t)-A2·Dβ
X3=Xδ(t)-A3·Dδ
X(t+1) = (X1 + X2 + X3) / 3
where Dα, Dβ and Dδ denote the distances between the current ω pair and the α, β and δ hyper-parameter pairs respectively; t is the current iteration number; Xα(t), Xβ(t) and Xδ(t) are the positions of the current α, β and δ hyper-parameter pairs; X(t) is the position of the current hyper-parameter pair; C1, C2 and C3 are swing factors given by Ci = 2r1, i = 1, 2, 3, r1 ∈ [0, 1]; A1, A2 and A3 are convergence factors given by Ai = 2a·r2 − a, i = 1, 2, 3, r2 ∈ [0, 1], where the iteration factor a decreases linearly from 2 to 0 with the number of iterations; X1, X2 and X3 define the advance directions and step lengths of the ω group toward the α, β and δ pairs, and X(t+1) combines them to determine the movement of the current hyper-parameter pair;
(3.5) Update the parameters a, Ai and Ci;
(3.6) Calculate the objective function values of all hyper-parameter pairs at their new positions and, for each pair, compare the values before and after the position update: if the updated value is smaller, keep the updated pair; otherwise keep the pair from before the update. Then grade the current population again according to step (3.3);
(3.7) Compare the current iteration count with the maximum allowed number of iterations: if t has not reached tmax, jump to step (3.4) and continue the parameter optimization; otherwise training ends, and the resulting hyper-parameter pair, i.e. the optimal solution of the least squares support vector machine estimation model, is output.
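The loop of steps (3.1)-(3.7) can be sketched as a generic grey wolf optimizer. This is an illustrative Python sketch with invented names; an arbitrary objective function stands in for the patent's LS-SVM objective, and standard GWO coefficient formulas are assumed.

```python
import numpy as np

def gwo_optimize(objective, bounds, Q=20, t_max=50, seed=0):
    # Grey wolf optimizer over a low-dimensional hyper-parameter space,
    # loosely following steps (3.1)-(3.7).
    rng = np.random.default_rng(seed)
    b = np.array(bounds, dtype=float)
    lo, hi = b[:, 0], b[:, 1]
    dim = len(lo)
    X = lo + rng.random((Q, dim)) * (hi - lo)          # (3.1) random initial pairs
    for t in range(t_max):
        f = np.array([objective(x) for x in X])        # (3.2) objective values
        order = np.argsort(f)                          # (3.3) grade: alpha, beta, delta
        alpha, beta, delta = (X[order[k]].copy() for k in range(3))
        a = 2 - 2 * t / t_max                          # iteration factor: 2 -> 0
        for i in range(Q):                             # (3.4) position update
            Xnew = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                C, A = 2 * r1, 2 * a * r2 - a          # swing / convergence factors
                D = np.abs(C * leader - X[i])
                Xnew += leader - A * D
            Xnew = np.clip(Xnew / 3, lo, hi)           # X(t+1) = (X1 + X2 + X3) / 3
            if objective(Xnew) < f[i]:                 # (3.6) greedy selection
                X[i] = Xnew
    f = np.array([objective(x) for x in X])
    return X[np.argmin(f)]                             # (3.7) best pair found
```

In the patent's setting, `objective` would be the LS-SVM training objective evaluated at a candidate (γ, σ) pair.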
5. The method for classifying modulation signals of the cuckoo search improved grey wolf optimization support vector machine according to claim 3, wherein the step (3.4) further comprises the following steps:
Calculate the updated positions of the hyper-parameter pairs in the population according to the following formula, and at the same time draw a random number υ from the uniform distribution on [0, 1); if υ is larger than the discovery probability Pa, update the value of the current hyper-parameter pair, otherwise keep it unchanged;
Xi(t+1) = Xi(t) + ε ⊕ Levy(λ)
where i = 1, 2, 3, ..., N, and N denotes the number of host bird nests, i.e. the total number of candidate hyper-parameter pairs; ⊕ denotes entrywise multiplication; Xi(t) is the value of the i-th hyper-parameter pair at the t-th iteration and Xi(t+1) its value after the current iteration; ε is a step-size factor; and the random flight step size Levy(λ) obeys the Lévy distribution.
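The cuckoo-search perturbation of claim 5 can be sketched as below. This is a hedged illustration: the claim only states that the step obeys the Lévy distribution, so Mantegna's algorithm for generating it, and all function names, are assumptions.

```python
import math
import numpy as np

def levy_step(rng, lam=1.5, size=2):
    # Levy-distributed step via Mantegna's algorithm (an assumed choice;
    # the claim only states that the step obeys the Levy distribution).
    sigma_u = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
               / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / lam)

def cuckoo_perturb(X, pa=0.25, eps=0.01, rng=None):
    # Levy-flight update X_i(t+1) = X_i(t) + eps (x) Levy(lambda), applied to a
    # pair only when its uniform draw exceeds the discovery probability pa.
    if rng is None:
        rng = np.random.default_rng(0)
    Xnew = X.copy()
    for i in range(len(X)):
        if rng.random() > pa:              # nest discovered: rebuild this pair
            Xnew[i] = X[i] + eps * levy_step(rng, size=X.shape[1])
    return Xnew
```

Within the combined algorithm, this perturbation replaces part of the grey wolf position update in step (3.4), injecting occasional long Lévy jumps to escape local optima.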
CN201810462952.3A 2018-05-15 2018-05-15 Modulation signal classification method for cuckoo search improved wolf optimization support vector machine Active CN108694390B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810462952.3A CN108694390B (en) 2018-05-15 2018-05-15 Modulation signal classification method for cuckoo search improved wolf optimization support vector machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810462952.3A CN108694390B (en) 2018-05-15 2018-05-15 Modulation signal classification method for cuckoo search improved wolf optimization support vector machine

Publications (2)

Publication Number Publication Date
CN108694390A CN108694390A (en) 2018-10-23
CN108694390B true CN108694390B (en) 2022-06-14

Family

ID=63847375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810462952.3A Active CN108694390B (en) 2018-05-15 2018-05-15 Modulation signal classification method for cuckoo search improved wolf optimization support vector machine

Country Status (1)

Country Link
CN (1) CN108694390B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163131B (en) * 2019-05-09 2022-08-05 南京邮电大学 Human Action Classification Method Based on Hybrid Convolutional Neural Network and Niche Grey Wolf Optimization
CN110166389B (en) * 2019-06-12 2021-06-25 西安电子科技大学 Modulation Identification Method Based on Least Squares Support Vector Machine
CN110378526A (en) * 2019-07-15 2019-10-25 安徽理工大学 The mobile method for predicting of bus station based on GW and SVR, system and storage medium
CN111024433A (en) * 2019-12-30 2020-04-17 辽宁大学 Industrial equipment health state detection method for optimizing support vector machine by improving wolf algorithm
CN111242005B (en) * 2020-01-10 2023-05-23 西华大学 A heart sound classification method based on improved wolf pack algorithm and optimized support vector machine
CN111428418A (en) * 2020-02-28 2020-07-17 贵州大学 Bearing fault diagnosis method and device, computer equipment and storage medium
CN111414658B (en) * 2020-03-17 2023-06-30 宜春学院 Rock mass mechanical parameter inverse analysis method
CN112039820B (en) * 2020-08-14 2022-06-21 哈尔滨工程大学 A Modulation and Identification Method of Communication Signals Based on Evolutionary BP Neural Network of Quantum Swarm Mechanism
CN111967670A (en) * 2020-08-18 2020-11-20 浙江中新电力工程建设有限公司 Switch cabinet partial discharge data identification method based on improved wolf algorithm
CN112163570B (en) * 2020-10-29 2021-10-19 南昌大学 A SVM ECG Signal Recognition Method Optimized Based on Improved Grey Wolf Algorithm
CN114118339B (en) * 2021-11-12 2024-05-14 吉林大学 Radio modulation signal identification and classification method based on improvement ResNet of cuckoo algorithm
CN114964571A (en) * 2022-05-26 2022-08-30 常州大学 Temperature compensation method of pressure sensor based on improved grey wolf algorithm
CN115378777A (en) * 2022-08-25 2022-11-22 杭州电子科技大学 Method for identifying underwater communication signal modulation mode in alpha stable distribution noise environment
CN115562275B (en) * 2022-10-11 2024-10-01 西安科技大学 MLRNN-PID algorithm-based intelligent navigation control method for coal mine crawler-type heading machine
CN116506307B (en) * 2023-06-21 2023-09-12 大有期货有限公司 Network delay condition analysis system of full link
CN117574213B (en) * 2024-01-15 2024-03-29 南京邮电大学 APSO-CNN-based network traffic classification method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104166691A (en) * 2014-07-29 2014-11-26 桂林电子科技大学 Extreme learning machine classifying method based on waveform addition cuckoo optimization
CN107908688A (en) * 2017-10-31 2018-04-13 温州大学 A kind of data classification Forecasting Methodology and system based on improvement grey wolf optimization algorithm

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104166691A (en) * 2014-07-29 2014-11-26 桂林电子科技大学 Extreme learning machine classifying method based on waveform addition cuckoo optimization
CN107908688A (en) * 2017-10-31 2018-04-13 温州大学 A kind of data classification Forecasting Methodology and system based on improvement grey wolf optimization algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An Improved Grey Wolf Optimizer Algorithm Integrated with Cuckoo Search;Hui Xu等;《The 9th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications》;20170923;第490-493页 *
PSO-Based Support Vector Machine with Cuckoo Search Technique for Clinical Disease Diagnoses;Xiaoyong Liu等;《The Scientific World Journal》;20141231;第1-7页 *

Also Published As

Publication number Publication date
CN108694390A (en) 2018-10-23

Similar Documents

Publication Publication Date Title
CN108694390B (en) Modulation signal classification method for cuckoo search improved wolf optimization support vector machine
CN113378632B (en) Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method
Hernández-Lobato et al. Predictive entropy search for multi-objective bayesian optimization
CN112001270B (en) Automatic target classification and recognition method for ground radar based on one-dimensional convolutional neural network
CN111160176B (en) Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network
CN105488528A (en) Improved adaptive genetic algorithm based neural network image classification method
CN110082738B (en) Radar target identification method based on Gaussian mixture and tensor recurrent neural network
CN110097060A (en) A kind of opener recognition methods towards trunk image
CN105044722B (en) The full Bayesian Discriminating Features extracting method of synthetic aperture radar target
CN111273288B (en) Radar unknown target identification method based on long-term and short-term memory network
CN108694474A (en) Fuzzy neural network dissolved oxygen in fish pond prediction based on population
CN116437290A (en) A Model Fusion Method Based on CSI Fingerprint Location
CN113988163B (en) Radar high-resolution range profile recognition method based on multi-scale group fusion convolution
CN114220164B (en) Gesture recognition method based on variational modal decomposition and support vector machine
CN111208483A (en) Recognition method of out-of-radar targets based on Bayesian support vector data description
CN104331711B (en) SAR image recognition methods based on multiple dimensioned fuzzy mearue and semi-supervised learning
CN113780455A (en) Moving target identification method of C-SVM (support vector machine) based on fuzzy membership function
Jaffel et al. A symbiotic organisms search algorithm for feature selection in satellite image classification
Trapp et al. Learning deep mixtures of gaussian process experts using sum-product networks
CN110780270A (en) Target library attribute discrimination local regular learning subspace feature extraction method
CN116340846A (en) Aliasing modulation signal identification method for multi-example multi-label learning under weak supervision
Dinata et al. Optimizing the Evaluation of K-means Clustering Using the Weight Product.
CN113296947A (en) Resource demand prediction method based on improved XGboost model
Mühlenstädt et al. How much data do you need? Part 2: Predicting DL class specific training dataset sizes
Bui et al. Density-softmax: efficient test-time model for uncertainty estimation and robustness under distribution shifts

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant