WO2017197626A1 - Improved extreme learning machine method based on artificial bee colony optimization - Google Patents
Improved extreme learning machine method based on artificial bee colony optimization
- Publication number
- WO2017197626A1 (PCT/CN2016/082668)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- bee
- fitness
- learning machine
- extreme learning
- threshold
- Prior art date
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/086—Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/008—Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
Definitions
- the invention belongs to the field of artificial intelligence technology and relates to an improved extreme learning machine method, in particular to an extreme learning machine method based on improved artificial bee colony optimization.
- ANN Artificial Neural Networks
- SLFNs Single-hidden Layer Feed Forward Neural Networks
- ELM Extreme Learning Machine
- "Huang G B, Zhu Q Y, Siew C K. Extreme learning machine: theory and applications. Neurocomputing, 2006, 70(1-3): 489-501". Because the extreme learning machine can randomly generate, before training, the connection weights between the input layer and the hidden layer and the hidden-neuron thresholds b, it overcomes some shortcomings of the traditional feedforward neural network. Owing to its fast learning speed and excellent generalization performance, the extreme learning machine has attracted the research and attention of many scholars and experts at home and abroad. It has wide applicability: it suits not only regression and fitting problems but also classification, pattern recognition and other fields, and has therefore been widely used.
- because the connection weights and thresholds b of the extreme learning machine are randomly generated before training and remain unchanged during training, some hidden-layer nodes play a very small role; if the dataset is biased, most nodes may even end up close to 0. Huang et al. therefore pointed out that a large number of hidden-layer nodes must be set to achieve the desired accuracy.
- "Cao J, Lin Z, Huang G B. Self-adaptive evolutionary extreme learning machine[J]. Neural processing letters, 2012, 36(3): 285-305." proposes a self-adaptive evolutionary extreme learning machine (SaE-ELM); the algorithm combines a self-adaptive evolutionary algorithm with the extreme learning machine, optimizes the hidden-layer nodes while requiring few parameters to be set, and improves the accuracy and stability of the extreme learning machine on regression and classification problems, but its training takes a long time and its practicality is poor. "Wang Jie, Bi Haoyang. An extreme learning machine based on particle swarm optimization[J]. Journal of Zhengzhou University: Science Edition, 2013, 45(1): 100-104." proposes a particle-swarm-optimized extreme learning machine (PSO-ELM).
- SaE-ELM Self-adaptive Evolutionary Extreme Learning Machine
- the particle swarm optimization algorithm is used to select the input-layer weights and hidden-layer biases of the extreme learning machine, so as to obtain an optimal network, but the algorithm achieves good results only on function fitting and performs poorly in practical applications; "Lin Meijin, Luo Fei, Su Caihong, et al. A new hybrid intelligent extreme learning machine[J]. Control and Decision, 2015, 30(06): 1078-1084." combines the differential evolution algorithm with the particle swarm optimization algorithm and, drawing on the memetic evolution mechanism of the shuffled frog leaping algorithm, proposes a hybrid intelligent optimization algorithm (DEPSO-ELM) for parameter optimization.
- DEPSO-ELM hybrid intelligent optimization algorithm
- the extreme learning machine algorithm is used to obtain the output weight of SLFNs, but the algorithm relies excessively on experimental data and has poor robustness.
- the traditional extreme learning machine regression method is as follows:
- ωj ∈ Rd is the connection weight from the input layer to the hidden-layer node
- bj ∈ R is the neuron threshold of the hidden-layer node
- g() is the activation function of the hidden-layer node
- g(ωj·xi+bj) is the output of the i-th sample at the hidden-layer node
- ωj·xi is the vector inner product
- βj is the connection weight between the hidden-layer node and the output layer.
- Step 1a randomly initialize the connection weights ω and thresholds b; they are chosen at random when network training begins and remain unchanged during training;
- Step 2a obtain the output weights β as the least-squares solution of the linear system (2), Hβ = Y, giving β = H+Y (3)
- H + represents the Moore-Penrose (MP) generalized inverse of the hidden layer output matrix.
- Step 3a substituting the β obtained from formula (3) into formula (1) yields the network output.
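Steps 1a to 3a amount to one random projection followed by a least-squares fit. A minimal numpy sketch (the sigmoid activation and the uniform initialization ranges are assumptions drawn from the rest of the document; function names are illustrative, not the patent's):

```python
import numpy as np

def elm_train(X, Y, L, seed=0):
    """Steps 1a-2a: random hidden layer, then least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, size=(L, X.shape[1]))  # input-to-hidden weights (Step 1a)
    b = rng.uniform(0, 1, size=L)                 # hidden-neuron thresholds (Step 1a)
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))      # hidden-layer output g(w.x + b)
    beta = np.linalg.pinv(H) @ Y                  # Step 2a: beta = H^+ Y
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Step 3a: substitute beta back into the network to get its output."""
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta
```

With a few dozen hidden nodes this fits a smooth one-dimensional function to a very small training error.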
- ABC Artificial Bee Colony
- Step 1b generating an initial solution:
- an initial solution is generated for the SN individuals, as follows:
- Step 2b Employed-bee search phase: starting from its initial position, each employed bee searches for a new honey source near its current location, which is updated as follows:
- v i,j represents new honey source position information
- x i,j represents original honey source position information
- rand[-1,1] represents a random number between -1 and 1
- x k,j represents the j-th dimension of the k-th honey source, k ∈ {1, 2, ..., SN} and k ≠ i.
- the fitness value of the honey source is then calculated. If the new honey source's fitness is better than the original's, the new position is adopted; otherwise the original position information continues to be used and the exploitation count is increased by one.
- Step 3b Following-bee phase: based on the position information passed on by the employed bees, each following bee selects, according to probability, honey-source information of higher fitness and generates a changed position on that basis to find a new honey source.
- the probability selection formula is as follows:
- fitness(x i ) represents the fitness value of the i-th following bee.
- P i represents the probability that the i-th following bee is selected.
- Step 4b Scout-bee search phase: when a honey source has been exploited a certain number of times without its fitness value changing, the employed bee turns into a scout bee and finds a new honey-source position; the search formula is the same as formula (4).
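Steps 1b to 4b can be sketched as one minimization loop. This is a generic ABC sketch, not the patent's code: it assumes a search box of [-1, 1] per dimension and a non-negative objective f, so that 1/(1+f) can serve as the fitness:

```python
import numpy as np

def abc_minimize(f, dim, sn=20, limit=10, iters=100, seed=0):
    """Minimal artificial bee colony minimizing f over the box [-1, 1]^dim."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, (sn, dim))              # Step 1b: initial honey sources
    cost = np.array([f(x) for x in X])
    trials = np.zeros(sn, dtype=int)

    def neighbour(i):                              # Eq. (5): perturb one dimension
        k = rng.choice([j for j in range(sn) if j != i])
        j = rng.integers(dim)
        v = X[i].copy()
        v[j] += rng.uniform(-1, 1) * (X[i][j] - X[k][j])
        return v

    def greedy(i, v):                              # keep the better source
        c = f(v)
        if c < cost[i]:
            X[i], cost[i], trials[i] = v, c, 0
        else:
            trials[i] += 1                         # exploitation counter + 1

    for _ in range(iters):
        for i in range(sn):                        # Step 2b: employed bees
            greedy(i, neighbour(i))
        fit = 1.0 / (1.0 + cost)                   # fitness of each source
        p = fit / fit.sum()                        # Step 3b: Eq. (6) probabilities
        for _ in range(sn):                        # following (onlooker) bees
            i = rng.choice(sn, p=p)
            greedy(i, neighbour(i))
        for i in range(sn):                        # Step 4b: scouts re-initialize
            if trials[i] > limit:
                X[i] = rng.uniform(-1, 1, dim)
                cost[i] = f(X[i])
                trials[i] = 0
    best = int(np.argmin(cost))
    return X[best], float(cost[best])
```

On a two-dimensional sphere function this converges well below the initial random cost within the default budget.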
- the present invention builds on the traditional extreme learning machine and proposes an extreme learning machine method based on improved artificial bee colony optimization (DECABC-ELM), which effectively improves classification and regression results.
- DECABC-ELM extreme learning machine method
- a method for an extreme learning machine based on improved artificial bee colony optimization includes the following steps:
- Step 1 Generate an initial solution for the SN individuals:
- Step 2 Globally optimize the connection weights ω and thresholds b of the extreme learning machine:
- x best,j represents the current best individual of the bee colony
- x k,j, x l,j and x m,j are three other distinct individuals randomly selected apart from the current individual, i.e. i ≠ k ≠ l ≠ m; whenever the employed bee reaches a new position, the connection weights ω and thresholds b of the extreme learning machine are verified against the training sample set to obtain a fitness value, and if the fitness value is higher, the new position information replaces the old position information
- Step 3 performing local optimization on the extreme learning machine connection weights ω and thresholds b;
- N i represents the number of clones of the i-th following bee
- SN represents the total number of individuals
- fitness(x i ) represents the fitness value of the i-th following bee
- following bees whose selection probability exceeds a random number between 0 and 1 are selected for optimization; the optimization method is the same as formula (8);
- food sources are selected via the selection-probability calculation formula, which combines the population's concentration probability and fitness probability, and form the new position information; the number of new positions equals the number of positions before clonal expansion;
- the fitness probability calculation formula is:
- the concentration probability calculation formula is:
- N i represents the number of following bees whose fitness values are similar to the i-th following bee's; their fraction of the population is the concentration; T is the concentration threshold; HN indicates the number of following bees whose concentration is greater than T;
- the selection probability calculation formula is:
- the following-bee population is selected according to formula (11), and the SN following bees with the highest fitness are selected to constitute the new food-source information.
- Step 4 setting the number of cycles to limit; if the food-source information is not updated within limit cycles, converting the employed bee into a scout bee, and reinitializing the individual using formula (7) in step 1;
- Step 5 When the number of iterations reaches the set value, or the mean square error reaches an accuracy of 1e-4, the extreme learning machine connection weights ω and thresholds b are extracted from the optimal individual, and the test set is used for verification.
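The global-search move of step 2, formula (8), is a DE/rand-to-best/1-style mutation grafted onto the employed-bee update. A sketch of just that move (the patent writes the formula per coordinate j; here one random coefficient is drawn for every dimension at once, and all names are illustrative):

```python
import numpy as np

def employed_bee_move(X, best_idx, i, rng):
    """Eq. (8): v_i = x_i + rand[-1,1] * (x_best - x_k + x_l - x_m)."""
    sn, dim = X.shape
    # three distinct partners, all different from the current individual i
    k, l, m = rng.choice([j for j in range(sn) if j != i], size=3, replace=False)
    phi = rng.uniform(-1, 1, dim)        # rand[-1,1], drawn per dimension
    return X[i] + phi * (X[best_idx] - X[k] + X[l] - X[m])
```

The candidate would then be accepted only if its fitness on the training set improves, exactly as in the original ABC greedy step.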
- the invention better overcomes the shortcomings of the traditional extreme learning machine when applied to classification and regression. Compared with the traditional extreme learning machine and the SaE-ELM algorithm, the method of the invention is highly robust and effectively improves classification and regression results.
- Figure 1 is a flow chart of the present invention.
- FIG. 1 is a flow chart of the present invention. As shown in FIG. 1, the extreme learning machine method based on improved artificial bee colony optimization according to the present invention proceeds as follows:
- Step 1 Using the improved artificial bee colony algorithm to optimize the extreme learning machine connection weights ω and thresholds b, generate an initial solution for SN individuals according to formula (7)
- Step 2 Based on the original artificial bee colony optimization algorithm, combine the DE/rand-to-best/1 differential mutation operator in the DE algorithm with the employed-bee search formula, and use the improved formula (8) to globally optimize the extreme learning machine connection weights ω and thresholds b.
- x best,j represents the current best individual of the bee colony
- x k,j, x l,j and x m,j are three other distinct individuals randomly selected apart from the current individual, namely i ≠ k ≠ l ≠ m.
- Step 3 Based on the original artificial bee colony algorithm, the clonal expansion operator in the immune clonal algorithm is introduced into the artificial bee colony algorithm, and an improved formula is used to locally optimize the extreme learning machine connection weights ω and thresholds b to achieve better accuracy.
- N i represents the number of clones of the i-th following bee
- SN represents the total number of individuals
- fitness(x i ) represents the fitness value of the i-th following bee.
- the clonally expanded population is selected according to the fitness probability calculation formula (6): following bees whose selection probability exceeds a random number between 0 and 1 are chosen, that is, followers with higher fitness have a higher probability of change.
- the optimization method is the same as formula (8).
- food sources with higher fitness are selected according to the selection probability formula, via the population's concentration probability and fitness probability, to form new position information; the number of new positions selected equals the number of positions before clonal expansion.
- the fitness probability calculation formula is the same as equation (6), the concentration probability and the selection probability are as shown in equations (10) to (11).
- N i represents the number of following bees whose fitness values are similar to the i-th following bee's; their fraction of the population is the concentration; T is the concentration threshold; HN indicates the number of following bees whose concentration is greater than T.
- the following-bee population is selected according to the above selection probability formula, and the SN following bees with the highest fitness are selected to constitute the new food-source information.
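The clonal expansion and concentration-based selection above can be sketched as follows. The text does not reproduce formulas (9) and (10), so two common forms are assumed here: clone counts proportional to each bee's share of total fitness, and a concentration probability favouring individuals whose fitness value is rare in the population (which also assumes the fitness values are not all identical). Only Pchoose = αPi + (1-α)Pd, formula (11), is taken from the text:

```python
import numpy as np

def clone_and_select(X, fitness, alpha=0.7, eps=0.05, seed=0):
    """Clonal expansion then concentration/fitness selection (Eqs. (9)-(11) sketch)."""
    rng = np.random.default_rng(seed)
    sn = len(X)
    share = fitness / fitness.sum()
    n_clones = np.maximum(1, np.round(sn * share)).astype(int)  # Eq. (9), assumed form
    clones = np.repeat(X, n_clones, axis=0)
    cfit = np.repeat(fitness, n_clones)

    # concentration: fraction of clones whose fitness is within eps of this one's
    density = np.array([(np.abs(cfit - f) < eps).mean() for f in cfit])
    p_d = (1.0 - density) / (1.0 - density).sum()   # Eq. (10), assumed form
    p_fit = cfit / cfit.sum()                       # fitness probability, Eq. (6)
    p_choose = alpha * p_fit + (1 - alpha) * p_d    # Eq. (11)

    # roulette-wheel draw of SN clones, returned sorted by fitness, best first
    idx = rng.choice(len(clones), size=sn, replace=False, p=p_choose)
    order = idx[np.argsort(cfit[idx])[::-1]]
    return clones[order], cfit[order]
```

The returned population has the same size as before expansion, as the text requires.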
- Step 4 If the food-source information is not updated within the given limit cycles, the employed bee is converted into a scout bee, and the individual is reinitialized using formula (7) in step 1.
- in this embodiment, limit is specifically set to 10.
- Step 5 When the number of iterations reaches the set value, or the mean square error reaches an accuracy of 1e-4, the extreme learning machine connection weights ω and thresholds b extracted from the optimal individual are verified using the test set.
- the hidden-layer node counts of the four algorithms are gradually increased to fit the function; the parameters of ABC-ELM and DECABC-ELM are set identically. The results are shown in Table 1.
- Embodiment 1 The specific steps of Embodiment 1 are:
- Step 1 Generate an initial solution for each of the SN individuals, where each individual is encoded as follows
- the initial solution is generated using the following formula:
- x i,j represents any one dimension of the individual θ
- Step 2 Optimize each individual using the improved hire bee search formula, where the optimization formula is as follows:
- v i,j = x i,j + rand[-1,1](x best,j - x k,j + x l,j - x m,j );
- x best,j represents the j-th dimension of the current best individual of the bee colony
- x k,j, x l,j and x m,j are the j-th dimensions of three other distinct individuals randomly selected apart from the current individual, i.e. i ≠ k ≠ l ≠ m
- Step 3 Perform clonal expansion on each individual θ, and select the corresponding individuals to change position information with a certain probability.
- follow bees are cloned according to their fitness, and the number of clones is proportional to their fitness:
- N i represents the number of clones of the i-th following bee
- SN represents the total number of individuals
- fitness(x i ) represents the fitness value of the i-th following bee
- the clonally expanded population is optimized according to probability, where the optimization formula is the same as in step 2 above and the probability formula is as follows:
- fitness(x i ) represents the fitness value of the i-th following bee.
- P i represents the probability that the i-th following bee is selected for updating.
- the fitness value of each individual is calculated: the connection weights ω and thresholds b of the ELM contained in each individual θ are extracted and the ELM network is constructed; the SinC function's input values are fed into the ELM, the mean square error between the ELM's output and the function's correct results is computed, and the fitness information is calculated.
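That fitness evaluation, decode the individual, build the ELM, score it by mean square error, might be sketched as follows. The flat encoding layout (L·d weights followed by L thresholds) and the 1/(1+MSE) fitness mapping are assumptions, since the text does not fix either:

```python
import numpy as np

def fitness_of(theta, X, Y, L):
    """Decode individual theta into (W, b), fit the ELM output layer, score it."""
    d = X.shape[1]
    W = theta[:L * d].reshape(L, d)            # connection weights omega
    b = theta[L * d:L * d + L]                 # hidden-neuron thresholds
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))   # sigmoid hidden-layer output
    beta = np.linalg.pinv(H) @ Y               # ELM output weights
    mse = float(np.mean((H @ beta - Y) ** 2))
    return 1.0 / (1.0 + mse)                   # lower error -> higher fitness
```

Any monotone decreasing map from MSE to fitness would serve the same role in the greedy comparisons.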
- food sources with higher fitness are selected through the concentration and fitness of the population to form new position information; the number of new positions selected equals the number of positions before clonal expansion.
- the fitness probability is the same as the probability formula above; the concentration probability and selection probability are:
- N i represents the number of following bees whose fitness values are similar to the i-th following bee's; their fraction of the population is the concentration; T is the concentration threshold; HN indicates the number of following bees whose concentration is greater than T.
- the following-bee population is selected according to the above selection probability formula, and the SN following bees with the highest fitness are selected to constitute the new food-source information.
- if the food-source information is not updated within a certain time, the employed bee is converted into a scout bee, and the individual is initialized again using the formula in step 1.
- after a set number of iterations, we take the connection weights ω and thresholds b of the ELM contained in the optimal individual θ to construct the ELM network, test the ELM using the test set reserved by the SinC function, and compute the mean square error between the ELM's results and the SinC function's results.
- the experiment was repeated several times; the mean square errors were averaged, the standard deviation among them was calculated, and the results were compared with other algorithms. The comparison results are shown in Table 1.
- Example 2 Regression data set simulation experiment.
- the performance of the four algorithms was compared using four real regression datasets from the machine learning repository of the University of California, Irvine.
- the dataset names are: Auto MPG (MPG), Computer Hardware (CPU), Housing, and Servo.
- the data in the data set in the experiment was randomly divided into a training sample set and a test sample set, of which 70% was used as the training sample set and the remaining 30% was used as the test sample set.
- to reduce the influence of large differences among the variables, we normalize the data before the algorithm runs; that is, the input variables are normalized to [-1, 1] and the output variables to [0, 1].
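The normalization described above is ordinary per-column min-max scaling; a small sketch:

```python
import numpy as np

def minmax_scale(a, lo, hi):
    """Scale each column of a linearly into [lo, hi]."""
    mn, mx = a.min(axis=0), a.max(axis=0)
    return lo + (hi - lo) * (a - mn) / (mx - mn)
```

Inputs would be passed through minmax_scale(X, -1, 1) and regression targets through minmax_scale(Y, 0, 1); a guard for zero-range columns is omitted here.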
- the hidden-layer nodes were gradually increased, and the experimental results with the best average RMSE were recorded in Tables 2-5.
- DECABC-ELM obtained the smallest RMSE in all dataset-fitting experiments, but on Auto MPG and Computer Hardware the standard deviation of DECABC-ELM is worse than the other algorithms', i.e. its stability needs improvement. In terms of training time and number of hidden-layer nodes, PSO-ELM and DEPSO-ELM converge faster and use fewer hidden-layer nodes, but their accuracy is not as good as DECABC-ELM's. Considered comprehensively, DECABC-ELM, the algorithm described in the present invention, has superior performance.
- Example 3 Classification data set simulation experiment.
- the four real classification datasets are: Blood Transfusion Service Center (Blood), Ecoli, Iris and Wine. As with the regression datasets, 70% of the experimental data is used as the training sample set and 30% as the test sample set, and the input variables of the datasets are normalized to [-1, 1]. The hidden-layer nodes were gradually increased in the experiment, and the experimental results with the best classification rate were recorded in Tables 6-9.
- DECABC-ELM has achieved the highest classification accuracy rate in the four classification data sets.
- DECABC-ELM is still not ideal in terms of stability.
- the DECABC-ELM algorithm takes longer than the PSO-ELM, DEPSO-ELM and ABC-ELM algorithms and is shorter than the SaE-ELM.
- DECABC-ELM can achieve higher classification accuracy by using fewer hidden layer nodes.
- DECABC-ELM, that is, the method of the present invention, has superior performance.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Physiology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Robotics (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
A method for an extreme learning machine based on improved artificial bee colony optimization, comprising the following steps: step 1, generate initial solutions for SN individuals; step 2, globally optimize the connection weights ω and thresholds b of the extreme learning machine; step 3, locally optimize the extreme learning machine connection weights ω and thresholds b; step 4, if the food-source information has not been updated within a certain time, convert the employed bee into a scout bee and return to step 1 to reinitialize the individual; step 5, extract the extreme learning machine connection weights ω and thresholds b from the optimal individual and verify them using the test set. The method largely overcomes the poor results of the traditional extreme learning machine in classification and regression; compared with the traditional extreme learning machine and the SaE-ELM algorithm, it is more robust and effectively improves classification and regression results.
Description
The invention belongs to the field of artificial intelligence technology and relates to an improved extreme learning machine method, in particular to an extreme learning machine method based on improved artificial bee colony optimization.
Artificial Neural Networks (ANN) are mathematical models that imitate the behavioural characteristics of biological neural networks and perform distributed parallel computation. Among them, Single-hidden Layer Feedforward Neural Networks (SLFNs) have been widely applied in many fields owing to their good learning ability. However, because most traditional feedforward neural networks use gradient descent to adjust the hidden-node values, they tend to train slowly, fall easily into local minima, and require many parameters to be set. In recent years a new type of feedforward neural network, the Extreme Learning Machine (ELM), was proposed by Huang et al.: "Huang G B, Zhu Q Y, Siew C K. Extreme learning machine: theory and applications. Neurocomputing, 2006, 70(1-3): 489-501". Because the extreme learning machine can randomly generate, before training, fixed connection weights between the input layer and the hidden layer as well as hidden-neuron thresholds b, it overcomes some shortcomings of traditional feedforward networks. Owing to its fast learning speed and excellent generalization performance, the extreme learning machine has attracted the research and attention of many scholars and experts at home and abroad. It is widely applicable: not only to regression and fitting problems, but also to classification, pattern recognition and other fields, and has therefore been widely used.
Because the connection weights and thresholds b of the extreme learning machine are randomly generated before training and remain unchanged during training, some hidden-layer nodes play a very small role; if the dataset is biased, most nodes may even end up close to 0. Huang et al. therefore pointed out that a large number of hidden-layer nodes must be set to reach the desired accuracy.
To address this shortcoming, some scholars have combined intelligent optimization algorithms with the extreme learning machine to good effect. "Zhu Q Y, Qin A K, Suganthan P N, et al. Evolutionary extreme learning machine[J]. Pattern recognition, 2005, 38(10): 1759-1763." proposed an evolutionary extreme learning machine (E-ELM); the algorithm uses a differential evolution algorithm to optimize the ELM hidden-node parameters, improving ELM performance, but it requires many parameters and a complicated experimental procedure. "Cao J, Lin Z, Huang G B. Self-adaptive evolutionary extreme learning machine[J]. Neural processing letters, 2012, 36(3): 285-305." proposed a self-adaptive evolutionary extreme learning machine (SaE-ELM); the algorithm combines a self-adaptive evolutionary algorithm with the extreme learning machine, optimizes the hidden-layer nodes while requiring few parameters to be set, and improves the accuracy and stability of the extreme learning machine on regression and classification problems, but its training takes a long time and its practicality is poor. "Wang Jie, Bi Haoyang. An extreme learning machine based on particle swarm optimization[J]. Journal of Zhengzhou University: Science Edition, 2013, 45(1): 100-104." proposed a particle-swarm-optimized extreme learning machine (PSO-ELM), which uses particle swarm optimization to select the input-layer weights and hidden-layer biases of the extreme learning machine and so obtain an optimal network, but the algorithm achieves good results only on function fitting and performs poorly in practical applications. "Lin Meijin, Luo Fei, Su Caihong, et al. A new hybrid intelligent extreme learning machine[J]. Control and Decision, 2015, 30(06): 1078-1084." combined the differential evolution algorithm with the particle swarm optimization algorithm and, drawing on the memetic evolution mechanism of the shuffled frog leaping algorithm, proposed a hybrid intelligent optimization algorithm (DEPSO-ELM) for parameter optimization, using the extreme learning machine algorithm to obtain the output weights of the SLFNs; however, the algorithm depends excessively on experimental data and is not robust.
It is therefore very important to overcome the shortcomings of the traditional extreme learning machine and improve its performance.
The traditional extreme learning machine regression method is as follows:
For N arbitrary distinct training samples (xi,yi)(i=1,2,...,N), xi∈Rd, yi∈Rm, the output of a feedforward neural network with L hidden-layer nodes is:
Σj=1..L βj g(ωj·xi+bj) = yi, i = 1, 2, ..., N (1)
In formula (1), ωj∈Rd is the connection weight from the input layer to the hidden-layer node; bj∈R is the neuron threshold of the hidden-layer node; g() is the activation function of the hidden-layer node; g(ωj·xi+bj) is the output of the i-th sample at the hidden-layer node; ωj·xi is the vector inner product; βj is the connection weight between the hidden-layer node and the output layer.
Step 1a: randomly initialize the connection weights ω and thresholds b; they are chosen at random when network training begins and remain unchanged during training;
The solution of the system of equations (2) is:
β = H+Y (3)
In formula (3), H+ denotes the Moore-Penrose (MP) generalized inverse of the hidden-layer output matrix.
The steps of the traditional Artificial Bee Colony (ABC) optimization algorithm are as follows:
Step 1b: generate initial solutions. In the initialization phase, initial solutions are generated for SN individuals:
xi,j = xmin,j + rand[0,1](xmax,j - xmin,j) (4)
Step 2b: employed-bee search phase. Starting from its initial position, each employed bee searches for a new honey source near its current position, with the update:
vi,j = xi,j + rand[-1,1](xi,j - xk,j) (5)
In formula (5), vi,j denotes the new honey-source position information; xi,j the original honey-source position information; rand[-1,1] a random number between -1 and 1; xk,j the j-th dimension of the k-th honey source, with k∈{1,2,...,SN} and k≠i.
After an employed bee obtains a new honey-source position, the fitness of the source is computed. If the new source's fitness is better than the original's, the new position is adopted; otherwise the original position information is kept and the exploitation count is increased by 1.
Step 3b: following-bee phase. Based on the position information passed on by the employed bees, each following bee selects, with a probability, honey-source information of higher fitness and generates a changed position on that basis to look for a new honey source. The selection probability is computed as:
Pi = fitness(xi) / Σn=1..SN fitness(xn) (6)
In formula (6), fitness(xi) denotes the fitness value of the i-th following bee and Pi the probability that it is selected. Once a following bee is selected, it updates its position by formula (5); if the new source's fitness is better, the new position is adopted, otherwise the original position information is kept and the exploitation count is increased by 1.
Step 4b: scout-bee search phase. When a honey source has been exploited a certain number of times without its fitness value changing, the employed bee becomes a scout bee and searches for a new honey-source position using formula (4).
Summary of the invention
To address the shortcomings of the traditional extreme learning machine when applied to classification and regression, the present invention builds on it and proposes an extreme learning machine method based on improved artificial bee colony optimization (DECABC-ELM), which effectively improves classification and regression results.
The technical solution of the invention is as follows:
A method for an extreme learning machine based on improved artificial bee colony optimization includes the following steps:
Given a training sample set (xi,yi)(i=1,2,...,N), xi∈Rd, yi∈Rm, with activation function g() and L hidden-layer nodes;
Step 1: generate initial solutions for SN individuals:
Each individual is encoded as follows:
In the encoding, ωj(j=1,...,L) is a D-dimensional vector, each component a random number between -1 and 1; bj is a random number between 0 and 1; G denotes the colony's iteration count;
Step 2: globally optimize the connection weights ω and thresholds b of the extreme learning machine:
vi,j = xi,j + rand[-1,1](xbest,j - xk,j + xl,j - xm,j), (8)
In formula (8), xbest,j denotes the current best individual of the colony; xk,j, xl,j and xm,j are three other distinct individuals chosen at random apart from the current one, i.e. i≠k≠l≠m. Whenever an employed bee reaches a new position, the extreme learning machine connection weights ω and thresholds b are verified against the training sample set to obtain a fitness value; if the fitness value is higher, the new position information replaces the old;
Step 3: locally optimize the extreme learning machine connection weights ω and thresholds b;
First, following bees are cloned in proportion to their fitness:
In formula (9), Ni denotes the clone count of the i-th following bee; SN the total number of individuals; fitness(xi) the fitness value of the i-th following bee;
Next, from the clonally expanded population, following bees whose fitness probability exceeds a random number between 0 and 1 are selected for optimization; the optimization method is the same as formula (8);
After the following bees' positions change, food sources are selected by the selection-probability calculation formula, via the population's concentration probability and fitness probability, to form new position information; the number of new positions equals the number of positions before expansion;
The fitness probability is computed as:
The concentration probability is computed as:
The selection probability is computed as:
Pchoose(xi) = αPi(xi) + (1-α)Pd(xi), (11)
Using roulette-wheel selection according to formula (11), the SN following bees with the highest fitness are selected to constitute the new food-source information.
Step 4: set the number of cycles to limit; if the food-source information is not updated within limit cycles, convert the employed bee into a scout bee and reinitialize the individual using formula (7) of step 1;
Step 5: when the number of iterations reaches the set value, or the mean square error reaches an accuracy of 1e-4, extract the extreme learning machine connection weights ω and thresholds b from the optimal individual and verify them using the test set.
The beneficial technical effects of the invention are:
The invention largely overcomes the poor results of the traditional extreme learning machine when applied to classification and regression; compared with the traditional extreme learning machine and the SaE-ELM algorithm, the method of the invention is more robust and effectively improves classification and regression results.
Figure 1 is a flow chart of the present invention.
Figure 1 is a flow chart of the present invention. As shown in Figure 1, the extreme learning machine method based on improved artificial bee colony optimization according to the invention proceeds as follows:
Given a training sample set (xi,yi)(i=1,2,...,N), xi∈Rd, yi∈Rm, with activation function g() and L hidden-layer nodes;
Step 1: use the improved artificial bee colony algorithm to optimize the extreme learning machine connection weights ω and thresholds b; according to formula (7) below, generate initial solutions for SN individuals
Each individual is encoded as follows:
According to the extreme learning machine algorithm, ωj(j=1,...,L) in the encoding is a D-dimensional vector, each component a random number between -1 and 1; bj is a random number between 0 and 1; G denotes the colony's iteration count.
Step 2: on the basis of the original artificial bee colony optimization algorithm, combine the DE/rand-to-best/1 differential mutation operator of the DE algorithm with the employed-bee search formula, and use the improved formula (8) to globally optimize the extreme learning machine connection weights ω and thresholds b.
vi,j = xi,j + rand[-1,1](xbest,j - xk,j + xl,j - xm,j) (8)
Here xbest,j denotes the current best individual of the colony, and xk,j, xl,j and xm,j are three other distinct individuals chosen at random apart from the current one, i.e. i≠k≠l≠m. Whenever an employed bee reaches a new position, we verify the position information, i.e. the extreme learning machine connection weights ω and thresholds b, against the training sample set to obtain a fitness value; if the fitness value is higher, the new position information replaces the old.
Step 3: on the basis of the original artificial bee colony algorithm, introduce the clonal expansion operator of the immune clonal algorithm into the artificial bee colony algorithm, and use an improved formula to locally optimize the extreme learning machine connection weights ω and thresholds b for better accuracy.
First, following bees are cloned in proportion to their fitness:
In formula (9), Ni denotes the clone count of the i-th following bee; SN the total number of individuals; fitness(xi) the fitness value of the i-th following bee.
Next, from the clonally expanded population, following bees whose fitness probability (formula (6)) exceeds a random number between 0 and 1 are selected for optimization, i.e. bees of higher fitness are more likely to change; the optimization method is the same as formula (8).
After the following bees' positions change, food sources of higher fitness are selected by the selection-probability formula, via the population's concentration probability and fitness probability, to form new position information; the number of new positions selected equals the number of positions before expansion. The fitness probability is computed as in formula (6); the concentration probability and selection probability are given in formulas (10)-(11).
Concentration probability:
Selection probability:
Pchoose(xi) = αPi(xi) + (1-α)Pd(xi) (11)
Using roulette-wheel selection according to the above selection probability formula, the SN following bees with the highest fitness are selected to constitute the new food-source information.
Step 4: if the food-source information is not updated within the given limit cycles, convert the employed bee into a scout bee and reinitialize the individual using formula (7) of step 1. In this embodiment, limit is specifically set to 10.
Step 5: when the number of iterations reaches the set value, or the mean square error reaches an accuracy of 1e-4, the extreme learning machine connection weights ω and thresholds b extracted from the optimal individual are verified using the test set.
The three embodiments below demonstrate that the technical solution of the invention improves on the prior art.
Embodiment 1: SinC function simulation experiment.
The "SinC" function is: f(x) = sin(x)/x for x ≠ 0, and f(x) = 1 for x = 0.
Data generation: produce 5000 values x uniformly distributed on [-10,10] and compute the 5000-point dataset {xi,f(xi)}, i=1,...,5000; produce 5000 noise values ε uniformly distributed on [-0.2,0.2] and let the training sample set be {xi,f(xi)+εi}, i=1,...,5000; produce another 5000-point dataset {yi,f(yi)}, i=1,...,5000 as the test set. The hidden-layer node counts of the four algorithms were increased step by step to fit the function, with ABC-ELM and DECABC-ELM using the same parameter settings. The results are shown in Table 1.
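The data-generation recipe above can be reproduced directly; a numpy sketch (the seed and helper names are ours, not the patent's):

```python
import numpy as np

def sinc(x):
    """SinC: sin(x)/x for x != 0, and 1 at x = 0."""
    out = np.ones_like(x)
    nz = x != 0
    out[nz] = np.sin(x[nz]) / x[nz]
    return out

def make_sinc_data(n=5000, seed=0):
    """Uniform x on [-10, 10]; training targets carry uniform noise on [-0.2, 0.2]."""
    rng = np.random.default_rng(seed)
    x_tr = rng.uniform(-10, 10, n)
    eps = rng.uniform(-0.2, 0.2, n)
    x_te = rng.uniform(-10, 10, n)            # a second, noise-free set for testing
    return (x_tr, sinc(x_tr) + eps), (x_te, sinc(x_te))
```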
Table 1: comparison of SinC fitting results
Table 1 shows that as the hidden-layer nodes increase, the average test error and standard deviation decrease gradually; with too many hidden nodes, overfitting appears. ABC-ELM, which tends to fall into local optima, still performs poorly at higher node counts. In most cases, for the same number of hidden nodes, DECABC-ELM has a smaller average test error and standard deviation.
The specific steps of Embodiment 1 are:
Step 1: generate initial solutions for SN individuals, each encoded as follows
In the encoding, ωj(j=1,...,L) is a D-dimensional vector, each component a random number between -1 and 1; bj is a random number between 0 and 1; G denotes the colony's iteration count. Initial solutions are generated using the following formula:
Step 2: optimize each individual using the improved employed-bee search formula:
vi,j = xi,j + rand[-1,1](xbest,j - xk,j + xl,j - xm,j);
Here xbest,j denotes the j-th dimension of the colony's current best individual, and xk,j, xl,j and xm,j the j-th dimensions of three other distinct individuals chosen at random apart from the current one, i.e. i≠k≠l≠m. Since each individual θ contains the ELM connection weights ω and thresholds b, we build ELM networks from the contents of θ before and after the change and compute the mean square error between each ELM's results and the SinC function's results. If the changed individual's mean square error is smaller, i.e. its fitness value larger, the new position information replaces the old.
Step 3: perform clonal expansion on each individual θ, and select with a certain probability the corresponding individuals whose position information changes. First, following bees are cloned in proportion to their fitness:
Here Ni denotes the clone count of the i-th following bee; SN the total number of individuals; fitness(xi) the fitness value of the i-th following bee.
The clonally expanded population is optimized according to probability; the optimization formula is the same as in step 2 above, and the probability formula is:
Here fitness(xi) denotes the fitness value of the i-th following bee, and Pi the probability that it is selected for updating.
After the cloned following bees' positions change, the fitness value of each individual is computed: the ELM connection weights ω and thresholds b contained in each individual θ are extracted to build an ELM network, the SinC function's input values are fed into the ELM, the mean square error between the ELM's output and the function's correct results is computed, and the fitness information is calculated.
From the population after clonal mutation, food sources of higher fitness are selected by the population's concentration and fitness to form new position information; the number of new positions selected equals the number of positions before expansion. The fitness probability is as in the probability formula above; the concentration probability and selection probability are as follows.
Concentration probability:
Selection probability:
Pchoose(xi) = αPi(xi) + (1-α)Pd(xi)
Using roulette-wheel selection according to the above selection probability formula, the SN following bees with the highest fitness are selected to constitute the new food-source information.
If the food-source information is not updated within a certain time, the employed bee is converted into a scout bee and the individual is initialized again using the formula of step 1.
After a certain number of iterations, we take the ELM connection weights ω and thresholds b contained in the optimal individual θ to construct the ELM network, test the ELM using the test set reserved from the SinC function, and compute the mean square error between the ELM's results and the SinC function's results. The whole experiment is repeated several times, the mean square errors are averaged, the standard deviation among them is computed, and the results are compared with other algorithms. The comparison results are shown in Table 1.
Embodiment 2: regression dataset simulation experiment.
The performance of the four algorithms was compared on four real regression datasets from the machine learning repository of the University of California, Irvine, named Auto MPG (MPG), Computer Hardware (CPU), Housing and Servo. The data in each set were randomly split into a training sample set (70%) and a test sample set (30%). To reduce the influence of large differences among the variables, we normalized the data before running the algorithms: input variables to [-1,1] and output variables to [0,1]. In all experiments the hidden-layer nodes were increased step by step, and the results with the best average RMSE are recorded in Tables 2-5.
Table 2: Auto MPG fitting comparison
Table 3: Computer Hardware fitting comparison
Table 4: Housing fitting comparison
Table 5: Servo fitting comparison
The tables show that DECABC-ELM obtained the smallest RMSE in all dataset-fitting experiments, but on Auto MPG and Computer Hardware its standard deviation is worse than the other algorithms', i.e. its stability needs improvement. In terms of training time and hidden-node count, PSO-ELM and DEPSO-ELM converge faster and use fewer hidden nodes, but their accuracy falls short of DECABC-ELM's. Considered comprehensively, DECABC-ELM, the algorithm of the present invention, has superior performance.
The specific steps of Embodiment 2 are the same as those of Embodiment 1.
Embodiment 3: classification dataset simulation experiment.
The machine learning repository of the University of California, Irvine was used. The four real classification datasets are Blood Transfusion Service Center (Blood), Ecoli, Iris and Wine. As with the regression datasets, 70% of the experimental data form the training sample set and 30% the test sample set, and the input variables of the datasets are normalized to [-1,1]. The hidden-layer nodes were increased step by step, and the results with the best classification rate are recorded in Tables 6-9.
Table 6: Blood classification comparison
Table 7: Ecoli classification comparison
Table 8: Iris classification comparison
Table 9: Wine classification comparison
The tables show that DECABC-ELM achieved the highest classification accuracy on all four classification datasets, although its stability is still not ideal. Its runtime is longer than those of PSO-ELM, DEPSO-ELM and ABC-ELM, and shorter than SaE-ELM's. Compared with the other algorithms, DECABC-ELM reaches higher classification accuracy with fewer hidden-layer nodes. Considering the above, DECABC-ELM, the method of the present invention, has superior performance.
The specific steps of Embodiment 3 are the same as those of Embodiment 1.
The above are merely preferred embodiments of the invention, and the invention is not limited to them. Other improvements and variations that a person skilled in the art can derive directly or by association without departing from the spirit and concept of the invention shall be deemed to fall within the scope of protection of the invention.
Claims (1)
- A method for an extreme learning machine based on improved artificial bee colony optimization, characterized by comprising the following steps: given a training sample set (xi,yi)(i=1,2,...,N), xi∈Rd, yi∈Rm, with activation function g() and L hidden-layer nodes; step 1, generate initial solutions for SN individuals, each individual encoded as follows: in the encoding, ωj(j=1,...,L) is a D-dimensional vector, each component a random number between -1 and 1, bj is a random number between 0 and 1, and G denotes the colony's iteration count; step 2, globally optimize the connection weights ω and thresholds b of the extreme learning machine: vi,j = xi,j + rand[-1,1](xbest,j - xk,j + xl,j - xm,j), (8) where xbest,j denotes the current best individual of the colony and xk,j, xl,j and xm,j are three other distinct individuals chosen at random apart from the current one, i.e. i≠k≠l≠m; whenever an employed bee reaches a new position, the extreme learning machine connection weights ω and thresholds b are verified against the training sample set to obtain a fitness value, and if the fitness value is higher, the new position information replaces the old; step 3, locally optimize the extreme learning machine connection weights ω and thresholds b: first, following bees are cloned in proportion to their fitness, where in formula (9) Ni denotes the clone count of the i-th following bee, SN the total number of individuals, and fitness(xi) the fitness value of the i-th following bee; next, from the clonally expanded population, following bees whose fitness probability exceeds a random number between 0 and 1 are selected for optimization, the optimization method being the same as formula (8); after the following bees' positions change, food sources are selected by the selection-probability calculation formula, via the population's concentration probability and fitness probability, to form new position information, equal in number to the positions before expansion; the fitness probability, concentration probability and selection probability Pchoose(xi) = αPi(xi) + (1-α)Pd(xi), (11) are used with roulette-wheel selection according to formula (11) to select the SN following bees with the highest fitness as the new food-source information; step 4, set the number of cycles to limit; if the food-source information is not updated within limit cycles, convert the employed bee into a scout bee and reinitialize the individual using formula (7) of step 1; step 5, when the number of iterations reaches the set value, or the mean square error reaches an accuracy of 1e-4, extract the extreme learning machine connection weights ω and thresholds b from the optimal individual and verify them using the test set.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/082668 WO2017197626A1 (zh) | 2016-05-19 | 2016-05-19 | Improved extreme learning machine method based on artificial bee colony optimization |
US15/550,361 US20180240018A1 (en) | 2016-05-19 | 2016-05-19 | Improved extreme learning machine method based on artificial bee colony optimization |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/082668 WO2017197626A1 (zh) | 2016-05-19 | 2016-05-19 | Improved extreme learning machine method based on artificial bee colony optimization |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017197626A1 true WO2017197626A1 (zh) | 2017-11-23 |
Family
ID=60325703
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/082668 WO2017197626A1 (zh) | 2016-05-19 | 2016-05-19 | Improved extreme learning machine method based on artificial bee colony optimization |
Country Status (2)
Country | Link |
---|---|
US (1) | US20180240018A1 (zh) |
WO (1) | WO2017197626A1 (zh) |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108320102A (zh) * | 2018-02-05 | 2018-07-24 | 东北石油大学 | 基于改进人工蜂群算法的注汽锅炉配汽优化方法 |
CN108875933A (zh) * | 2018-05-08 | 2018-11-23 | 中国地质大学(武汉) | 一种无监督稀疏参数学习的超限学习机分类方法及系统 |
CN109034366A (zh) * | 2018-07-18 | 2018-12-18 | 北京化工大学 | 基于多激活函数的elm集成模型在化工建模中的应用 |
CN109408865A (zh) * | 2018-09-11 | 2019-03-01 | 三峡大学 | 一种非固态铝电解电容等效电路模型及参数辨识方法 |
CN109640080A (zh) * | 2018-11-01 | 2019-04-16 | 广州土圭垚信息科技有限公司 | 一种基于sfla算法的深度图像模式划分简化方法 |
CN109787931A (zh) * | 2018-03-20 | 2019-05-21 | 中山大学 | 一种基于改进人工蜂群算法的ofdm信号峰均比降低方法 |
CN109829420A (zh) * | 2019-01-18 | 2019-05-31 | 湖北工业大学 | 一种基于改进蚁狮优化算法的高光谱图像的特征选择方法 |
CN109948198A (zh) * | 2019-02-28 | 2019-06-28 | 大连海事大学 | 一种基于非线性函数的围岩分级可靠性评价方法 |
CN110084194A (zh) * | 2019-04-26 | 2019-08-02 | 南京林业大学 | 一种基于高光谱成像与深度学习的籽棉地膜在线识别算法 |
CN110147874A (zh) * | 2019-04-28 | 2019-08-20 | 武汉理工大学 | 一种长大隧道车速分布的环境因素水平智能优化方法 |
CN110243937A (zh) * | 2019-06-17 | 2019-09-17 | 江南大学 | 一种基于高频超声的倒装焊焊点缺失缺陷智能检测方法 |
CN110298399A (zh) * | 2019-06-27 | 2019-10-01 | 东北大学 | 基于Freeman链码和矩特征融合的抽油井故障诊断方法 |
CN110569958A (zh) * | 2019-09-04 | 2019-12-13 | 长江水利委员会长江科学院 | 一种基于混合人工蜂群算法的高维复杂水量分配模型求解方法 |
CN110889155A (zh) * | 2019-11-07 | 2020-03-17 | 长安大学 | 一种钢桥面板腐蚀预测模型及构建方法 |
CN111898726A (zh) * | 2020-07-30 | 2020-11-06 | 长安大学 | 一种电动汽车控制系统参数优化方法、计算机设备及存储介质 |
CN112232575A (zh) * | 2020-10-21 | 2021-01-15 | 国网山西省电力公司经济技术研究院 | 一种基于多元负荷预测的综合能源系统调控方法和装置 |
CN112257942A (zh) * | 2020-10-29 | 2021-01-22 | 中国特种设备检测研究院 | 一种应力腐蚀开裂预测方法及系统 |
CN112484732A (zh) * | 2020-11-30 | 2021-03-12 | 北京工商大学 | 一种基于ib-abc算法的无人机飞行路径规划方法 |
CN112653751A (zh) * | 2020-12-18 | 2021-04-13 | 杭州电子科技大学 | 物联网环境下基于多层极限学习机的分布式入侵检测方法 |
CN112748372A (zh) * | 2020-12-21 | 2021-05-04 | 湘潭大学 | 一种人工蜂群优化极限学习机的变压器故障诊断方法 |
CN112887994A (zh) * | 2021-01-20 | 2021-06-01 | 华南农业大学 | 基于改进二进制粒子群的无线传感器网络优化方法及应用 |
CN113177563A (zh) * | 2021-05-07 | 2021-07-27 | 安徽帅尔信息科技有限公司 | 融合cma-es算法及贯序极限学习机的贴片后异常检测方法 |
CN113365282A (zh) * | 2021-06-22 | 2021-09-07 | 成都信息工程大学 | 一种采用问题特征的人工蜂群算法的wsn障碍性区域覆盖部署方法 |
CN113486933A (zh) * | 2021-06-22 | 2021-10-08 | 中国联合网络通信集团有限公司 | 模型训练方法、用户身份信息预测方法及装置 |
CN113590693A (zh) * | 2020-12-03 | 2021-11-02 | 南理工泰兴智能制造研究院有限公司 | 一种基于区块链技术的化工生产线数据反馈方法 |
CN113766623A (zh) * | 2021-07-23 | 2021-12-07 | 福州大学 | 基于改进人工蜂群的认知无线电功率分配方法 |
CN114065631A (zh) * | 2021-11-18 | 2022-02-18 | 福州大学 | 一种板材激光切割的能耗预测方法及系统 |
CN114971207A (zh) * | 2022-05-05 | 2022-08-30 | 东南大学 | 基于改进人工蜂群算法的负荷优化分配方法 |
CN115685924A (zh) * | 2022-10-28 | 2023-02-03 | 中南大学 | 一种转炉吹炼终点预报方法 |
CN116432689A (zh) * | 2023-04-17 | 2023-07-14 | 广州菲利斯太阳能科技有限公司 | 基于改进量子人工蜂群算法的虚拟同步机参数量化方法 |
CN117236137A (zh) * | 2023-11-01 | 2023-12-15 | 龙建路桥股份有限公司 | 一种高寒区深长隧道冬季连续施工控制系统 |
Families Citing this family (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11032149B2 (en) * | 2018-02-05 | 2021-06-08 | Crenacrans Consulting Services | Classification and relationship correlation learning engine for the automated management of complex and distributed networks |
CN109615056A (zh) * | 2018-10-09 | 2019-04-12 | 天津大学 | 一种基于粒子群优化极限学习机的可见光定位方法 |
CN109583478A (zh) * | 2018-11-06 | 2019-04-05 | 北京交通大学 | 一种智能蜂群聚类方法及车辆目标检测方法 |
CN109581291B (zh) * | 2018-12-11 | 2023-08-01 | 哈尔滨工程大学 | 一种基于人工蜂群的直接定位方法 |
CN109599866B (zh) * | 2018-12-18 | 2022-02-08 | 国网辽宁省电力有限公司抚顺供电公司 | 一种预测辅助的电力系统状态估计方法 |
CN109598320A (zh) * | 2019-01-16 | 2019-04-09 | 广西大学 | 一种基于蝗虫算法和极限学习机的rfid室内定位方法 |
CN110288606B (zh) * | 2019-06-28 | 2024-04-09 | 中北大学 | 一种基于蚁狮优化的极限学习机的三维网格模型分割方法 |
CN110489790B (zh) * | 2019-07-10 | 2022-09-13 | 合肥工业大学 | 基于改进abc-svr的igbt结温预测方法 |
CN110363152B (zh) * | 2019-07-16 | 2022-09-13 | 郑州轻工业学院 | 一种基于表面肌电信号的下肢假肢路况识别方法 |
CN112115754B (zh) * | 2019-07-30 | 2023-05-26 | 嘉兴学院 | 基于烟花差分进化混合算法-极限学习机的短时交通流预测模型 |
CN111144541B (zh) * | 2019-12-12 | 2023-07-04 | 中国地质大学(武汉) | 一种基于多种群粒子群优化方法的微波滤波器调试方法 |
CN111291855B (zh) * | 2020-01-19 | 2023-04-07 | 西安石油大学 | 基于改进智能算法的天然气环状管网布局优化方法 |
CN111399370B (zh) * | 2020-03-12 | 2022-08-16 | 四川长虹电器股份有限公司 | 离网逆变器的人工蜂群pi控制方法 |
CN111639695B (zh) * | 2020-05-26 | 2024-02-20 | 温州大学 | 一种基于改进果蝇优化算法对数据进行分类的方法及系统 |
CN111695611B (zh) * | 2020-05-27 | 2022-05-03 | 电子科技大学 | 一种蜂群优化核极限学习和稀疏表示机械故障识别方法 |
CN111709584B (zh) * | 2020-06-18 | 2023-10-31 | 中国人民解放军空军研究院战略预警研究所 | 一种基于人工蜂群算法的雷达组网优化部署方法 |
CN111982118B (zh) * | 2020-08-19 | 2023-05-05 | 合肥工业大学 | 机器人行走轨迹确定方法、装置、计算机设备及存储介质 |
CN112036540B (zh) * | 2020-09-07 | 2023-11-28 | 哈尔滨工程大学 | 一种基于双种群混合人工蜂群算法的传感器数目优化方法 |
CN112487700B (zh) * | 2020-09-15 | 2022-04-19 | 燕山大学 | 一种基于nsga与felm的冷轧轧制力预测方法 |
CN112419092A (zh) * | 2020-11-26 | 2021-02-26 | 北京科东电力控制系统有限责任公司 | 一种基于粒子群优化极限学习机的线损预测方法 |
CN112530529B (zh) * | 2020-12-09 | 2024-01-26 | 合肥工业大学 | 一种气体浓度预测方法、系统、设备及其存储介质 |
CN112883641B (zh) * | 2021-02-08 | 2022-08-05 | 合肥工业大学智能制造技术研究院 | 一种基于优化elm算法的高大建筑倾斜预警方法 |
CN113283572B (zh) * | 2021-05-31 | 2024-06-07 | 中国人民解放军空军工程大学 | 基于改进人工蜂群的盲源分离抗主瓣干扰方法及装置 |
CN113449464B (zh) * | 2021-06-11 | 2023-09-22 | 淮阴工学院 | 一种基于改进深度极限学习机的风功率预测方法 |
CN113268913B (zh) * | 2021-06-24 | 2022-02-11 | 广州鼎泰智慧能源科技有限公司 | 一种基于pso-elm算法的智能建筑空调冷机系统运行优化方法 |
CN113496486B (zh) * | 2021-07-08 | 2023-08-22 | 四川农业大学 | 基于高光谱成像技术的猕猴桃货架期快速判别方法 |
CN113642632B (zh) * | 2021-08-11 | 2023-10-27 | 国网冀北电力有限公司计量中心 | 基于自适应竞争和均衡优化的电力系统客户分类方法及装置 |
CN113673471B (zh) * | 2021-08-31 | 2024-04-09 | 国网山东省电力公司滨州供电公司 | 一种变压器绕组振动信号特征提取方法 |
CN114020418A (zh) * | 2021-11-25 | 2022-02-08 | 国网上海市电力公司 | 一种包含轮盘赌算法的粒子群优化的虚拟机放置方法 |
CN114333307A (zh) * | 2021-12-23 | 2022-04-12 | 北京交通大学 | 一种基于pso-elm算法的交叉口交通状态识别方法 |
CN114298230B (zh) * | 2021-12-29 | 2024-08-02 | 福州大学 | 一种基于表面肌电信号的下肢运动识别方法及系统 |
CN114793174A (zh) * | 2022-04-21 | 2022-07-26 | 浪潮云信息技术股份公司 | 基于改进人工蜂群算法的ddos入侵检测方法及系统 |
CN114638555B (zh) * | 2022-05-18 | 2022-09-16 | 国网江西综合能源服务有限公司 | 基于多层正则化极限学习机的用电行为检测方法及系统 |
CN115096590B (zh) * | 2022-05-23 | 2023-08-15 | 燕山大学 | 一种基于iwoa-elm的滚动轴承故障诊断方法 |
CN115017774B (zh) * | 2022-06-25 | 2024-07-09 | 三峡大学 | 利用改进abc算法确定热学参数的大坝温度预测方法 |
CN115081335B (zh) * | 2022-06-30 | 2024-07-16 | 三峡大学 | 一种改进深度极限学习机的土壤重金属空间分布预测方法 |
CN115993345A (zh) * | 2022-08-11 | 2023-04-21 | 贵州电网有限责任公司 | 一种基于isfo-vmd-kelm的sf6分解组分co2浓度反演的方法 |
CN115618201A (zh) * | 2022-10-09 | 2023-01-17 | 湖南万脉医疗科技有限公司 | 一种基于压缩感知的呼吸机信号处理方法及呼吸机 |
CN116992758B (zh) * | 2023-07-17 | 2024-06-14 | 江苏科技大学 | 一种基于机器学习的复杂机械智能装配方法 |
CN117631667B (zh) * | 2023-11-29 | 2024-05-14 | 北京建筑大学 | 一种应用于多层建筑人员的动态引导避障疏散方法 |
CN117850367B (zh) * | 2023-12-29 | 2024-06-21 | 淮阴工学院 | 一种基于多生产线的vmd分解与生产线优化系统 |
CN118536677B (zh) * | 2024-07-18 | 2024-10-01 | 长沙矿冶研究院有限责任公司 | 选矿系统工艺参数智能决策方法及系统 |
CN119046023A (zh) * | 2024-11-01 | 2024-11-29 | 泉州师范学院 | 基于改进蜂群算法的云资源实时调度方法 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102708381A (zh) * | 2012-05-09 | 2012-10-03 | 江南大学 | 融合最小二乘向量机回归学习思想的改进极限学习机 |
CN106022465A (zh) * | 2016-05-19 | 2016-10-12 | 江南大学 | 改进人工蜂群优化的极限学习机方法 |
Family Events (2016)
- 2016-05-19 WO PCT/CN2016/082668 patent/WO2017197626A1/zh active Application Filing
- 2016-05-19 US US15/550,361 patent/US20180240018A1/en not_active Abandoned
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102708381A (zh) * | 2012-05-09 | 2012-10-03 | 江南大学 | 融合最小二乘向量机回归学习思想的改进极限学习机 |
CN106022465A (zh) * | 2016-05-19 | 2016-10-12 | 江南大学 | 改进人工蜂群优化的极限学习机方法 |
Non-Patent Citations (2)
Title |
---|
LUO, JUN ET AL.: "Artificial Bee Colony Algorithm with Chaotic-Search Strategy", CONTROL AND DECISION, vol. 25, no. 12, 31 December 2010 (2010-12-31), ISSN: 1001-0920 *
MA, CHAO: "Research on Meta-heuristic Optimized Extreme Learning Machine Based Classification Algorithms and Application", ELECTRONIC TECHNOLOGY & INFORMATION SCIENCE, CHINA DOCTORAL DISSERTATIONS FULL-TEXT DATABASE, 15 March 2015 (2015-03-15), pages 32-37 and 71, ISSN: 1674-022X *
Cited By (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108320102B (zh) * | 2018-02-05 | 2021-09-28 | 东北石油大学 | 基于改进人工蜂群算法的注汽锅炉配汽优化方法 |
CN108320102A (zh) * | 2018-02-05 | 2018-07-24 | 东北石油大学 | 基于改进人工蜂群算法的注汽锅炉配汽优化方法 |
CN109787931A (zh) * | 2018-03-20 | 2019-05-21 | 中山大学 | 一种基于改进人工蜂群算法的ofdm信号峰均比降低方法 |
CN108875933A (zh) * | 2018-05-08 | 2018-11-23 | 中国地质大学(武汉) | 一种无监督稀疏参数学习的超限学习机分类方法及系统 |
CN109034366A (zh) * | 2018-07-18 | 2018-12-18 | 北京化工大学 | 基于多激活函数的elm集成模型在化工建模中的应用 |
CN109408865B (zh) * | 2018-09-11 | 2023-02-07 | 三峡大学 | 一种非固态铝电解电容等效电路模型及参数辨识方法 |
CN109408865A (zh) * | 2018-09-11 | 2019-03-01 | 三峡大学 | 一种非固态铝电解电容等效电路模型及参数辨识方法 |
CN109640080A (zh) * | 2018-11-01 | 2019-04-16 | 广州土圭垚信息科技有限公司 | 一种基于sfla算法的深度图像模式划分简化方法 |
CN109829420A (zh) * | 2019-01-18 | 2019-05-31 | 湖北工业大学 | 一种基于改进蚁狮优化算法的高光谱图像的特征选择方法 |
CN109829420B (zh) * | 2019-01-18 | 2022-12-02 | 湖北工业大学 | 一种基于改进蚁狮优化算法的高光谱图像的特征选择方法 |
CN109948198A (zh) * | 2019-02-28 | 2019-06-28 | 大连海事大学 | 一种基于非线性函数的围岩分级可靠性评价方法 |
CN109948198B (zh) * | 2019-02-28 | 2022-10-04 | 大连海事大学 | 一种基于非线性函数的围岩分级可靠性评价方法 |
CN110084194A (zh) * | 2019-04-26 | 2019-08-02 | 南京林业大学 | 一种基于高光谱成像与深度学习的籽棉地膜在线识别算法 |
CN110084194B (zh) * | 2019-04-26 | 2020-07-28 | 南京林业大学 | 一种基于高光谱成像与深度学习的籽棉地膜在线识别方法 |
CN110147874B (zh) * | 2019-04-28 | 2022-12-16 | 武汉理工大学 | 一种长大隧道车速分布的环境因素水平智能优化方法 |
CN110147874A (zh) * | 2019-04-28 | 2019-08-20 | 武汉理工大学 | 一种长大隧道车速分布的环境因素水平智能优化方法 |
CN110243937A (zh) * | 2019-06-17 | 2019-09-17 | 江南大学 | 一种基于高频超声的倒装焊焊点缺失缺陷智能检测方法 |
CN110298399A (zh) * | 2019-06-27 | 2019-10-01 | 东北大学 | 基于Freeman链码和矩特征融合的抽油井故障诊断方法 |
CN110298399B (zh) * | 2019-06-27 | 2022-11-25 | 东北大学 | 基于Freeman链码和矩特征融合的抽油井故障诊断方法 |
CN110569958A (zh) * | 2019-09-04 | 2019-12-13 | 长江水利委员会长江科学院 | 一种基于混合人工蜂群算法的高维复杂水量分配模型求解方法 |
CN110569958B (zh) * | 2019-09-04 | 2022-02-08 | 长江水利委员会长江科学院 | 一种基于混合人工蜂群算法的高维复杂水量分配模型求解方法 |
CN110889155B (zh) * | 2019-11-07 | 2024-02-09 | 长安大学 | 一种钢桥面板腐蚀预测模型及构建方法 |
CN110889155A (zh) * | 2019-11-07 | 2020-03-17 | 长安大学 | 一种钢桥面板腐蚀预测模型及构建方法 |
CN111898726B (zh) * | 2020-07-30 | 2024-01-26 | 长安大学 | 一种电动汽车控制系统参数优化方法、设备及存储介质 |
CN111898726A (zh) * | 2020-07-30 | 2020-11-06 | 长安大学 | 一种电动汽车控制系统参数优化方法、计算机设备及存储介质 |
CN112232575A (zh) * | 2020-10-21 | 2021-01-15 | 国网山西省电力公司经济技术研究院 | 一种基于多元负荷预测的综合能源系统调控方法和装置 |
CN112232575B (zh) * | 2020-10-21 | 2023-12-19 | 国网山西省电力公司经济技术研究院 | 一种基于多元负荷预测的综合能源系统调控方法和装置 |
CN112257942B (zh) * | 2020-10-29 | 2023-11-14 | 中国特种设备检测研究院 | 一种应力腐蚀开裂预测方法及系统 |
CN112257942A (zh) * | 2020-10-29 | 2021-01-22 | 中国特种设备检测研究院 | 一种应力腐蚀开裂预测方法及系统 |
CN112484732A (zh) * | 2020-11-30 | 2021-03-12 | 北京工商大学 | 一种基于ib-abc算法的无人机飞行路径规划方法 |
CN113590693A (zh) * | 2020-12-03 | 2021-11-02 | 南理工泰兴智能制造研究院有限公司 | 一种基于区块链技术的化工生产线数据反馈方法 |
CN112653751B (zh) * | 2020-12-18 | 2022-05-13 | 杭州电子科技大学 | 物联网环境下基于多层极限学习机的分布式入侵检测方法 |
CN112653751A (zh) * | 2020-12-18 | 2021-04-13 | 杭州电子科技大学 | 物联网环境下基于多层极限学习机的分布式入侵检测方法 |
CN112748372A (zh) * | 2020-12-21 | 2021-05-04 | 湘潭大学 | 一种人工蜂群优化极限学习机的变压器故障诊断方法 |
CN112887994A (zh) * | 2021-01-20 | 2021-06-01 | 华南农业大学 | 基于改进二进制粒子群的无线传感器网络优化方法及应用 |
CN113177563B (zh) * | 2021-05-07 | 2022-10-14 | 安徽帅尔信息科技有限公司 | 融合cma-es算法及贯序极限学习机的贴片后异常检测方法 |
CN113177563A (zh) * | 2021-05-07 | 2021-07-27 | 安徽帅尔信息科技有限公司 | 融合cma-es算法及贯序极限学习机的贴片后异常检测方法 |
CN113486933A (zh) * | 2021-06-22 | 2021-10-08 | 中国联合网络通信集团有限公司 | 模型训练方法、用户身份信息预测方法及装置 |
CN113365282A (zh) * | 2021-06-22 | 2021-09-07 | 成都信息工程大学 | 一种采用问题特征的人工蜂群算法的wsn障碍性区域覆盖部署方法 |
CN113486933B (zh) * | 2021-06-22 | 2023-06-27 | 中国联合网络通信集团有限公司 | 模型训练方法、用户身份信息预测方法及装置 |
CN113766623B (zh) * | 2021-07-23 | 2023-05-09 | 福州大学 | 基于改进人工蜂群的认知无线电功率分配方法 |
CN113766623A (zh) * | 2021-07-23 | 2021-12-07 | 福州大学 | 基于改进人工蜂群的认知无线电功率分配方法 |
CN114065631A (zh) * | 2021-11-18 | 2022-02-18 | 福州大学 | 一种板材激光切割的能耗预测方法及系统 |
CN114971207A (zh) * | 2022-05-05 | 2022-08-30 | 东南大学 | 基于改进人工蜂群算法的负荷优化分配方法 |
CN115685924A (zh) * | 2022-10-28 | 2023-02-03 | 中南大学 | 一种转炉吹炼终点预报方法 |
CN116432689A (zh) * | 2023-04-17 | 2023-07-14 | 广州菲利斯太阳能科技有限公司 | 基于改进量子人工蜂群算法的虚拟同步机参数量化方法 |
CN117236137A (zh) * | 2023-11-01 | 2023-12-15 | 龙建路桥股份有限公司 | 一种高寒区深长隧道冬季连续施工控制系统 |
CN117236137B (zh) * | 2023-11-01 | 2024-02-02 | 龙建路桥股份有限公司 | 一种高寒区深长隧道冬季连续施工控制系统 |
Also Published As
Publication number | Publication date |
---|---|
US20180240018A1 (en) | 2018-08-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017197626A1 (zh) | 改进人工蜂群优化的极限学习机方法 | |
CN106022465A (zh) | 改进人工蜂群优化的极限学习机方法 | |
Shao et al. | Online multi-view clustering with incomplete views | |
CN110347932B (zh) | 一种基于深度学习的跨网络用户对齐方法 | |
Zhang et al. | Multi-domain active learning for recommendation | |
CN113065974B (zh) | 一种基于动态网络表示学习的链路预测方法 | |
Zhang et al. | Incremental extreme learning machine based on deep feature embedded | |
CN107729290B (zh) | 一种利用局部敏感哈希优化的超大规模图的表示学习方法 | |
CN114519145A (zh) | 一种基于图神经网络挖掘用户长短期兴趣的序列推荐方法 | |
CN106650920A (zh) | 一种基于优化极限学习机的预测模型 | |
CN110264372B (zh) | 一种基于节点表示的主题社团发现方法 | |
Xiang et al. | Incremental few-shot learning for pedestrian attribute recognition | |
WO2020233245A1 (zh) | 一种基于回归树上下文特征自动编码的偏置张量分解方法 | |
CN111259233B (zh) | 一种提高协同过滤模型稳定性的方法 | |
Chen et al. | Enhancing Artificial Bee Colony Algorithm with Self‐Adaptive Searching Strategy and Artificial Immune Network Operators for Global Optimization | |
Yu et al. | Contrastive correlation preserving replay for online continual learning | |
CN114299362A (zh) | 一种基于k-means聚类的小样本图像分类方法 | |
Liu et al. | Comparison of different CNN models in tuberculosis detecting | |
Pu et al. | Screen efficiency comparisons of decision tree and neural network algorithms in machine learning assisted drug design | |
CN106569954A (zh) | 一种基于kl散度的多源软件缺陷预测方法 | |
Li et al. | Meta-GNAS: Meta-reinforcement learning for graph neural architecture search | |
Liu et al. | Learning graph representation by aggregating subgraphs via mutual information maximization | |
Zhao et al. | Short‐term load forecasting of multi‐scale recurrent neural networks based on residual structure | |
CN108573275B (zh) | 一种在线分类微服务的构建方法 | |
Mu et al. | AD-link: An adaptive approach for user identity linkage |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 15550361 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16902011 Country of ref document: EP Kind code of ref document: A1 |
122 | Ep: pct application non-entry in european phase |
Ref document number: 16902011 Country of ref document: EP Kind code of ref document: A1 |