CN104809500A - Efficient improved FSPSO (Particle Swarm Optimization based on prey behavior of Fish Schooling) - Google Patents


Info

Publication number
CN104809500A
Authority
CN
China
Prior art keywords
particle
optimal
particles
fish
behavior
Prior art date
Legal status
Pending
Application number
CN201510252869.XA
Other languages
Chinese (zh)
Inventor
何发智
鄢小虎
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University (WHU)
Priority to CN201510252869.XA
Publication of CN104809500A

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an efficient improved particle swarm optimization algorithm based on the prey behavior of fish schooling (FSPSO). A fish school passes heuristic information through water waves, and the invention simulates this intelligent behavior: the current global best particle searches for an even better global position using the personal best position information provided by a small number of other randomly chosen particles. When the school is attacked by predators, the weak fish that cannot escape quickly are eaten; the invention simulates this as well by replacing the weak particles near the current global worst particle with randomly generated particles, which enhances the diversity of the swarm and allows FSPSO to effectively avoid local optima. The invention can be applied to the solution of complex optimization problems such as function optimization, the knapsack problem, the traveling salesman problem, flow-shop scheduling, and graphics and image processing.

Description

An Efficient Improved Particle Swarm Optimization Algorithm Based on the Predation Behavior of Fish Schools

Technical Field

The invention belongs to the technical field of evolutionary algorithms and relates to a particle swarm optimization algorithm, in particular to an efficient improved particle swarm optimization algorithm based on the predation behavior of fish schools.

Background Art

Particle swarm optimization (PSO) is a novel evolutionary algorithm proposed by Kennedy and Eberhart from observations of certain social behaviors of bird flocks. Because PSO has few parameters and is easy to implement, it can be used to solve a large number of complex optimization problems that are nonlinear, non-differentiable, or multi-modal, and it has been successfully applied to parameter optimization, feature extraction, the traveling salesman problem, pattern recognition, and many other fields. In complex optimization problems, however, PSO suffers from premature convergence and easily falls into local optima. Angeline et al. borrowed ideas from genetic algorithms and proposed a hybrid PSO, which improved the convergence speed and accuracy of the algorithm. Peram et al. proposed the fitness-distance-ratio-based particle swarm optimization (FDR-PSO) in 2003, in which each particle moves, to different degrees, toward several nearby particles with better fitness according to a fitness-distance-ratio rule, rather than only toward the best particle found so far; this alleviates the premature convergence of PSO and considerably improves its performance on complex functions. Bergh proposed the cooperative particle swarm optimizer (CPSO), which makes it easier for particles to escape local minima and to reach higher convergence accuracy. To prevent PSO from getting trapped in local optima, Liang et al. proposed the comprehensive learning particle swarm optimizer (CLPSO) in 2006, in which the velocity update of each particle is based on the historical best positions of all other particles, achieving comprehensive learning. By analyzing the probability distribution of the search center, Wu Xiaojun proposed the uniform search particle swarm optimization (UPSO) algorithm, in which the search center is uniformly distributed between the two extremes; this algorithm has high search efficiency and good convergence.

When optimizing complex high-dimensional multi-modal functions, the above algorithms still tend to fall into local optima.

Summary of the Invention

To solve the above technical problems, the present invention simulates the predation behavior of fish schools and proposes an efficient improved particle swarm optimization algorithm based on that behavior.

The technical solution adopted by the present invention is an efficient improved particle swarm optimization algorithm based on the predation behavior of fish schools, characterized by comprising the following steps:

Step 1: Initialize the parameters of the efficient improved particle swarm optimization algorithm based on the predation behavior of fish schools, the parameters including the swarm size n, the maximum number of iterations maxk, the inertia weight w, the learning factors c1 and c2, the number of search particles m, the search factor c3, and the range factor c4, where 0 ≤ m ≤ n/10;

Step 2: Update the position and velocity of each particle in the improved particle swarm;

Step 3: The current global best particle searches for a better global position using the personal best position information provided by m random particles;

Step 4: Replace the weak particles near the current global worst particle with randomly generated particles;

Step 5: Judge whether the efficient improved particle swarm optimization algorithm based on the predation behavior of fish schools has converged or has reached the maximum number of iterations;

if so, output the position of the global optimal solution, which is the solution of the optimization problem;

if not, return to step 2.

Preferably, step 3 is implemented as follows: m particles are selected at random and their personal best positions are P1, P2, ..., Pm; the position searched by the current global best particle in the j-th direction is XXj = Pg + r3·c3·(Pj - Pg), where 1 ≤ j ≤ n, r3 is a random number uniformly distributed in [0, 1], and c3 is the search factor. If the best of the XXj is better than Pg, that value replaces Pg; otherwise Pg is unchanged. By learning from the personal bests of a few particles, the current global best improves the global search ability of the algorithm.

Preferably, for the weak particles of step 4: if the position of the global worst particle is Pb = (Pb1, Pb2, ..., PbD) and the position of the j-th weak particle is Yj, then, since a weak particle lies near the global worst particle, the distance between Yj and Pb satisfies Σ_{i=1..D} |Yji - Pbi| ≤ c4·Range, where c4 is the range factor and Range is the search range of the particles. The weak particles are replaced by randomly generated particles, which enhances the diversity of the swarm and effectively avoids local optima.

The present invention proposes FSPSO, an efficient improved particle swarm optimization algorithm based on the prey behavior of fish schooling. A fish school can pass heuristic information through water waves and, while hunting, finds better positions by sensing this water-wave information. The invention simulates this intelligent behavior: the current global best particle searches for an even better global position using the personal best position information provided by a small number of other randomly chosen particles. When the school is attacked by predators, the weak fish that cannot escape quickly are eaten. The invention simulates this behavior as well: the weak particles near the current global worst particle are replaced by randomly generated particles, which enhances the diversity of the swarm, and FSPSO can therefore effectively avoid local optima. The invention can be applied to the solution of complex optimization problems such as function optimization, the knapsack problem, the traveling salesman problem, flow-shop scheduling, and graphics and image processing.

Brief Description of the Drawings

Figure 1 shows the search directions of the global best particle in an embodiment of the invention;

Figure 2 is a schematic diagram of the weak particles near the global worst particle in an embodiment of the invention;

Figure 3 shows the convergence curves for the function f2(x) in an embodiment of the invention;

Figure 4 shows the convergence curves for the function f7(x) in an embodiment of the invention;

Figure 5 shows the convergence curves for the knapsack problem in an embodiment of the invention.

Detailed Description of the Embodiments

To help those of ordinary skill in the art understand and implement the present invention, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the implementation examples described here are intended only to illustrate and explain the invention, not to limit it.

Assume that the particle swarm contains n particles, that the search space is D-dimensional, and that the position and velocity of the i-th particle are xi and vi respectively; then:

xi = (xi1, ..., xid, ..., xiD)   (1),

vi = (vi1, ..., vid, ..., viD)   (2);

The position of the particle with the best fitness in the swarm is recorded as:

Pg = (Pg1, Pg2, ..., PgD)   (3);

The best position in the solution space visited by the i-th particle is:

Pi = (Pi1, Pi2, ..., PiD)   (4);

The position and velocity update equations of the particle swarm algorithm are then:

V_id^(k+1) = w·V_id^k + c1·r1·(P_id^k - X_id^k) + c2·r2·(P_gd^k - X_id^k)   (5);

X_id^(k+1) = X_id^k + V_id^(k+1)   (6);

where w is the inertia weight, k is the iteration count, c1 and c2 are learning factors, and r1 and r2 are random numbers uniformly distributed in the interval [0, 1]. The position and velocity in the d-th dimension vary within [XMINd, XMAXd] and [VMINd, VMAXd], respectively. If a value computed from equations (5) and (6) exceeds this range, it is set to the boundary value.
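For illustration only (this is not part of the patent disclosure), the update in equations (5) and (6) can be sketched in NumPy as follows; the function name, the array shapes, and the clipping of out-of-range values to the boundary are assumptions consistent with the description above.

```python
import numpy as np

def pso_update(X, V, P, Pg, w, c1, c2, xmin, xmax, vmin, vmax):
    """One standard PSO iteration, equations (5) and (6).

    X, V : (n, D) current positions and velocities
    P    : (n, D) personal best positions
    Pg   : (D,)   global best position
    """
    n, D = X.shape
    r1 = np.random.rand(n, D)   # r1, r2 ~ U[0, 1], drawn per particle and dimension
    r2 = np.random.rand(n, D)
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (Pg - X)   # equation (5)
    V = np.clip(V, vmin, vmax)                           # keep velocity inside [VMIN_d, VMAX_d]
    X = np.clip(X + V, xmin, xmax)                       # equation (6), clipped to [XMIN_d, XMAX_d]
    return X, V
```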

Studies have found that fish schools can generate complex water waves and that, during predation, heuristic information about the prey can be transmitted through these waves. Simulating this intelligent behavior, the current global best particle searches for an even better global position using the personal best position information provided by a small number of other randomly chosen particles.

Without loss of generality, the m particles that provide the heuristic information are chosen at random; their personal best positions are:

P1, P2, ..., Pm   (7);

The directions in which the current global best particle searches for a better position are shown in Figure 1; the position probed in the j-th search direction is:

XXj = Pg + r3·c3·(Pj - Pg)   (8);

where 1 ≤ j ≤ n, r3 is a random number uniformly distributed in [0, 1], and c3 is the search factor. If the best of the XXj is better than Pg, that value replaces Pg; otherwise Pg is unchanged. To limit the amount of computation, m should not be too large. By learning from the personal bests of a few particles, the current global best improves the global search ability of the algorithm.
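A minimal sketch of this refinement step of equation (8), assuming a minimization problem, might look as follows; the helper name and the clipping of each probe to the search range are assumptions, not prescriptions of the patent.

```python
import numpy as np

def refine_global_best(P, Pg, fitness, m, c3, xmin, xmax):
    """Probe toward the personal bests of m randomly chosen particles,
    equation (8), and keep the best probe only if it improves Pg."""
    n, D = P.shape
    chosen = np.random.choice(n, size=m, replace=False)          # m random informer particles
    best_x, best_f = Pg.copy(), fitness(Pg)
    for j in chosen:
        r3 = np.random.rand(D)                                   # r3 ~ U[0, 1]
        xx = np.clip(Pg + r3 * c3 * (P[j] - Pg), xmin, xmax)     # equation (8)
        fx = fitness(xx)
        if fx < best_f:                                          # replace Pg only on improvement
            best_x, best_f = xx, fx
    return best_x, best_f
```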

A fish school is attacked by predators while it hunts; following the Darwinian principle of survival of the fittest, the weak fish that cannot escape quickly are eaten. The invention simulates this behavior by replacing the weak particles near the current global worst particle with randomly generated particles.

The position of the global worst particle is:

Pb = (Pb1, Pb2, ..., PbD)   (9);

Let the position of the j-th weak particle be Yj. Because a weak particle lies near the global worst particle, the distance between Yj and Pb is small; their relationship is shown in Figure 2. Yj lies inside the circle centered at Pb, so the relationship between them satisfies:

Σ_{i=1..D} |Yji - Pbi| ≤ c4·Range   (10);

where c4 is the range factor and Range is the search range of the particles. The weak particles are replaced by randomly generated particles, which enhances the diversity of the swarm, and FSPSO can therefore effectively avoid local optima.
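The replacement rule of equation (10) might be implemented as in the sketch below; scalar search bounds, the way the scalar Range is formed from them, and the re-randomization of velocities are assumptions made only to obtain a runnable example.

```python
import numpy as np

def replace_weak_particles(X, V, Pb, c4, xmin, xmax, vmax):
    """Replace every particle within c4 * Range of the global worst
    position Pb (equation (10)) by a freshly randomized particle."""
    n, D = X.shape
    rng = D * (xmax - xmin)                                  # assumption: Range summed over D dimensions
    weak = np.sum(np.abs(X - Pb), axis=1) <= c4 * rng        # equation (10)
    k = int(weak.sum())
    if k > 0:
        X[weak] = xmin + np.random.rand(k, D) * (xmax - xmin)   # new random positions
        V[weak] = (2.0 * np.random.rand(k, D) - 1.0) * vmax     # new random velocities
    return X, V
```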

By simulating the predation behavior of fish schools, the present invention proposes the FSPSO algorithm; its specific steps are as follows (a sketch of the complete loop is given after step 5):

Step 1: Initialize the parameters of the FSPSO algorithm, including the swarm size n, the maximum number of iterations maxk, the inertia weight w, the learning factors c1 and c2, the number of search particles m, the search factor c3, and the range factor c4, where 0 ≤ m ≤ n/10;

Step 2: Update the position and velocity of each particle in the swarm according to equations (5) and (6);

Step 3: The current global best particle searches for a better global position using the personal best position information provided by m random particles, improving it according to equation (8);

Step 4: Replace the weak particles near the current global worst particle with randomly generated particles according to equation (10);

Step 5: Judge whether the FSPSO algorithm has converged or has reached the maximum number of iterations;

if so, output the position of the global optimal solution, which is the solution of the optimization problem;

if not, return to step 2.
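Putting steps 1 to 5 together and reusing the helper sketches above (assumed to be defined in the same module), a complete FSPSO loop for a minimization problem could look like the following; the default parameter values and the choice of the current worst particle from the present swarm are illustrative assumptions.

```python
import numpy as np

def fspso(fitness, D, n=50, maxk=1000, w=0.7, c1=2.0, c2=2.0,
          m=4, c3=1.5, c4=1e-4, xmin=-100.0, xmax=100.0):
    """High-level FSPSO loop following steps 1-5 (minimization)."""
    vmax = xmax - xmin
    # Step 1: initialize positions, velocities, personal bests and the global best.
    X = xmin + np.random.rand(n, D) * (xmax - xmin)
    V = (2.0 * np.random.rand(n, D) - 1.0) * vmax
    P = X.copy()
    Pf = np.apply_along_axis(fitness, 1, X)
    Pg, Pgf = P[np.argmin(Pf)].copy(), Pf.min()

    for _ in range(maxk):                                   # step 5: stop after maxk iterations
        # Step 2: standard PSO update of every particle, equations (5) and (6).
        X, V = pso_update(X, V, P, Pg, w, c1, c2, xmin, xmax, -vmax, vmax)
        Xf = np.apply_along_axis(fitness, 1, X)
        improved = Xf < Pf                                   # refresh personal bests
        P[improved], Pf[improved] = X[improved], Xf[improved]
        if Pf.min() < Pgf:                                   # refresh the global best
            Pg, Pgf = P[np.argmin(Pf)].copy(), Pf.min()
        # Step 3: the global best learns from m random personal bests, equation (8).
        Pg, Pgf = refine_global_best(P, Pg, fitness, m, c3, xmin, xmax)
        # Step 4: replace the weak particles near the current worst particle, equation (10).
        Pb = X[np.argmax(Xf)]
        X, V = replace_weak_particles(X, V, Pb, c4, xmin, xmax, vmax)
    return Pg, Pgf
```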

In the FSPSO algorithm, the time complexity of computing the current best particle Pg, the worst particle Pb, and each particle's personal best position Pi is O(n·D); replacing the weak particles near the worst particle with random particles costs O(n·D); and the search for a better position by the current global best particle costs O(m·n·D). Because m is small, the overall time complexity of FSPSO is O(n·D·maxk), the same as that of PSO. In FSPSO the positions Pi, Pg and Pb must be stored, so its space complexity is O(n·D), also the same as that of PSO. In summary, FSPSO has the same time and space complexity as PSO, so it adds no asymptotic overhead and performs well.

To evaluate the performance of FSPSO, the algorithm is used to optimize nine standard test functions, listed in Table 1. The experiments were developed and run in Matlab R2012b on an Intel i7-2600K CPU at 3.4 GHz.

Table 1 Test functions

In Table 1, f1(x)-f6(x) are unimodal functions and f7(x)-f9(x) are multimodal functions. In these functions N is the dimension; the minimum of every function is 0 and the optimal solution is [0]^N. FSPSO, PSO and GA are used to optimize these functions. In all three algorithms the dimension is 30 (N = D = 30), the population size is 50, and the maximum number of iterations is 1000. In PSO, c1 = c2 = 2 and the inertia weight w decreases linearly from 0.9 to 0.7 with the iteration count. Suitable FSPSO parameters found through comparative experiments are stepa = 1.5, stepb = 0.0001 and m = 4; the other FSPSO parameters are set as in PSO. The GA parameters are: crossover probability 0.95, random selection, and Gaussian mutation probability 0.1. The optimization of each function is run 30 times, and the best value, mean, median and variance of the 30 solutions are shown in Table 2.

Table 2 Results of the three algorithms on the test functions

As Table 2 shows, the best solution found by FSPSO is better than those of GA and PSO for every function, so the global search ability of FSPSO is superior to that of the other two algorithms. The mean of FSPSO is better than that of PSO for all functions, and better than that of GA for all functions except f1(x), f3(x) and f8(x); the overall search performance of FSPSO is therefore better than that of the other two algorithms. The median of FSPSO is smaller than that of PSO for all functions except f7(x), and smaller than that of GA except for f3(x) and f8(x). The variance of FSPSO is smaller than that of PSO except for f7(x), and smaller than that of GA except for f1(x), f3(x) and f8(x). The stability of FSPSO is therefore better than that of the other two algorithms.

Convergence curves are used to evaluate the performance of the three algorithms. Without loss of generality, the unimodal function f2(x) and the multimodal function f7(x) are randomly chosen for analysis. To make the curves easier to read, the solution values are plotted on a logarithmic scale; the convergence curves of the three algorithms are shown in Figures 3 and 4. The curves show that FSPSO has better global search ability and convergence than the other two algorithms, and that it needs fewer iterations to find the global optimum, so it converges more efficiently. Compared with PSO, FSPSO is more effective; in function optimization FSPSO therefore delivers a clear improvement and good overall performance.

The knapsack problem is a typical NP-hard problem. GA, PSO and FSPSO are used to optimize a knapsack instance whose value vector c and volume vector w are as follows:

c = [220,208,198,192,180,180,165,162,160,158,155,130,125,122,120,118,115,110,105,101,100,100,98,96,95,90,88,82,80,77,75,73,72,70,69,66,65,63,60,58,56,50,30,20,15,10,8,5,3,1]

w = [80,82,85,70,72,70,66,50,55,25,50,55,40,48,50,32,22,60,30,32,40,38,35,32,25,28,30,22,50,30,45,30,60,50,20,65,20,25,30,10,20,25,15,10,10,10,4,4,2,1]

The volume limit is 1000. Because the knapsack problem is discrete, each particle position is discretized in the invention as follows:

X_id^k = 1 if r4 < sig(V_id^k), and 0 otherwise   (11);

sig(V_id^k) = 1 / (1 + exp(-V_id^k))   (12);

where r4 is a random number uniformly distributed in [0, 1]. The experimental environment and algorithm parameters are the same as in the function-optimization experiments. The iteration curves of the three algorithms on the knapsack problem are shown in Figure 5, which shows that the global search ability of FSPSO is stronger than that of the other two algorithms. The experiment is run 30 times for each algorithm, and the best, worst, mean, median and variance of the 30 solutions are shown in Table 3.
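As an illustration of the binarization in equations (11) and (12), a sketch is given below; the function name and the element-wise treatment of the whole velocity matrix are assumptions. A knapsack fitness built on top of it would typically score X·c while penalizing or repairing solutions whose total volume X·w exceeds the limit of 1000.

```python
import numpy as np

def binarize_positions(V):
    """Map real-valued velocities to 0/1 positions, equations (11)-(12)."""
    sig = 1.0 / (1.0 + np.exp(-V))        # equation (12): sigmoid of each velocity component
    r4 = np.random.rand(*V.shape)         # r4 ~ U[0, 1], drawn per component
    return (r4 < sig).astype(int)         # equation (11): X_id = 1 if r4 < sig(V_id), else 0
```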

Table 3 Results of the three algorithms on the knapsack problem

Algorithm   Best    Worst   Mean     Median   Variance
GA          2909    2670    2788.5   2787     49.5
PSO         2955    2687    2820.3   2812     58.7
FSPSO       3030    2899    2957.4   2961     31.8

As Table 3 shows, the best, worst, mean and median values obtained by FSPSO are larger than the corresponding values obtained by GA and PSO, and its variance is smaller than that of the other two algorithms. The global search ability and convergence of FSPSO are therefore better than those of the other two algorithms, which shows that FSPSO is more effective for the knapsack problem.

The present invention proposes FSPSO, an improved particle swarm optimization algorithm based on the predation behavior of fish schools. The current global best particle searches for a better position using the personal best position information provided by a small number of other randomly chosen particles, which strengthens the global search ability of the algorithm. The weak particles near the current global worst particle are replaced by randomly generated particles, which increases the diversity of the population and avoids local optima. Experiments on standard test functions and a typical knapsack problem show that FSPSO solves both continuous and discrete optimization problems more efficiently.

It should be understood that the parts not described in detail in this specification belong to the prior art.

It should be understood that the above description of the preferred embodiments is relatively detailed and should not therefore be regarded as limiting the scope of patent protection of the invention. Under the teaching of the invention and without departing from the scope protected by the claims, a person of ordinary skill in the art may make substitutions or variations, all of which fall within the protection scope of the invention; the scope of protection sought for the invention shall be subject to the appended claims.

Claims (3)

1. An efficient improved particle swarm optimization algorithm based on the predation behavior of fish schools, characterized by comprising the following steps:
Step 1: initializing the parameters of the efficient improved particle swarm optimization algorithm based on the predation behavior of fish schools, the parameters including the swarm size n, the maximum number of iterations maxk, the inertia weight w, the learning factors c1 and c2, the number of search particles m, the search factor c3 and the range factor c4, where 0 ≤ m ≤ n/10;
Step 2: updating the position and velocity of each particle in the improved particle swarm;
Step 3: the current global best particle searching for a better global position using the personal best position information provided by m random particles;
Step 4: replacing the weak particles near the current global worst particle with randomly generated particles;
Step 5: judging whether the efficient improved particle swarm optimization algorithm based on the predation behavior of fish schools has converged or has reached the maximum number of iterations;
if so, outputting the position of the global optimal solution, which is the solution of the optimization problem;
if not, returning to step 2.
2. The efficient improved particle swarm optimization algorithm based on the predation behavior of fish schools according to claim 1, characterized in that step 3 is implemented as follows: m particles are selected at random and their personal best positions are P1, P2, ..., Pm; the position searched by the current global best particle in the j-th direction is XXj = Pg + r3·c3·(Pj - Pg), where 1 ≤ j ≤ n, r3 is a random number uniformly distributed in [0, 1] and c3 is the search factor; if the best of the XXj is better than Pg, that value replaces Pg; otherwise Pg is unchanged.
3. The efficient improved particle swarm optimization algorithm based on the predation behavior of fish schools according to claim 1, characterized in that, for the weak particles of step 4, if the position of the global worst particle is Pb = (Pb1, Pb2, ..., PbD) and the position of the j-th weak particle is Yj, then, since a weak particle is near the global worst particle, the distance between Yj and Pb satisfies Σ_{i=1..D} |Yji - Pbi| ≤ c4·Range, where c4 is the range factor and Range is the search range of the particles.
CN201510252869.XA 2015-05-18 2015-05-18 Efficient improved FSPSO (Particle Swarm Optimization based on prey behavior of Fish Schooling) Pending CN104809500A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510252869.XA CN104809500A (en) 2015-05-18 2015-05-18 Efficient improved FSPSO (Particle Swarm Optimization based on prey behavior of Fish Schooling)


Publications (1)

Publication Number Publication Date
CN104809500A true CN104809500A (en) 2015-07-29

Family

ID=53694310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510252869.XA Pending CN104809500A (en) 2015-05-18 2015-05-18 Efficient improved FSPSO (Particle Swarm Optimization based on prey behavior of Fish Schooling)

Country Status (1)

Country Link
CN (1) CN104809500A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190851A (en) * 2018-10-25 2019-01-11 上海电机学院 A kind of optimal configuration algorithm based on the independent wind-light storage microgrid for improving fish-swarm algorithm
CN109190851B (en) * 2018-10-25 2021-08-03 上海电机学院 An optimized configuration method for independent wind-solar storage microgrid based on improved fish swarm algorithm
CN110610245A (en) * 2019-07-31 2019-12-24 东北石油大学 A method and system for leak detection of long oil pipelines based on AFPSO-K-means
CN112884117A (en) * 2021-03-19 2021-06-01 东南大学 RTID-PSO method and system for random topology
CN115808952A (en) * 2022-11-13 2023-03-17 西北工业大学 Energy system maximum power tracking control method based on improved particle swarm optimization
CN115808952B (en) * 2022-11-13 2024-10-01 西北工业大学 Maximum Power Point Tracking Control Method for Energy System Based on Improved Particle Swarm Optimization


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150729