CN113722853A - Intelligent calculation-oriented group intelligent evolutionary optimization method - Google Patents

Intelligent calculation-oriented group intelligent evolutionary optimization method

Info

Publication number
CN113722853A
Authority
CN
China
Prior art keywords
individual
optimal
optimization
fitness value
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111005276.5A
Other languages
Chinese (zh)
Other versions
CN113722853B (en
Inventor
刘景森
李煜
杨杰
赵方园
侯嬿淋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University
Original Assignee
Henan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University filed Critical Henan University
Priority to CN202111005276.5A priority Critical patent/CN113722853B/en
Publication of CN113722853A publication Critical patent/CN113722853A/en
Application granted granted Critical
Publication of CN113722853B publication Critical patent/CN113722853B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/17Mechanical parametric or variational design
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computer Hardware Design (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a group intelligent evolutionary optimization method for intelligent computing, comprising: setting initial parameters and computing individual fitness values according to an objective function; finding the optimal fitness value and recording its position; applying a neighborhood perturbation to the optimal individual position in the population to generate a neighborhood individual position, and obtaining a new optimal individual position with a greedy selection strategy; dividing the algorithm's optimization-evolution mechanism into three strategies by probability and obtaining corresponding new individual positions through them; evaluating the fitness value of each new individual according to the objective function and obtaining the individual's current position with greedy selection; checking whether the current iteration count has reached the maximum and, if not, returning to continue the iterative evolutionary optimization process; and determining and outputting the final optimal value. The method's optimization capability is markedly superior and effective; it is simple to implement, does not increase the algorithm's time complexity, and does not slow execution.

Description

Intelligent calculation-oriented group intelligent evolutionary optimization method
Technical Field
The invention belongs to the technical field of intelligent optimization, and relates to an intelligent calculation-oriented group intelligent evolutionary optimization method.
Background
In recent years, many swarm intelligence optimization algorithms with various advantages have appeared at home and abroad. But as these optimization methods develop rapidly and fuse deeply with science, technology, and engineering applications, new mechanisms, methods, ideas, applications, and problems are continually proposed. The field is flourishing yet still demands continued research: many problems leave room for further study, and many algorithms suffer from insufficiently stable solutions, sometimes low optimization precision, slow convergence, a tendency to fall into local extrema, and poor adaptability to dimensionality. The invention therefore proposes and discloses a group intelligent evolutionary optimization method for intelligent computing, applied to complex function extremum optimization and engineering design constrained optimization problems. For complex function optimization, the algorithm constructed by the method is mainly validated on the IEEE CEC benchmark test function set. The CEC test set contains a large number of complex test functions of different types with rotated, shifted, hybrid, and composed characteristics, so it is very difficult to solve and broadly representative of optimization problems; it is one of the standard instruments for evaluating intelligent optimization algorithms relatively objectively and fairly, and thus for verifying the effectiveness of the method. For constrained optimization, the algorithm constructed by the method is mainly applied to engineering design constrained optimization problems of different types, to further test the method's effectiveness objectively and fairly.
Disclosure of Invention
Optimization problems are growing ever larger and more complex, and traditional optimization methods struggle to provide effective solutions within reasonable time. Meta-heuristic swarm intelligence optimization algorithms are among the most effective approaches to such problems, but existing algorithms still suffer from unstable solutions, low optimization precision, slow convergence, easy entrapment in local extrema, and poor dimensional adaptability. The invention aims to provide a group intelligent evolutionary optimization method for intelligent computing, used as a mechanism for constructing new, or improving existing, meta-heuristic intelligent optimization algorithms to address these problems.
The invention adopts the technical scheme that an intelligent calculation-oriented group intelligent evolutionary optimization method is implemented according to the following steps:
step 1, setting initial parameters: the number of individuals in the population N, the maximum number of iterations Max_iter, and the individual dimension D; the initial position x_i (i = 1, 2, ..., N) of each individual is randomly generated within the search range;
Step 2, calculating the individual fitness values f(x_i) according to the objective function f(x);
Step 3, from the fitness values f(x_i) obtained in step 2, finding the optimal fitness value f_min and recording its position, i.e. the optimal individual position x_best(T);
Step 4, performing neighborhood disturbance on the optimal individual position in the population to generate a neighborhood individual position, calculating the fitness value of new and old optimal individuals, and comparing new and old solutions by adopting a greedy selection strategy to obtain a new optimal individual position;
step 5, keeping the population size unchanged, dividing the optimization-evolution mechanism of the algorithm into three strategies by probability:
a. with probability 1/3, an individual executes the evolutionary strategy of the base algorithm;
b. with probability 1/3, an individual executes the non-uniform Gaussian mutation evolutionary strategy;
c. the remaining 1/3 of individuals split the run into two stages by iteration count: the first half executes t-distribution mutation, and the second half executes the golden sine evolutionary strategy;
obtaining a new individual position corresponding to the original individual position through three strategies;
step 6, solving the fitness value of the new individual position obtained in the step 5 according to an objective function f (x), and selecting the fitness values of the original individual position and the new individual position by adopting a greedy selection strategy to obtain the current position of the individual;
step 7, comparing whether the current iteration time T reaches the maximum iteration time, if T is less than or equal to Max _ iter, returning to the step 2, and continuing the optimization process of iterative evolution;
step 8, determining and outputting the final optimal value.
The invention is also characterized in that:
step 4 comprises the following steps:
step 4.1, apply a neighborhood perturbation to the optimal individual position x_best(T) obtained in step 3 to generate a neighborhood individual position, and calculate the fitness value of the neighborhood individual position;
step 4.2, use a greedy selection strategy to select between the fitness values of the optimal individual position x_best(T) and the neighborhood individual position, obtaining the optimal individual position.
The calculation formula of step 4.1 is as follows:
[Formula (1): the neighborhood individual position is generated by perturbing x_best(T) with the random number rand; the equation image is not reproduced in this text.]
in formula (1), rand is a uniformly distributed random number in [0, 1], and x_best(T) is the current-generation optimal individual position.
The calculation formula of step 4.2 is as follows:
x_best(T+1) = x_N(T), if f(x_N(T)) < f(x_best(T)); otherwise x_best(T+1) = x_best(T)   (2)
where x_N(T) denotes the neighborhood individual position generated in step 4.1.
in formula (2), x_best(T+1) is the updated next-generation optimal individual position, i.e. the current position of the optimal individual.
Step 5 comprises the following implementation steps:
step 5.1, let m = rand;
step 5.2, if m < 1/3, execute the evolutionary strategy of the base algorithm;
step 5.3, if m > 2/3, update the individual position by formula (3);
x_i(T+1) = x_i(T) + Δ(T, G_i(T))   (3)
step 5.4, if 1/3 ≤ m ≤ 2/3, determine whether the algorithm is exploring in the first half of the iterations or exploiting in the second half; if T < (1/2)·Max_iter, update the individual position by formula (4);
x_i(T+1) = x_i(T) + trnd(T)·x_i(T)   (4)
otherwise, updating the position of the individual by formula (5);
x_i(T+1) = x_i(T)·|sin(R_1)| + R_2·sin(R_1)·|x_1·x_best(T) − x_2·x_i(T)|   (5)
in formula (5), R_1 is a uniformly distributed random number in [0, 2π] that determines the movement distance of the individual in the next iteration; R_2 is a uniformly distributed random number in [0, π] that determines the update direction of the i-th individual in the next iteration; x_1 and x_2 are golden-section coefficients derived from the golden number
q = (√5 − 1)/2,
so that x_1 = −π + (1 − q)·2π and x_2 = −π + q·2π; Max_iter is the maximum number of iterations, x_i(T) is the position of the i-th individual in the current generation, and x_i(T+1) is the updated position of the i-th individual in the next generation;
in formula (3), Δ(T, G_i(T)) is the non-uniform mutation step size, a mutation operator that adaptively adjusts the step via a Gaussian distribution; Δ(T, G_i(T)) is calculated as follows:
Δ(T, G_i(T)) = G_i(T)·(1 − p^((1 − T/Max_iter)^b))   (6)
G_i(T) = N((x_best(T) − x_i(T)), σ)   (7)
in formulas (6)–(7), y denotes G_i(T); p is a uniformly distributed random number in [0, 1]; Max_iter is the maximum number of iterations; b is a system coefficient that determines the degree of non-uniformity of the mutation, with b = 2; N is the number of individuals in the population (in formula (7), N(·, σ) denotes a Gaussian distribution with the given mean and standard deviation σ); σ is the Gaussian standard deviation.
The invention has the beneficial effects that:
1. The invention constructs intelligent optimization algorithms, or improves existing ones, by adopting an optimal-neighborhood perturbation strategy and a staged differential learning strategy comprising non-uniform Gaussian mutation, t-distribution mutation, and golden sine evolutionary mechanisms, markedly improving the algorithms' optimization precision, convergence speed, and adaptability.
2. The invention applies the constructed or improved algorithms to the IEEE CEC benchmark function test suite and to engineering design constrained optimization problems; experimental results show that the algorithms' optimization capability is markedly superior and effective.
Drawings
FIG. 1 is a flow chart of the group intelligent evolutionary optimization method for intelligent computing.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to a group intelligent evolutionary optimization method for intelligent computing which, as shown in FIG. 1, is specifically implemented according to the following steps:
step 1, setting initial parameters: the number of individuals in the population N, the maximum number of iterations Max_iter, and the individual dimension D; the initial position x_i (i = 1, 2, ..., N) of each individual is randomly generated within the search range;
Step 2, calculating the individual fitness values f(x_i) according to the objective function f(x);
Step 3, from the fitness values f(x_i) of the individuals in the population obtained in step 2, finding the optimal fitness value f_min and recording its position, i.e. the optimal individual position x_best(T);
Step 4, performing neighborhood disturbance on the optimal individual position in the population to generate a neighborhood individual position, calculating the fitness value of a new optimal individual and an old optimal individual, comparing the new solution and the old solution by adopting a greedy selection strategy, and replacing the original optimal individual position if the new solution is better; otherwise, the original optimal individual position is kept to obtain a new optimal individual position;
step 5, keeping the population size unchanged, dividing the optimization-evolution mechanism of the algorithm into three strategies by probability:
a. with probability 1/3, an individual executes the evolutionary strategy of the base algorithm;
b. with probability 1/3, an individual executes the non-uniform Gaussian mutation evolutionary strategy;
c. the remaining 1/3 of individuals split the run into two stages by iteration count: the first half executes t-distribution mutation, and the second half executes the golden sine evolutionary strategy;
obtaining a new individual position corresponding to the original individual position through three strategies, namely generating a new solution from the old solution;
step 6, evaluate the fitness value of each new individual obtained in step 5 according to the objective function f(x); using a greedy selection strategy, compare the fitness values of the original and new individuals, i.e. the old and new solutions, and if the new solution is better, replace the original individual with the new one; otherwise keep the original individual position; this yields the current position of the individual;
step 7, comparing whether the current iteration time T reaches the maximum iteration time, if T is less than or equal to Max _ iter, returning to the step 2, and continuing the optimization process of iterative evolution;
step 8, determining and outputting the final optimal value.
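Taken together, steps 1 through 8 can be sketched in Python. This is an illustrative sketch, not the patented implementation: the base-algorithm branch of strategy (a) is stubbed with a small random walk, and the neighborhood-perturbation form of formula (1), the Gaussian standard deviation, and the bound clipping are assumptions; the three-way probabilistic split, greedy selection, t-distribution mutation, and golden sine update follow the described steps.

```python
import numpy as np

def swarm_evolve(f, dim, n=30, max_iter=150, lb=-10.0, ub=10.0, seed=0):
    """Sketch of steps 1-8: init, best-neighbourhood perturbation, three
    probabilistic update strategies, and greedy selection."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n, dim))            # step 1: random initial positions
    fit = np.apply_along_axis(f, 1, x)           # step 2: fitness values
    best = x[fit.argmin()].copy()                # step 3: optimal individual position
    q = (np.sqrt(5.0) - 1) / 2                   # golden number for eq. (5)
    g1 = -np.pi + (1 - q) * 2 * np.pi
    g2 = -np.pi + q * 2 * np.pi
    for t in range(1, max_iter + 1):
        # step 4: perturb the best position and keep the better one (assumed form)
        nb = np.clip(best + rng.uniform(0, 1, dim) * 0.1 * best, lb, ub)
        if f(nb) < f(best):
            best = nb
        for i in range(n):                       # step 5: three strategies by probability
            m = rng.uniform()
            if m < 1 / 3:                        # a. base algorithm (stubbed random walk)
                new = x[i] + rng.normal(0.0, 0.1, dim)
            elif m > 2 / 3:                      # b. non-uniform Gaussian mutation, eqs (3),(6),(7)
                g = rng.normal(best - x[i], 1.0)
                p = rng.uniform()
                new = x[i] + g * (1 - p ** ((1 - t / max_iter) ** 2))
            elif t < max_iter / 2:               # c1. t-distribution mutation, eq. (4)
                new = x[i] + rng.standard_t(t, dim) * x[i]
            else:                                # c2. golden sine update, eq. (5)
                r1 = rng.uniform(0, 2 * np.pi)
                r2 = rng.uniform(0, np.pi)
                new = x[i] * abs(np.sin(r1)) + r2 * np.sin(r1) * np.abs(g1 * best - g2 * x[i])
            new = np.clip(new, lb, ub)
            fn = f(new)
            if fn < fit[i]:                      # step 6: greedy selection
                x[i], fit[i] = new, fn
        if fit.min() < f(best):                  # track the population best
            best = x[fit.argmin()].copy()
    return best, f(best)                         # step 8: final optimum
```

On a simple sphere function this loop steadily reduces the best fitness; swapping the stubbed branch for a real base algorithm (e.g. JAYA or particle swarm) would yield the improved variants the description mentions.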
Most basic intelligent optimization algorithms use the current optimal individual position as the target of each iteration, and that position is updated only when a strictly better one appears. Over the whole run the best individual is therefore updated rarely, particularly in the middle and later iterations, so optimization efficiency is low. Step 4 introduces an optimal-neighborhood perturbation strategy to address this: neighborhood positions are generated randomly near the optimal individual position, new and old solutions are compared with greedy selection, and the new solution replaces the original optimal position only if it is better; otherwise the original optimal position is kept. This both accelerates the algorithm's convergence and raises the probability of escaping local extremum regions. Step 4 is implemented according to the following steps:
step 4.1, apply a neighborhood perturbation to the optimal individual position x_best(T) in the population obtained in step 3 to generate a neighborhood individual position, and calculate the fitness value of the neighborhood individual position;
step 4.2, use a greedy selection strategy to select between the fitness values of the optimal individual position x_best(T) and the neighborhood individual position, obtaining the optimal individual position.
The calculation formula of the step 4.1 is as follows:
[Formula (1): the neighborhood individual position is generated by perturbing x_best(T) with the random number rand; the equation image is not reproduced in this text.]
in formula (1), rand is a uniformly distributed random number in [0, 1], and x_best(T) is the current-generation optimal individual position.
The calculation formula of step 4.2 is as follows:
x_best(T+1) = x_N(T), if f(x_N(T)) < f(x_best(T)); otherwise x_best(T+1) = x_best(T)   (2)
where x_N(T) denotes the neighborhood individual position generated in step 4.1.
in formula (2), x_best(T+1) is the updated next-generation optimal individual position, i.e. the current position of the optimal individual.
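Steps 4.1 and 4.2 can be sketched as follows. Since the equation image for formula (1) is not reproduced, the multiplicative perturbation form and the `scale` parameter below are assumptions; the greedy selection of formula (2) is taken directly from the description.

```python
import numpy as np

rng = np.random.default_rng(42)

def perturb_best(f, x_best, scale=0.1):
    """Step 4.1: perturb x_best to get a neighbourhood position (assumed form);
    step 4.2 / eq. (2): keep whichever of the two has the lower fitness."""
    rand = rng.uniform(0.0, 1.0, x_best.shape)      # rand ~ U[0, 1], as in eq. (1)
    x_nb = x_best * (1.0 + scale * (rand - 0.5))    # assumed neighbourhood perturbation
    return x_nb if f(x_nb) < f(x_best) else x_best  # greedy selection, eq. (2)
```

Because of the greedy comparison, the returned position never has worse fitness than the input, which is exactly why the perturbation can be applied every iteration without risk.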
For some complex multi-extremum optimization problems, most basic meta-heuristic intelligent optimization algorithms suffer from insufficient global search in the early stage, making them prone to local extrema, and from weak local fine-grained exploitation near the optimal solution in the later stage, resulting in low optimization precision. The method therefore probabilistically introduces, in step 5, a staged differential learning strategy to address this problem, balancing the global exploration and local exploitation capabilities of the meta-heuristic intelligent optimization algorithm and further improving its optimization precision and convergence performance. Step 5 comprises the following implementation steps:
step 5.1, let m = rand;
step 5.2, if m < 1/3, execute the evolutionary strategy of the base algorithm;
step 5.3, if m > 2/3, update the individual position by formula (3);
x_i(T+1) = x_i(T) + Δ(T, G_i(T))   (3)
step 5.4, if 1/3 ≤ m ≤ 2/3, determine whether the algorithm is exploring in the first half of the iterations or exploiting in the second half; if T < (1/2)·Max_iter, update the individual position by formula (4);
x_i(T+1) = x_i(T) + trnd(T)·x_i(T)   (4)
otherwise, updating the position of the individual by formula (5);
x_i(T+1) = x_i(T)·|sin(R_1)| + R_2·sin(R_1)·|x_1·x_best(T) − x_2·x_i(T)|   (5)
in formulas (3)–(5), m is a uniformly distributed random number in [0, 1]; Max_iter is the maximum number of iterations; x_i(T) is the position of the i-th individual in the current generation; x_i(T+1) is the updated position of the i-th individual in the next generation;
in formula (3), Δ(T, G_i(T)) is the non-uniform mutation step size, a mutation operator that adaptively adjusts the step via a Gaussian distribution; Δ(T, G_i(T)) is calculated as follows:
Δ(T, G_i(T)) = G_i(T)·(1 − p^((1 − T/Max_iter)^b))   (6)
G_i(T) = N((x_best(T) − x_i(T)), σ)   (7)
in formulas (6)–(7), y denotes G_i(T); p is a uniformly distributed random number in [0, 1]; Max_iter is the maximum number of iterations; b is a system coefficient that determines the degree of non-uniformity of the mutation, with b = 2; N is the number of individuals in the population (in formula (7), N(·, σ) denotes a Gaussian distribution with the given mean and standard deviation σ); σ is the Gaussian standard deviation.
The non-uniform Gaussian mutation of formulas (3) and (6)–(7) adaptively adjusts the mutation step using the optimal and current individual positions, a learning strategy that further enriches population diversity and improves the algorithm's convergence speed.
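The non-uniform Gaussian mutation can be illustrated as follows. The scaling factor of formula (6) is reconstructed here from the listed variables as the classic non-uniform mutation factor (1 − p^((1 − T/Max_iter)^b)) — an assumption, since the original equation is an image — and σ = 1.0 is an assumed default.

```python
import numpy as np

rng = np.random.default_rng(0)

def nonuniform_gaussian_step(x_i, x_best, t, max_iter, b=2, sigma=1.0):
    """Eqs (3), (6), (7): a Gaussian step with mean (x_best - x_i) and std sigma,
    scaled by a non-uniform factor that is near 1 early in the run (large moves)
    and shrinks to 0 as t approaches max_iter (fine local exploitation)."""
    g = rng.normal(loc=x_best - x_i, scale=sigma)     # eq. (7): G_i(T)
    p = rng.uniform()                                 # p ~ U[0, 1]
    factor = 1 - p ** ((1 - t / max_iter) ** b)       # eq. (6), reconstructed
    return x_i + g * factor                           # eq. (3)
```

Note that at t = max_iter the factor is exactly zero, so the mutation leaves the individual unchanged; this is the intended late-stage damping.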
In formula (4), a t-distributed random mutation trnd(T)·x_i(T) is added to the individual's position. Most values of trnd(T) fall within [−1, 1], so the original individual's information is fully used to perturb and update its position, which further increases population diversity; the remaining small fraction of values fall outside [−1, 1] and change the individual's position substantially, so the individual can break free of local extremum regions, improving the algorithm's global search capability.
Analysis of formula (4): a t distribution whose degrees of freedom equal the algorithm's iteration count is introduced into the position update as the mutation step. Early in the run the iteration count is small, so the t distribution produces large perturbations and the individual can escape local extremum regions; late in the run the iteration count is large, so the t distribution approaches a Gaussian, strengthening the algorithm's deep exploitation capability and thus its convergence precision.
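A minimal sketch of the t-distribution mutation of formula (4), drawing trnd(T) with NumPy's Student-t sampler and using the iteration count T as the degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(7)

def t_mutation_step(x_i, t):
    """Eq. (4): x_i(T+1) = x_i(T) + trnd(T) * x_i(T). Small t gives heavy tails
    (occasional large jumps out of local optima); large t approaches a Gaussian
    (fine perturbations around the current position)."""
    return x_i + rng.standard_t(df=t, size=x_i.shape) * x_i
```

Because the perturbation is multiplicative in x_i, a zero coordinate stays zero; this matches the description that the original individual's information drives the update.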
In formula (5), R_1 is a uniformly distributed random number in [0, 2π] that determines the movement distance of the individual in the next iteration; R_2 is a uniformly distributed random number in [0, π] that determines the update direction of the i-th individual in the next iteration; x_1 and x_2 are golden-section coefficients derived from the golden number
q = (√5 − 1)/2,
so that x_1 = −π + (1 − q)·2π and x_2 = −π + q·2π; these coefficients narrow the search space, helping each individual approach the optimum.
As analysis of formula (5) shows, introducing the golden sine evolutionary strategy into the position update lets every individual communicate fully with the optimal individual; the golden-section coefficients gradually narrow the search space, and the parameters R_1 and R_2 control the update distance and direction, further coordinating the algorithm's global search and local exploitation capabilities to improve its optimization precision and speed and obtain a more satisfactory optimization result.
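A sketch of the golden sine update of formula (5), with the golden-section coefficients computed from the golden number as stated; R_1 and R_2 are drawn fresh on each call:

```python
import numpy as np

rng = np.random.default_rng(3)

q = (np.sqrt(5.0) - 1) / 2              # golden number
x1 = -np.pi + (1 - q) * 2 * np.pi       # golden-section coefficients
x2 = -np.pi + q * 2 * np.pi

def golden_sine_step(x_i, x_best):
    """Eq. (5): sine-modulated move of x_i toward x_best. R1 in [0, 2*pi]
    controls the move distance, R2 in [0, pi] the update direction."""
    r1 = rng.uniform(0.0, 2.0 * np.pi)
    r2 = rng.uniform(0.0, np.pi)
    return x_i * abs(np.sin(r1)) + r2 * np.sin(r1) * np.abs(x1 * x_best - x2 * x_i)
```

Note that x_1 and x_2 are symmetric about zero (x_1 + x_2 = 0), so the bracketed term |x_1·x_best − x_2·x_i| measures a golden-section-weighted gap between the individual and the optimum.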
The invention has the advantages that:
By adopting an optimal-neighborhood perturbation strategy and a staged differential learning strategy comprising non-uniform Gaussian mutation, t-distribution mutation, and golden sine evolutionary mechanisms, the method constructs new intelligent optimization algorithms or improves existing ones, markedly improving optimization precision, convergence speed, and adaptability. The constructed or improved algorithms have been applied to the IEEE CEC benchmark function test suite and to engineering design constrained optimization problems; experimental results show that their optimization capability is markedly superior and effective, and the method is simple to implement, does not increase the algorithm's time complexity, and does not slow execution.
The method has been used to improve basic intelligent optimization algorithms such as JAYA, particle swarm optimization, and the firefly algorithm, and the improved algorithms have been applied to the IEEE CEC benchmark function test suite and to engineering design constrained optimization problems of varying difficulty, including the three-bar truss, tension/extension spring, cantilever beam, welded beam, pressure vessel, speed reducer, and gear train problems.

Claims (5)

1. A group intelligent evolutionary optimization method facing intelligent computing is characterized by comprising the following steps:
step 1, setting initial parameters: the number of individuals in the population N, the maximum number of iterations Max_iter, and the individual dimension D; the initial position x_i (i = 1, 2, ..., N) of each individual is randomly generated within the search range;
Step 2, calculating the individual fitness values f(x_i) according to the objective function f(x);
Step 3, from the fitness values f(x_i) obtained in step 2, finding the optimal fitness value f_min and recording its position, i.e. the optimal individual position x_best(T);
Step 4, performing neighborhood disturbance on the optimal individual position in the population to generate a neighborhood individual position, calculating the fitness value of new and old optimal individuals, and comparing new and old solutions by adopting a greedy selection strategy to obtain a new optimal individual position;
step 5, keeping the population size unchanged, dividing the optimization-evolution mechanism of the algorithm into three strategies by probability:
a. with probability 1/3, an individual executes the evolutionary strategy of the base algorithm;
b. with probability 1/3, an individual executes the non-uniform Gaussian mutation evolutionary strategy;
c. the remaining 1/3 of individuals split the run into two stages by iteration count: the first half executes t-distribution mutation, and the second half executes the golden sine evolutionary strategy;
obtaining a new individual position corresponding to the original individual position through the three strategies;
step 6, solving the fitness value of the new individual position obtained in the step 5 according to an objective function f (x), and selecting the fitness values of the original individual position and the new individual position by adopting a greedy selection strategy to obtain the current position of the individual;
step 7, comparing whether the current iteration time T reaches the maximum iteration time, if T is less than or equal to Max _ iter, returning to the step 2, and continuing the optimization process of iterative evolution;
step 8, determining and outputting the final optimal value.
2. The method of claim 1, wherein the step 4 comprises the following steps:
step 4.1, apply a neighborhood perturbation to the optimal individual position x_best(T) obtained in step 3 to generate a neighborhood individual position, and calculate the fitness value of the neighborhood individual position;
step 4.2, use a greedy selection strategy to select between the fitness values of the optimal individual position x_best(T) and the neighborhood individual position, obtaining the optimal individual position.
3. The method of claim 2, wherein the formula of step 4.1 is as follows:
[Formula (1): the neighborhood individual position is generated by perturbing x_best(T) with the random number rand; the equation image is not reproduced in this text.]
in formula (1), rand is a uniformly distributed random number in [0, 1], and x_best(T) is the current-generation optimal individual position.
4. The method of claim 3, wherein the formula of step 4.2 is as follows:
x_best(T+1) = x_N(T), if f(x_N(T)) < f(x_best(T)); otherwise x_best(T+1) = x_best(T)   (2)
where x_N(T) denotes the neighborhood individual position generated in step 4.1.
in formula (2), x_best(T+1) is the updated next-generation optimal individual position, i.e. the current position of the optimal individual.
5. The method of claim 1, wherein the step 5 comprises the following steps:
step 5.1, let m = rand, where m is a uniformly distributed random number in [0, 1];
step 5.2, if m < 1/3, execute the evolutionary strategy of the base algorithm;
step 5.3, if m > 2/3, update the individual position by formula (3);
x_i(T+1) = x_i(T) + Δ(T, G_i(T))   (3)
step 5.4, if 1/3 ≤ m ≤ 2/3, determine whether the algorithm is exploring in the first half of the iterations or exploiting in the second half; if T < (1/2)·Max_iter, update the individual position by formula (4);
x_i(T+1) = x_i(T) + trnd(T)·x_i(T)   (4)
otherwise, updating the position of the individual by formula (5);
x_i(T+1) = x_i(T)·|sin(R_1)| + R_2·sin(R_1)·|x_1·x_best(T) − x_2·x_i(T)|   (5)
in formula (5), R_1 is a uniformly distributed random number in [0, 2π]; R_2 is a uniformly distributed random number in [0, π]; x_1 and x_2 are golden-section coefficients derived from the golden number
q = (√5 − 1)/2,
so that x_1 = −π + (1 − q)·2π and x_2 = −π + q·2π; x_i(T) is the position of the i-th individual in the current generation, and x_i(T+1) is the updated position of the i-th individual in the next generation;
in formula (3), Δ(T, G_i(T)) is the non-uniform mutation step size, a mutation operator that adaptively adjusts the step via a Gaussian distribution; Δ(T, G_i(T)) is calculated as follows:
Δ(T, G_i(T)) = G_i(T)·(1 − p^((1 − T/Max_iter)^b))   (6)
G_i(T) = N((x_best(T) − x_i(T)), σ)   (7)
in formulas (6)–(7), y denotes G_i(T); p is a uniformly distributed random number in [0, 1]; Max_iter is the maximum number of iterations; b is a system coefficient that determines the degree of non-uniformity of the mutation, with b = 2; N is the number of individuals in the population (in formula (7), N(·, σ) denotes a Gaussian distribution with the given mean and standard deviation σ); σ is the Gaussian standard deviation.
CN202111005276.5A 2021-08-30 2021-08-30 Group intelligent evolutionary engineering design constraint optimization method for intelligent computation Active CN113722853B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111005276.5A CN113722853B (en) 2021-08-30 2021-08-30 Group intelligent evolutionary engineering design constraint optimization method for intelligent computation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111005276.5A CN113722853B (en) 2021-08-30 2021-08-30 Group intelligent evolutionary engineering design constraint optimization method for intelligent computation

Publications (2)

Publication Number Publication Date
CN113722853A true CN113722853A (en) 2021-11-30
CN113722853B CN113722853B (en) 2024-03-05

Family

ID=78679114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111005276.5A Active CN113722853B (en) 2021-08-30 2021-08-30 Group intelligent evolutionary engineering design constraint optimization method for intelligent computation

Country Status (1)

Country Link
CN (1) CN113722853B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0401687A2 (en) * 1989-06-08 1990-12-12 Hitachi, Ltd. Placement optimizing method/apparatus and apparatus for designing semiconductor devices
WO2013176784A1 (en) * 2012-05-24 2013-11-28 University Of Southern California Optimal strategies in security games
CN107045491A (en) * 2017-06-06 2017-08-15 西南民族大学 Double iterative multi-scale quantum wavelet transforms optimization method
CN107145934A (en) * 2017-05-09 2017-09-08 华侨大学 A kind of artificial bee colony optimization method based on enhancing local search ability
CN107292381A (en) * 2017-07-12 2017-10-24 东北电力大学 A kind of method that mixed biologic symbiosis for single object optimization is searched for
CN109214499A (en) * 2018-07-27 2019-01-15 昆明理工大学 A kind of difference searching algorithm improving optimizing strategy
CN110598831A (en) * 2019-08-14 2019-12-20 西安理工大学 Improved backtracking search optimization algorithm based on multiple strategies
CN111445092A (en) * 2020-04-21 2020-07-24 三峡大学 Multi-microgrid optimization method based on improved JAYA algorithm
CN111709511A (en) * 2020-05-07 2020-09-25 西安理工大学 Harris eagle optimization algorithm based on random unscented Sigma point variation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Chuangye; He Dengxu; Mo Yuanbin: "Constrained optimization solving and applications of a multi-population multi-layer co-evolutionary algorithm", Application Research of Computers, no. 05 *
Deng Haichao; Mao Yi; Peng Wenqiang; Liu Xiaoli; Liang Shan; Fan Xing: "Distribution network reconfiguration based on an improved artificial bee colony algorithm", Journal of Electric Power System and Automation (Proceedings of the CSU-EPSA), no. 07 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117850213A (en) * 2024-03-08 2024-04-09 岳正检测认证技术有限公司 Enhanced PID control optimization method for killing robot
CN117850213B (en) * 2024-03-08 2024-05-24 羽禾人工智能(嘉兴)有限公司 Enhanced PID control optimization method for killing robot

Also Published As

Publication number Publication date
CN113722853B (en) 2024-03-05

Similar Documents

Publication Publication Date Title
Guo et al. The enhanced genetic algorithms for the optimization design
CN104376581B (en) A kind of Gaussian Mixture using adaptive resampling is without mark particle filter algorithm
Zhi et al. A discrete PSO method for generalized TSP problem
CN115470704B (en) Dynamic multi-objective optimization method, device, equipment and computer readable medium
Sun et al. Multiple sequence alignment with hidden Markov models learned by random drift particle swarm optimization
CN111709511A (en) Harris eagle optimization algorithm based on random unscented Sigma point variation
CN110210072B (en) Method for solving high-dimensional optimization problem based on approximate model and differential evolution algorithm
CN114444646A (en) Function testing method based on improved multi-target particle swarm algorithm
CN111222799A (en) Assembly sequence planning method based on improved particle swarm optimization
WO2013008345A1 (en) Optimum solution searching method and optimum solution searching device
CN113722853A (en) Intelligent calculation-oriented group intelligent evolutionary optimization method
CN102937946B (en) A kind of complicated function minimal value searching method based on constraint regular pattern
Su et al. Analysis and improvement of GSA’s optimization process
Li et al. An improved algorithm optimization algorithm based on RungeKutta and golden sine strategy
Cao et al. Quantile-guided multi-strategy algorithm for dynamic multiobjective optimization
CN114117917B (en) Multi-objective optimization ship magnetic dipole array modeling method
Liu et al. A quantum computing based numerical method for solving mixed-integer optimal control problems
CN106934453B (en) Method for determining cubic material parent phase and child phase orientation relation
CN115775038A (en) Short-term load prediction method based on IGWO optimization LSSVM
CN115146389B (en) Permanent magnet magnetic levitation train dynamics feature modeling method
CN116345495B (en) Power plant unit frequency modulation optimization method based on data analysis and modeling
Wang et al. Double elite co-evolutionary genetic algorithm
Wang et al. Research on Robot Path Planning Based on Simulated Annealing Algorithm
CN114970719B (en) Internet of things operation index prediction method based on improved SVR model
Kanimozhi et al. Travelling Salesman Problem Solved using Genetic Algorithm Combined Data Perturbation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant