CN115222006A - Numerical function optimization method based on improved particle swarm optimization algorithm - Google Patents

Numerical function optimization method based on improved particle swarm optimization algorithm

Info

Publication number
CN115222006A
Authority
CN
China
Prior art keywords
algorithm
particles
particle swarm
population
optimization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110403163.4A
Other languages
Chinese (zh)
Inventor
熊聪聪 (Xiong Congcong)
杨晓艺 (Yang Xiaoyi)
王丹 (Wang Dan)
赵青 (Zhao Qing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University of Science and Technology
Original Assignee
Tianjin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University of Science and Technology filed Critical Tianjin University of Science and Technology
Priority to CN202110403163.4A priority Critical patent/CN115222006A/en
Publication of CN115222006A publication Critical patent/CN115222006A/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10: Complex mathematical operations
    • G06F 17/15: Correlation function computation including computation of convolution operations

Abstract

The invention relates to the field of intelligent computing, and in particular to a numerical function optimization method based on an improved particle swarm optimization algorithm. Its main technical features are as follows: the particle swarm optimization algorithm is improved so that the diversity of the swarm is enhanced and the search range of the particles is expanded. Under the optimization mode of the standard particle swarm algorithm, the initial search space loses its guiding significance for the whole algorithm after only a few iterations, which makes the algorithm prone to converging to a local minimum, losing diversity too quickly, and being sensitive to its parameters. The improved algorithm addresses these issues by expanding the search range and enhancing the diversity of the particle swarm: a spatial search strategy combining local search and global search is added, i.e., the initialization stage and the velocity and position iteration formulas of the particle swarm algorithm are modified using Bernstein particles and a reverse-learning strategy.

Description

Numerical function optimization method based on improved particle swarm optimization algorithm
Technical Field
The invention belongs to the field of intelligent computing and particularly relates to a numerical function optimization method based on an improved particle swarm optimization algorithm.
Background
In recent years, an emerging evolutionary computing technique known as swarm intelligence has attracted increasing attention from researchers. It is closely related to artificial life, and in particular to evolution strategies and genetic algorithms. Swarm intelligence exploits the advantages of a population and offers a new way of searching for solutions to complex problems without centralized control or a global model. Swarm intelligence algorithms have already been applied in practical scenarios. However, the literature shows that optimizing such algorithms with a parallel spatial search that combines a local search strategy and a global search strategy is still uncommon in swarm intelligence. Taking genetic algorithms as an example, it is easy to see that the initial search space loses its guiding significance for the whole algorithm after only a few iterations, because crossover within the population lets individuals evolve only in the space spanned by the known individuals of the population. For this reason, optimizing an algorithm with a combination of spatial local and global search strategies can be embodied in a swarm intelligence optimization algorithm.
The invention takes a representative swarm intelligence algorithm, the particle swarm optimization algorithm, as its basis and improves on its shortcomings. Considering both the expansion of the search range and the enhancement of population diversity, it adds a spatial search strategy that combines local and global search, proposes an improved particle swarm optimization algorithm applied to numerical function optimization, and verifies the feasibility and effectiveness of the improved algorithm on standard test functions.
Disclosure of Invention
The invention aims to solve problems that arise in numerical function optimization, such as slow convergence, low convergence accuracy, premature convergence to a local minimum, overly rapid loss of diversity, and parameter sensitivity, by providing a numerical function optimization method based on an improved particle swarm optimization algorithm. A parallel spatial search mode combining a spatial local search strategy and a spatial global search strategy is added to the existing particle swarm optimization algorithm to modify the particle update formulas, which to some extent alleviates the tendency of particles to fall into local optima during numerical function optimization. The convergence of the algorithm is superior to that of the traditional particle swarm optimization algorithm, and at a given task scale the cost and time required to complete the numerical function optimization are lower.
Particle Swarm Optimization (PSO) is a global stochastic optimization algorithm based on swarm intelligence, proposed by Eberhart and Kennedy in 1995. It simulates the foraging behavior of a flock of birds: the search space of a problem is treated as the birds' flight space, each bird is abstracted as a particle representing a candidate solution, and the optimal solution being sought corresponds to the food being searched for. The algorithm assigns each particle an initial position and velocity, and each particle updates its position by updating its velocity. Through iterative search, the population keeps finding better particle positions and thereby obtains a better solution to the optimization problem. Each particle has two attributes, velocity and position: the velocity determines the direction and speed of movement, and the position represents the candidate solution it currently occupies. Each particle searches for an optimal solution independently in the search space and records it as its current individual extremum; the individual extrema are shared across the whole swarm, and the best of them is taken as the current global optimum of the entire particle swarm. Each particle then adjusts its velocity and position according to its own current individual extremum and the current global optimum shared by the whole swarm. The formulas of the particle swarm optimization algorithm are as follows:
v_i^{t+1} = ω v_i^t + c_1 r_1 (p_i^t - x_i^t) + c_2 r_2 (p_g^t - x_i^t)
x_i^{t+1} = x_i^t + v_i^{t+1}
wherein: ω is the inertia weight; r_1 and r_2 are random numbers uniformly distributed in (0,1); c_1 and c_2 are learning factors; v_i^t and p_i^t are the velocity and the historical best position of particle i at the t-th iteration; and p_g^t is the best position of the whole population at the t-th iteration.
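For reference, a minimal NumPy sketch of this standard update is shown below. The function name standard_pso_step and the fixed parameter values (w = 0.729, c1 = c2 = 1.49445, a common choice in the PSO literature) are illustrative assumptions rather than values prescribed by the invention.

```python
import numpy as np

def standard_pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
    """One velocity/position update of standard PSO, following the formulas above.

    x, v   : (N, D) arrays of particle positions and velocities
    pbest  : (N, D) array of personal best positions p_i^t
    gbest  : (D,)   array, best position of the whole population p_g^t
    """
    n, d = x.shape
    r1 = np.random.rand(n, d)   # r_1 ~ U(0,1)
    r2 = np.random.rand(n, d)   # r_2 ~ U(0,1)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new
    return x_new, v_new
```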
In the optimization mode of the particle swarm algorithm, v_i represents the velocity; it randomly influences the direction and step of the move from the current position, so that the algorithm searches over a given region. If the evolutionary iteration of the algorithm is understood as an adaptive process, the particle position x_i is not replaced by new particles but varies adaptively according to the velocity vector v_i. What is unique about this is that at each iteration every particle flies only in the direction that the population's experience deems good, i.e., the basic particle swarm algorithm performs a "conscious" evolution.
In the standard particle swarm optimization algorithm, the movement direction of a particle is mainly determined by its own historical best position and the global best position. To increase population diversity and expand the search range, the proposed algorithm introduces a mode that combines a spatial local search strategy with a spatial global search strategy: the reverse-learning idea is added in the initialization stage, and in the evolution stage the Bernstein particle of the population best particle at the t-th iteration and the reverse particle of a randomly selected particle in the population are added as guide particles. This increases population diversity, expands the search range, and helps the algorithm quickly find the global optimum.
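The two guide particles just described can be sketched as follows. Because their exact expressions appear only as equation images in the original, both constructions in this sketch (scaling the population best by a Bernstein basis value, and taking the generalized opposite point of a randomly selected particle) are assumptions for illustration, as are the helper names bernstein_basis and guide_particles.

```python
import numpy as np
from math import comb

def bernstein_basis(k, n, t):
    """Standard Bernstein basis polynomial B_{k,n}(t) = C(n,k) * t^k * (1-t)^(n-k)."""
    return comb(n, k) * t**k * (1 - t)**(n - k)

def guide_particles(gbest, population, lower, upper):
    """Build the two guide particles described above (assumed forms).

    - "Bernstein particle" of the population best: taken here as the best
      position scaled by B_{k,3}(beta), with beta ~ U(0,1) and k uniform on {1,2,3}.
    - "reverse particle" of a randomly selected swarm member: taken here as the
      generalized opposite point k*(a + b) - x with k ~ U(0,1).
    """
    beta = np.random.rand()
    k = np.random.randint(1, 4)                      # k in {1, 2, 3}
    b_gbest = bernstein_basis(k, 3, beta) * gbest    # Bernstein guide (assumed form)

    x_rand = population[np.random.randint(len(population))]
    k_r = np.random.rand()
    x_reverse = k_r * (lower + upper) - x_rand       # reverse guide (assumed form)
    return b_gbest, x_reverse
```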
The algorithm is adjusted on the basis of the optimization mode of the particle swarm algorithm, and the velocity and position iteration formulas are modified as follows:
[modified velocity and position iteration formulas (given only as an image in the original and not reproduced here)]
the improved particle swarm optimization algorithm increases the diversity of the population in the initialization stage and improves the convergence rate of the population; in the evolution stage, the speed and position iterative formula can ensure that the particles follow the optimization mode of the algorithm, the position is adjusted according to the speed, the optimization range is expanded, and the convergence rate is improved.
In the experiments, to verify whether the improvement is effective, two classes of Benchmark international standard test functions, unimodal and multimodal, were selected, and the optimization effect of the improved algorithm was verified on them. The invention increases the probability of finding the optimum by increasing population diversity, which helps the algorithm escape from local optima. The experiments show that, compared with other algorithms, the method converges faster and reaches higher accuracy.
Drawings
FIG. 1 is a flow chart of the particle swarm algorithm of the present invention
FIG. 2 is a graph showing the comparison of the results of the present invention and the original algorithm on a unimodal test function
FIG. 3 is a graph comparing the results of the present invention and the original algorithm on a multi-peak test function
Detailed Description
The invention relates to a numerical function optimization method based on an improved particle swarm optimization algorithm, which comprises the following steps:
Step 1: generate the initial population. The quality of the initial population affects the search speed, and a good initial population helps the algorithm find an optimal solution quickly. Usually, when solving for x, the initial value of x is either an educated guess accumulated from experience or a purely random one. On this basis, the opposite value of x can be used at the same time to try to obtain a better solution, so that the next generation of x approaches the optimal solution faster. A spatial global search strategy, i.e. the idea of reverse learning, is adopted: during population evolution, every time a particle finds a current best position it also generates the corresponding reverse position, and if the fitness value of the reverse position is better, the particles with better fitness are selected to form the initial population. Let the position of the i-th particle in the population at the t-th iteration be given; the position of the corresponding reverse particle can then be defined as:
[reverse-particle position formula (given only as an image in the original and not reproduced here)]
wherein x_ij ∈ [a_j, b_j]; k, k_1 and k_2 are random numbers in (0,1); and [a_j, b_j] is the range of the j-th dimension of x_ij, given by a_j(t) = min_i(x_ij(t)) and b_j(t) = max_i(x_ij(t)).
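A minimal sketch of such a reverse-learning (opposition-based) initialization is given below. Since the exact roles of k, k_1 and k_2 appear only in the unreproduced formula, the sketch uses the common generalized-opposition form x' = k(a + b) - x with dynamic per-dimension bounds; this form, the function name opposition_init, and the Sphere objective in the usage lines are assumptions for illustration.

```python
import numpy as np

def opposition_init(objective, pop_size, dim, lower, upper):
    """Initial swarm built from random particles plus their reverse (opposite)
    positions, keeping the fitter half (the spatial global search strategy)."""
    x = lower + np.random.rand(pop_size, dim) * (upper - lower)

    # dynamic per-dimension bounds a_j(t), b_j(t) taken from the current particles
    a = x.min(axis=0)
    b = x.max(axis=0)

    k = np.random.rand(pop_size, 1)        # k ~ U(0,1)
    x_rev = k * (a + b) - x                # reverse positions (assumed generalized form)

    candidates = np.vstack([x, x_rev])
    fitness = np.apply_along_axis(objective, 1, candidates)
    keep = np.argsort(fitness)[:pop_size]  # minimization: keep the best half
    return candidates[keep]

# usage sketch: 40 particles for a 30-dimensional Sphere function
sphere = lambda z: float(np.sum(z ** 2))
swarm = opposition_init(sphere, pop_size=40, dim=30, lower=-100.0, upper=100.0)
```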
Step 2: the inertia weight and the learning factors control the behavior of the particles during the search. The weight effectively controls the convergence speed of the particle swarm, while the learning factors make full use of the "genetic knowledge" of the whole population and of individuals and effectively guide the movement direction of the particles through the social interaction among them. The proposed algorithm changes the constant learning factors to semi-constant ones: the learning factor c_3 is generated by a Bernstein polynomial, c_1 and c_2 preserve the independence of the particles, and reducing the second learning factor lowers the probability of particle aggregation. The Bernstein polynomial is expressed as:
[Bernstein polynomial expression for c_3 (given only as an image in the original and not reproduced here)]
wherein β ~ U(0,1), k_1 ~ U(0,1), and k is drawn uniformly from {1, 2, 3}.
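The sketch below shows how such a semi-constant learning factor could be generated. Because the Bernstein expression above is available only as an image, the sketch evaluates the standard Bernstein basis polynomial and fixes its degree at n = 3 (so that k ranges over {1, 2, 3}); this choice, and the constant values assigned to c1 and c2 in the usage lines, are assumptions.

```python
import random
from math import comb

def bernstein_basis(k, n, t):
    """Standard Bernstein basis polynomial B_{k,n}(t) = C(n,k) * t^k * (1-t)^(n-k)."""
    return comb(n, k) * t**k * (1 - t)**(n - k)

def sample_c3():
    """Draw the semi-constant learning factor c3 from a Bernstein basis value,
    with beta ~ U(0,1) and k uniform on {1, 2, 3} as stated in the text.
    Fixing the polynomial degree at n = 3 is an assumption."""
    beta = random.random()
    k = random.randint(1, 3)
    return bernstein_basis(k, 3, beta)

# usage sketch: c1 and c2 stay constant, c3 is re-sampled at each iteration
c1, c2 = 1.49445, 1.49445
c3 = sample_c3()
```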
Step 3: in the standard particle swarm optimization algorithm, the movement direction of a particle is mainly determined by its own historical best position and the global best position. To increase population diversity, the algorithm introduces a spatial local search strategy: the Bernstein particle of the population best particle at the t-th iteration and the reverse particle of a randomly selected particle in the population are added as guide particles, which expands the search range and helps the algorithm quickly find the global optimum. The optimization mode of particle swarm optimization is adjusted accordingly, so that in the improved PSO algorithm the velocity and position iteration formulas are modified as follows:
[modified velocity and position iteration formulas (given only as an image in the original and not reproduced here)]
wherein c_1, c_2 and c_3 are learning factors; ω is the inertia weight; r_1, r_2 and r_3 are random numbers in (0,1); v_i^t and p_i^t are the velocity and the historical best position of particle i at the t-th iteration; p_g^t is the best position of the whole population at the t-th iteration; and B(p_g^t) denotes the Bernstein particle of the population best particle at the t-th iteration.
Step 4: in the experiments, to verify whether the improvement is effective, two classes of Benchmark international standard test functions, unimodal and multimodal, are selected, and the optimization effect of the improved algorithm is verified on them.
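To illustrate how such a verification is typically set up, a small test harness is sketched below. The Sphere (unimodal) and Rastrigin (multimodal) functions are common members of the Benchmark suite and stand in for whichever specific functions the experiments used; the optimizer(f, dim) interface is likewise an assumption.

```python
import numpy as np

def sphere(z):
    """Unimodal Benchmark function: f(x) = sum(x_i^2), global minimum 0 at the origin."""
    return float(np.sum(z ** 2))

def rastrigin(z):
    """Multimodal Benchmark function: f(x) = 10*d + sum(x_i^2 - 10*cos(2*pi*x_i))."""
    return float(10 * z.size + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z)))

def evaluate(optimizer, functions, runs=30, dim=30):
    """Run an optimizer several times per function and report the mean and
    standard deviation of the best value found, the usual way such
    comparisons are tabulated."""
    for name, f in functions.items():
        best_values = [optimizer(f, dim) for _ in range(runs)]
        print(f"{name:10s} mean={np.mean(best_values):.3e} std={np.std(best_values):.3e}")

# usage sketch: any optimizer(f, dim) -> best objective value can be plugged in, e.g.
# evaluate(improved_pso, {"Sphere": sphere, "Rastrigin": rastrigin})
```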

Claims (4)

1. A numerical function optimization method based on an improved particle swarm optimization algorithm, characterized by comprising the following steps:
Step 1: the quality of the initial population affects the search speed, and a good initial population helps the algorithm find an optimal solution quickly. Usually, when solving for x, the initial value of x is either an educated guess accumulated from experience or a purely random one. On this basis, the opposite value of x can be used at the same time to try to obtain a better solution, so that the next generation of x approaches the optimal solution faster. A spatial global search strategy, i.e. the idea of reverse learning, is adopted: during population evolution, every time a particle finds a current best position it also generates the corresponding reverse position, and if the fitness value of the reverse position is better, the particles with better fitness are selected to form the initial population.
Step 2: the inertia weight and the learning factors control the behavior of the particles during the search. The weight effectively controls the convergence speed of the particle swarm, while the learning factors make full use of the "genetic knowledge" of the whole population and of individuals and effectively guide the movement direction of the particles through the social interaction among them. The proposed algorithm changes the constant learning factors to semi-constant ones: the learning factor c_3 is generated by a Bernstein polynomial, c_1 and c_2 preserve the independence of the particles, and reducing the second learning factor lowers the probability of particle aggregation.
Step 3: in the standard particle swarm optimization algorithm, the movement direction of a particle is mainly determined by its own historical best position and the global best position. To increase population diversity, the proposed algorithm introduces a spatial local search strategy: the Bernstein particle of the population best particle at the t-th iteration and the reverse particle of a randomly selected particle in the population are added as guide particles, which expands the search range and helps the algorithm quickly find the global optimum.
Step 4: in the experiments, to verify whether the improvement is effective, two classes of Benchmark international standard test functions, unimodal and multimodal, are selected, and the optimization effect of the improved algorithm is verified on them.
2. The numerical function optimization method based on the improved particle swarm optimization algorithm according to claim 1, further comprising, in step 1, generating the initial population with a reverse-learning strategy. Let X_i(t) = (x_i1, x_i2, ..., x_iD) be the position of the i-th particle in the population at the t-th iteration; the position of the corresponding reverse particle is then defined by formula (1):
[formula (1) (given only as an image in the original and not reproduced here)]
wherein x_ij ∈ [a_j, b_j]; k, k_1 and k_2 are random numbers in (0,1); and [a_j, b_j] is the range of the j-th dimension of x_ij, given by
a_j(t) = min_i(x_ij(t)), b_j(t) = max_i(x_ij(t)) (2).
3. The numerical function optimization method based on the improved particle swarm optimization algorithm according to claim 1, further comprising, in step 2, generating the learning factor c_3 by a Bernstein polynomial, expressed as formula (3):
[formula (3) (given only as an image in the original and not reproduced here)]
wherein β ~ U(0,1), k_1 ~ U(0,1), and k is drawn uniformly from {1, 2, 3}.
4. The numerical function optimization method based on the improved particle swarm optimization algorithm according to claim 1, further comprising, in step 3, adjusting the proposed particle-swarm-based optimization mode so that, in the improved PSO algorithm, the velocity and position iteration formulas are modified as follows:
[velocity and position iteration formulas (given only as an image in the original and not reproduced here)]
wherein c_1, c_2 and c_3 are learning factors; ω is the inertia weight; r_1, r_2 and r_3 are random numbers in (0,1); v_i^t and p_i^t are the velocity and the historical best position of particle i at the t-th iteration; p_g^t is the best position of the whole population at the t-th iteration; and B(p_g^t) denotes the Bernstein particle of the population best particle at the t-th iteration.
CN202110403163.4A 2021-04-15 2021-04-15 Numerical function optimization method based on improved particle swarm optimization algorithm Pending CN115222006A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110403163.4A CN115222006A (en) 2021-04-15 2021-04-15 Numerical function optimization method based on improved particle swarm optimization algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110403163.4A CN115222006A (en) 2021-04-15 2021-04-15 Numerical function optimization method based on improved particle swarm optimization algorithm

Publications (1)

Publication Number Publication Date
CN115222006A true CN115222006A (en) 2022-10-21

Family

ID=83605430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110403163.4A Pending CN115222006A (en) 2021-04-15 2021-04-15 Numerical function optimization method based on improved particle swarm optimization algorithm

Country Status (1)

Country Link
CN (1) CN115222006A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116432687A (en) * 2022-12-14 2023-07-14 江苏海洋大学 Group intelligent algorithm optimization method
CN116362521A (en) * 2023-05-29 2023-06-30 天能电池集团股份有限公司 Intelligent factory application level production scheduling method for high-end battery
CN116362521B (en) * 2023-05-29 2023-08-22 天能电池集团股份有限公司 Intelligent factory application level production scheduling method for high-end battery

Similar Documents

Publication Publication Date Title
Satapathy et al. Data clustering based on teaching-learning-based optimization
Li et al. A clustering particle swarm optimizer for dynamic optimization
Valdez et al. Modular neural networks architecture optimization with a new nature inspired method using a fuzzy combination of particle swarm optimization and genetic algorithms
Tsai et al. Particle swarm optimization with selective particle regeneration for data clustering
Liu et al. An adaptive online parameter control algorithm for particle swarm optimization based on reinforcement learning
Abd-Alsabour A review on evolutionary feature selection
Li et al. Hybrid optimization algorithm based on chaos, cloud and particle swarm optimization algorithm
Sarangi et al. A hybrid differential evolution and back-propagation algorithm for feedforward neural network training
CN115222006A (en) Numerical function optimization method based on improved particle swarm optimization algorithm
CN111553469A (en) Wireless sensor network data fusion method, device and storage medium
CN110147890A (en) A kind of method and system based on lion group's algorithm optimization extreme learning machine integrated study
Wang et al. A new chaotic starling particle swarm optimization algorithm for clustering problems
Zhang et al. Optimizing parameters of support vector machines using team-search-based particle swarm optimization
Irmak et al. Training of the feed-forward artificial neural networks using butterfly optimization algorithm
Yu et al. Distributed generation and control of persistent formation for multi-agent systems
Urade et al. Study and analysis of particle swarm optimization: a review
Martinez-Soto et al. Fuzzy logic controllers optimization using genetic algorithms and particle swarm optimization
Wu et al. Multiobjective optimization strategy of WSN coverage based on IPSO-IRCD
D’Ambrosio et al. Optimizing cellular automata through a meta-model assisted memetic algorithm
Aydın et al. A configurable generalized artificial bee colony algorithm with local search strategies
CN114662638A (en) Mobile robot path planning method based on improved artificial bee colony algorithm
Ruz et al. Reconstruction of Boolean regulatory models of flower development exploiting an evolution strategy
Jafarpour et al. A hybrid method for optimization (discrete PSO+ CLA)
CN112884117B (en) RTID-PSO method and system for random topology
Shen et al. Differential evolution with spatially neighbourhood best search in dynamic environment

Legal Events

Date Code Title Description
PB01 Publication