CN112949154B - Parallel asynchronous particle swarm optimization method and system and electronic equipment - Google Patents

Parallel asynchronous particle swarm optimization method and system and electronic equipment

Info

Publication number: CN112949154B
Authority: CN (China)
Legal status: Active
Application number: CN202110296370.4A
Other languages: Chinese (zh)
Other versions: CN112949154A
Inventors: 辛靖豪, 于丽英, 李柠
Assignee (original and current): Shanghai Jiaotong University
Application filed by Shanghai Jiaotong University
Priority: CN202110296370.4A
Publication of application: CN112949154A
Grant published as: CN112949154B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/20: Design optimisation, verification or simulation
    • G06F 30/25: Design optimisation, verification or simulation using particle-based methods
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]


Abstract

The invention provides a parallel asynchronous particle swarm optimization method, system and electronic device. The method comprises the following steps: establishing a fitness function for the target to be optimized, the fitness function being used to evaluate a decision variable; dividing the particle swarm into groups, and randomly initializing the initial positions, optimal values and diverse optimization parameters of the particles in each group; establishing an information sharing mechanism for the particle groups, the mechanism being used to share the optimal value of each group; assigning each particle group to a different CPU core for distributed parallel iterative computation, and asynchronously updating the historical optimal value of the particle cluster according to the information sharing mechanism; and when the number of global iterations is greater than or equal to the iteration-count threshold, ending the iterative updating of each particle group and outputting the cluster optimal value as the final optimization result. Compared with the traditional particle swarm algorithm, the method improves the optimization performance and robustness of the algorithm, reduces its computation amount, improves its computational efficiency, and is applicable to a variety of complex optimization scenarios.

Description

Parallel asynchronous particle swarm optimization method and system and electronic equipment
Technical Field
The invention relates to the field of optimization, in particular to the optimization of non-convex, non-continuous and non-differentiable functions.
Background
Optimization technology is vital to the development of many fields of society. For example, in the field of engine design, optimizing the design of engine components can yield higher efficiency and lower emissions; in the field of power dispatching, optimizing the power output combination of multiple generators under different loads can improve energy utilization and increase economic benefit.
Optimization techniques can be divided into two main classes: optimization algorithms based on gradient information, such as stochastic gradient descent; and intelligent optimization algorithms not based on gradient information, such as the particle swarm optimization algorithm, the genetic algorithm and the simulated annealing algorithm. Gradient-based optimization algorithms require the function to be optimized to be differentiable, which makes them unusable when the objective function is non-differentiable. Moreover, gradient-based algorithms often fall into local optima due to the presence of extreme points and saddle points. Intelligent optimization algorithms not based on gradient information were developed to solve such complex optimization problems. Among them, particle swarm optimization (PSO) has attracted wide attention for its excellent performance, well-founded bionic principle and simple implementation. However, because of the inherently random exploration of the particle swarm algorithm, it cannot guarantee convergence to an optimal solution in every run. In addition, the optimization capability of the particle swarm algorithm is closely related to the number of particles: obtaining a better result often requires a larger swarm, which brings a huge computation amount, a drawback that becomes more pronounced in high-dimensional search spaces.
A search of the published literature shows that B. Niu, Y. Zhu, X. He and H. Wu, in "MCPSO: A multi-swarm cooperative particle swarm optimizer", Applied Mathematics and Computation 185 (2) (2007) 1050-1062, proposed an improved master-slave-structured particle swarm optimization algorithm (MCPSO) to improve the optimization capability of the particle swarm algorithm. However, owing to the lack of information interaction among the slave swarms, the optimization performance of that algorithm still needs improvement. Moreover, for the same reason, the algorithm must maintain the full particle count of every population, so its computation amount and computation time are larger than those of an ordinary particle swarm algorithm, and it cannot be applied in scenarios that require the optimal solution to be found quickly.
Disclosure of Invention
In view of the above disadvantages of the prior art, an object of the present invention is to provide a parallel asynchronous particle swarm optimization method, system and electronic device to solve the technical problems of the existing optimization technology: limited optimization capability, poor robustness, large computation amount and low computational efficiency.
To achieve the above and other related objects, the present invention provides a parallel asynchronous particle swarm optimization method, including: establishing a fitness function for a target to be optimized, the fitness function being used to evaluate a decision variable; dividing the particle swarm into groups, and randomly initializing the initial positions, optimal values and diverse optimization parameters of the particles in each group; establishing an information sharing mechanism for the particle groups, the mechanism being used to share the optimal value of each group; assigning each particle group to a different CPU core for distributed parallel iterative computation, and asynchronously updating the historical optimal value of the particle cluster according to the information sharing mechanism; and when the number of global iterations is greater than or equal to the iteration-count threshold, ending the iterative updating of each particle group and outputting the cluster optimal value as the final optimization result.
In an embodiment of the present invention, the optimal values include: the historical optimum value of each particle, the historical optimum value of each particle group, and the historical optimum value of the particle cluster.
In an embodiment of the present invention, one implementation of arranging the particle groups on different CPU cores for distributed parallel iterative computation is as follows: calculating the fitness value of the current position of each particle according to the fitness function; comparing the fitness value of the current position of each particle with the historical optimal value of the particle, and if the current fitness value is better than the historical optimal value, setting the historical optimal value of the particle equal to the current fitness value; comparing the historical optimal value of each particle in each group with the historical optimal value of the group, and if the historical optimal value of a particle is better than that of its group, setting the group's historical optimal value equal to the particle's; asynchronously updating the cluster optimal value of the particle swarm through the information sharing mechanism according to the historical optimal value of each group; and updating the velocity and position of each particle according to its historical optimal value, the historical optimal value of its group, the cluster optimal value and the group's optimization parameters, then entering the next iteration.
In an embodiment of the invention, the optimization parameters configured at initialization differ between at least two particle groups, or differ between all groups.
In an embodiment of the present invention, the optimization parameters include: the weight of the historical optimum value of each particle, the weight of the historical optimum value of each particle group, the weight of the historical optimum value of the particle group, the maximum value of the particle velocity, the minimum value of the particle velocity, the initial iteration inertia factor, and the ending iteration inertia factor.
An embodiment of the present invention further provides a parallel asynchronous particle swarm optimization system, which includes: a function establishing module for establishing a fitness function for the target to be optimized and evaluating a decision variable with it; an initialization module for dividing the particle swarm into groups and randomly initializing the initial positions, optimal values and diverse optimization parameters of the particles in each group; an information sharing module for establishing an information sharing mechanism for the particle groups, used to share the optimal value of each group; an iterative computation module for assigning each particle group to a different CPU core for distributed parallel iterative computation and asynchronously updating the historical optimal value of the particle cluster according to the information sharing mechanism; and a result output module for ending the iterative updating of each particle group when the number of global iterations is greater than or equal to the iteration-count threshold, and outputting the cluster optimal value as the final optimization result.
In an embodiment of the present invention, the optimal values include: the historical optimum value of each particle, the historical optimum value of each particle group, and the historical optimum value of the particle cluster.
In an embodiment of the invention, the iterative computation module includes: an adaptive value unit for calculating the fitness value of the current position of each particle according to the fitness function; a comparison unit for comparing the fitness value of the current position of each particle with the historical optimal value of the particle: if the current fitness value is better than the historical optimal value, the historical optimal value of the particle is set equal to the current fitness value, the historical optimal value of each particle in each group is compared with the historical optimal value of the group, and if the historical optimal value of a particle is better than that of its group, the group's historical optimal value is set equal to the particle's; an updating unit for asynchronously updating the cluster optimal value of the particle swarm through the information sharing mechanism according to the historical optimal value of each group; and an update iteration unit for updating the velocity and position of each particle according to its historical optimal value, the historical optimal value of its group, the cluster optimal value and the group's optimization parameters, and entering the next iteration.
In an embodiment of the present invention, the optimization parameters configured at initialization differ between at least two particle groups, or differ between all groups; the optimization parameters include: the weight of the historical optimal value of each particle, the weight of the historical optimal value of each particle group, the weight of the historical optimal value of the particle cluster, the maximum particle velocity, the minimum particle velocity, the initial iteration inertia factor, and the ending iteration inertia factor.
Embodiments of the present invention also provide an electronic device, comprising a memory storing a computer program; a processor executing the computer program to implement the steps of the parallel asynchronous particle swarm optimization method as described above.
As described above, the parallel asynchronous particle swarm optimization method, system and electronic device of the present invention have the following beneficial effects:
compared with the traditional particle swarm algorithm, the optimization performance and robustness of the algorithm are improved, the computation amount of the algorithm is reduced, the computational efficiency of the algorithm is improved, and the method is applicable to a variety of complex optimization scenarios.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic overall flow chart of a parallel asynchronous particle swarm optimization method in an embodiment of the present application.
Fig. 2 is a schematic diagram illustrating an information sharing mechanism in the parallel asynchronous particle swarm optimization method according to an embodiment of the present application.
Fig. 3 is a diagram showing an example of a diversity optimization parameter setting manner in the parallel asynchronous particle swarm optimization method in an embodiment of the present application.
Fig. 4 and fig. 5 are schematic diagrams respectively showing output results of the parallel asynchronous particle swarm optimization method in an embodiment of the present application when different fitness functions are adopted.
Fig. 6 is a schematic block diagram of a parallel asynchronous particle swarm optimization system according to an embodiment of the present application.
Fig. 7 is a schematic block diagram of an iterative computation module in the parallel asynchronous particle swarm optimization system according to an embodiment of the present application.
Fig. 8 is a schematic block diagram of an electronic device according to an embodiment of the present application.
Description of the element reference numerals
10. Electronic device
101. Processor
102. Memory
100. Parallel asynchronous particle swarm optimization system
110. Function building module
120. Initialization module
130. Information sharing module
140. Iterative computation module
141. Adaptive value unit
142. Comparison unit
143. Updating unit
144. Update iteration unit
150. Result output module
S100 to S500
Detailed Description
The following embodiments of the present invention are provided by way of specific examples, and other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure herein. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
This embodiment aims to provide a parallel asynchronous particle swarm optimization method, system and electronic device to solve the technical problems of the existing optimization technology: limited optimization capability, poor robustness, large computation amount and low computational efficiency.
This embodiment provides a parallel asynchronous particle swarm algorithm, which divides the particle swarm into groups and gives each group different optimization parameters to improve the diversity of the swarm; establishes an information sharing mechanism among the groups to maximize the optimization performance of the swarm; and deploys each particle group to a different CPU core for parallel computation and asynchronous updating to reduce the computation time. Compared with the traditional particle swarm optimization, the optimization performance and robustness of the algorithm are improved, the computation amount of the algorithm is reduced, the computational efficiency is improved, and the method is applicable to a variety of complex optimization scenarios.
The principles and embodiments of the parallel asynchronous particle swarm optimization method, system and electronic device of the present invention will be described in detail below, so that those skilled in the art can understand the parallel asynchronous particle swarm optimization method, system and electronic device of the present invention without creative labor.
Example 1
Specifically, as shown in fig. 1, this embodiment provides a parallel asynchronous particle swarm optimization method, where the parallel asynchronous particle swarm optimization method includes:
and S100, establishing a fitness function for the target to be optimized, and using the fitness function to measure a decision variable.
Step S200, dividing the particle swarm into groups, and randomly initializing the initial positions, optimal values and diverse optimization parameters of the particles in each group;
step S300, establishing an information sharing mechanism for the particle groups, the mechanism being used to share the optimal value of each group;
step S400, assigning each particle group to a different CPU core for distributed parallel iterative computation, and asynchronously updating the historical optimal value of the particle cluster according to the information sharing mechanism;
and step S500, when the number of global iterations is greater than or equal to the iteration-count threshold, ending the iterative updating of each particle group, and outputting the cluster optimal value as the final optimization result.
The following will describe steps S100 to S500 of the parallel asynchronous particle swarm optimization method of the embodiment in detail with reference to fig. 2:
and S100, establishing a fitness function for the target to be optimized, and using the fitness function to measure the decision variables.
First, a mathematical description f(X) is established for the object to be optimized, where f(·) is the fitness function used to evaluate the quality of a decision variable, and X is the decision variable of the optimization problem.
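The patent leaves the fitness function f(X) abstract. As a minimal illustrative sketch (not part of the patent), the Rastrigin function, a standard non-convex benchmark with many local minima, is the kind of objective such a swarm optimizer targets:

```python
import numpy as np

def rastrigin(x: np.ndarray) -> float:
    """Rastrigin benchmark: non-convex, multimodal, global minimum f(0) = 0.
    Stands in here for the patent's abstract fitness function f(X)."""
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```

A gradient-based method stalls in this function's local minima, which is exactly the setting where a swarm-based search is preferred.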
Step S200, dividing the particle swarm into groups, and randomly initializing the initial positions, optimal values and diverse optimization parameters of the particles in each group.
In this embodiment, the optimal values include: a historical optimum value of each particle, a historical optimum value of each particle group, and a historical optimum value of a particle cluster.
In this embodiment, the optimization parameters configured at initialization differ between at least two particle groups, or differ between all groups, so that the optimization parameters of the groups are diverse and the optimization capability of the swarm is enhanced. The optimization parameters include: the weight of the historical optimal value of each particle, the weight of the historical optimal value of each particle group, the weight of the historical optimal value of the particle cluster, the maximum particle velocity, the minimum particle velocity, the initial iteration inertia factor, and the ending iteration inertia factor.
In this embodiment, as shown in fig. 2, the total number of particles m of the particle cluster is determined, the cluster is divided equally into N groups, and each group is assigned one CPU core for its data operations. Then the initial position X of each particle is randomly generated, together with the historical optimal value Pbest of each particle, the historical optimal value Gbest of each group, and the historical optimal value Tbest of the cluster. If the optimization problem is a constrained one, X, Pbest, Gbest and Tbest must lie in the feasible domain D.
Specifically, in the present embodiment, the particle cluster of m = 80 particles is divided into N = 8 groups of n = 10 particles each, and the initial position X of each particle, the historical optimal value Pbest of each particle, the historical optimal value Gbest of each group, and the historical optimal value Tbest of the cluster are randomly generated. The position of the i-th particle in the d-dimensional search space can be represented as X_i = (X_i1, X_i2, ..., X_id). If the problem is a constrained optimization problem, X, Pbest, Gbest and Tbest must lie in the feasible domain D. Then the optimization parameters of each group are initialized: the weight C_1 of the particle's historical optimal value, the weight C_2 of the group's historical optimal value, the weight C_3 of the cluster's historical optimal value, the initial iteration inertia factor ω_init, the ending iteration inertia factor ω_end, the maximum particle velocity V_max and the minimum particle velocity V_min. To keep the cluster diverse, the initial optimization parameters of the groups should not all coincide.
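The grouped initialization above can be sketched as follows. The helper name `init_swarm` and the concrete per-group parameter values are our illustrative assumptions, not the recommended settings of fig. 3; only the structure (m particles, N groups of m/N, one distinct parameter tuple per group) follows the text:

```python
import numpy as np

def init_swarm(m=80, N=8, d=2, lo=-5.12, hi=5.12, seed=0):
    """Split m particles evenly into N groups of n = m // N and give each group
    a distinct optimization-parameter tuple so the cluster stays diverse."""
    rng = np.random.default_rng(seed)
    n = m // N                                # particles per group
    X = rng.uniform(lo, hi, size=(N, n, d))   # random initial positions
    V = np.zeros((N, n, d))                   # initial velocities
    Pbest = X.copy()                          # per-particle best positions so far
    params = []
    for g in range(N):
        vmax = 1.0 + 0.5 * g                  # illustrative, varies per group
        # (C1, C2, C3, w_init, w_end, Vmax, Vmin): made distinct for each group
        params.append((1.0 + 0.1 * g, 1.5, 0.5 + 0.1 * g, 0.9, 0.4, vmax, -vmax))
    return X, V, Pbest, params

X, V, Pbest, params = init_swarm()
```

With the embodiment's m = 80 and N = 8, this yields 8 groups of 10 particles, each group carrying its own seven-value parameter tuple.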
C_1, C_2 and C_3 are weighting factors that balance the influence of Pbest, Gbest and Tbest on the particle velocity, which is updated by the following formula:

V_i^{k+1} = ω^k · V_i^k + C_1 · rand_1 · (Pbest_i − X_i^k) + C_2 · rand_2 · (Gbest − X_i^k) + C_3 · rand_3 · (Tbest − X_i^k)

In the above formula, k denotes the k-th iteration, i denotes the i-th particle, rand_j (j = 1, 2, 3) denotes a random number satisfying a Gaussian distribution, V_i^k denotes the velocity of the i-th particle at the k-th iteration, X_i^k denotes the position of the i-th particle at the k-th iteration, and ω^k is the inertia factor at the k-th iteration. To ensure the convergence of the algorithm, the inertia factor decreases linearly with the iterations:

ω^k = ω_init − (ω_init − ω_end) · k / T
where T is the maximum number of iterations, and its value is determined according to the complexity of the problem to be optimized, in this embodiment, T =1400 is taken.
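The linearly decreasing inertia factor can be written directly from the formula above (the function name `inertia` and the default values, taken from common PSO practice, are ours):

```python
def inertia(k: int, T: int, w_init: float = 0.9, w_end: float = 0.4) -> float:
    """Linearly decreasing inertia factor: w^k = w_init - (w_init - w_end) * k / T."""
    return w_init - (w_init - w_end) * k / T
```

At k = 0 this returns w_init and at k = T it returns w_end, shifting the swarm from exploration toward exploitation as the iterations proceed.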
To ensure the diversity of the particle groups and improve their ability to explore the feasible space, the initial parameters of each group are different. The recommended initial parameters for the case N = 8 are shown in fig. 3, where P_max denotes the maximum particle velocity; in the constrained-optimization case, P_max can be set to the upper bound of the feasible region. It should be noted that the initial parameters shown in fig. 3 are not a limitation of the method but only one embodiment; when the number of groups N takes other values, it is only necessary to ensure that the initialization parameters of the groups differ from one another, so that the particle groups remain diverse. Simulation experiments show that the algorithm performs best when C_i ∈ [0, 2] and ω ∈ (0, 1).
Step S300, an information sharing mechanism is established for each particle group, and is used for sharing the optimal value of each particle group.
An information sharing mechanism is established for each particle group to reduce the number of particles in each particle group, thereby reducing the amount of calculation.
And step S400, distributing each particle group to different CPU cores to perform distributed parallel iterative computation, and asynchronously updating the historical optimal value of the particle group according to an information sharing mechanism.
In this embodiment, each group of particles is distributed to a different CPU core for parallel computation and asynchronous updating, so as to reduce the computation time.
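A minimal sketch of the asynchronous information sharing mechanism, assuming minimization. Threads stand in here for the patent's per-core processes purely for illustration, and all names (`SharedBest`, `offer`, `read`) are our assumptions:

```python
import threading

class SharedBest:
    """Each group asynchronously offers its Gbest; the cluster-wide Tbest
    is updated under a lock, so groups never wait for a global barrier."""
    def __init__(self):
        self._lock = threading.Lock()
        self.tbest_val = float("inf")   # minimization: start at +infinity
        self.tbest_pos = None

    def offer(self, gbest_val, gbest_pos):
        with self._lock:                # lock-protected asynchronous update
            if gbest_val < self.tbest_val:
                self.tbest_val = gbest_val
                self.tbest_pos = gbest_pos

    def read(self):
        with self._lock:
            return self.tbest_val, self.tbest_pos

shared = SharedBest()
threads = [threading.Thread(target=shared.offer, args=(v, [v]))
           for v in (3.0, 1.0, 2.0)]   # three "groups" reporting their Gbest
for t in threads: t.start()
for t in threads: t.join()
```

Whatever order the three offers arrive in, Tbest ends up as the best value seen, which is the property the asynchronous mechanism relies on.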
One implementation manner of arranging each particle group on different CPU cores to perform distributed parallel iterative computation is as follows:
1) Calculating the adaptive value of the current position of each particle according to the fitness function;
2) Comparing the fitness value of the current position of each particle with the historical optimal value of the particle: if the current fitness value is better than the historical optimal value, set the historical optimal value of the particle equal to the current fitness value; then compare the historical optimal value of each particle in each group with the historical optimal value of the group, and if the historical optimal value of a particle is better than that of its group, set the group's historical optimal value equal to the particle's historical optimal value;
3) Asynchronously updating the group optimal value of the particle swarm through an information sharing mechanism according to the historical optimal value of each group;
4) And updating the speed and the position of the particle according to the historical optimal value of each particle, the historical optimal value of the group to which the particle belongs, the historical optimal value of the particle group and the optimizing parameter of the group, and entering the next iteration.
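Steps 1) to 4) above can be sketched for a single group as follows, assuming minimization. The function `step_group`, its argument layout, and the single random draw per velocity term are our simplifications for illustration, not the patent's exact scheme:

```python
import numpy as np

def step_group(X, V, Pbest, pbest_f, gbest, gbest_f, tbest, f,
               C1, C2, C3, w, vmax):
    """One iteration for one group (minimization).
    X, V, Pbest: (n, d) arrays; gbest, tbest: (d,) arrays; pbest_f: list of n."""
    rng = np.random.default_rng()
    for i in range(len(X)):
        fi = f(X[i])                          # 1) fitness of current position
        if fi < pbest_f[i]:                   # 2) greedy per-particle update...
            pbest_f[i] = fi
            Pbest[i] = X[i].copy()
            if fi < gbest_f:                  #    ...and per-group update
                gbest_f = fi
                gbest = X[i].copy()
    # 3) Tbest would be exchanged here via the sharing mechanism (omitted).
    r1, r2, r3 = rng.random(3)                # one draw per term, for brevity
    V[:] = (w * V + C1 * r1 * (Pbest - X)     # 4) velocity update
            + C2 * r2 * (gbest - X) + C3 * r3 * (tbest - X))
    np.clip(V, -vmax, vmax, out=V)            #    clamp to the velocity range
    X += V                                    #    position update
    return gbest, gbest_f
```

Running this once per group per iteration, with step 3) wired to the shared Tbest, reproduces the loop the four numbered steps describe.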
The fitness value of the current position X of each particle is calculated according to the fitness function f(X) and compared with the particle's historical optimal value Pbest; if the current fitness value is better than the historical optimal value, the particle's historical optimal value is set equal to the current fitness value. Thereafter, the historical optimal value Pbest of each particle in each group is compared with the group's historical optimal value Gbest; if the historical optimal value of a particle is better than that of its group, the Gbest of the group is set equal to the Pbest of that particle. Then, in the same manner, the cluster optimal value Tbest is updated from the historical optimal value Gbest of each group. Finally, the velocity and position of each particle are updated according to its Pbest, the Gbest of its group, the Tbest of the cluster, and the group's optimization parameters: the weight C_1 of the particle's historical optimal value, the weight C_2 of the group's historical optimal value, the weight C_3 of the cluster's historical optimal value, the initial iteration inertia factor ω_init, the ending iteration inertia factor ω_end, the maximum particle velocity V_max and the minimum particle velocity V_min; then the next iteration begins.
Specifically, in this embodiment, assuming that the current optimization problem is a minimization problem (a maximization problem is handled in the same way), for the j-th particle in the i-th group:
if f(X_ij) < f(Pbest_ij), let Pbest_ij = X_ij;
if f(Pbest_ij) < f(Gbest_i), let Gbest_i = Pbest_ij;
if f(Gbest_i) < f(Tbest), let Tbest = Gbest_i.
Then the inertia factor of the current round is calculated according to

ω^k = ω_init − (ω_init − ω_end) · k / T

and the velocity of each particle in the group is calculated according to

V_ij^{k+1} = ω^k · V_ij^k + C_1 · rand_1 · (Pbest_ij − X_ij^k) + C_2 · rand_2 · (Gbest_i − X_ij^k) + C_3 · rand_3 · (Tbest − X_ij^k)

It should be noted that at every iteration the velocity computed by this formula must be limited to the range between V_min and V_max to ensure the convergence of the algorithm. Finally, the position of each particle in the group is updated according to

X_ij^{k+1} = X_ij^k + V_ij^{k+1}

the global iteration counter t is incremented by 1, and the current iteration ends.
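The velocity limiting described above can be sketched as a one-line helper (illustrative only; the helper name is ours):

```python
import numpy as np

def clamp_velocity(V: np.ndarray, vmin: float, vmax: float) -> np.ndarray:
    """Limit each velocity component to [vmin, vmax], as required after every
    velocity update to keep the iteration convergent."""
    return np.clip(V, vmin, vmax)
```

Without this clamp, the three attraction terms can compound and make velocities diverge, which is why the text ties it to the convergence of the algorithm.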
And step S500, when the global iteration times is more than or equal to the iteration time threshold, ending the iteration updating of each particle group, and outputting the optimal value of the particle group as a final optimization result.
In this embodiment, when the global iteration counter t is greater than or equal to T, the iterative updating of each particle group ends, and the community optimal value Tbest of the particle swarm is output as the final optimization result, where T is the maximum number of iterations, i.e. the iteration count threshold. Once the algorithm reaches the maximum number of iterations, the whole optimization process ends, and the Tbest of the particle cluster is output as the optimal solution sought by the algorithm.
Therefore, the parallel asynchronous particle swarm optimization method in this embodiment equally divides a particle swarm of m particles into n groups, each group containing m/n particles.
After the optimization parameters and the initial particle positions of each group are initialized, each group is assigned to a different CPU core for computation. The historical optimal value Pbest of each particle in a group and the historical optimal value Gbest of the group are computed by greedy comparison, and are asynchronously exchanged with, and used to update, the community optimal value Tbest of the particle swarm. Meanwhile, each particle updates its velocity and position according to the group's optimization parameters, its own historical optimal value, the group's historical optimal value, and the optimal value of the particle cluster. After multiple iterations, the community optimal value Tbest is output as the final optimization result.
In the parallel asynchronous particle swarm optimization method, the particle swarm is divided into multiple groups, and different groups are given different optimization parameters, which improves the diversity of the particle swarm and its ability to explore the feasible region. Meanwhile, the method establishes an information sharing mechanism among the groups: the historical optimal value Gbest found by each group is shared asynchronously, which improves the optimization performance of the algorithm while allowing the number of particles in each group, and thus the computational load, to be reduced. In addition, the method distributes the groups of particles to different CPU cores, which makes full use of the hardware performance of the computer; compared with the conventional particle swarm algorithm, the method in this embodiment shortens the running time by a factor of more than 3 while also improving the final result.
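The grouping and parallel evaluation described above can be outlined as follows. This is an illustrative sketch only: a thread pool is used so the example stays self-contained, whereas the patent assigns each group to a separate CPU core (i.e. a process pool) and shares Tbest asynchronously rather than after each pass:

```python
from multiprocessing.pool import ThreadPool  # a process Pool would map groups onto CPU cores

def parallel_group_best(f, positions, n_groups):
    """Split m particles into n_groups equal groups, evaluate each group
    concurrently, and return the best (minimum) fitness found.

    f         -- fitness function for a minimization problem
    positions -- list of m particle positions (m divisible by n_groups)
    """
    size = len(positions) // n_groups              # m/n particles per group
    groups = [positions[i * size:(i + 1) * size] for i in range(n_groups)]
    with ThreadPool(n_groups) as pool:
        # each worker evaluates one group; a full implementation would also
        # run the per-group velocity/position updates here
        group_bests = pool.map(lambda g: min(f(x) for x in g), groups)
    return min(group_bests)                        # community optimum Tbest
```

In a full implementation each worker would iterate its group independently and exchange Tbest through shared memory or a message queue, so that no group waits for the others between iterations.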
The following describes the implementation effect of the parallel asynchronous particle swarm optimization algorithm in this embodiment through experiments and result analysis.
In this embodiment, two standard functions, which are different in difficulty and are commonly used for verifying the performance of the optimization algorithm, are selected to verify the effectiveness of the parallel asynchronous particle swarm optimization algorithm provided in this embodiment, where the two standard functions are a Sphere function and a Griewank function, respectively. Both problems are minimization problems.
Sphere function:

f(x) = Σ_{i=1}^{30} x_i², x_i ∈ [−30, 30].
Griewank function:

f(x) = (1/4000) Σ_{i=1}^{30} x_i² − Π_{i=1}^{30} cos(x_i / √i) + 1, x_i ∈ [−600, 600].
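The two benchmark functions can be transcribed directly into Python from their standard definitions (the function names are ours; both have their global minimum 0 at the origin):

```python
import math

def sphere(x):
    """Sphere function: f(x) = sum of x_i^2."""
    return sum(xi * xi for xi in x)

def griewank(x):
    """Griewank function:
    f(x) = 1 + sum(x_i^2) / 4000 - prod(cos(x_i / sqrt(i))), i = 1..n.
    """
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i))
                  for i, xi in enumerate(x, start=1))
    return 1.0 + s - p
```

The product term is what makes Griewank harder than Sphere: it introduces many regularly spaced local minima on top of the quadratic bowl.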
The search space of both functions is set to 30 dimensions to increase the optimization difficulty and thereby better demonstrate the optimization performance and fast parallel computing capability of the invention.
The experimental results of the Sphere function and the Griewank function are shown in fig. 4 and 5, respectively.
Wherein: the L1 curve is the performance curve of the classical particle swarm algorithm; the L2 curve is the performance curve of the invention; and the L3 curve is the performance curve of the best existing optimization method found in a search of the open literature.
The experimental results show that the optimization performance of the invention is superior to that of the prior art at both difficulty levels. Each of the above experiments was repeated 50 times for the three methods; the running times are shown in Table 1.
TABLE 1

                    PSO            MCPSO          IDPSO
Sphere function     369.2 seconds  536.8 seconds  120.4 seconds
Griewank function   569.9 seconds  787.1 seconds  152.7 seconds
As can be seen from Table 1, the parallel asynchronous particle swarm algorithm of this embodiment further increases the computation speed while also improving the optimization performance. It runs about 3 times faster than the classical particle swarm algorithm (PSO) and about 5 times faster than the existing improved method MCPSO.
Example 2
As shown in fig. 6, the present embodiment provides a parallel asynchronous particle swarm optimization system 100, where the parallel asynchronous particle swarm optimization system 100 includes: the function establishing module 110, the initialization module 120, the information sharing module 130, the iterative computation module 140 and the result output module 150.
In this embodiment, the function establishing module 110 is configured to establish a fitness function for an object to be optimized, and is configured to measure a decision variable;
in this embodiment, the initialization module 120 is configured to group the particle swarm into particle groups and randomly initialize the initial positions, the optimal values, and the diverse optimization parameters of the particles in each particle group;
the optimal values include: the historical optimum value of each particle, the historical optimum value of each particle group, and the historical optimum value of the particle cluster.
Among the optimization parameters configured when the particle groups are initialized, at least two groups' parameters are different, or every group's parameters are different;
the optimizing parameters comprise: the weight of the historical optimum value of each particle, the weight of the historical optimum value of each particle group, the weight of the historical optimum value of the particle cluster, the maximum particle velocity, the minimum particle velocity, the initial iteration inertia factor, and the end iteration inertia factor.
The information sharing module 130 establishes an information sharing mechanism for each particle group, so as to share the optimal value of each particle group.
In this embodiment, the iterative computation module 140 is configured to arrange each particle group on different CPU cores to perform distributed parallel iterative computation, and asynchronously update a historical optimal value of the particle swarm according to an information sharing mechanism.
Specifically, in the present embodiment, as shown in fig. 7, the iterative calculation module 140 includes: an adaptation value unit 141, a comparison unit 142, an update unit 143, and an update iteration unit 144.
In this embodiment, the adaptive value unit 141 is configured to calculate an adaptive value of the current position of each particle according to a fitness function;
in this embodiment, the comparing unit 142 is configured to compare the adaptive value of the current position of each particle with the historical optimal value of the particle: if the adaptive value of the current position is superior to the historical optimal value, the historical optimal value of the particle is made equal to the current adaptive value of the particle, the historical optimal value of each particle in each particle group is compared with the historical optimal value of the group, and if the historical optimal value of a certain particle is superior to the historical optimal value of the group to which the particle belongs, the historical optimal value of the group to which the particle belongs is made equal to the historical optimal value of the particle;
in this embodiment, the updating unit 143 is configured to asynchronously update the community optimal value of the particle swarm through an information sharing mechanism according to the historical optimal values of the groups.
In this embodiment, the update iteration unit 144 is configured to update the speed and the position of the particle according to the historical optimal value of each particle, the historical optimal value of the group to which the particle belongs, the historical optimal value of the particle cluster, and the optimization parameter of the group, and enter the next iteration.
In this embodiment, when the global iteration number is greater than or equal to the iteration number threshold, the result output module 150 ends the iteration update of each particle group, and outputs the optimal value of the particle group as the final optimization result.
The technical features of the specific implementation of the parallel asynchronous particle swarm optimization system 100 in this embodiment are essentially the same as the principles of the steps of the parallel asynchronous particle swarm optimization method in Embodiment 1; the technical content common to the method and the system is not repeated here.
Example 3
As shown in fig. 8, this embodiment further provides an electronic device 10. The electronic device 10 may be, but is not limited to, a smartphone, a tablet, a smart wearable device, a personal desktop computer, a notebook computer, a server cluster, or the like.
The electronic device 10 comprises a memory 102 for storing a computer program; a processor 101 for running the computer program to implement the steps of the parallel asynchronous particle swarm optimization method as described in embodiment 1.
The memory 102 and the processor 101 are connected by a device bus and communicate with each other; the memory 102 stores a computer program, and the processor 101 runs the computer program so that the electronic device 10 executes the parallel asynchronous particle swarm optimization method. The parallel asynchronous particle swarm optimization method has been described in Embodiment 1 and is not repeated here.
It should be noted that the device bus mentioned above may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The device bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 8, but this is not intended to represent only one bus or type of bus. The communication interface is used for realizing communication between the database access device and other equipment (such as a client, a read-write library and a read-only library). The Memory 102 may include a Random Access Memory (RAM) and may further include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory.
The Processor 101 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the Integrated Circuit may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component.
Example 4
The present embodiment provides a storage medium storing program instructions, which when executed by a processor, implement the steps of the parallel asynchronous particle swarm optimization method described in embodiment 1. The parallel asynchronous particle swarm optimization method has been described in embodiment 1, and is not described herein again.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the above method embodiments may be performed by hardware associated with a computer program. The aforementioned computer program may be stored in a computer readable storage medium. The program, when executed, performs the steps comprising the method embodiments of embodiment 1; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
In conclusion, compared with the traditional particle swarm algorithm, the invention improves the optimization performance and robustness of the algorithm, reduces its computational load, improves its computational efficiency, and can be applied to a variety of complex optimization scenarios. The invention therefore effectively overcomes various defects of the prior art and has high industrial value.
The foregoing embodiments merely illustrate the principles and utility of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall be covered by the claims of the present invention.

Claims (5)

1. A parallel asynchronous particle swarm optimization method is characterized in that: the method comprises the following steps:
establishing a fitness function for a target to be optimized, wherein the fitness function is used for measuring a decision variable;
grouping the particle groups, and randomly initializing initial positions, optimal values and diverse optimization parameters of particles in each particle group; at least two of the particle groups are configured with different optimization parameters at initialization, or every group is configured differently; the optimizing parameters comprise: the weight of the historical optimal value of each particle, the weight of the historical optimal value of each particle group, the weight of the historical optimal value of the particle cluster, the maximum value of the particle velocity, the minimum value of the particle velocity, the initial iteration inertia factor and the ending iteration inertia factor;
establishing an information sharing mechanism for each particle group, wherein the information sharing mechanism is used for sharing the optimal value of each particle group;
arranging each particle group on different CPU cores to perform distributed parallel iterative computation, and asynchronously updating the historical optimal value of the particle group according to an information sharing mechanism;
when the global iteration times are larger than or equal to the iteration time threshold, ending the iteration updating of each particle group, and outputting the optimal value of the particle group as a final optimization result;
one implementation manner of arranging each particle group on different CPU cores to perform distributed parallel iterative computation is as follows:
calculating the adaptive value of the current position of each particle according to the fitness function;
comparing the adapted value of the current position of each particle with the historical optimum value of the particle:
if the adaptive value of the current position is superior to the historical optimal value, the historical optimal value of the particle is made equal to the current adaptive value of the particle, the historical optimal value of each particle in each particle group is compared with the historical optimal value of the particle group, and if the historical optimal value of a certain particle is superior to the historical optimal value of the particle group, the historical optimal value of the particle group is made equal to the historical optimal value of the particle;
asynchronously updating the group optimal value of the particle cluster through an information sharing mechanism according to the historical optimal value of each group;
and updating the speed and the position of the particle according to the historical optimal value of each particle, the historical optimal value of the group to which the particle belongs, the historical optimal value of the particle cluster and the optimizing parameter of the group, and entering the next iteration.
2. The parallel asynchronous particle swarm optimization method of claim 1, characterized in that: the optimal values include: the historical optimum value of each particle, the historical optimum value of each particle group, and the historical optimum value of the particle cluster.
3. A parallel asynchronous particle swarm optimization system is characterized in that: the parallel asynchronous particle swarm optimization system comprises:
the function establishing module is used for establishing a fitness function for the target to be optimized and measuring a decision variable;
the initialization module is used for grouping the particle groups and randomly initializing the initial positions, the optimal values and the diverse optimization parameters of the particles in each particle group; at least two of the particle groups are configured with different optimization parameters at initialization, or every group is configured differently; the optimizing parameters comprise: the weight of the historical optimal value of each particle, the weight of the historical optimal value of each particle group, the weight of the historical optimal value of the particle cluster, the maximum value of the particle velocity, the minimum value of the particle velocity, the initial iteration inertia factor and the ending iteration inertia factor;
the information sharing module establishes an information sharing mechanism for each particle group and is used for sharing the optimal value of each particle group;
the iterative computation module is used for arranging each particle group on different CPU cores to perform distributed parallel iterative computation and asynchronously updating the historical optimal value of the particle cluster according to an information sharing mechanism;
the result output module is used for ending the iteration updating of each particle group when the global iteration times are more than or equal to the iteration time threshold value, and outputting the optimal value of the particle group as a final optimization result;
the iterative computation module comprises:
the adaptive value unit is used for calculating the adaptive value of the current position of each particle according to the fitness function;
a comparison unit, configured to compare the adaptive value of the current position of each particle with the historical optimal value of the particle: if the adaptive value of the current position is superior to the historical optimal value, the historical optimal value of the particle is made equal to the current adaptive value of the particle, the historical optimal value of each particle in each particle group is compared with the historical optimal value of the group, and if the historical optimal value of a certain particle is superior to the historical optimal value of the group to which the particle belongs, the historical optimal value of the group to which the particle belongs is made equal to the historical optimal value of the particle;
the updating unit is used for asynchronously updating the group optimal value of the particle swarm through an information sharing mechanism according to the historical optimal value of each group;
and the updating iteration unit is used for updating the speed and the position of the particle according to the historical optimal value of each particle, the historical optimal value of the group to which the particle belongs, the historical optimal value of the particle cluster and the optimizing parameter of the group, and entering the next iteration.
4. The parallel asynchronous particle swarm optimization system of claim 3, wherein: the optimal values include: the historical optimum value of each particle, the historical optimum value of each particle group, and the historical optimum value of the particle cluster.
5. An electronic device, characterized in that: comprises a memory storing a computer program; a processor executing said computer program to implement the steps of the parallel asynchronous particle swarm optimization method as claimed in any one of claims 1 to 2.
CN202110296370.4A 2021-03-19 2021-03-19 Parallel asynchronous particle swarm optimization method and system and electronic equipment Active CN112949154B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110296370.4A CN112949154B (en) 2021-03-19 2021-03-19 Parallel asynchronous particle swarm optimization method and system and electronic equipment


Publications (2)

Publication Number Publication Date
CN112949154A CN112949154A (en) 2021-06-11
CN112949154B true CN112949154B (en) 2023-02-17

Family

ID=76227008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110296370.4A Active CN112949154B (en) 2021-03-19 2021-03-19 Parallel asynchronous particle swarm optimization method and system and electronic equipment

Country Status (1)

Country Link
CN (1) CN112949154B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113779856B (en) * 2021-09-15 2023-06-27 成都中科合迅科技有限公司 Discrete particle swarm optimization modeling method for electronic system function online recombination

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951957A (en) * 2016-09-21 2017-07-14 常州信息职业技术学院 Particle swarm optimization algorithm, multicomputer method for parallel processing and system
CN107015852A (en) * 2016-06-15 2017-08-04 珠江水利委员会珠江水利科学研究院 A kind of extensive Hydropower Stations multi-core parallel concurrent Optimization Scheduling
CN107015861A (en) * 2016-11-07 2017-08-04 珠江水利委员会珠江水利科学研究院 A kind of Cascade Reservoirs Optimized Operation multi-core parallel concurrent based on Fork/Join frameworks calculates design method
CN110555506A (en) * 2019-08-20 2019-12-10 武汉大学 gradient self-adaptive particle swarm optimization method based on group aggregation effect
CN110598833A (en) * 2019-09-16 2019-12-20 中国矿业大学 High-dimensional particle swarm optimization method for packet evolution

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104037776B (en) * 2014-06-16 2016-08-24 国家电网公司 The electric network reactive-load capacity collocation method of random inertial factor particle swarm optimization algorithm
CN105631516A (en) * 2015-11-16 2016-06-01 长沙理工大学 Historical experience and real-time adjustment combination-based particle swarm optimization algorithm


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on flood-control optimal operation of cascade reservoir groups based on a parallel chaotic quantum particle swarm algorithm; Zou Qiang et al.; Journal of Hydraulic Engineering; 2016-08-31; Vol. 47, No. 8; full text *
Parallel particle swarm algorithm in a multi-core environment; He Li et al.; Journal of Computer Applications; 2015-09-10; Vol. 35, No. 9; Sections 0-1, Figures 1-2 *
Research on multi-core parallel computation of joint operating rule charts for reservoir groups under inter-basin water transfer; Peng Anbang et al.; Journal of Hydraulic Engineering; 2014-11-30; Vol. 45, No. 11; full text *

Also Published As

Publication number Publication date
CN112949154A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
Zeng et al. GraphACT: Accelerating GCN training on CPU-FPGA heterogeneous platforms
Li et al. Review of design optimization methods for turbomachinery aerodynamics
CN111597698B (en) Method for realizing pneumatic optimization design based on deep learning multi-precision optimization algorithm
Zhang et al. BoostGCN: A framework for optimizing GCN inference on FPGA
Zhou et al. Cloud computing model for big data processing and performance optimization of multimedia communication
Yan et al. Study on deep unsupervised learning optimization algorithm based on cloud computing
CN106295670A (en) Data processing method and data processing equipment
Zhang et al. An efficient space division–based width optimization method for RBF network using fuzzy clustering algorithms
CN115099459B (en) Workshop multi-row layout method considering gaps and loading and unloading points
CN112949154B (en) Parallel asynchronous particle swarm optimization method and system and electronic equipment
Zhang et al. Implementation and optimization of the accelerator based on FPGA hardware for LSTM network
Xiuli et al. An improved multi-objective optimization algorithm for solving flexible job shop scheduling problem with variable batches
Stevens et al. GNNerator: A hardware/software framework for accelerating graph neural networks
CN115437795A (en) Video memory recalculation optimization method and system for heterogeneous GPU cluster load perception
Shi et al. Versagnn: a versatile accelerator for graph neural networks
Deng et al. A parallel version of differential evolution based on resilient distributed datasets model
Shalom et al. Graphics hardware based efficient and scalable fuzzy c-means clustering
Liu et al. Gnnsampler: Bridging the gap between sampling algorithms of gnn and hardware
Nugroho et al. Parallel implementation of genetic algorithm for searching optimal parameters of artificial neural networks
Jin et al. Neural networks for fitness approximation in evolutionary optimization
Liu et al. A load-balancing approach based on modified K-ELM and NSGA-II in a heterogeneous cloud environment
Nguyen-Trang et al. An efficient hybrid optimization approach using adaptive elitist differential evolution and spherical quadratic steepest descent and its application for clustering
CN115270921B (en) Power load prediction method, system and storage medium based on combined prediction model
CN117093885A (en) Federal learning multi-objective optimization method integrating hierarchical clustering and particle swarm
WO2022095675A1 (en) Neural network sparsification apparatus and method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant