CN112270120B - Multi-objective optimization method based on hierarchical decomposition of tree structure - Google Patents


Info

Publication number
CN112270120B
CN112270120B (application CN202011022305.4A)
Authority
CN
China
Prior art keywords
population
node
solution
weight vector
vector
Prior art date
Legal status
Active
Application number
CN202011022305.4A
Other languages
Chinese (zh)
Other versions
CN112270120A (en)
Inventor
辜方清
吴润佳
刘海林
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202011022305.4A
Publication of CN112270120A
Application granted
Publication of CN112270120B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/24323 Tree-organised classifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00 Details relating to CAD techniques
    • G06F2111/06 Multi-objective optimisation, e.g. Pareto optimisation using simulated annealing [SA], ant colony algorithms or genetic algorithms [GA]


Abstract

The invention discloses a multi-objective optimization method based on hierarchical decomposition of a tree structure, comprising the following steps: S1, initializing parameters and generating a group of uniformly distributed unit weight vectors; S2, constructing a tree and initializing a population; S3, running the evolutionary algorithm. In the proposed algorithm, a candidate solution only needs to be compared with the solutions on a path from the root of the tree to a leaf node, so the computational complexity of processing a single candidate solution is only O(M log N). The invention approximates and progressively refines the Pareto front by solving a few representative sub-problems. This strategy offers low time complexity and high computational efficiency when solving large-space optimization problems.

Description

Multi-objective optimization method based on hierarchical decomposition of tree structure
Technical Field
The invention relates to an optimization method, in particular to a multi-objective optimization method based on hierarchical decomposition of a tree structure.
Background
Over the past decades, researchers have proposed a variety of evolutionary multi-objective optimization algorithms. These algorithms evolve a population to approximate the Pareto front in a single run and have achieved great success in practical engineering applications. Depending on the selection operator, they can be roughly divided into three categories: Pareto-dominance-based, decomposition-based, and indicator-based multi-objective optimization algorithms.
Pareto-dominance-based algorithms, such as the non-dominated sorting genetic algorithm and the strength Pareto evolutionary multi-objective algorithm, use Pareto dominance ranking as the primary selection mechanism and a density-based criterion (such as crowding distance) as the secondary selection mechanism. In recent years, many improved Pareto-dominance-based algorithms have been proposed to address specific problems. However, the computational complexity of Pareto dominance ranking is high, and the overhead grows sharply as the population size increases. In addition, owing to insufficient selection pressure, the performance of Pareto-dominance-based algorithms degrades sharply as the number of objectives increases.
Indicator-based evolutionary multi-objective optimization algorithms, such as the S-metric-based evolutionary multi-objective algorithm, directly use an individual's contribution to a performance indicator as the criterion for selecting offspring. These algorithms maintain the convergence and diversity of the candidate solution set well. However, their computational complexity is usually high: the running time of the S-metric selection evolutionary multi-objective algorithm grows exponentially with the number of objectives.
Decomposition-based multi-objective algorithms decompose a multi-objective optimization problem into a number of simple optimization sub-problems through a set of weight vectors, but their performance is not ideal when handling multi-objective optimization problems with irregular Pareto fronts.
In existing evolutionary multi-objective optimization algorithms, the population size is a given constant, and its value generally has a large impact on performance: a small population may lead to premature convergence, while an overly large population wastes resources and reduces the search efficiency of the algorithm.
Disclosure of Invention
The invention aims to solve the above problems by providing a multi-objective optimization method based on hierarchical decomposition of a tree structure. Through a successive-approximation mechanism, a small population first rapidly approaches the Pareto front to obtain a coarse frontier, which is then gradually refined by introducing more sub-problems step by step. This strategy effectively improves the efficiency of the algorithm, with particularly marked gains when solving large-space optimization problems.
The purpose of the invention can be achieved by adopting the following technical scheme:
A multi-objective optimization method based on hierarchical decomposition of a tree structure comprises the following steps:
S1, initializing parameters
Generate a set of unit weight vectors V = {v_1, v_2, …, v_N} uniformly distributed in the first octant, where v_i is the i-th weight vector, i = 1…N;
S2, constructing the tree and initializing the population
Uniformly and randomly generate N initial solutions in the decision space as the initial solution set Φ_0;
Set the number of active node layers L = 2, the minimum population convergence improvement rate Δ = 0.1, and the center weight vector v_c = (1/√M)(1, 1, …, 1)^T, where M is the number of objective functions, i.e. the center weight vector is a column vector;
Initialize the population convergence improvement rate Δ_0 = 1, the current iteration number t = 1, and the external set A = ∅;
Construct a tree T from the unit weight vector set V using the tree-construction algorithm, where each node of the tree contains a weight vector and is associated with a sub-problem through that weight vector; the nodes of the first L = 2 layers are set as active, and nodes without active descendants are recorded as leaf nodes;
Each sub-problem of an active node selects the best individual from the initial solution set Φ_0, and the selected solutions form the initial population T.X; the population size N* = |T.X| equals the number of active nodes;
S3, evolutionary algorithm
In each generation, select a solution set Q through the mating selection strategy and create N offspring Y from Q;
When the current population improvement rate Δ_{t−1} < Δ and N* < N:
expand the number of active node layers in the tree, L = L + 1, and reinitialize Δ_t = 1; the solution set Φ is the union of the offspring Y and T.X; then the sub-problem of each active node selects the best individual from Φ to form a new population T.X, with population size N* = |T.X|;
Otherwise, update the solutions of the nodes in the tree T and its external set A according to the population update algorithm, and compute the current improvement rate Δ_t;
Increase the iteration counter, t = t + 1;
Exit the algorithm when the number of iterations reaches the maximum.
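The S1–S3 loop above can be sketched as follows. This is a deliberately simplified, hedged illustration: the tree of sub-problems is flattened into a list, a weighted-sum aggregation stands in for formula (5), and all function names and default parameters are illustrative assumptions rather than the patent's implementation.

```python
import math
import random

def hierarchical_moea(F, n_var, n_weights=8, n_pop=8, max_iter=60,
                      delta_min=0.1, seed=0):
    """Simplified stand-in for S1-S3: a growing set of active sub-problems
    that expands whenever the population improvement rate stalls."""
    rng = random.Random(seed)
    # S1: unit weight vectors spread over the first octant (2-D for brevity)
    V = [(math.cos(a), math.sin(a))
         for a in (math.pi / 2 * i / (n_weights - 1) for i in range(n_weights))]

    def g(x, v):
        # weighted-sum aggregation; a stand-in for formula (5)
        f1, f2 = F(x)
        return v[0] * f1 + v[1] * f2

    # S2: random initial solutions; only a few sub-problems start active
    Phi = [[rng.random() for _ in range(n_var)] for _ in range(n_pop)]
    active = 2
    pop = [min(Phi, key=lambda x: g(x, V[i])) for i in range(active)]
    delta_prev = 1.0
    # S3: evolve; activate one more sub-problem when improvement stalls
    for _ in range(max_iter):
        Y = [[min(1.0, max(0.0, xi + rng.gauss(0.0, 0.05))) for xi in x]
             for x in pop]
        if delta_prev < delta_min and active < n_weights:
            active += 1                      # refine the frontier
            delta_prev = 1.0
            union = pop + Y
            pop = [min(union, key=lambda x: g(x, V[i])) for i in range(active)]
        else:
            delta_prev = 0.0
            for i in range(active):
                old, new = g(pop[i], V[i]), g(Y[i], V[i])
                if new < old:
                    if old > 1e-12:
                        delta_prev += (old - new) / old
                    pop[i] = Y[i]
    return pop
```

The key structural idea survives the simplification: the number of active sub-problems starts small and grows only when the improvement rate falls below the threshold Δ.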
The specific content of the step S2 is as follows:
constructing a tree from the weight vectors:
V = {v_1, v_2, …, v_N}   (1)
is a set of unit weight vectors uniformly distributed in the first octant, where
v_i = (v_{i1}, v_{i2}, …, v_{iM})^T
is the i-th weight vector and
R_+^M = {v ∈ R^M : v_j ≥ 0, j = 1, …, M}
is the first octant of the M-dimensional space. A tree structure is constructed based on the unit weight vector set; each node of the tree contains a weight vector. The center of all weight vectors assigned to a node is computed and recorded as v_c. Since this center vector may not be in the unit weight vector set V, we first find the weight vector v in V closest to the center vector v_c:
v = argmin_{u ∈ V} angle(u, v_c)
The node stores this weight vector v, and v is removed from the set V. Different methods are used to determine the center vector v_c for the root node, the second-layer nodes, and the other nodes.
The weight vector v_i of each node defines the following sub-problem:
g(x|v_i) = d_1 + θd_2   (5)
where d_1 = F(x)^T v_i and d_2 = ||F(x) − d_1 v_i||; θ is a balance parameter trading off the distances d_1 and d_2; F(x) is the objective function vector and T denotes vector transposition. These sub-problems are organized adaptively through the tree structure. Initially, the nodes of the first L = 2 layers are set as active, and Φ = {x_1, …, x_N} is the initial solution set; each sub-problem of each active node then selects its best solution according to the value of formula (5), and the selected solutions form the initial population.
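For concreteness, the aggregation of formula (5) and the equivalence point of formula (7) can be computed as in the sketch below; the default θ = 5.0 is an assumed value, as the text does not fix it.

```python
import math

def aggregation(F_x, v, theta=5.0):
    """Formula (5): g(x|v) = d1 + theta * d2 for objective vector F_x and
    unit weight vector v."""
    # d1: projection of F(x) onto the unit weight vector v
    d1 = sum(f * w for f, w in zip(F_x, v))
    # d2: perpendicular distance from F(x) to the line spanned by v
    d2 = math.sqrt(sum((f - d1 * w) ** 2 for f, w in zip(F_x, v)))
    return d1 + theta * d2

def equivalence_point(F_x, v, theta=5.0):
    """Formula (7): s = g(x|v) * v, the point on the weight-vector direction
    with the same aggregation value as F(x)."""
    g = aggregation(F_x, v, theta)
    return tuple(g * w for w in v)
```

For a point already on the weight-vector direction, d_2 = 0 and the equivalence point coincides with the point itself, which matches the geometric reading of formula (7).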
For the weight vector v_i, x is the current best solution, y_i is the newly generated solution, and P is the current node pointer; P.v and P.x are the weight vector of the current node and its associated best solution, respectively. s is the point equivalent to F(x) under formula (5) along the direction of the weight vector v_i, i.e.
s = g(x|v_i) v_i   (7)
The equivalence point s divides the objective space into three sub-regions I, II and III:
Sub-region I: if g(y_i|P.v) < g(P.x|P.v), the solution y_i is better than x; P.x and y_i are swapped, and the equivalence point of the current node is updated, P.s = g(P.x|P.v) P.v.
Sub-region III: if P.s < F(y_i) component-wise, the solution is worse than x in this region, and no operation is performed.
Sub-region II: if P is a leaf node, y_i randomly replaces one solution in the archive A; otherwise, y_i is passed down to the nearest child node,
j* = argmin_j angle(T_j.v, F(y_i))
where angle(T_j.v, F(y_i)) denotes the angle between them, and the node pointer P is moved to the child node T_{j*}.
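A hedged sketch of this three-region test: the candidate's objective vector is compared against the node's solution through the aggregation value and the equivalence point s, which is what confines each candidate to a single root-to-leaf path.

```python
import math

def _g(F, v, theta=5.0):
    # aggregation of formula (5): d1 along v, d2 perpendicular to v
    d1 = sum(f * w for f, w in zip(F, v))
    d2 = math.sqrt(sum((f - d1 * w) ** 2 for f, w in zip(F, v)))
    return d1 + theta * d2

def classify(F_y, F_x, v, theta=5.0):
    """Return the sub-region of candidate objective vector F_y relative to
    the node's current solution F_x and unit weight vector v."""
    g_x = _g(F_x, v, theta)
    s = tuple(g_x * w for w in v)           # equivalence point, formula (7)
    if _g(F_y, v, theta) < g_x:
        return 'I'    # y beats the node's solution: swap and update s
    if all(si < fi for si, fi in zip(s, F_y)):
        return 'III'  # y is beyond the equivalence point in every objective: drop
    return 'II'       # undecided: pass y to the nearest child (or the archive)
```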
When the convergence rate of the population is lower than a given value, expanding the population scale; the yield of improvement of the population is defined as the total improvement of the individuals in the population:
Figure GDA0003827998290000042
Figure GDA0003827998290000043
and
Figure GDA0003827998290000044
of the same sub-problem in t-1 and t generation, respectivelySolving; when the convergence rate is lower than a given value, increasing the number of layers of active nodes in the tree; calculating the current improvement rate Delta by defining the population convergence improvement rate formula in the formula (6) t
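The exact normalization of the improvement rate is not fully recoverable from the text; the sketch below assumes the relative form, i.e. the improvement of each sub-problem's aggregation value between generations t−1 and t divided by the earlier value, summed over the population.

```python
def improvement_rate(g_prev, g_curr):
    """Assumed reading of equation (6): total relative improvement of the
    sub-problem aggregation values between generations t-1 and t.
    g_prev[i], g_curr[i]: aggregation value of sub-problem i at t-1 and t."""
    return sum(max(0.0, (p - c) / p)
               for p, c in zip(g_prev, g_curr) if p > 0)
```

When this quantity drops below the threshold Δ, the algorithm activates one more layer of sub-problem nodes rather than continuing to evolve a stalled population.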
The specific content of the step S3 is as follows:
If the straight line
f = αv, α ≥ 0
where f = (f_1, …, f_M)^T is variable, intersects the Pareto front, the intersection coordinates are the objective vector of the optimal solution of sub-problem (5), and the angle between the weight vector v and the objective vector of the Pareto-optimal solution of the corresponding sub-problem is zero. The larger the angle between a sub-problem's weight vector and a solution's objective vector, the farther the solution is from the optimum of that sub-problem. If the line does not intersect the Pareto front, an algorithm based on the solution's life cycle is introduced:
For each solution x_i in the population, the number of generations from the individual's creation to the current generation is recorded as its age, age_i, and the angle between its objective vector and the corresponding weight vector is recorded as angle_i. Tournament selection is used to select N solutions from the population as parent individuals to participate in crossover and mutation; the set of parents is recorded as
parent1 = {p1_1, p1_2, …, p1_N}
where p1 denotes the first parent. In the tournament selection, two individuals are randomly chosen from the population, and the one with the smaller age is selected as the parent individual to participate in crossover and mutation and generate new individuals; when the two ages are equal, the solution with the larger angle value is selected. For each individual p1_i in parent1, the individual p2_i paired with it for recombination is chosen in one of two cases, where p2 denotes the second parent:
1) In the initial stage of the algorithm, when the population size N* ≤ N/2, the external set A is introduced to improve the algorithm's ability to maintain population diversity: p2_i is randomly selected from the external set A and paired with p1_i for recombination to generate a new individual;
2) In the later stage of the algorithm, when the population size N* > N/2, recombination is performed on two similar solutions: p2_i is randomly selected from solutions similar to p1_i and paired with it for recombination.
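The age-then-angle tournament just described can be sketched as follows; the 'age' and 'angle' fields are assumed to be maintained per solution elsewhere in the algorithm, and the dict representation is illustrative.

```python
import random

def tournament_pick(pop, rng):
    """Binary tournament: prefer the younger solution; break age ties in
    favour of the larger weight-vector angle."""
    a, b = rng.sample(pop, 2)
    if a['age'] != b['age']:
        return a if a['age'] < b['age'] else b
    return a if a['angle'] > b['angle'] else b

def select_parents(pop, n, seed=0):
    """Select n first parents (parent1) by repeated binary tournaments."""
    rng = random.Random(seed)
    return [tournament_pick(pop, rng) for _ in range(n)]
```

The age criterion biases selection toward recently improved solutions; the angle tie-break favours solutions presumed to have more room for improvement, matching the reasoning above.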
Different methods are used for the root node, the second-layer nodes, and the other nodes to determine the center vector v_c, as follows:
The center vector of the root is set to v_c = (1/√M)(1, 1, …, 1)^T.
The center vectors of the second-layer nodes are the extreme points e_1, e_2, …, e_M, where e_i is the unit vector along the i-th coordinate axis.
For the other nodes, the center vectors are calculated with a modified K-means clustering algorithm: each weight vector v_j is assigned to the weighted-nearest cluster,
i* = argmin_i λ_i · angle(v_j, c_i)
where λ_i is the fraction of the weight vectors assigned to the i-th cluster, initialized to 0.5, and c_i is the center vector of the i-th cluster. The root node has M child nodes, and every other node has at most 2 child nodes. Each weight vector is assigned to its nearest center, i.e., for each weight vector v_j ∈ V its nearest center is calculated as above; the set of weight vectors assigned to the i-th cluster is recorded as V_i. The tree structure is obtained by recursively invoking the tree-construction algorithm.
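A hedged sketch of the recursive construction: each node keeps the weight vector nearest its cluster center, and the remaining vectors are split into at most two clusters with a balancing weight in the spirit of the λ_i term above. The seeding of the two cluster centers and the exact balancing rule are assumptions, since the patent's formulas are not fully recoverable from the text.

```python
import math

def _angle(u, v):
    # angle between two vectors, clamped for floating-point safety
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def build_tree(V, centre):
    """Recursively build a tree over weight vectors V around a centre."""
    if not V:
        return None
    V = list(V)
    # the node keeps the weight vector nearest to the cluster centre
    v = min(V, key=lambda u: _angle(u, centre))
    V.remove(v)
    if not V:
        return {'v': v, 'children': []}
    # split the remainder into two clusters around two assumed seed centres
    c1, c2 = V[0], V[-1]
    g1, g2 = [], []
    for u in sorted(V, key=lambda u: _angle(u, c1)):
        # balancing weights: larger clusters become less attractive
        n = len(g1) + len(g2) + 2
        l1, l2 = (len(g1) + 1) / n, (len(g2) + 1) / n
        (g1 if l1 * _angle(u, c1) <= l2 * _angle(u, c2) else g2).append(u)
    children = [t for t in (build_tree(g1, c1), build_tree(g2, c2)) if t]
    return {'v': v, 'children': children}
```

Each node thus has at most two children, matching the description, and a candidate solution descends a single root-to-leaf path of length O(log N).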
The implementation of the invention has the following beneficial effects:
1. In the proposed algorithm, a candidate solution only needs to be compared with the solutions on the path from the root of the tree to a leaf node; the computational complexity of processing a single candidate solution is therefore only O(M log N), which gives low time complexity.
2. The invention provides a coarse-to-fine sub-problem organization paradigm, so the algorithm approximates and progressively refines the Pareto front by solving a few representative sub-problems. This strategy gives high computational efficiency when solving large-space optimization problems.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the multi-objective optimization method based on hierarchical decomposition of tree structures.
FIG. 2 is a block diagram of a tree structure of the multi-objective optimization method based on hierarchical decomposition of a tree structure.
FIG. 3 is a flow chart of a set of uniformly distributed unit weight vectors of the multi-objective optimization method based on hierarchical decomposition of a tree structure.
FIG. 4 is a flow chart of the tree obtained by the tree construction method.
FIG. 5 is an exemplary diagram of region decomposition in the multi-objective optimization method based on hierarchical decomposition of a tree structure according to the present invention.
FIG. 6 shows the front interfaces obtained by the present invention when solving the 9 proposed large-space optimization problems.
FIG. 7 shows the front interfaces obtained by the present invention when solving the ZDT series of test functions.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings. It is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Examples
Referring to fig. 1, the embodiment relates to a multi-objective optimization method based on hierarchical decomposition of a tree structure, which includes the following steps:
s1, initializing parameters
Generate a set of unit weight vectors V = {v_1, v_2, …, v_N} uniformly distributed in the first octant, where v_i is the i-th weight vector, i = 1…N;
S2, constructing the tree and initializing the population
Construct a tree T from the unit weight vector set V using the tree-construction algorithm, where each node of the tree contains a weight vector and is associated with a sub-problem through that weight vector; the nodes of the first L = 2 layers are set as active, and nodes without active descendants are recorded as leaf nodes;
Each sub-problem of an active node selects the best individual from the initial solutions according to the aggregation function value defined in equation (5); these selected solutions form the initial population T.X; the population size N* = |T.X| equals the number of active nodes;
Uniformly and randomly generate N initial solutions in the decision space as the initial solution set Φ_0;
Set the number of active node layers L = 2, the minimum population convergence improvement rate Δ = 0.1, and the center weight vector v_c = (1/√M)(1, 1, …, 1)^T, where M is the number of objective functions and T denotes vector transposition, i.e. the center weight vector is a column vector;
Initialize the population convergence improvement rate Δ_0 = 1, the current iteration number t = 1, and the external set A = ∅;
s3, evolutionary algorithm
In each generation, select a solution set Q through the mating selection strategy and create N offspring Y from Q;
When the current population improvement rate Δ_{t−1} < Δ and N* < N:
expand the number of active node layers in the tree; the solution set Φ is the union of the offspring Y and T.X; the number of active layers L = L + 1, and Δ_t is reinitialized to 1; then the sub-problem of each active node selects the best individual from Φ according to equation (5), forming a new population T.X with population size N* = |T.X|;
Otherwise, update the solutions of the nodes in the tree T and its external set A according to the population update algorithm, and calculate the current improvement rate Δ_t by the population convergence improvement rate formula defined in equation (6);
Increase the iteration counter, t = t + 1;
Exit the algorithm when the number of iterations reaches the maximum.
The above steps are detailed below, as shown in FIG. 2 to FIG. 4:
1. Construction of the tree
Constructing a tree from the weight vectors:
V = {v_1, v_2, …, v_N}   (1)
is a set of unit weight vectors uniformly distributed in the first octant, where
v_i = (v_{i1}, v_{i2}, …, v_{iM})^T
is the i-th weight vector and
R_+^M = {v ∈ R^M : v_j ≥ 0, j = 1, …, M}
is the first octant of the M-dimensional space. A tree structure is constructed based on the unit weight vector set; each node of the tree contains a weight vector. The center of all weight vectors assigned to a node is computed and recorded as v_c. Since this center vector may not be in the unit weight vector set V, we first find the weight vector v in V closest to the center vector v_c:
v = argmin_{u ∈ V} angle(u, v_c)
The node stores this weight vector v, and v is removed from the set V. Different methods are used to determine the center vector v_c for the root node, the second-layer nodes, and the other nodes, as follows:
The center vector of the root is set to v_c = (1/√M)(1, 1, …, 1)^T.
The center vectors of the second-layer nodes are the extreme points e_1, e_2, …, e_M, where e_i is the unit vector along the i-th coordinate axis.
For the other nodes, the center vectors are calculated with a modified K-means clustering algorithm; specifically, to balance the clusters, each weight vector v_j is assigned to the weighted-nearest cluster:
i* = argmin_i λ_i · angle(v_j, c_i)
where λ_i is the fraction of the weight vectors assigned to the i-th cluster, initialized to 0.5, and c_i is the center vector of the i-th cluster. The root node has M child nodes, and every other node has at most 2 child nodes. Each weight vector is assigned to its nearest center, i.e., for each weight vector v_j ∈ V its nearest center is calculated as above; the set of weight vectors assigned to the i-th cluster is recorded as V_i. The tree structure is obtained by recursively calling the tree-construction algorithm.
2. Population initialization and update based on tree structure
The weight vector v_i of each node defines the following sub-problem:
g(x|v_i) = d_1 + θd_2   (5)
where d_1 = F(x)^T v_i and d_2 = ||F(x) − d_1 v_i||; θ is a balance parameter trading off the distances d_1 and d_2; F(x) is the objective function vector and T denotes vector transposition. These sub-problems are organized adaptively through the tree structure. Initially, the nodes of the first L = 2 layers are set as active, and Φ = {x_1, …, x_N} is the initial solution set; each sub-problem of each active node then selects its best solution according to the value of formula (5) to initialize the population, forming the initial population.
For the weight vector v_i, x is the current best solution, y_i is the newly generated solution, and P is the current node pointer; P.v and P.x are the weight vector of the current node and its associated best solution, respectively. s is the point equivalent to F(x) under formula (5) along the direction of the weight vector v_i, i.e.
s = g(x|v_i) v_i   (7)
As shown in FIG. 5, the equivalence point s divides the objective space into three sub-regions I, II and III:
Sub-region I: if g(y_i|P.v) < g(P.x|P.v), the solution y_i is better than x; P.x and y_i are swapped, and the equivalence point is updated, P.s = g(P.x|P.v) P.v.
Sub-region III: if P.s < F(y_i) component-wise, the solution is worse than x in this region, and no operation is performed.
Sub-region II: if P is a leaf node, y_i randomly replaces one solution in the archive A; otherwise, y_i is passed down to the nearest child node, j* = argmin_j angle(T_j.v, F(y_i)), where angle(T_j.v, F(y_i)) denotes the angle between them, and the node pointer P is moved to the child node T_{j*}.
When the convergence rate of the population falls below a given value, the population size is expanded. The improvement rate of the population is defined as the total improvement of the individuals in the population:
Δ_t = Σ_{i=1}^{N*} [ g(x_i^{t−1}|v_i) − g(x_i^t|v_i) ] / g(x_i^{t−1}|v_i)   (6)
where x_i^{t−1} and x_i^t are the solutions of the same sub-problem in generations t−1 and t, respectively. When the convergence rate is lower than the given value, the number of active node layers in the tree is increased; the current improvement rate Δ_t is calculated by the population convergence improvement rate formula defined in equation (6).
3. Selection of individuals participating in crossover and mutation
If the straight line f = αv, α ≥ 0, where f = (f_1, …, f_M)^T is variable, intersects the Pareto front, the intersection coordinates are the objective vector of the optimal solution of sub-problem (5), and the angle between the weight vector v and the objective vector of the Pareto-optimal solution of the corresponding sub-problem is zero. The larger the angle between a sub-problem's weight vector and a solution's objective vector, the farther the solution is from that sub-problem's optimum; we therefore have reason to believe that such a solution has greater room for improvement. However, this assumption fails if the straight line does not intersect the Pareto front. A solution life cycle is introduced to remedy this deficiency, i.e. if the line does not intersect the Pareto front, the solution life-cycle algorithm is used. On the basis that the longer a solution's life cycle, the lower its selection probability, a selection strategy is proposed that matches crossover-mutation individuals using the life span and the angle between the objective vector and the corresponding weight vector.
Specifically, for each solution x_i in the population, the number of generations from the individual's creation to the current generation is recorded as its age, age_i, and the angle between its objective vector and the corresponding weight vector is recorded as angle_i. Tournament selection is used to select N solutions from the population as parent individuals to participate in crossover and mutation; the set of parents is recorded as
parent1 = {p1_1, p1_2, …, p1_N}
where p1 denotes the first parent. In the tournament selection, two individuals are randomly chosen from the population, and the one with the smaller age is selected as the parent individual to participate in crossover and mutation and generate new individuals; when the two ages are equal, the solution with the larger angle value is selected. For each individual p1_i in parent1, the individual p2_i paired with it for recombination is chosen in one of two cases, where p2 denotes the second parent:
1) In the initial stage of the algorithm, when the population size N* ≤ N/2, the external set A is introduced to improve the algorithm's ability to maintain population diversity: p2_i is randomly selected from the external set A and paired with p1_i for recombination to generate a new individual;
2) In the later stage of the algorithm, when the population size N* > N/2, recombination is performed on two similar solutions: p2_i is randomly selected from solutions similar to p1_i and paired with it for recombination.
Experimental data:
As shown in FIG. 6 and FIG. 7, the proposed algorithm was compared with eight representative evolutionary multi-objective optimization algorithms:
1. NSGA-II: a fast and elitist multi-objective genetic algorithm, one of the most commonly used EMO algorithms for solving multi-objective optimization problems. It uses Pareto-dominance-based sorting as the primary selection mechanism and a density-based criterion, such as crowding distance, as the secondary selection mechanism.
2. SMS-EMOA: "SMS-EMOA: multiobjective selection based on dominated hypervolume". SMS-EMOA uses an individual's hypervolume contribution as the criterion for selecting offspring. Its purpose is to maximize the dominated hypervolume of the resulting solution set during optimization, preserving both the convergence and the diversity of the candidate solution set as much as possible.
3. IBEA: "Indicator-based selection in multiobjective search". IBEA uses a quality indicator to compare solutions and select the next-generation population, and therefore requires no additional diversity-preservation mechanism. The binary indicator I_ε+ is used in the simulation experiments.
4. SRA2: "Stochastic ranking algorithm for many-objective optimization based on multiple indicators". SRA employs a stochastic ranking technique to balance the search biases of different indicators. Its performance is further improved by storing well-converged solutions in a direction-based archive, which also maintains diversity.
5. MOEA/D: "A multiobjective evolutionary algorithm based on decomposition". As one of the best-known decomposition-based evolutionary multi-objective optimization algorithms, MOEA/D decomposes a multi-objective optimization problem into a set of scalar subproblems via a group of weight vectors and optimizes them simultaneously. Since each subproblem is associated with a search direction, sufficient selection pressure toward the Pareto front can be provided.
6. M2M: this method decomposes a multi-objective optimization problem into several simple multi-objective optimization subproblems and solves them collaboratively. Because it prioritizes diversity, the algorithm maintains population diversity well and achieves good results on imbalanced optimization problems.
7. RVEA: "A reference vector guided evolutionary algorithm for many-objective optimization". RVEA uses a set of reference vectors to decompose a multi-objective optimization problem into a set of single-objective subproblems. Its angle-penalized distance dynamically balances convergence and diversity according to the number of objectives and the generation count.
8. Two_Arch2: "An improved two-archive algorithm for many-objective optimization". Two_Arch2 maintains two archives (a convergence archive and a diversity archive) during the evolutionary search and assigns two different selection principles (indicator-based and Pareto-based) to them, which balances convergence, diversity, and complexity.
Table 1 below reports the average IGD values obtained by the proposed REMOA and the 8 state-of-the-art algorithms over 30 independent runs on the ZDT series of test functions and the MF1-MF9 test functions. According to the rank-sum test results, the proposed algorithm performs well and is significantly better than the comparison algorithms. (The table itself appears only as images in the original document.)
Table 2 below reports the average HV values obtained by the proposed REMOA and the 8 state-of-the-art algorithms over 30 independent runs on the ZDT series of test functions and the MF1-MF9 test functions. According to the rank-sum test results, the proposed algorithm performs well and is significantly better than the comparison algorithms. (The table itself appears only as images in the original document.)
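For reference, the IGD indicator reported in Tables 1 and 2 can be computed as follows (a standard textbook definition, not code from the patent; smaller IGD is better):

```python
import numpy as np

def igd(reference_front, obtained_set):
    """Inverted Generational Distance: the mean, over each point of a
    reference set sampled from the true Pareto front, of the Euclidean
    distance to its nearest obtained solution.  Measures both convergence
    to and coverage of the front."""
    R = np.asarray(reference_front, dtype=float)
    S = np.asarray(obtained_set, dtype=float)
    d = np.linalg.norm(R[:, None, :] - S[None, :, :], axis=2)  # |R| x |S|
    return float(d.min(axis=1).mean())

# Example: three reference points on the line f1 + f2 = 1.
ref = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
print(igd(ref, [[0.0, 1.0], [1.0, 0.0]]))
```

A set containing every reference point scores 0; missing the middle of the front, as in the example, is penalized by the unmatched reference point's distance.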
While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (4)

1. A multi-objective optimization method based on hierarchical decomposition of a tree structure is characterized by comprising the following steps:
S1, initializing parameters
Generating a set of unit weight vectors V = {v_1, v_2, ..., v_n} uniformly distributed in the first octant, where v_i is the ith weight vector, i = 1...n;
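Step S1 can be sketched as follows (a hedged illustration: the patent does not specify how the uniformly distributed unit weight vectors are produced; a Das-Dennis simplex lattice normalized to unit length is one common choice):

```python
import itertools
import numpy as np

def unit_weight_vectors(M, H):
    """Generate unit weight vectors spread over the first octant of an
    M-dimensional objective space: build a Das-Dennis simplex lattice with
    H divisions (rows sum to 1), then normalize each row to unit length."""
    grid = []
    for c in itertools.combinations(range(H + M - 1), M - 1):
        # Stars and bars: each combination of divider positions encodes
        # one composition of H into M non-negative parts.
        prev, parts = -1, []
        for pos in c:
            parts.append(pos - prev - 1)
            prev = pos
        parts.append(H + M - 2 - prev)
        grid.append(parts)
    W = np.asarray(grid, dtype=float) / H              # simplex lattice
    return W / np.linalg.norm(W, axis=1, keepdims=True)  # unit vectors

V = unit_weight_vectors(M=3, H=4)
print(V.shape)  # (15, 3): C(H+M-1, M-1) = C(6, 2) = 15 vectors
```

The number of vectors is C(H+M-1, M-1), so H controls the granularity of the decomposition.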
S2, constructing the tree and initializing the population
Uniformly and randomly generating N initial solutions in the decision space as the initial solution set Φ_0;
Setting the number of active node layers L = 2, the minimum population-convergence improvement rate Δ = 0.1, and the center weight vector v_c (an M-dimensional column vector; its formula is given as an image in the original), where M is the number of objective functions;
Initializing the population-convergence improvement rate Δ_0 = 1, the current iteration number t = 1, and the external set A = ∅;
Constructing a spanning tree T from the unit weight vector set V using the tree-construction algorithm, where each node of the tree contains a weight vector and is associated with a subproblem via that weight vector; the nodes of the first L = 2 levels are set as active, and nodes without active descendants are recorded as leaf nodes;
Each subproblem of an active node selects an optimal individual from the initial solution set Φ_0, and the selected solutions form the initial population T.X; the population size N* = |T.X| equals the number of active nodes, where X denotes the solutions associated with the active nodes;
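The initialization of T.X described in this step can be sketched as follows (illustrative Python; `objective` and `scalarize` are placeholder names for the objective evaluation and the scalar subproblem of formula (5), not names from the patent):

```python
def init_population(active_weights, Phi0, objective, scalarize):
    """For each active node's weight vector, pick from the initial solution
    set Phi0 the individual minimizing that node's scalar subproblem.
    The resulting population has one solution per active node."""
    objs = [objective(x) for x in Phi0]
    TX = []
    for v in active_weights:
        best = min(range(len(Phi0)), key=lambda i: scalarize(objs[i], v))
        TX.append(Phi0[best])
    return TX  # population size N* equals the number of active nodes
```

Note that the same individual may initially serve several subproblems; later generations differentiate the population.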
S3, evolutionary algorithm
In each generation, a solution set Q is selected through the mating selection strategy, and N offspring Y are created from Q;
When the population improvement rate Δ_{t-1} < Δ and N* < N, the number of active nodes in the tree is expanded with L = L + 1 and Δ_t is reinitialized to 1; the solution set Φ is the union of the offspring Y and T.X; then the subproblem of each active node selects an optimal individual from Φ to form the new population T.X, with population size N* = |T.X|;
Otherwise, the solutions corresponding to the nodes of the tree T and its external set A are updated according to the population evolution algorithm, and the current improvement rate Δ_t is calculated;
The iteration counter is incremented: t = t + 1;
The algorithm exits when the number of iterations reaches the maximum.
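The control flow of step S3 can be outlined as follows (a sketch only: the injected callables stand in for the patent's operators, which are detailed in claims 2 and 3; only the branch between expanding the active layers and updating the tree follows the claim text):

```python
from types import SimpleNamespace

def evolve(tree, N, T_max, delta_min, mating, make_offspring, reselect, update):
    """Skeleton of the S3 loop: each generation either expands the number
    of active node layers (when convergence has stalled and the population
    is still small) or updates the tree and its external archive."""
    delta_prev = 1.0                       # improvement rate Delta_0 = 1
    for t in range(1, T_max + 1):
        Q = mating(tree)
        Y = make_offspring(Q, N)
        if delta_prev < delta_min and tree.size < N:
            tree.L += 1                    # activate one more layer of nodes
            delta_prev = 1.0               # reinitialize Delta_t
            tree.X = reselect(Y + tree.X)  # one best solution per active node
            tree.size = len(tree.X)
        else:
            delta_prev = update(tree, Y)   # returns the improvement rate
    return tree

# Toy run with stub operators, just to exercise the control flow.
tree = SimpleNamespace(L=2, size=1, X=[0])
evolve(tree, N=10, T_max=3, delta_min=0.1,
       mating=lambda tr: [],
       make_offspring=lambda q, n: [1],
       reselect=lambda s: s,
       update=lambda tr, y: 0.05)
print(tree.L, tree.size)  # → 3 2
```

In the toy run, the first stalled generation triggers one layer expansion, after which the reset improvement rate suppresses further expansion.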
2. The multi-objective optimization method based on hierarchical decomposition of tree structure as claimed in claim 1, wherein the specific content of step S2 is:
Constructing the tree from the weight vectors:
V = {v_1, v_2, ..., v_N}    (1)
is a set of unit weight vectors uniformly distributed in the first octant of the M-dimensional space (formulas (2) and (3) are given as images in the original). A tree structure is constructed from this set: each node of the tree contains one weight vector; the center of all weight vectors assigned to a node is calculated and denoted v_c, and the weight vector v in V nearest to the center vector v_c is found first (formula (4), given as an image in the original); the node then contains the weight vector v, and v is removed from the set V. The center vector v_c is determined by different methods for the root node, the second-level nodes, and the other nodes.
The weight vector v_i of each node defines the following subproblem:
g(x|v_i) = d_1 + θd_2    (5)
where d_1 = F(x)^T v_i and d_2 = ||F(x) − d_1 v_i||; θ is a balance parameter trading off the distances d_1 and d_2, F(x) is the objective function vector, and T denotes the transpose. These subproblems are adaptively organized through the tree structure. Initially, the nodes of the first L = 2 layers are set as active nodes and Φ = {x_1, ..., x_N} is the initial solution set; then each subproblem of each active node selects an optimal solution according to the value of formula (5) to initialize the population, forming the initial population;
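Formula (5) is a penalty-style scalarization along the weight direction; a direct transcription (with θ = 5.0 as an assumed default, since the patent does not fix its value here):

```python
import numpy as np

def g_scalar(Fx, v, theta=5.0):
    """Scalar subproblem of formula (5): g(x|v) = d1 + theta*d2, where
    d1 = F(x)^T v is the projection length onto the unit weight vector v
    and d2 = ||F(x) - d1*v|| is the perpendicular distance to that
    direction.  theta trades off convergence (d1) against staying close
    to the search direction (d2)."""
    Fx = np.asarray(Fx, dtype=float)
    v = np.asarray(v, dtype=float)
    d1 = float(Fx @ v)
    d2 = float(np.linalg.norm(Fx - d1 * v))
    return d1 + theta * d2
```

A solution lying exactly on the ray through v incurs no penalty (d2 = 0), so g reduces to the projected distance from the origin.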
for the weight vector v i X is the current best solution, y i For the newly generated solution, P is the current node pointer; p.v and P.x are respectively the weight vector of the current node and the associated optimal solution; s is the same weight vector v of formula (5) as F (x) i Equivalent points in direction, i.e.
s=g(x|v i )v i (7)
Dividing the target space into three sub-regions I, II and III by equivalence point s;
Subregion I: if g(y_i|P.v) < g(P.x|P.v), y_i is a better solution than x; P.x and y_i are exchanged, and the equivalent point of the current node is updated as P.s = g(P.x|P.v)·P.v;
Subregion III: if the condition of subregion III holds (the condition is given as an image in the original), y_i falls in a region worse than x, and no operation is performed;
Subregion II: if P is a leaf node, y_i randomly replaces one solution in the archive A; otherwise, y_i is passed down to the nearest child node T_{j*} (the selection formula is given as an image in the original), where <T_j.v, F(y_i)> denotes the angle between the child's weight vector and the objective vector of y_i, and the node pointer P is moved to the child node T_{j*};
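The subregion-I and subregion-II handling can be sketched as follows (heavily hedged: node field names are illustrative, and the subregion-III early-stop test is omitted because its condition appears only as an image in the original, so here a non-improving solution is always passed down):

```python
import random
from types import SimpleNamespace

import numpy as np

def angle(u, w):
    """Angle between two vectors."""
    u, w = np.asarray(u, float), np.asarray(w, float)
    c = u @ w / (np.linalg.norm(u) * np.linalg.norm(w))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def update_node(P, y, Fy, g, archive, rng=random):
    """Walk a solution down the tree.  Subregion I: the offspring improves
    the node's subproblem, so it is exchanged with P.x and the equivalent
    point P.s is refreshed; the displaced solution then continues down.
    At a leaf the surviving solution replaces a random member of the
    external archive A (subregion II); otherwise it descends to the child
    whose weight vector makes the smallest angle with its objectives."""
    if g(Fy, P.v) < g(P.Fx, P.v):                     # subregion I: exchange
        P.x, P.Fx, y, Fy = y, Fy, P.x, P.Fx
        P.s = g(P.Fx, P.v) * np.asarray(P.v, float)   # refresh equivalent point
    if not P.children:                                # leaf: into the archive
        if archive:
            archive[rng.randrange(len(archive))] = y
        return
    child = min(P.children, key=lambda c: angle(c.v, Fy))
    update_node(child, y, Fy, g, archive, rng)
```

The recursion depth is bounded by the tree height, so each offspring is compared against at most one solution per level.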
When the convergence rate of the population is lower than a given value, the population scale is expanded; the yield of improvement of the population is defined as the total improvement of the individuals in the population:
Figure FDA0003840911210000031
Figure FDA0003840911210000032
and
Figure FDA0003840911210000033
respectively solving the same subproblem in the generation process of t-1 and t; when the convergence rate is lower than a given value, increasing the number of layers of active nodes in the tree; calculating the current improvement rate Delta by defining the population convergence improvement rate formula in the formula (6) t
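One plausible reading of the improvement-rate computation (the exact formula (6) is given only as an image in the original) sums the relative per-subproblem improvement of the scalarized values between consecutive generations:

```python
def improvement_rate(g_prev, g_curr):
    """Total improvement of the individuals in the population: for each
    subproblem, the relative decrease of its scalarized value g(.)
    between generations t-1 and t, summed over the population.  This is
    an assumed reconstruction, not the patent's exact formula (6)."""
    total = 0.0
    for gp, gc in zip(g_prev, g_curr):
        if gp > 0:
            total += max(gp - gc, 0.0) / gp
    return total
```

A value near zero means no subproblem improved, which is the trigger for expanding the active layers.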
3. The multi-objective optimization method based on hierarchical decomposition of tree structure as claimed in claim 2, wherein the specific content of step S3 is:
If the line defined by the weight vector (its equation is given as an image in the original), with the objective values f_i as variables, intersects the Pareto front, the coordinates of the intersection point give the objective vector of the optimal solution of the subproblem in formula (5); that is, the angle between the weight vector v and the objective vector of the Pareto-optimal solution of the corresponding sub-optimization problem is zero. The larger the angle between the weight vector of a sub-optimization problem and the objective vector of a solution, the farther that solution is from the optimum of the sub-optimization problem. If the line does not intersect the Pareto front, an algorithm based on the solution's life cycle is introduced:
For each solution x_i in the population, the number of generations from the creation of the individual to the current generation is recorded as the individual's age, age_i, and the angle between its objective vector and the corresponding weight vector is recorded as angle_i; tournament selection is used to select N solutions from the population as parent individuals participating in crossover and mutation, and the set of parent individuals is denoted parent1 = {p1_1, p1_2, ..., p1_N}, p1 being a first parent; in the tournament selection, two individuals are randomly selected from the population, and the individual with the smaller age is selected as a parent to participate in crossover and mutation and generate a new individual; when the ages of the two individuals are the same, the solution with the larger angle value is selected; for each individual p1_i in parent1, the individual p2_i paired with it for recombination, p2 being a second parent, is chosen according to the following two cases:
1) in the initial stage of the algorithm, when the population size N* ≤ N/2, an external set A is introduced to improve the algorithm's ability to maintain population diversity, and p2_i is randomly selected from the external set A and paired with p1_i for recombination to generate a new individual;
2) in the later stage of the algorithm, when the population size N* > N/2, recombination is performed on the basis of two similar solutions, and p2_i is randomly selected from the neighborhood of p1_i for pairing and recombination.
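The two-case choice of the second parent can be sketched as follows (illustrative Python; the neighbor set used in the later stage appears only as an image in the original and is passed in here as an assumed precomputed mapping):

```python
import random

def pick_mate(i, N_star, N, archive, neighbors, rng=random):
    """Choose the second parent p2 for the i-th first parent.  While the
    population is still small (N* <= N/2), draw p2 at random from the
    external archive A to preserve diversity; afterwards (N* > N/2),
    recombine two similar solutions by drawing p2 from a neighbor set
    of p1 (an assumed representation of the original's neighbor set)."""
    if N_star <= N / 2:
        return rng.choice(archive)
    return rng.choice(neighbors[i])
```

The switch point N/2 ties the exploration-heavy archive phase to the tree's growth: once enough node layers are active, mating becomes local.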
4. The multi-objective optimization method based on hierarchical decomposition of a tree structure as claimed in claim 2, wherein the center vector v_c is determined by different methods for the root node, the second-level nodes and the other nodes, specifically:
The center vector of the root is set to the formula given as an image in the original; the center vector of a second-layer node is the corresponding extreme point (given as an image in the original);
For the other nodes, the center vectors are calculated using an improved K-means clustering algorithm: each weight vector v_j is assigned to the weighted-nearest cluster (the assignment formula is given as an image in the original), where λ_i is the percentage of weight vectors assigned to the ith cluster, initialized to 0.5, and the center of the ith cluster is likewise given as an image; the root node has M child nodes, and every other node has at most 2 child nodes; each weight vector is assigned to the nearest center, i.e., for each weight vector v_j ∈ V, its nearest center is calculated (formula given as an image in the original), and the set of weight vectors assigned to the ith cluster is denoted V_i; the tree structure is obtained by recursively invoking the tree-construction algorithm.
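The weighted assignment step can be sketched as follows (an assumption-laden illustration: the patent's weighting formula is shown only as an image, so the score λ_i · angle(v_j, c_i) used here is one plausible reading, not the claimed formula):

```python
import numpy as np

def assign_clusters(V, centers, lam):
    """Weighted nearest-center assignment for the modified K-means step:
    score each (vector, cluster) pair by lam[i] times the angle to center
    i, where lam[i] is the fraction of vectors currently assigned to
    cluster i (initialized to 0.5), so crowded clusters become less
    attractive.  Returns the chosen cluster index per weight vector."""
    V = np.asarray(V, dtype=float)
    C = np.asarray(centers, dtype=float)
    cos = (V @ C.T) / (np.linalg.norm(V, axis=1, keepdims=True)
                       * np.linalg.norm(C, axis=1))
    ang = np.arccos(np.clip(cos, -1.0, 1.0))
    return np.argmin(np.asarray(lam, dtype=float) * ang, axis=1)
```

With equal λ this degenerates to plain angular nearest-center assignment; unequal λ biases ties toward the smaller cluster, which keeps the binary tree balanced.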
CN202011022305.4A 2020-09-25 2020-09-25 Multi-objective optimization method based on hierarchical decomposition of tree structure Active CN112270120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011022305.4A CN112270120B (en) 2020-09-25 2020-09-25 Multi-objective optimization method based on hierarchical decomposition of tree structure


Publications (2)

Publication Number Publication Date
CN112270120A CN112270120A (en) 2021-01-26
CN112270120B true CN112270120B (en) 2022-11-22





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant