CN113205172A - Multitask evolution algorithm based on self-adaptive knowledge migration - Google Patents

Multitask evolution algorithm based on self-adaptive knowledge migration

Info

Publication number
CN113205172A
Authority
CN
China
Prior art keywords
task
individual
individuals
population
tasks
Prior art date
Legal status
Pending
Application number
CN202110519904.5A
Other languages
Chinese (zh)
Inventor
王磊
孙倩
江巧永
李薇
赵志强
Current Assignee
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN202110519904.5A
Publication of CN113205172A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/12 Computing arrangements based on biological models using genetic models
    • G06N3/126 Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS USING KNOWLEDGE-BASED MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/02 Knowledge representation; Symbolic representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Physiology (AREA)
  • Genetics & Genomics (AREA)
  • Complex Calculations (AREA)

Abstract

The invention discloses a multi-task evolution algorithm based on self-adaptive knowledge migration and proposes a strategy that adaptively adjusts the random mating probability (RMP) between different tasks. The proposed adaptive knowledge migration strategy is combined with basic multi-task optimization algorithms to verify its generality, and simulation experiments comparing several multi-task evolution algorithms with the proposed algorithm on benchmark test functions verify that adaptively adjusting the RMP improves multi-task optimization efficiency.

Description

Multitask evolution algorithm based on self-adaptive knowledge migration
Technical Field
The invention belongs to the technical field of multitask optimization methods in evolutionary computation, and particularly relates to a multitask evolution algorithm based on self-adaptive knowledge migration.
Background
When handling complex problems in fields such as scientific research, engineering technology, and economic management, it is easy to observe that many problems share a certain similarity. A traditional evolutionary algorithm can only solve a single optimization problem and cannot dynamically exchange useful information among several different optimization problems, so the useful information shared across multiple tasks is left unexploited.
The introduction of the Multi-Tasking Evolutionary Algorithm (MTEA), also known as the Multifactorial Evolutionary Algorithm (MFEA), enables evolutionary algorithms to solve several different optimization tasks across domains. MTEA is inspired by the multifactorial model of inheritance: many traits of natural organisms are not determined by a single gene but are controlled by multiple genes and influenced by cultural (memetic) factors. MTEA incorporates the idea of coordinated evolution between genes and culture. Multiple tasks correspond to a multi-gene environment; when several tasks are optimized simultaneously, the tasks can learn useful information by exploiting the similarity of their evolution, and knowledge extracted from the learning experience of similar tasks can be constructively applied to more complex or unseen tasks. Compared with traditional evolutionary algorithms, MTEA has clear advantages: several cross-domain tasks are optimized simultaneously in a unified search space, and information belonging to different tasks can migrate within that space. A target task uses the learning experience of other tasks to accelerate its own optimization, so the optimal solutions of multiple optimization tasks can be obtained faster. The general formulation of MTEA is shown below:
{x_1, x_2, x_3, ..., x_K} = argmin{f_1(x), f_2(x), f_3(x), ..., f_K(x)}
where K denotes the number of optimization tasks, f_i(x) is the ith task, x_i is its corresponding optimal solution, and i ∈ [1, K].
In 2016, Professor Ong of Nanyang Technological University in Singapore first proposed MTEA to deal with cross-domain multitask optimization problems. Current research on MTEA falls into two main categories: theoretical study of MTEA, and the fusion and improvement of MTEA with other traditional evolutionary algorithms. The theoretical work mainly investigates the working mechanism of MTEA and demonstrates its feasibility at the theoretical level. Lian et al. proposed a simple MTEA that simultaneously optimizes the benchmark Jump_k and LeadingOnes functions. Theoretical analysis shows that the upper bound of the expected runtime of the LeadingOnes function does not increase, while the upper bound of the expected runtime of the Jump_k function improves significantly compared with the single-task case. These results show that MTEA can solve problems that traditional optimization methods find difficult. Gupta et al. used the solution of Sudoku puzzles to verify the interrelationship of gene migration and population diversity in a multitasking environment. Although gene migration and population diversity play equally important roles in MTEA, gene migration has a more pronounced effect on the rapid convergence of the population for strongly complementary tasks. These findings demonstrate the theoretical feasibility of MTEA.
MTEA can be fused with other classical evolutionary algorithms, using the strengths of their evolutionary operators to improve optimization performance. Feng et al. combined the classical Differential Evolution (DE) algorithm and the Particle Swarm Optimization (PSO) algorithm with MFEA to obtain the MFDE and MFPSO algorithms, whose optimization performance is better than that of the original DE and PSO. Cheng performed multitask evolution from the viewpoint of sub-populations in combination with PSO, generating new offspring when the optimization enters a stagnation phase and applying Lamarckian learning to the globally optimal individuals. Extending MFDE so that it can adaptively adjust how knowledge migrates between different tasks, Tang et al. proposed an adaptive multifactorial particle swarm optimization algorithm that adaptively adjusts the learning probability between tasks. Cai proposed a Differential Vector Sharing Mechanism (DVSM) for multitask differential evolution, which aims to capture, share and exploit useful knowledge among different tasks and to improve knowledge-migration efficiency. Zhou et al. adaptively select the MTEA crossover operator according to the improvement rate of the offspring, which improves the optimization performance of MTEA. Feng proposed an EMT algorithm with explicit genetic transfer across tasks, which allows multiple search mechanisms with different biases to be included in the EMT paradigm.
MTEA has also been applied successfully to real-world engineering practice. Cheng et al. consider that the mechanism of cooperative co-evolution is suitable for exploiting the commonality or complementarity between different (but possibly related) optimization tasks in a single multitasking environment, and further emphasize the effectiveness of cross-problem multitasking through a framework based on co-evolutionary algorithms, with which complex engineering design problem instances are solved effectively. Compared with the traditional approach of optimizing each problem separately, a membership function optimization method based on fuzzy association mining benefits from the knowledge sharing of the MTEA optimization process. Ding et al. improved existing MTEA and proposed two strategies, one for decision-variable translation and one for decision-variable shuffling, to facilitate knowledge transfer between optimization problems whose optima are located differently, effectively solving expensive optimization problems. MTEA can also be used for the parameter optimization of photovoltaic models and for vehicle routing problems. In addition, multitask optimization algorithms have successfully solved the layout optimization of large-scale virtual machines in cloud computing and the optimization of operating indices in mineral processing.
Although MTEA has made progress in theoretical research and in improving algorithm performance, it still has many shortcomings and faces various challenges, which can be summarized in the following two aspects.
(1) When solving technical problems in real application scenarios, the degree of correlation between optimization tasks cannot be reflected in real time, so the efficiency is low, and knowledge migration between tasks can have negative effects. Tasks in a multitask environment have a certain similarity or complementarity, and MTEA transfers useful information between tasks by exploiting these shared characteristics, thereby accelerating the convergence of the multiple tasks. However, during the optimization there are stages at which the tasks are dissimilar or non-complementary; a multitask algorithm that cannot adaptively detect this situation suffers negative transfer, which diverts the optimization direction of the tasks, harms optimization performance, and lowers the efficiency of problem solving.
(2) The efficiency and adaptability of the MTEA algorithm need to be improved. When multiple optimization tasks are performed simultaneously, the efficiency and adaptability of the evolutionary algorithm are particularly critical to its practical value. The traditional MTEA only adds operations such as assortative mating and vertical cultural transmission on top of evolutionary operators such as crossover, mutation and selection, and its efficiency is not high. Therefore, based on the study of task characteristics, designing new encoding and decoding schemes and efficient evolutionary operators according to the population distribution characteristics, so as to actively control population diversity and adaptively adjust the population search direction, is a key problem to be solved.
Disclosure of Invention
The invention aims to provide a multitask evolution algorithm based on self-adaptive knowledge migration which solves multitask optimization problems more effectively than the existing MFDE and MFEA.
The technical scheme adopted by the invention is as follows: a multitask evolution algorithm based on self-adaptive knowledge migration comprises the following steps:
step 1, population initialization: randomly generate N individuals in the unified search space as the initial population P = {y_1, y_2, y_3, ..., y_N}, initialize the learning period to 75 generations and the inter-task random mating probability RMP to 0.3, evaluate each individual on every optimization task, and compute each individual's scalar fitness value φ_i^j and skill factor τ_i;
the scalar fitness value φ_i^j is calculated by formula (1), where λ is a penalty coefficient, δ_i^j is the total constraint violation of the ith individual on the jth task, and f_i^j is the objective value of the ith individual on the jth task; the skill factor τ_i denotes the task on which the individual performs best, τ_i = argmin_j{r_i^j}, where r_i^j is the rank of individual i on the jth task when the population is sorted by scalar fitness value, j ∈ {1, 2, 3, ..., K}, and K is the total number of tasks in the search space;
φ_i^j = f_i^j + λ·δ_i^j    (1)
step 2, evaluating the degree of correlation between tasks: the cross-task fitness distance correlation (CTFDC) of formula (2) is used to calculate the degree of correlation between tasks; T_1 and T_2 denote task 1 and task 2, n is the number of individuals sampled on task T_1, and f_1 = (f_1^1, f_1^2, ..., f_1^n) is the vector of fitness values of the sampled individuals evaluated on task T_1, where f_1^i is the fitness value of the ith sampled individual on T_1, i ∈ [1, n]; d_2 = (d_2^1, d_2^2, ..., d_2^n) is the vector of distances between all sampled individuals and x_best, the sampled individual with the best fitness value when evaluated on the source task, where d_2^i is the Euclidean distance between the ith individual and x_best; f̄_1 and d̄_2 are the means, and σ(f_1) and σ(d_2) the standard deviations, of f_1 and d_2;

CTFDC = [ (1/n)·Σ_{i=1}^{n} (f_1^i − f̄_1)(d_2^i − d̄_2) ] / [ σ(f_1)·σ(d_2) ]    (2)
step 3, updating RMP: recalculate RMP according to the degree of correlation obtained in step 2; if the tasks are negatively correlated, RMP is set to 0, otherwise the degree of similarity s between tasks reflected by the CTFDC is mapped to a new RMP value through f_m(s); the f_m(s) function is given by formula (3), where a_1 and b_1 are obtained from the intervals [0, 1] and [0.3, 1] according to the interval mapping relation of formula (4); [N_min, N_max] is the target interval, [O_min, O_max] is the original interval, and O_{x,y} is any point in the original interval;
RMP = f_m(s) = a_1·s + b_1    (3)
N_{x,y} = (N_max − N_min)/(O_max − O_min) · (O_{x,y} − O_min) + N_min    (4)
step 4, mutation: when rand(0, 1) < RMP is satisfied, information is migrated between tasks and the offspring are updated using formula (5), in which individuals are randomly selected from the target population, an individual is selected from the source-task population, and F is a scaling factor; otherwise the DE/rand/1 operator of formula (6) is used and the skill factors of the offspring are updated, where x_r1, x_r2 and x_r3 are individuals randomly selected from the target population; when an offspring is generated by inter-task information migration, the source task and the target task are chosen as its skill factor with equal probability; otherwise the target task is directly taken as the skill factor of the offspring;
[Formula (5), the cross-task mutation operator, appears only as an image in the original document.]
v_i = x_r1 + F·(x_r2 − x_r3)    (6)
step 5, crossover: perform the crossover operation on the individuals generated by mutation;
step 6, evaluation: the offspring are evaluated using a vertical cultural transmission mechanism; when an offspring is generated through knowledge migration it carries information from several different tasks and its parents have different skill factors, so the offspring randomly inherits one of its parents' skill factors with equal probability; when the generation of the offspring does not involve information migration between tasks, the offspring directly inherits the skill factor of its parent; each offspring is evaluated only on the task indicated by its skill factor;
step 7, merging and updating: merge the parent population and the offspring population, and update the scalar fitness values and skill factors of the individuals in the merged population;
step 8, selection: sort the individuals in the merged population according to their scalar fitness values, and select the best N individuals to form the next-generation population;
step 9, judge whether the termination condition is met; if it is not met, return to step 2.
The present invention is also characterized in that,
the step 1 specifically comprises the following steps: randomly generating N individuals in a search space as an initial population, and P ═ y1,y2,...,yNWith initial individual dimension max { D }i},i∈[1,K]K is the total number of the multitasks, the learning period is initialized to 75 generations, the RMP is initialized to 0.3, each individual is evaluated according to each optimization task, and the scalar fitness value of the individual is calculated
Figure BDA0003062387470000064
And skill factor τi
Step 1.1, calculating a target value by random key decoding: will initializeRandom key values in the population, according to xi=Li+(Ui-Li)×yiMapping to a search space of a certain actual optimization task, wherein L, U is the upper and lower boundaries of the actual optimization task respectively, and then calculating the target value of each individual on each task;
step 1.2, initial population sorting: for each optimization problem, the population is ranked according to the objective values, yielding K ranking sets {set_1, set_2, ..., set_K}, where set_i is the ranking of the N individuals of the population according to their objective values on the ith task;
step 1.3, individual evaluation: calculate the scalar fitness value and skill factor of each individual according to the population sorting result of step 1.2.
Step 1.3 specifically comprises the following steps:
step 1.3.1, calculate the scalar fitness value of each individual: φ_i^j = f_i^j + λ·δ_i^j, where λ is a penalty coefficient, δ_i^j is the total constraint violation of the ith individual on the jth task, and f_i^j is the objective value of the ith individual on the jth task;
step 1.3.2, calculate the factorial rank of each individual: the factorial rank r_i^j is the position of individual i in the set of individuals sorted by scalar fitness value on optimization task j, where i denotes the ith individual of the initial population, i ∈ [1, N], and j denotes an optimization task, j ∈ [1, K];
step 1.3.3, calculate the skill factor τ_i of each individual: the skill factor represents the task on which the individual has an advantage, τ_i = argmin_j{r_i^j}, where r_i^j is calculated in step 1.3.2.
The step 2 specifically comprises the following steps: the cross-task fitness distance correlation CTFDC of formula (2) is used to calculate the degree of correlation between tasks; T_1 and T_2 denote task 1 and task 2, and n is the number of individuals sampled on task T_1:
step 2.1, calculate the fitness value vector of the sampled individuals on the target task: f_1 = (f_1^1, ..., f_1^n) is the vector of fitness values of the sampled individuals evaluated on task T_1, where f_1^i is the fitness value of the ith individual on task T_1, i ∈ [1, n];
step 2.2, calculate the distance vector between the sampled individuals and the individual that performs best on the source task: d_2 = (d_2^1, ..., d_2^n) collects the distances between all sampled individuals and x_best, the sampled individual with the best fitness value when evaluated on the source task, where d_2^i is the Euclidean distance between the ith individual and x_best;
step 2.3, calculate the similarity between T_1 and T_2, as shown in formula (2), where f̄_1 and d̄_2 are the means, and σ(f_1) and σ(d_2) the standard deviations, of f_1 and d_2:

CTFDC = [ (1/n)·Σ_{i=1}^{n} (f_1^i − f̄_1)(d_2^i − d̄_2) ] / [ σ(f_1)·σ(d_2) ]    (2)
The step 5 specifically comprises the following steps: perform the crossover operation of formula (7) on the individuals generated by mutation, where x_{i,j} denotes the jth dimension of the ith individual, v_{i,j} is the jth dimension value obtained by mutation, rand_j is a random number in [0, 1], CR is the crossover probability, j_rand is a random index in [1, ..., D], and D is the dimension of the unified search space.

u_{i,j} = v_{i,j}, if rand_j ≤ CR or j = j_rand; otherwise u_{i,j} = x_{i,j}    (7)
The invention has the following beneficial effects: the multi-task evolution algorithm based on self-adaptive knowledge migration first measures the degree of correlation between tasks through the cross-task fitness distance correlation, then dynamically adjusts the probability of knowledge migration between different tasks according to that degree of correlation, dynamically reflecting the state of the information available between tasks and increasing the probability of positive knowledge transfer between them. Compared with the existing MFDE and MFEA, the proposed AKT_MFDE solves multitask optimization problems effectively.
Drawings
FIG. 1 is a flow chart of the adaptive knowledge migration based multitask evolution algorithm AKT _ MFDE of the present invention;
FIG. 2 is the convergence curve of AKT_MFDE and MFDE on Task 1 (T1) of the CI+LS task set;
FIG. 3 is the convergence curve of AKT_MFDE and MFDE on Task 2 (T2) of the CI+LS task set;
FIG. 4 is the convergence curve of AKT_MFDE and MFDE on Task 1 (T1) of the MI+LS task set;
FIG. 5 is the convergence curve of AKT_MFDE and MFDE on Task 2 (T2) of the MI+LS task set;
FIG. 6 is the convergence curve of AKT_MFDE and MFDE on Task 1 (T1) of the NI+LS task set;
FIG. 7 is the convergence curve of AKT_MFDE and MFDE on Task 2 (T2) of the NI+LS task set.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention provides a multitask evolution algorithm based on self-adaptive knowledge migration, which comprises the following steps as shown in figure 1:
step 1: population initialization and evaluation: randomly generating N individuals in a search space as an initial population, and P ═ y1,y2,...,yNWith initial individual dimension max { D }i},i∈[1,K]K is the total number of the multitasks, the learning period is initialized to 75 generations, the RMP is initialized to 0.3, each individual is evaluated according to each optimization task, and the scalar fitness value of the individual is calculated
Figure BDA0003062387470000091
And skill factor τi
1.1: Calculate objective values by random-key decoding: the random key values y_i of the initialized population are mapped into the search space of an actual optimization task according to x_i = L_i + (U_i − L_i) × y_i, where L_i and U_i are the lower and upper bounds of that task; the objective value of each individual on each task is then calculated.
1.2: Sort the initial population: for each optimization problem, the population is ranked according to the objective values, yielding K ranking sets {set_1, set_2, ..., set_K}, where set_i is the ranking of the N individuals of the population according to their objective values on the ith task.
1.3: Individual evaluation: calculate the scalar fitness value and skill factor of each individual according to the population sorting result of step 1.2.
1.3.1: Compute the scalar fitness value of each individual: φ_i^j = f_i^j + λ·δ_i^j, where λ is a penalty coefficient, δ_i^j is the total constraint violation of the ith individual on the jth task, and f_i^j is the objective value of the ith individual on the jth task.
1.3.2: Compute the factorial rank of each individual: the factorial rank r_i^j is the position of individual i in the set of individuals sorted by scalar fitness value on optimization task j, where i denotes the ith individual of the initial population, i ∈ [1, N], and j denotes an optimization task, j ∈ [1, K].
1.3.3: Compute the skill factor τ_i of each individual: the skill factor represents the task on which the individual has an advantage, τ_i = argmin_j{r_i^j}, where r_i^j is calculated in step 1.3.2.
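For illustration, the initialization and evaluation of Step 1 can be sketched in Python as below. The function names (decode, evaluate_population), the dictionary layout of tasks and the fixed penalty coefficient LAMBDA are assumptions made for the sketch; the patent only fixes the quantities of formula (1) and the random-key decoding rule.

```python
import numpy as np

LAMBDA = 1e4  # assumed penalty coefficient; the patent does not fix a value


def decode(y, lower, upper):
    """Random-key decoding: map a unified-space individual y in [0, 1]^D
    onto one task's search space via x_i = L_i + (U_i - L_i) * y_i."""
    d = len(lower)
    return lower + (upper - lower) * y[:d]


def evaluate_population(population, tasks):
    """Compute the scalar fitness (objective + LAMBDA * constraint violation),
    the per-task rank r_i^j and the skill factor tau_i of every individual.
    `tasks` is assumed to be a list of dicts with keys "f", "lower", "upper"
    and an optional "violation" function."""
    n, k = len(population), len(tasks)
    cost = np.empty((n, k))
    for i, y in enumerate(population):
        for j, task in enumerate(tasks):
            x = decode(y, task["lower"], task["upper"])
            violation = task.get("violation", lambda _: 0.0)(x)
            cost[i, j] = task["f"](x) + LAMBDA * violation
    rank = np.empty_like(cost, dtype=int)
    for j in range(k):
        order = np.argsort(cost[:, j])           # best individual first
        rank[order, j] = np.arange(1, n + 1)     # r_i^j = position in that ordering
    skill_factor = np.argmin(rank, axis=1)       # tau_i = argmin_j r_i^j
    return cost, rank, skill_factor
```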
Step 2: Evaluate the degree of correlation between tasks: once the evolution has passed the learning period, the cross-task fitness distance correlation CTFDC of formula (2) is used to calculate the degree of correlation between tasks, where T_1 and T_2 denote task 1 and task 2 and n is the number of individuals sampled on task T_1.
2.1: Calculate the fitness value vector of the sampled individuals on the target task: f_1 = (f_1^1, ..., f_1^n) is the vector of fitness values of the sampled individuals evaluated on task T_1, where f_1^i is the fitness value of the ith individual on task T_1, i ∈ [1, n].
2.2: Calculate the distance vector between the sampled individuals and the individual that performs best on the source task: d_2 = (d_2^1, ..., d_2^n) collects the distances between all sampled individuals and x_best, the sampled individual with the best fitness value when evaluated on the source task, where d_2^i is the Euclidean distance between the ith individual and x_best.
2.3: Calculate the similarity between T_1 and T_2, as shown in formula (2), where f̄_1 and d̄_2 are the means, and σ(f_1) and σ(d_2) the standard deviations, of f_1 and d_2:

CTFDC = [ (1/n)·Σ_{i=1}^{n} (f_1^i − f̄_1)(d_2^i − d̄_2) ] / [ σ(f_1)·σ(d_2) ]    (2)
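A sketch of the CTFDC of Step 2 is given below, under the assumption that the sampled individuals are evaluated directly on the two tasks' objective functions in the unified space; the sample size and sampling scheme are not fixed by the patent.

```python
import numpy as np


def ctfdc(samples, f_target, f_source):
    """Cross-task fitness distance correlation: correlation between the
    target-task fitness of the sampled individuals (f_1) and their Euclidean
    distance to the best sampled individual on the source task (d_2), as in
    formula (2)."""
    samples = np.asarray(samples, dtype=float)
    f1 = np.array([f_target(x) for x in samples])        # fitness on the target task
    src = np.array([f_source(x) for x in samples])       # fitness on the source task
    x_best = samples[np.argmin(src)]                      # best sample on the source task
    d2 = np.linalg.norm(samples - x_best, axis=1)         # distances to x_best
    cov = np.mean((f1 - f1.mean()) * (d2 - d2.mean()))    # (1/n) * sum (f - mean)(d - mean)
    denom = f1.std() * d2.std()
    return cov / denom if denom > 0 else 0.0
```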
Step 3: Update RMP: recalculate RMP according to the degree of correlation obtained in Step 2. If the tasks are negatively correlated, RMP is set to 0; otherwise the similarity s reflected by the CTFDC is mapped to a new RMP value through f_m(s) of formula (3), where a_1 and b_1 are obtained from the intervals [0, 1] and [0.3, 1] according to the interval mapping relation of formula (4); [N_min, N_max] is the target interval, [O_min, O_max] is the original interval, and O_{x,y} is any point in the original interval.
RMP = f_m(s) = a_1·s + b_1    (3)
N_{x,y} = (N_max − N_min)/(O_max − O_min) · (O_{x,y} − O_min) + N_min    (4)
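The RMP update of Step 3 can be written out as follows; it assumes the similarity s is the CTFDC value and that a_1 and b_1 come from linearly mapping [0, 1] onto [0.3, 1] as described by formula (4), so that a_1 = 0.7 and b_1 = 0.3.

```python
def map_interval(value, orig, target):
    """Linear interval mapping of formula (4): project a point of the original
    interval [O_min, O_max] onto the target interval [N_min, N_max]."""
    o_min, o_max = orig
    n_min, n_max = target
    return (n_max - n_min) / (o_max - o_min) * (value - o_min) + n_min


def update_rmp(similarity):
    """Step 3: negatively correlated tasks get RMP = 0, otherwise the CTFDC
    similarity in [0, 1] is remapped onto [0.3, 1] as the new RMP."""
    if similarity < 0:
        return 0.0
    return map_interval(similarity, (0.0, 1.0), (0.3, 1.0))
```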
Step 4: Mutation: when rand(0, 1) < RMP, information is migrated between tasks and the offspring are updated using formula (5), in which individuals are randomly selected from the target population, an individual is selected from the source-task population, and F is a scaling factor.
[Formula (5), the cross-task mutation operator, appears only as an image in the original document.]
When rand(0, 1) > RMP, the DE/rand/1 operator of formula (6) is used and the skill factors of the offspring individuals are updated, where x_r1, x_r2 and x_r3 are individuals randomly selected from the target population.
v_i = x_r1 + F·(x_r2 − x_r3)    (6)
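A sketch of the mutation of Step 4 follows. Because formula (5) is reproduced only as an image, the cross-task branch below (a source-task base vector plus a scaled difference of two target-task individuals) is just one plausible reading and is marked as an assumption in the code; the intra-task branch is the standard DE/rand/1 named in formula (6).

```python
import numpy as np

F = 0.5  # scaling factor; the patent leaves F as a parameter


def mutate(target_pop, source_pop, rmp, rng):
    """Step 4: with probability RMP the mutant vector is built from cross-task
    information, otherwise DE/rand/1 is applied inside the target population."""
    n = len(target_pop)
    if rng.random() < rmp and len(source_pop) > 0:
        # ASSUMED reading of formula (5): source-task base vector plus a scaled
        # difference of two target-task individuals.
        r1, r2 = rng.choice(n, size=2, replace=False)
        base = source_pop[rng.integers(len(source_pop))]
        v = base + F * (target_pop[r1] - target_pop[r2])
        transferred = True
    else:
        # Formula (6), DE/rand/1: v_i = x_r1 + F * (x_r2 - x_r3)
        r1, r2, r3 = rng.choice(n, size=3, replace=False)
        v = target_pop[r1] + F * (target_pop[r2] - target_pop[r3])
        transferred = False
    return np.clip(v, 0.0, 1.0), transferred  # keep the mutant in the unified [0, 1] space
```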
Step 5: Crossover: perform the crossover operation of formula (7) on the individuals generated by mutation, where x_{i,j} denotes the jth dimension of the ith individual, v_{i,j} is the jth dimension value obtained by mutation, rand_j is a random number in [0, 1], CR is the crossover probability, j_rand is a random index in [1, ..., D], and D is the dimension of the unified search space.

u_{i,j} = v_{i,j}, if rand_j ≤ CR or j = j_rand; otherwise u_{i,j} = x_{i,j}    (7)
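The binomial crossover of formula (7) can be sketched as below; the value of CR is an assumption, since the patent treats it as a parameter.

```python
import numpy as np

CR = 0.9  # assumed crossover probability


def crossover(x, v, rng):
    """Formula (7): the trial vector takes v_{i,j} when rand_j <= CR or when j
    equals the forced index j_rand, and keeps x_{i,j} otherwise."""
    d = len(x)
    j_rand = rng.integers(d)
    mask = rng.random(d) <= CR
    mask[j_rand] = True          # guarantee at least one component from the mutant
    return np.where(mask, v, x)
```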
Step 6: Evaluation: the offspring are evaluated using a vertical cultural transmission mechanism. When an offspring is generated through knowledge migration it carries information from several different tasks and its parents have different skill factors, so the offspring randomly inherits one of its parents' skill factors with equal probability; when the generation of the offspring does not involve information migration between tasks, the offspring directly inherits the skill factor of its parent. Each offspring is evaluated only on the task indicated by its skill factor.
Step 7: Merging and updating: merge the parent population and the offspring population, and update the scalar fitness values and skill factors of the individuals in the merged population.
Step 8: Selection: sort the individuals in the merged population according to their scalar fitness values, and select the best N individuals to form the next-generation population.
Step 9: Judge whether the termination condition is met; if it is not met, return to Step 2.
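Pulling Steps 1 to 9 together, the overall AKT_MFDE loop can be outlined as below. This is a schematic sketch that reuses decode, evaluate_population, ctfdc, update_rmp, mutate and crossover from the sketches above; it simplifies Step 6 by re-evaluating the merged population on all tasks instead of only on each offspring's inherited skill-factor task, and it approximates the Step 8 sorting by each individual's best factorial rank.

```python
import numpy as np


def akt_mfde(tasks, n=100, generations=500, learning_period=75, seed=0):
    rng = np.random.default_rng(seed)
    dim = max(len(t["lower"]) for t in tasks)
    pop = rng.random((n, dim))                        # Step 1: unified [0, 1]^D population
    rmp = 0.3
    cost, rank, skill = evaluate_population(pop, tasks)
    for gen in range(generations):
        if gen >= learning_period:                    # Steps 2-3: adapt RMP from the CTFDC
            samples = pop[rng.choice(n, size=min(20, n), replace=False)]
            # illustrative: similarity between task 0 (target) and task 1 (source)
            f_t = lambda y: tasks[0]["f"](decode(y, tasks[0]["lower"], tasks[0]["upper"]))
            f_s = lambda y: tasks[1]["f"](decode(y, tasks[1]["lower"], tasks[1]["upper"]))
            rmp = update_rmp(ctfdc(samples, f_t, f_s))
        offspring = []
        for i in range(n):
            same = pop[skill == skill[i]]             # sub-population of the same task
            other = pop[skill != skill[i]]            # sub-population of the other task(s)
            v, _ = mutate(same, other, rmp, rng)      # Step 4: mutation
            offspring.append(crossover(pop[i], v, rng))    # Step 5: crossover
        merged = np.vstack([pop, np.array(offspring)])     # Step 7: merge parents and offspring
        cost, rank, skill = evaluate_population(merged, tasks)
        keep = np.argsort(np.min(rank, axis=1), kind="stable")[:n]   # Step 8: keep the N best
        pop, skill = merged[keep], skill[keep]
    return pop, skill
```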
The effects of the present invention are further illustrated by the following simulation experiments. The benchmark test functions are listed in Table 1, whose third column gives the value range of the variables, the optimal solution of the problem, and the optimal value of the problem. The optimal values of the 7 benchmark problems are all 0, so the smaller the optimized value, the better the optimization performance of the algorithm. Table 2 shows the multitask optimization problem sets constructed from the benchmark test functions, where the degree of intersection denotes the degree of intersection of the global optimal solutions of two tasks: CI, PI and NI denote complete intersection, partial intersection and no intersection, respectively. The similarity between tasks is calculated with Spearman's rank correlation coefficient, which estimates the degree of similarity between different tasks from the distribution characteristics of sampling points on the problems:

R_s = cov(r(x_1), r(x_2)) / (std(r(x_1))·std(r(x_2)))

where r(x_1) and r(x_2) are the rankings of the sampled individuals on the two problems according to their fitness values, cov(·) denotes covariance and std(·) denotes standard deviation. HS, MS and LS denote High Similarity, Medium Similarity and Low Similarity, respectively. Combining problems with different degrees of intersection and different degrees of similarity yields nine different groups of simulated multitasking environments, detailed in Table 2.
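The Spearman rank similarity used to group the benchmark problems can be sketched as follows; the sampling scheme is an assumption, and the correlation itself follows the covariance-over-standard-deviations form given above.

```python
import numpy as np


def spearman_similarity(samples, f_task1, f_task2):
    """Spearman's rank correlation between two tasks, estimated from the
    fitness-based rankings r(x_1), r(x_2) of the same sampled individuals."""
    r1 = np.argsort(np.argsort([f_task1(x) for x in samples])).astype(float)
    r2 = np.argsort(np.argsort([f_task2(x) for x in samples])).astype(float)
    cov = np.mean((r1 - r1.mean()) * (r2 - r2.mean()))
    denom = r1.std() * r2.std()
    return cov / denom if denom > 0 else 0.0
```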
TABLE 1 specific test function set
[Table 1 is reproduced as an image in the original document.]
Table 2 summary of properties of the evolutionary multitask optimization problem
[Table 2 is reproduced as an image in the original document.]
In order to demonstrate the effectiveness of the adaptive knowledge migration strategy (AKT), the proposed strategy is first combined with the original multitask algorithms MFEA_alpha and MFEA_beta to verify its generality. MFEA_alpha, AKT_MFEA_alpha, MFEA_beta and AKT_MFEA_beta were each run 20 times, and the means and standard deviations of the 20 runs are collated in Table 3, with the standard deviations in parentheses and the best mean values shown in bold. From the data in Table 3, the multitask baseline algorithms with the adaptive knowledge migration strategy perform better than the original algorithms: AKT_MFEA_alpha outperforms MFEA_alpha on 14 instances and AKT_MFEA_beta outperforms MFEA_beta on 12 instances, which shows that the adaptive knowledge migration strategy can generally and effectively increase positive knowledge transfer in a multitask environment.
TABLE 3 Universal exploration of adaptive knowledge migration strategies
[Table 3 is reproduced as an image in the original document.]
To further illustrate the effect of the invention, a test simulation is performed according to the flow shown in FIG. 1, and AKT_MFDE is compared with four baseline algorithms, MFEA_alpha, MFEA_beta, MFPSO and MFDE. The results of 20 independent runs of the five algorithms are shown in Table 4, with the best results in bold. From the analysis of Table 4, the proposed algorithm outperforms MFEA_alpha, MFEA_beta, MFPSO and MFDE on 12 of the 18 instances, further illustrating its effectiveness.
Table 4 Experimental results of 20 runs of the five algorithms on the multitask benchmark problems
[Table 4 is reproduced as an image in the original document.]
To illustrate intuitively the advantage of the proposed AKT_MFDE algorithm in increasing positive transfer, the optimization processes of the original MFDE and AKT_MFDE are compared on the problem groups with lower similarity. FIGS. 2 to 7 show the convergence curves of MFDE and AKT_MFDE on the low-similarity groups; it can be seen that AKT_MFDE effectively improves convergence when handling dissimilar tasks.

Claims (5)

1. A multitask evolution algorithm based on adaptive knowledge migration is characterized by comprising the following steps:
step 1, population initialization: randomly generate N individuals in the unified search space as the initial population P = {y_1, y_2, y_3, ..., y_N}, initialize the learning period to 75 generations and the inter-task random mating probability RMP to 0.3, evaluate each individual on every optimization task, and compute each individual's scalar fitness value φ_i^j and skill factor τ_i;
the scalar fitness value φ_i^j is calculated by formula (1), where λ is a penalty coefficient, δ_i^j is the total constraint violation of the ith individual on the jth task, and f_i^j is the objective value of the ith individual on the jth task; the skill factor τ_i denotes the task on which the individual performs best, τ_i = argmin_j{r_i^j}, where r_i^j is the rank of individual i on the jth task when the population is sorted by scalar fitness value, j ∈ {1, 2, 3, ..., K}, and K is the total number of tasks in the search space;
φ_i^j = f_i^j + λ·δ_i^j    (1)
step 2, evaluating the degree of correlation between tasks: the cross-task fitness distance correlation CTFDC of formula (2) is used to calculate the degree of correlation between tasks; T_1 and T_2 denote task 1 and task 2, n is the number of individuals sampled on task T_1, and f_1 = (f_1^1, ..., f_1^n) is the vector of fitness values of the sampled individuals evaluated on task T_1, where f_1^i is the fitness value of the ith individual on task T_1, i ∈ [1, n]; d_2 = (d_2^1, ..., d_2^n) is the vector of distances between all sampled individuals and x_best, the sampled individual with the best fitness value when evaluated on the source task, where d_2^i is the Euclidean distance between the ith individual and x_best; f̄_1 and d̄_2 are the means, and σ(f_1) and σ(d_2) the standard deviations, of f_1 and d_2;

CTFDC = [ (1/n)·Σ_{i=1}^{n} (f_1^i − f̄_1)(d_2^i − d̄_2) ] / [ σ(f_1)·σ(d_2) ]    (2)
step 3, updating RMP: recalculate RMP according to the degree of correlation obtained in step 2; if the tasks are negatively correlated, RMP is set to 0, otherwise the degree of similarity s between tasks reflected by the CTFDC is mapped to a new RMP value through f_m(s); the f_m(s) function is given by formula (3), where a_1 and b_1 are obtained from the intervals [0, 1] and [0.3, 1] according to the interval mapping relation of formula (4); [N_min, N_max] is the target interval, [O_min, O_max] is the original interval, and O_{x,y} is any point in the original interval;
RMP = f_m(s) = a_1·s + b_1    (3)
N_{x,y} = (N_max − N_min)/(O_max − O_min) · (O_{x,y} − O_min) + N_min    (4)
step 4, mutation: when rand(0, 1) < RMP is satisfied, information is migrated between tasks and the offspring are updated using formula (5), in which individuals are randomly selected from the target population, an individual is selected from the source-task population, and F is a scaling factor; otherwise the DE/rand/1 operator of formula (6) is used and the skill factors of the offspring are updated, where x_r1, x_r2 and x_r3 are individuals randomly selected from the target population; when an offspring is generated by inter-task information migration, the source task and the target task are chosen as its skill factor with equal probability; otherwise the target task is directly taken as the skill factor of the offspring;
[Formula (5), the cross-task mutation operator, appears only as an image in the original document.]
v_i = x_r1 + F·(x_r2 − x_r3)    (6)
step 5, crossover: perform the crossover operation on the individuals generated by mutation;
step 6, evaluation: the offspring are evaluated using a vertical cultural transmission mechanism; when an offspring is generated through knowledge migration it carries information from several different tasks and its parents have different skill factors, so the offspring randomly inherits one of its parents' skill factors with equal probability; when the generation of the offspring does not involve information migration between tasks, the offspring directly inherits the skill factor of its parent; each offspring is evaluated only on the task indicated by its skill factor;
step 7, merging and updating: merge the parent population and the offspring population, and update the scalar fitness values and skill factors of the individuals in the merged population;
step 8, selection: sort the individuals in the merged population according to their scalar fitness values, and select the best N individuals to form the next-generation population;
step 9, judge whether the termination condition is met; if it is not met, return to step 2.
2. The adaptive knowledge migration-based multitask evolution algorithm according to claim 1, wherein step 1 is specifically as follows: randomly generate N individuals in the search space as the initial population P = {y_1, y_2, ..., y_N} with individual dimension max{D_i}, i ∈ [1, K], where K is the total number of tasks; initialize the learning period to 75 generations and RMP to 0.3; evaluate each individual on each optimization task and compute the scalar fitness value φ_i^j and skill factor τ_i of each individual;
step 1.1, calculating objective values by random-key decoding: the random key values y_i of the initialized population are mapped into the search space of an actual optimization task according to x_i = L_i + (U_i − L_i) × y_i, where L_i and U_i are the lower and upper bounds of that task; the objective value of each individual on each task is then calculated;
step 1.2, initial population sorting: for each optimization problem, the population is ranked according to the objective values, yielding K ranking sets {set_1, set_2, ..., set_K}, where set_i is the ranking of the N individuals of the population according to their objective values on the ith task;
step 1.3, individual evaluation: calculate the scalar fitness value and skill factor of each individual according to the population sorting result of step 1.2.
3. The adaptive knowledge migration-based multitask evolution algorithm according to claim 2, characterized in that said step 1.3 specifically comprises the following steps:
step 1.3.1, calculate the scalar fitness value of each individual: φ_i^j = f_i^j + λ·δ_i^j, where λ is a penalty coefficient, δ_i^j is the total constraint violation of the ith individual on the jth task, and f_i^j is the objective value of the ith individual on the jth task;
step 1.3.2, calculate the factorial rank of each individual: the factorial rank r_i^j is the position of individual i in the set of individuals sorted by scalar fitness value on optimization task j, where i denotes the ith individual of the initial population, i ∈ [1, N], and j denotes an optimization task, j ∈ [1, K];
step 1.3.3, calculate the skill factor τ_i of each individual: the skill factor represents the task on which the individual has an advantage, τ_i = argmin_j{r_i^j}, where r_i^j is calculated in step 1.3.2.
4. The adaptive knowledge migration-based multitask evolution algorithm according to claim 3, wherein step 2 is specifically as follows: the cross-task fitness distance correlation CTFDC of formula (2) is used to calculate the degree of correlation between tasks; T_1 and T_2 denote task 1 and task 2, and n is the number of individuals sampled on task T_1:
step 2.1, calculate the fitness value vector of the sampled individuals on the target task: f_1 = (f_1^1, ..., f_1^n) is the vector of fitness values of the sampled individuals evaluated on task T_1, where f_1^i is the fitness value of the ith individual on task T_1, i ∈ [1, n];
step 2.2, calculate the distance vector between the sampled individuals and the individual that performs best on the source task: d_2 = (d_2^1, ..., d_2^n) collects the distances between all sampled individuals and x_best, the sampled individual with the best fitness value when evaluated on the source task, where d_2^i is the Euclidean distance between the ith individual and x_best;
step 2.3, calculate the similarity between T_1 and T_2, as shown in formula (2), where f̄_1 and d̄_2 are the means, and σ(f_1) and σ(d_2) the standard deviations, of f_1 and d_2:

CTFDC = [ (1/n)·Σ_{i=1}^{n} (f_1^i − f̄_1)(d_2^i − d̄_2) ] / [ σ(f_1)·σ(d_2) ]    (2)
5. The adaptive knowledge migration-based multitask evolution algorithm according to claim 4, wherein step 5 is specifically as follows: perform the crossover operation of formula (7) on the individuals generated by mutation, where x_{i,j} denotes the jth dimension of the ith individual, v_{i,j} is the jth dimension value obtained by mutation, rand_j is a random number in [0, 1], CR is the crossover probability, j_rand is a random index in [1, ..., D], and D is the dimension of the unified search space.

u_{i,j} = v_{i,j}, if rand_j ≤ CR or j = j_rand; otherwise u_{i,j} = x_{i,j}    (7)
CN202110519904.5A 2021-05-12 2021-05-12 Multitask evolution algorithm based on self-adaptive knowledge migration Pending CN113205172A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110519904.5A CN113205172A (en) 2021-05-12 2021-05-12 Multitask evolution algorithm based on self-adaptive knowledge migration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110519904.5A CN113205172A (en) 2021-05-12 2021-05-12 Multitask evolution algorithm based on self-adaptive knowledge migration

Publications (1)

Publication Number Publication Date
CN113205172A true CN113205172A (en) 2021-08-03

Family

ID=77031010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110519904.5A Pending CN113205172A (en) 2021-05-12 2021-05-12 Multitask evolution algorithm based on self-adaptive knowledge migration

Country Status (1)

Country Link
CN (1) CN113205172A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115510561A (en) * 2022-09-29 2022-12-23 中南大学 Multitask-based automobile energy absorption box structure optimization design method and system
CN115510561B (en) * 2022-09-29 2023-04-04 中南大学 Multitask-based automobile energy absorption box structure optimization design method and system
CN116151303A (en) * 2023-04-21 2023-05-23 武汉科技大学 Method for predicting optimal migration opportunity in multi-task multi-objective optimization problem
CN116151303B (en) * 2023-04-21 2023-06-27 武汉科技大学 Method for optimizing combustion chamber design by accelerating multi-objective optimization algorithm


Legal Events

Code    Title
PB01    Publication
SE01    Entry into force of request for substantive examination
RJ01    Rejection of invention patent application after publication (application publication date: 20210803)