US7383236B2 - Fuzzy preferences in multi-objective optimization (MOO) - Google Patents


Info

Publication number: US7383236B2
Application number: US10/501,378 (371 of PCT/EP02/14002)
Other versions: US20050177530A1 (en)
Inventors: Yaochu Jin, Bernhard Sendhoff
Assignee (original and current): Honda Research Institute Europe GmbH
Legal status: Expired - Fee Related

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/12: Computing arrangements based on biological models using genetic models
    • G06N3/126: Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00: Details relating to CAD techniques
    • G06F2111/06: Multi-objective optimisation, e.g. Pareto optimisation using simulated annealing [SA], ant colony algorithms or genetic algorithms [GA]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS AND OF CROSS-SECTIONAL TECHNOLOGIES; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S706/00: Data processing: artificial intelligence
    • Y10S706/902: Application using AI with detail of the AI system
    • Y10S706/911: Nonmedical diagnostics
    • Y10S706/913: Vehicle or aerospace



Abstract

A method to obtain the Pareto solutions that are specified by human preferences is suggested. The main idea is to convert the fuzzy preferences into interval-based weights. With the help of the dynamically-weighted aggregation method, the preferred solutions are successfully found on two test functions with a convex Pareto front. Compared to the method described in “Use of Preferences for GA-based Multi-Objective Optimization” (Proceedings of the 1999 Genetic and Evolutionary Computation Conference, pp. 1504-1510, 1999) by Cvetkovic et al., the method according to the invention is able to find a number of solutions instead of only one, given a set of fuzzy preferences over different objectives. This is consistent with the motivation of fuzzy logic.

Description

This application is a 371 of PCT/EP02/14002 filed on Dec. 10, 2002. This application also claims priority based on application 02001252.2 filed with the European Patent Office on Jan. 17, 2002 and application 02003557.2 filed with the European Patent Office on Feb. 15, 2002.
The present invention relates to a method for the optimization of multi-objective problems using evolutionary algorithms, to the use of such a method for the optimization of aerodynamic or hydrodynamic bodies as well as to a computer software program product for implementing such a method.
The background of the present invention is the field of evolution algorithms. Therefore, with reference to FIG. 1, at first the known cycle of an evolutionary algorithm will be explained.
In a step S1, the object parameters to be optimized are encoded in a string called ‘individual’. A plurality of such individuals comprising the initial parent generation is then generated and the quality (fitness) of each individual in the parent generation is evaluated. In a step S2, the parents are reproduced by applying genetic operators called mutation and recombination. Thus, a new generation is reproduced in step S3, which is called the offspring generation. The quality of the offspring individuals is evaluated using a fitness function that is the objective of the optimization in step S4. Finally, depending on the calculated quality value, step S5 selects, possibly stochastically, the best offspring individuals (survival of the fittest) which are used as parents for the next generation cycle if the termination condition in step S6 is not satisfied.
Before evaluating the quality of each individual, decoding may be needed depending on the encoding scheme used in the evolutionary algorithm. It should be noted that steps S2 through S6 are repeated cyclically until the termination condition is satisfied. The algorithm of this evolutionary optimization can be expressed by the following pseudo-code:
t := 0
encode and initialize P(0)
decode and evaluate P(0)
do
  recombine P(t)
  mutate P(t)
  decode P(t)
  evaluate P(t)
  P(t+1) := select P(t)
  encode P(t+1)
  t := t + 1
until terminate

Thereby,
    • P(0) denotes the initial population (t=0),
    • P(t) denotes the population in the t-th generation (t>0),
    • t is the index for the generation number (t∈N0).
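The evolution cycle above can be sketched as a runnable loop. The following Python is an illustrative (mu, lambda)-style sketch of steps S1 to S6 for a single real-valued object parameter; all function names, parameter choices and the Gaussian mutation are assumptions for illustration, not anything prescribed by the patent.

```python
import random

def evolve(objective, bounds, pop_size=20, generations=100, sigma=0.1):
    """Minimal evolutionary loop sketching steps S1-S6 (minimization)."""
    lo, hi = bounds
    # S1: encode/initialize the parent population with random individuals
    population = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # S2/S3: reproduce offspring by Gaussian mutation (two per parent),
        # clamped to the parameter bounds
        offspring = [min(hi, max(lo, x + random.gauss(0.0, sigma)))
                     for x in population for _ in range(2)]
        # S4: evaluate fitness, S5: select the best offspring as next parents
        population = sorted(offspring, key=objective)[:pop_size]
    # S6: termination by generation count; return the best individual found
    return min(population, key=objective)
```

Used with a simple single-objective fitness such as `lambda x: (x - 3.0) ** 2`, the loop converges to the minimizer at x = 3.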
Such evolutionary algorithms are known to be robust optimizers that are well-suited for discontinuous and multi-modal objective functions. Therefore, evolutionary algorithms have successfully been applied e.g. to mechanical and aerodynamic optimization problems, including preliminary turbine design, turbine blade design, multi-disciplinary rotor blade design, multi-disciplinary wing platform design and a military airframe preliminary design.
For example, details on evolutionary algorithms can be found in “Evolutionary Algorithms in Engineering Applications” (Springer-Verlag, 1997) by Dasgupta et al., and “Evolutionary Algorithms in Engineering and Computer Science” (John Wiley and Sons, 1999) by Miettinen et al.
In the framework of the present invention, the evolutionary algorithms are applied to the simultaneous optimization of multiple objectives, which is a typical feature of practical engineering and design problems. The principle of multi-objective optimization differs from that of single-objective optimization. In single-objective optimization, the target is to find the best design solution, which corresponds to the minimum or maximum value of the objective function. On the contrary, in a multi-objective optimization with conflicting objectives, there is no single optimal solution. The interaction among different objectives gives rise to a set of compromise solutions known as the Pareto-optimal solutions. A definition of ‘Pareto-optimal’ and ‘Pareto front’ can be found in “Multi-Objective Evolutionary Algorithms: Analyzing the State of the Art” (Evolutionary Computation, 8(2), pp. 125-147, 2000) by D. A. Van Veldhuizen and G. B. Lamont.
Since none of these Pareto-optimal solutions can be identified as better than the others without further consideration, the target in a multi-objective optimization is to find as many Pareto-optimal solutions as possible. Once such solutions are found, choosing one of them for implementation usually requires higher-level decision-making based on additional considerations.
Usually, there are two targets in a multi-objective optimization:
    • (i) finding solutions close to the true Pareto-optimal solutions, and
    • (ii) finding solutions that are widely different from each other.
The first target ensures that the obtained solutions satisfy optimality conditions; the second ensures that the search has no bias towards any particular objective function.
In dealing with multi-objective optimization problems, classical search and optimization methods are not efficient, simply because
    • most of them cannot find multiple solutions in a single run, thereby requiring them to be applied as many times as the number of desired Pareto-optimal solutions,
    • multiple application of these methods do not guarantee finding widely different Pareto-optimal solutions, and
    • most of them cannot efficiently handle problems with discrete variables and problems having multiple optimal solutions.
On the contrary, the studies on evolutionary search algorithms, over the past few years, have shown that these methods can efficiently be used to eliminate most of the difficulties of classical methods mentioned above. Since they use a population of solutions in their search, multiple Pareto-optimal solutions can, in principle, be found in one single run. The use of diversity-preserving mechanisms can be added to the evolutionary search algorithms to find widely different Pareto-optimal solutions.
A large number of evolutionary multi-objective algorithms (EMOA) have been proposed. So far, there are three main approaches to evolutionary multi-objective optimization, namely, aggregation approaches, population-based non-Pareto approaches and Pareto-based approaches. In the recent years, the Pareto-based approaches have been gaining increasing attention in the evolutionary computation community and several successful algorithms have been proposed. Unfortunately, the Pareto-based approaches are often very time-consuming.
Despite their shortcomings, weighted aggregation approaches to multi-objective optimization according to the state of the art are very easy to implement and computationally efficient. Usually, aggregation approaches can provide only one Pareto-solution if the weights are fixed using problem-specific prior knowledge. However, it is also possible to find more than one Pareto solution using this method by changing the weights during optimization. The weights of the different objectives are encoded in the chromosome to obtain more than one Pareto solutions. Phenotypic fitness sharing is used to keep the diversity of the weight combinations and mating restrictions are required so that the algorithm can work properly.
It has been found that the shortcomings of the conventional aggregation approach can be overcome by systematically changing the weights during optimization, without any loss of simplicity and efficiency. Three methods have been proposed to change the weights during optimization in order to approximate the Pareto front. In the randomly-weighted aggregation (RWA) method, the weights are distributed randomly among the individuals within the population and are redistributed in each generation. In contrast, the dynamically-weighted aggregation (DWA) method changes the weights gradually as the evolution proceeds. If the Pareto-optimal front is concave, the bang-bang weighted aggregation (BWA) can also be used. In order to incorporate preferences, both RWA and DWA can be used.
Randomly Weighted Aggregation
In the framework of evolutionary optimization it is natural to take advantage of the population for obtaining multiple Pareto-optimal solutions in one run of the optimization. On the assumption that the i-th individual in the population has its own weight combination (w1 i(t), w2 i(t)) in generation t, the evolutionary algorithm will be able to find different Pareto-optimal solutions. To realize this, it can be found that the weight combinations need to be distributed uniformly and randomly among the individuals, and a re-distribution is necessary in each generation:
w1^i(t) = rdm(P)/P,   w2^i(t) = 1.0 − w1^i(t),
wherein
    • i denotes the i-th individual in the population (i=1, 2, . . . , P),
    • P is the population size (P∈N), and
    • t is the index for the generation number (t∈N0).
The function rdm(P) generates a uniformly distributed random number between 0 and P. In this way, a uniformly distributed random weight combination (w1^i, w2^i) among the individuals can be obtained, where 0 ≤ w1^i, w2^i ≤ 1 and w1^i + w2^i = 1. In this context, it should be noted that the weight combinations are regenerated in every generation.
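The per-generation weight assignment of RWA can be sketched as follows; the uniform draw over [0, P] divided by P is taken literally from the formula above, and the function name is illustrative.

```python
import random

def rwa_weights(pop_size):
    """One random weight pair (w1, w2) per individual, as in RWA.

    In the RWA scheme this whole list is regenerated from scratch in
    every generation, so the pairs stay uniformly and randomly
    distributed among the individuals.
    """
    pairs = []
    for _ in range(pop_size):
        w1 = random.uniform(0.0, pop_size) / pop_size  # rdm(P)/P lies in [0, 1]
        pairs.append((w1, 1.0 - w1))
    return pairs
```

Each pair satisfies w1 + w2 = 1, so every individual optimizes a different scalarization of the two objectives.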
Dynamic Weighted Aggregation
In the dynamically-weighted aggregation (DWA) approach, all individuals have the same weight combination, which is changed gradually generation by generation. Once the individuals reach any point on the Pareto front, the slow change of the weights will force the individuals to keep moving gradually along the Pareto front if the Pareto front is convex. If the Pareto front is concave, the individuals will still traverse along the Pareto front, however, in a different fashion. The change of the weights can be realized as follows:
w1(t) = |sin(2πt/F)|,
w2(t) = 1.0 − w1(t),
where t is the generation number. Here the sine function is used simply because it is a plain periodic function between 0 and 1. In this case, the weights w1(t) and w2(t) will change between 0 and 1 periodically from generation to generation. The change frequency can be adjusted by F. The frequency should not be too high, so that the algorithm is able to converge to a solution on the Pareto front. On the other hand, it seems reasonable to let the weights sweep from 0 to 1 at least twice during the whole optimization.
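The DWA weight schedule can be sketched directly from the formula above (the function name is illustrative):

```python
import math

def dwa_weights(t, F):
    """Weights shared by all individuals in generation t under DWA.

    The |sin| sweep moves w1 periodically through [0, 1]; F controls
    how many generations one sweep takes.
    """
    w1 = abs(math.sin(2.0 * math.pi * t / F))
    return w1, 1.0 - w1
```

For example, with F = 100 the pair starts at (0, 1) in generation 0 and reaches (1, 0) in generation 25, a quarter of the way through one period.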
In the above methods, it is assumed that all objectives are of the same importance. In this case, the weights are changed within [0,1] in RWA and DWA to achieve all Pareto-optimal solutions. However, in many real-world applications, different objectives may have different importance. Thus, the goal is not to obtain the whole Pareto front, but only the desired part of it. The importance of each objective is usually specified by the human user in terms of preferences. For example, for a two-objective problem, the user may believe that one objective is more important than the other. To achieve the desired Pareto-optimal solutions, such preferences need to be incorporated into the multi-objective optimization. Instead of changing the weights within [0,1], they are changed within [wmin, wmax], where the bounds 0 ≤ wmin < wmax ≤ 1 are defined by the preferences. In general, preferences can be incorporated before, during or after the optimization; the present invention is concerned with preference incorporation before the optimization.
As discussed in “Use of Preferences for GA-based Multi-Objective Optimization” (Proceedings of 1999 Genetic and Evolutionary Computation Conference, pp. 1504-1510, 1999) by Cvetkovic et al., the incorporation of fuzzy preferences before optimization can be realized in two ways:
    • Weighted Sum: Use of the preferences as a priori knowledge to determine the weight for each objective, then direct application of the weights to sum up the objectives to a scalar. In this case, only one solution will be obtained.
    • Weighted Pareto Method: The non-fuzzy weight is used to define a weighted Pareto non-dominance:
U ≽w V   if and only if   (1/k) · Σ_{i=1}^{k} w_i · I(u_i, v_i) ≥ 1
with the utility sets
    • U:={ui|i=1, 2, 3, . . . , k} for ui ∈[0,1] and
    • V:={vi|i=1, 2, 3, . . . , k} for vi ∈[0,1],
      where
I(u_i, v_i) = 1 for u_i ≥ v_i,   I(u_i, v_i) = 0 for u_i < v_i,   and   Σ_{i=1}^{k} w_i = 1.
A general procedure for applying fuzzy preferences to multi-objective optimization is illustrated in FIG. 2. It can be seen that before the preferences can be applied in MOO, they have to be converted into crisp weights first. The procedure of conversion is described as follows:
Given are L experts (with indices m = 1, 2, . . . , L) and their preference relations P_m, where each P_m is a (k×k) matrix with p_ij denoting the linguistic preference of the objective o_i over the objective o_j (with indices i, j = 1, 2, . . . , k). Based on the group decision-making method, these can be combined into a single collective preference P_c. Each element of said preference matrix P_c is defined by one of the following linguistic terms:
    • “much more important” (MMI),
    • “more important” (MI),
    • “equally important” (EI),
    • “less important” (LI), and
    • “much less important” (MLI).
For the sake of simplicity, the superscript c indicating the collective preference is omitted in the following text. Before converting the linguistic terms into real-valued weights, they should at first be converted into numeric preferences. To this end, it is necessary to use the following evaluations, to replace the linguistic preferences pij in the preference matrix, as indicated in “Use of Preferences for GA-based Multi-Objective Optimization” (Proceedings of 1999 Genetic and Evolutionary Computation Conference, pp. 1504-1510) by Cvetkovic et al.
    • a is much less important than b ⇔ p_ij = α, p_ji = β,
    • a is less important than b ⇔ p_ij = γ, p_ji = δ,
    • a is equally important as b ⇔ p_ij = ε, p_ji = ε.
The values of these parameters need to be assigned by the decision-maker, and the following conditions should be satisfied in order not to lose the interpretability of the linguistic terms:
α < γ < ε = 0.5 < δ < β,
α + β = 1 = γ + δ.
Consider an MOO problem with six objectives {o1, o2, . . . , o6} as used in “Use of Preferences for GA-based Multi-Objective Optimization” (Proceedings of the 1999 Genetic and Evolutionary Computation Conference, pp. 1504-1510) by Cvetkovic et al. Suppose that among these six objectives, o1 and o2 are equally important, as are o3 and o4. Thus, there are four classes of objectives:
c1:={o1, o2}, c2:={o3, o4}, c3:={o5} and c4:={o6}.
Besides, there are the following preference relations:
    • c1 is much more important than c2;
    • c1 is more important than c3;
    • c4 is more important than c1;
    • c3 is much more important than c2.
From these preferences, it is easy to get the following preference matrix:
P =
  ( EI   MMI  MI   LI  )
  ( MLI  EI   MLI  MLI )
  ( LI   MMI  EI   LI  )
  ( MI   MMI  MI   EI  )
From the above fuzzy preference matrix, the following real-valued preference relation matrix R is obtained:
R =
  ( ε  β  δ  γ )
  ( α  ε  α  α )
  ( γ  β  ε  γ )
  ( δ  β  δ  ε )
Based on this relation matrix, the weight for each objective can be obtained by:
w(o_i) = S(o_i, R) / Σ_{j=1}^{k} S(o_j, R)
with
S(o_i, R) := Σ_{j=1, j≠i}^{k} p_ij.
For the above example, this results in
w1 = w2 = (2 − α)/(8 + 2α),
w3 = w4 = 3α/(8 + 2α),
w5 = (1 − α + 2γ)/(8 + 2α),
w6 = (3 − α − 2γ)/(8 + 2α).
Since α and γ can vary between 0 and 0.5, one needs to heuristically specify a value for α and γ (recall that α<γ) to convert the fuzzy preferences into a single-valued weight combination, which can then be applied to a conventional weighted aggregation to achieve one solution.
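The derivation above can be checked numerically. The sketch below builds the example relation matrix R for the four classes (with ε = 0.5, β = 1 − α, δ = 1 − γ from the stated conditions), computes S(o_i, R) as the off-diagonal row sums, expands the class scores to the six objectives and normalizes; the function and argument names are illustrative.

```python
def preference_weights(alpha, gamma, class_sizes=(2, 2, 1, 1)):
    """Per-objective weights for the six-objective example.

    Classes c1..c4 contain (2, 2, 1, 1) objectives; each objective
    inherits the score S of its class, then the scores are normalized.
    """
    assert 0.0 < alpha < gamma < 0.5  # interpretability conditions
    eps, beta, delta = 0.5, 1.0 - alpha, 1.0 - gamma
    R = [
        [eps,   beta, delta, gamma],
        [alpha, eps,  alpha, alpha],
        [gamma, beta, eps,   gamma],
        [delta, beta, delta, eps],
    ]
    # S(c_i, R) = sum of the off-diagonal entries in row i
    S = [sum(row) - row[i] for i, row in enumerate(R)]
    scores = [S[c] for c, n in enumerate(class_sizes) for _ in range(n)]
    total = sum(scores)
    return [s / total for s in scores]
```

For any admissible α and γ the six returned weights agree with the closed forms above: the denominator 2·S(c1) + 2·S(c2) + S(c3) + S(c4) simplifies to 8 + 2α.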
In order to convert fuzzy preferences into one weight combination, it is necessary to specify a value for α and γ. On the one hand, there are no explicit rules on how to specify these parameters, on the other hand, a lot of information will be lost in this process.
In view of this disadvantage it is the target of the present invention to improve the use of fuzzy preferences for multi-objective optimization.
This target is achieved by means of the features of the independent claims. The dependent claims develop further the central idea of the present invention.
According to the main aspect of the invention, e.g. fuzzy preferences are converted into a weight combination with each weight being described by an interval instead of a single value.
Further objects, advantages and features of the invention will become evident for the man skilled in the art when reading the following detailed description of the invention and by reference to the figures of the enclosed drawings.
FIG. 1 shows a cycle of an evolution strategy,
FIG. 2 shows schematically a procedure to apply fuzzy preferences in MOO,
FIGS. 3 a, 3 b show the change of weights (w1 and w2) with the change of parameter (α), respectively, and
FIGS. 4 a, 4 b show the change of weights (w3 and w4) with the change of parameters (α and γ), respectively.
According to the underlying invention, linguistic fuzzy preferences can be converted into a weight combination with each weight being described by an interval.
FIGS. 3 a, 3 b, 4 a, 4 b show how the value of the parameters affects that of the weights. It can be seen from these figures that the weights vary a lot when the parameters (α, γ) change in the allowed range. Thus, each weight obtained from the fuzzy preferences is an interval on [0,1]. Very interestingly, a weight combination in interval values can nicely be incorporated into a multi-objective optimization with the help of the RWA and DWA, which is explained e.g. in “Evolutionary Weighted Aggregation: Why does it Work and How?” (in: Proceedings of Genetic and Evolutionary Computation Conference, pp. 1042-1049, 2001) by Jin et al.
On the assumption that the maximal and minimal value of a weight are wmax and wmin, when the parameters change, the weights are changed during an optimization algorithm in the following form, which is extended from RWA:
w1^i(t) = w1min + (w1max − w1min) · rdm(P)/P,
where t is the generation index. Similarly, by extending the DWA, the weights can also be changed in the following form to find out the preferred Pareto solutions:
w1^i(t) = w1min + (w1max − w1min) · |sin(2πt/F)|,
where t is the generation index. In this way, the evolutionary algorithm is able to provide a set of Pareto solutions that are reflected by the fuzzy preferences. However, it is recalled that DWA is not able to control the movement of the individuals if the Pareto front is concave, therefore, fuzzy preferences incorporation into MOO using DWA is applicable to convex Pareto fronts only, whereas the RWA method is applicable to both convex and concave fronts.
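The two interval-based extensions can be sketched side by side; the names are illustrative, and the only change relative to plain RWA/DWA is the affine rescaling of the weight into [w_min, w_max].

```python
import math
import random

def interval_rwa_weight(w_min, w_max, pop_size):
    """RWA extended to a preference interval: w1 is drawn uniformly
    from [w_min, w_max] instead of from [0, 1]."""
    return w_min + (w_max - w_min) * random.uniform(0.0, pop_size) / pop_size

def interval_dwa_weight(w_min, w_max, t, F):
    """DWA extended to a preference interval: the |sin| sweep now moves
    w1 between w_min and w_max over the generations."""
    return w_min + (w_max - w_min) * abs(math.sin(2.0 * math.pi * t / F))
```

With the interval [0.5, 1.0] from the first example preference below, the RWA variant only ever samples w1 ≥ 0.5, and the DWA variant sweeps w1 between 0.5 and 1.0, so only the preferred part of the Pareto front is explored.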
To illustrate the underlying invention, some examples on two-objective optimization using the RWA are presented in the following. In the simulations, two different fuzzy preferences are considered:
    • 1. Objective 1 is more important than objective 2;
    • 2. Objective 1 is less important than objective 2.
For the first preference, one obtains the following preference matrix:
P =
  ( 0.5   δ  )
  ( γ    0.5 )
with 0.5<δ<1 and 0<γ<0.5. Therefore, the weights for the two objectives using the RWA method are:
w1^i(t) = 0.5 + 0.5 · rdm(P)/P,   w2^i(t) = 1.0 − w1^i(t).
Similarly, the following weights are obtained for the second preference:
w1^i(t) = 0 + 0.5 · rdm(P)/P,   w2^i(t) = 1.0 − w1^i(t).
To summarize, the invention proposes a method to obtain the Pareto-optimal solutions that are specified by human preferences. The main idea is to convert the fuzzy preferences into interval-based weights. With the help of the RWA and DWA, the preferred solutions are successfully found on two test functions with a convex Pareto front. Compared to the method described in “Use of Preferences for GA-based Multi-Objective Optimization” (Proceedings of the 1999 Genetic and Evolutionary Computation Conference, pp. 1504-1510, 1999) by Cvetkovic et al., the method according to the invention is able to find a number of solutions instead of only one, given a set of fuzzy preferences over different objectives. This is consistent with the motivation of fuzzy logic.
Many technical, industrial and business applications are possible for evolutionary optimization. Examples of applications can be found e.g. in “Evolutionary Algorithms in Engineering Applications” (Springer-Verlag, 1997) by Dasgupta et al., and “Evolutionary Algorithms in Engineering and Computer Science” (John Wiley and Sons, 1999) by Miettinen et al.

Claims (9)

1. A method for multi-objective optimization of a mechanical, aerodynamic or hydrodynamic body using evolutionary algorithms, the method comprising the steps of:
(a) encoding object parameters of the mechanical, aerodynamic or hydrodynamic body to be optimized as individuals;
(b) setting up an initial population of the individuals as parents;
(c) reproducing a plurality of offspring individuals from the parents, the individuals representing object parameters to be optimized;
(d) evaluating quality of the offspring individuals using a fitness function comprising a sum of weighted sub-functions, each weighted sub-function corresponding to an objective of the multiple objective optimization;
(e) selecting the one or more offspring individuals having the highest evaluated quality value as parents for a next evolution cycle;
(f) changing weights of the weighted sub-functions for the next cycle within predetermined ranges, a first weight of the weighted sub-functions changing within a first predetermined range, a second weight of the weighted sub-functions changing within a second predetermined range different from the first predetermined range, the first predetermined range and the second predetermined range representing preferences given to objectives of the multiple objective optimization;
(g) repeating steps (c) to (f) until a termination criterion is met; and
(h) outputting one or more offspring individuals after the termination criterion is met as the optimized object parameters of the mechanical, aerodynamic or hydrodynamic body.
2. The method of claim 1, further comprising the step of:
converting preferences of the objectives represented as relative language into parameterized values to generate the first and second predetermined ranges within which the sub-functions can change.
3. The method of claim 2, wherein converting the preferences into parameterized values comprises assigning values within the first and second predetermined ranges to the parameterized values.
4. The method of claim 1, wherein the weights of the weighted sub-functions are randomly re-distributed within the first and second predetermined ranges among the different offspring individuals in each cycle.
5. The method of claim 1, further comprising the step of:
gradually changing the weights of the weighted sub-functions within the first and second predetermined ranges with change in the cycle.
6. The method of claim 5, further comprising the step of:
changing the weights within the predetermined ranges according to a periodic function.
7. The method of claim 5, wherein each offspring individual is evaluated using the same weighted sub-functions in the same cycle.
8. The method of claim 5, wherein at least one weight is generated from a sine function having a number of the cycle as its argument.
9. A computer program product for multi-objective optimization of a mechanical, aerodynamic or hydrodynamic body, the computer program product comprising a computer readable storage medium structured to store instructions executable by a processor, the instructions, when executed, causing the processor to:
(a) encode object parameters of the mechanical, aerodynamic or hydrodynamic body to be optimized as individuals;
(b) set up an initial population of the individuals as parents;
(c) reproduce a plurality of offspring individuals from the parents, the individuals representing object parameters to be optimized;
(d) evaluate quality of the offspring individuals using a fitness function comprising a sum of weighted sub-functions, each weighted sub-function corresponding to an objective of the multiple objective optimization;
(e) select the one or more offspring individuals having the highest evaluated quality value as parents for a next evolution cycle;
(f) change weights of the weighted sub-functions for the next cycle within predetermined ranges, a first weight of the weighted sub-functions changing within a first predetermined range, a second weight of the weighted sub-functions changing within a second predetermined range different from the first predetermined range, the first predetermined range and the second predetermined range representing preferences given to objectives of the multiple objective optimization;
(g) repeat steps (c) to (f) until a termination criterion is met; and
(h) output one or more offspring individuals after the termination criterion is met as the optimized object parameters of the mechanical, aerodynamic or hydrodynamic body.
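The evolutionary loop recited in steps (a) to (h) of claim 1 can be sketched as a minimal evolution strategy with a weighted-sum fitness whose first weight is redrawn each cycle from its preference range. This is an illustration only, not the patent's implementation: the function names, the (mu, lam) selection scheme, the Gaussian mutation, and all parameter choices are assumptions made for the sketch.

```python
import random

def optimize(f1, f2, n_params, mu=5, lam=20, generations=100,
             w1_range=(0.5, 1.0), sigma=0.1):
    """Sketch of steps (a)-(h): a (mu, lam) evolution strategy whose
    scalar fitness is a weighted sum of two objectives (minimization),
    with the weight of objective 1 redrawn each cycle from the
    predetermined preference range w1_range."""
    # (a)/(b) encode object parameters as real vectors; set up parents
    parents = [[random.uniform(-1.0, 1.0) for _ in range(n_params)]
               for _ in range(mu)]
    for t in range(generations):                          # (g) repeat
        # (c) reproduce offspring from the parents by Gaussian mutation
        offspring = [[x + random.gauss(0.0, sigma)
                      for x in random.choice(parents)]
                     for _ in range(lam)]
        # (f) change the weights within the predetermined range
        w1 = random.uniform(*w1_range)
        w2 = 1.0 - w1
        # (d) evaluate quality using the sum of weighted sub-functions
        scored = sorted(offspring, key=lambda x: w1 * f1(x) + w2 * f2(x))
        # (e) select the best offspring as parents for the next cycle
        parents = scored[:mu]
    return parents[0]                                     # (h) output
```

For example, with $f_1(x) = \sum_j x_j^2$ and $f_2(x) = \sum_j (x_j - 1)^2$ the loop converges toward a point on the (convex) Pareto front between 0 and 1, its position biased by the preference range.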
US10/501,378 2002-01-17 2002-12-10 Fuzzy preferences in multi-objective optimization (MOO) Expired - Fee Related US7383236B2 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
EP02001252 2002-01-17
EP02001252.2 2002-01-17
EP02003557A EP1329845B1 (en) 2002-01-17 2002-02-15 Fuzzy preferences in multi-objective optimization (MOO)
EP02003557.2 2002-02-15
PCT/EP2002/014002 WO2003060821A1 (en) 2002-01-17 2002-12-10 Fuzzy preferences in multi-objective optimization (moo)

Publications (2)

Publication Number Publication Date
US20050177530A1 US20050177530A1 (en) 2005-08-11
US7383236B2 true US7383236B2 (en) 2008-06-03

Family

ID=26077566

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/501,378 Expired - Fee Related US7383236B2 (en) 2002-01-17 2002-12-10 Fuzzy preferences in multi-objective optimization (MOO)

Country Status (5)

Country Link
US (1) US7383236B2 (en)
EP (1) EP1329845B1 (en)
JP (1) JP4335010B2 (en)
AU (1) AU2002358658A1 (en)
WO (1) WO2003060821A1 (en)


Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7587660B2 (en) * 2005-04-22 2009-09-08 Kansas State University Research Foundation Multiple-access code generation
US7664622B2 (en) * 2006-07-05 2010-02-16 Sun Microsystems, Inc. Using interval techniques to solve a parametric multi-objective optimization problem
JP2013073596A (en) * 2011-09-29 2013-04-22 Mitsubishi Heavy Ind Ltd Aircraft design device, aircraft design program and aircraft design method
CN107038489B (en) * 2017-04-14 2021-02-02 国网山西省电力公司电力科学研究院 Multi-objective unit combination optimization method based on improved NBI method
US10733332B2 (en) 2017-06-08 2020-08-04 Bigwood Technology, Inc. Systems for solving general and user preference-based constrained multi-objective optimization problems
CN109344448B (en) * 2018-09-07 2023-02-03 中南大学 fuzzy-FQD-based helical bevel gear shape collaborative manufacturing optimization method
CN111931997A (en) * 2020-07-27 2020-11-13 江苏大学 Weighted preference-based natural protection area camera planning method based on multi-objective particle swarm optimization
CN114648247B (en) * 2022-04-07 2026-01-02 浙江财经大学 A remanufacturing decision-making method integrating process planning and scheduling
CN115310353B (en) * 2022-07-26 2024-02-20 明珠电气股份有限公司 A power transformer design method based on fast multi-objective optimization
CN120354684A (en) * 2025-06-24 2025-07-22 湘潭大学 Method for optimizing chip breaking performance of indexable cutter chip breaking groove
CN120542886B (en) * 2025-07-28 2026-02-03 厦门渊亭信息科技有限公司 Intelligent task planning method based on large model and operation planning optimization

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020099929A1 (en) * 2000-11-14 2002-07-25 Yaochu Jin Multi-objective optimization


Non-Patent Citations (16)

* Cited by examiner, † Cited by third party
Title
Blumel, A. et al., "Multiobjective Optimization Of Fuzzy Logic Scheduled Controllers For Missile Autopilot Design," IEEE, 2001, pp. 1758-1763.
Coello Coello, C.A., "Handling Preferences In Evolutionary Multiobjective Optimization: A Survey," IEEE, 2000, pp. 30-37.
Cvetkovic, D. et al., "Use Of Preferences For GA-Based Multi-Objective Optimisation," In GECCO '99, Jul. 13-17, 1999, 6 pages.
Dasgupta, D. et al., "Evolutionary Algorithms-An Overview," 11 pages, 1997.
Deb, K., "Evolutionary Algorithms For Multi-Criterion Optimization in Engineering Design," 14 pages, 1999.
Fonseca, C. et al., "Multiobjective Optimization And Multiple Constraint Handling With Evolutionary Algorithms-Part I: A Unified Formulation," IEEE Transactions On Systems, Man, And Cybernetics-Part A: Systems And Humans, IEEE, Jan. 1998, pp. 26-37, vol. 28, No. 1.
International Search Report, PCT/EP02/14002, Mar. 21, 2003, 4 pages.
Jin, Y. et al., "Adapting Weighted Aggregation for Multiobjective Evolution Strategies," Springer-Verlag Berlin Heidelberg, Mar. 2001, pp. 96-110. *
Jin, Y. et al., "Dynamic Weighted Aggregation For Evolutionary Multi-Objective Optimization: Why Does It Work And How?" 8 pages. GECCO '2001, Jul. 2001.
Jin, Y. et al., "Managing Approximate Models In Evolutionary Aerodynamic Design Optimization," IEEE, 2001, pp. 592-599.
Oyama, A. et al., "Euler/Navier-Stokes Optimization of Supersonic Wing Design Based on Evolutionary Algorithm," 1999. *
Van Veldhuizen, D. A. et al., "Multiobjective Evolutionary Algorithms: Analyzing The State-Of-The-Art," Evolutionary Computation, 2000, pp. 125-147, vol. 8, No. 2.
Written Opinion, PCT/EP02/14002, Apr. 30, 2004, 8 pages.

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120036096A1 (en) * 2010-08-05 2012-02-09 King Fahd University Of Petroleum And Minerals Method of generating an integrated fuzzy-based guidance law for aerodynamic missiles
US8195345B2 (en) * 2010-08-05 2012-06-05 King Fahd University Of Petroleum & Minerals Method of generating an integrated fuzzy-based guidance law for aerodynamic missiles
US8346690B2 (en) 2010-08-05 2013-01-01 King Fahd University Of Petroleum And Minerals Method of generating an integrated fuzzy-based guidance law using Tabu search
US10031950B2 (en) 2011-01-18 2018-07-24 Iii Holdings 2, Llc Providing advanced conditional based searching
US20220207196A1 (en) * 2020-12-25 2022-06-30 Institute Of Geology And Geophysics, Chinese Academy Of Sciences Optimal design method and system for slope reinforcement with anti-slide piles
US11459722B2 (en) * 2020-12-25 2022-10-04 Institute Of Geology And Geophysics, Chinese Academy Of Sciences Optimal design method and system for slope reinforcement with anti-slide piles

Also Published As

Publication number Publication date
US20050177530A1 (en) 2005-08-11
JP2005515564A (en) 2005-05-26
WO2003060821A1 (en) 2003-07-24
JP4335010B2 (en) 2009-09-30
EP1329845B1 (en) 2010-08-25
EP1329845A1 (en) 2003-07-23
AU2002358658A1 (en) 2003-07-30

Similar Documents

Publication Publication Date Title
US7383236B2 (en) Fuzzy preferences in multi-objective optimization (MOO)
Hameed et al. An optimized case-based software project effort estimation using genetic algorithm
Purshouse On the evolutionary optimisation of many objectives
Qi et al. The application of parallel multipopulation genetic algorithms to dynamic job-shop scheduling
EP1205863A1 (en) Multi-objective optimization
Rakka et al. A review of state-of-the-art mixed-precision neural network frameworks
Tiwari et al. Performance assessment of the hybrid archive-based micro genetic algorithm (AMGA) on the CEC09 test problems
Bhuvana et al. Memetic algorithm with preferential local search using adaptive weights for multi-objective optimization problems
Parveen et al. Review on job-shop and flow-shop scheduling using
Chung et al. Multi-objective evolutionary architectural pruning of deep convolutional neural networks with weights inheritance
Adamuthe et al. Solving single and multi-objective 01 knapsack problem using harmony search algorithm
Frenken Modelling the organisation of innovative activity using the NK-model
Nikbakhtsarvestani et al. Multi-objective ADAM Optimizer (MAdam)
Takahama et al. Constrained optimization by α constrained genetic algorithm (αGA)
Datta Efficient genetic algorithm on linear programming problem for fittest chromosomes
Nissen et al. An introduction to evolutionary algorithms
CN120523586A (en) A low-power task allocation and scheduling method for heterogeneous multi-core processing systems
Seghir A genetic algorithm with an elitism replacement method for solving the nonfunctional web service composition under fuzzy QoS parameters
CN119127501A (en) A task scheduling method, device, equipment and medium for industrial Internet equipment
Ojha et al. Multi-objective optimisation of multi-output neural trees
Alcalá et al. Linguistic modeling with weighted double-consequent fuzzy rules based on cooperative coevolutionary learning
Mezura-Montes et al. Use of multiobjective optimization concepts to handle constraints in genetic algorithms
Martín et al. Approximating nondominated sets in continuous multiobjective optimization problems
Alvarez A neural network with evolutionary neurons
Season Non-linear PLS using genetic programming

Legal Events

Date Code Title Description
AS Assignment

Owner name: HONDA RESEARCH INSTITUTE EUROPE GMBH, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JIN, YAOCHU;SENDHOFF, BERNHARD;REEL/FRAME:016484/0101

Effective date: 20050331

CC Certificate of correction
FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20160603