CN110533221A - Multi-objective optimization method based on generative adversarial network - Google Patents
Multi-objective optimization method based on generative adversarial network
- Publication number
- CN110533221A CN110533221A CN201910688044.0A CN201910688044A CN110533221A CN 110533221 A CN110533221 A CN 110533221A CN 201910688044 A CN201910688044 A CN 201910688044A CN 110533221 A CN110533221 A CN 110533221A
- Authority
- CN
- China
- Prior art keywords
- network
- optimization
- sample
- variables
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
- G06N3/126 — Evolutionary algorithms, e.g. genetic algorithms or genetic programming
- G06Q10/04 — Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
Abstract
The invention discloses a multi-objective optimization method based on a generative adversarial network, which addresses the high time cost, excessive training difficulty, and training collapse of existing optimization algorithms. Implementation: random sampling yields initial samples; the Pareto solutions among them are selected as the training set; half of the training set is randomly selected and preprocessed as training samples; a generative adversarial network is constructed and iteratively trained to produce generated samples; the evaluation count determines whether further training and optimization are needed; finally, the generated samples are compared with the results of other algorithms to evaluate relative performance. The invention reduces time cost, improves network robustness and stability, and achieves a clear optimization effect; it can be used for resource allocation across multiple objectives, production scheduling of multiple products, optimization of multiple performance characteristics of software systems, and the like.
Description
Technical Field
The invention belongs to the field of computer technology, and in particular relates to a multi-objective parameter optimization method based on a generative adversarial network, abbreviated MOGAN, which can be used, by means of computer technology, for resource allocation across multiple objectives, production scheduling of multiple products, optimization of multiple performance characteristics of software systems, and the like.
Background
In industrial production and everyday life, many problems are composed of multiple objectives that conflict with and affect one another. One often encounters the task of making multiple objectives as good as possible simultaneously within a given region, namely a multi-objective optimization problem (MOP): more than one objective is to be optimized, and all objectives must be handled at once. For example, a component may be required to deliver both higher power and stronger stability; since high power and high stability pull against each other, this is an MOP. Control systems likewise have multiple non-functional properties, such as running time, throughput, and requests processed per second. Taking the distributed messaging system Kafka as an example, it has two non-functional properties, throughput and latency, also called objectives; in a given Kafka deployment one wants throughput to be higher and latency to be lower, and these two objectives conflict. The task of multi-objective optimization is then to find a set of non-inferior solutions for these two objectives by adjusting the system configuration; this too is an MOP. Many other MOP problems touch every aspect of life and industry, and the associated solution methods play a crucial role in planning and decision problems in politics, finance, the military, the environment, manufacturing, social security, and beyond; they are a key technical problem in modern industrial systems engineering.
In a single-objective optimization problem there is usually only one optimal solution, and it can often be obtained by fairly simple, standard mathematical methods. In a multi-objective optimization problem, however, the objectives constrain one another, so improving one objective usually comes at the cost of degrading others; no single solution is optimal for all objectives at once, and the solution is therefore usually a set of non-inferior solutions, namely the Pareto solution set.
In multi-objective programming, because objectives conflict and may be incomparable, a solution that is best on one objective may be worse on others. Pareto introduced the concept of the non-dominated set for multiple objectives, defined as follows: for any two solutions S1 and S2, if S1 is no worse than S2 on every objective and strictly better on at least one, we say that S1 dominates S2; if a solution S1 is not dominated by any other solution, S1 is called a non-dominated solution, also known as a Pareto solution.
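This dominance relation translates directly into code. The following minimal sketch (function names are illustrative, and every objective is assumed to be minimized) filters a set of objective vectors down to its non-dominated, i.e. Pareto, set:

```python
def dominates(s1, s2):
    """True if s1 dominates s2: no worse on every objective
    (minimization assumed) and strictly better on at least one."""
    return (all(a <= b for a, b in zip(s1, s2))
            and any(a < b for a, b in zip(s1, s2)))

def pareto_front(solutions):
    """Keep only solutions that no other solution dominates."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Example with two objectives, both minimized:
points = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
print(pareto_front(points))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

For maximized objectives the comparisons flip; mixed cases can be handled by negating the maximized objectives before filtering.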
The multi-objective evolutionary algorithm (MOEA) is a global probabilistic optimization search method modeled on the mechanisms of biological evolution. Its basic principle is as follows: starting from a randomly generated population, evolutionary operations such as selection, crossover, and mutation are applied over many generations, continually improving the fitness of individuals so that the population gradually approaches the Pareto-optimal solution set of the multi-objective optimization problem. Typical multi-objective evolutionary algorithms used for comparison are NSGA2, PESA2, and SPEA2. Each of the three has real strengths, but their weaknesses are equally evident.
NSGA2 has the advantages of high runtime efficiency and well-distributed solution sets, and it performs particularly well on low-dimensional optimization problems; its drawback is that the solution process breaks down on high-dimensional problems, where the diversity of solutions is unsatisfactory.
PESA2 has the advantage that its solutions converge well and approach the optimal front easily, especially on high-dimensional problems; its drawbacks are that only one individual can be selected at a time, the time consumption is large, and the diversity of solutions is poor.
SPEA2 can obtain a well-distributed solution set, especially on high-dimensional problems, but its clustering procedure for preserving diversity takes a long time and its runtime efficiency is low.
Besides multi-objective evolutionary algorithms, there are also multi-objective particle swarm algorithms (MOPSO). Particle swarm optimization (PSO) is an evolutionary technique based on swarm intelligence that simulates social behavior; thanks to its distinctive search mechanism, excellent convergence, and easy computer implementation it is widely used in engineering optimization, but when applied across different optimization domains it suffers from high computational complexity, limited generality, and poor convergence.
Multi-objective optimization is a problem frequently encountered and tackled in engineering, and although the many approaches above have attempted to solve it, various problems and shortcomings remain. In general, some yield results that are not good enough, some are difficult to train or demand too many samples, and others take too long and cost too much. Beyond these, each algorithm has its own limitations: NSGA2's solution process is flawed on high-dimensional problems and the diversity of its solutions is unsatisfactory; PESA2 can select only one individual at a time, consumes much time, and has poor solution diversity; SPEA2's clustering procedure takes a long time to preserve diversity and runs inefficiently; and MOPSO suffers from high computational complexity, limited generality, and poor convergence.
Disclosure of Invention
The aim of the invention is to address the defects of the prior art by providing a multi-objective optimization method based on a generative adversarial network that yields better optimization results, is easy to train, and optimizes faster.
The invention is a multi-objective optimization method based on a generative adversarial network, characterized by comprising the following steps:
(1) obtaining an initial sample:
(1a) selecting the optimization objectives: for a multi-objective optimization problem in some system, select and determine the objectives to be optimized; suppose there are q optimization objectives and that p of the system variables influence those q objectives, each system variable having its own data set and value range;
(1b) setting the maximum number of evaluations: each group of p system variables is defined as a group of independent variables, the q objective values obtained from that group are defined as the dependent variables, and one evaluation is the process of obtaining the dependent variables from a group of independent variables; during optimization the evaluation count is denoted e, the maximum number of evaluations is set to E, and e is initialized to zero;
(1c) determining the number of initial samples: determine the number of initial variable groups m from the maximum number of evaluations E, ensuring that the optimization process can complete within E evaluations;
(1d) random sampling to obtain initial samples: randomly sample each system variable related to the optimization objectives to obtain m groups of p-dimensional initial variables; evaluate the quality of the m groups by running the system containing the multiple objectives, each group of initial variables yielding values for the q optimization objectives; each group of randomly sampled initial variables combined with its objective values forms a (p+q)-dimensional initial sample; each time a group of initial variables is sampled, evaluated, and combined with its objectives, the evaluation count e increases by 1, and after traversing all m groups e has increased by m, giving m initial samples of dimension p+q;
(2) constructing the multi-objective optimization generative adversarial network (MOGAN): the network comprises a generation network G and a discrimination network D, both three-layer fully-connected neural networks; G and D are trained adversarially against each other, each driving the other's continual improvement, to form the multi-objective optimization generative adversarial network (MOGAN);
(3) selecting a training set from the initial samples: following the definition of Pareto solutions, select all (p+q)-dimensional Pareto solutions from the initial samples, then remove the q objective values from each Pareto solution, and take the resulting p-dimensional Pareto solutions as the training set;
(4) training the constructed multi-objective optimization generative adversarial network (MOGAN):
(4a) determining the training samples: randomly select half of the samples in the training set as training samples, then apply data-standardization preprocessing to obtain the preprocessed training samples x;
(4b) inputting the training samples into the MOGAN: feed the preprocessed training samples x into the discrimination network D of the generative adversarial network;
(4c) generating samples: in the generative adversarial network for the multi-objective optimization problem, use the generation network G to produce p-dimensional generated samples z;
(4d) obtaining the discrimination result: feed the generated samples z and the training samples x into the discrimination network D and output its judgment;
(4e) training the generation network G and the discrimination network D: based on the discrimination result, fix G and train D, optimizing continually until D can accurately judge whether a sample comes from the training samples or was produced by G; then fix D and train G, optimizing continually until D can no longer tell whether a sample comes from the training samples or was produced by G;
(4f) obtaining the generated sample set after multiple training iterations: executing steps (4a)-(4e) once completes one round of training of the generative adversarial network; check whether the designed number of iterations has been reached, and if not, repeat steps (4a)-(4e) to continue training; if it has been reached, use the generation network G to generate m1 groups of p-dimensional initial variables, evaluate the quality of these m1 groups of samples to compute their q-dimensional objective values, add m1 to the evaluation count e, and combine the m1 groups of p-dimensional initial variables with their q-dimensional objective values to obtain m1 generated samples of dimension p+q;
(5) checking the evaluation count: if the evaluation count e has reached the maximum number of evaluations E, take the generated sample set obtained in step (4f) as the final result set and go to step (9) to verify the optimization effect further; otherwise go to step (6);
(6) obtaining the crossover result set: merge the m1 generated samples of dimension p+q with the training samples x, select all (p+q)-dimensional Pareto solutions from the merged sample set, remove their q objective values to obtain p-dimensional Pareto solutions of the initial variables, and apply simulated binary crossover (SBX) to these p-dimensional Pareto solutions to obtain a crossover result set of m2 groups of p-dimensional initial variables;
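The simulated binary crossover of step (6) can be sketched as follows. This is a common textbook formulation; the distribution index eta = 15 is an assumed default, not a value taken from the patent:

```python
import random

def sbx_crossover(p1, p2, eta=15.0):
    """Simulated binary crossover on two p-dimensional parent vectors.
    eta is the distribution index: larger eta keeps children closer to
    their parents (the value here is a common default, an assumption)."""
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1.append(0.5 * ((1 + beta) * x1 + (1 - beta) * x2))
        c2.append(0.5 * ((1 - beta) * x1 + (1 + beta) * x2))
    return c1, c2

random.seed(0)
a, b = sbx_crossover([0.0, 0.0], [1.0, 1.0])
print(a, b)  # the two children straddle the parents gene by gene
```

A useful property of SBX is that each gene pair's mean is preserved: c1 + c2 equals x1 + x2 per gene, so the children spread around the parents without drifting.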
(7) obtaining the mutation result set: apply polynomial mutation to the crossover result set of m2 groups of p-dimensional initial variables to obtain a mutation result set of m3 groups of p-dimensional initial variables;
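The polynomial mutation of step (7), in a common formulation; the distribution index and the per-gene mutation probability are assumed defaults, as the patent does not specify them:

```python
import random

def poly_mutation(x, lower, upper, eta=20.0, pm=None):
    """Polynomial mutation of a p-dimensional vector within [lower, upper].
    eta and the per-gene mutation probability pm (default 1/p) are common
    defaults, not values taken from the patent."""
    p = len(x)
    pm = pm if pm is not None else 1.0 / p
    y = list(x)
    for i in range(p):
        if random.random() < pm:
            u = random.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            y[i] += delta * (upper[i] - lower[i])
            y[i] = min(max(y[i], lower[i]), upper[i])  # clip to the value range
    return y

random.seed(0)
print(poly_mutation([0.5, 0.5], [0.0, 0.0], [1.0, 1.0], pm=1.0))
```

The perturbation scale is tied to each variable's value range, which is why each system variable's range from step (1a) matters here.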
(8) evaluating to obtain new initial samples: evaluate the quality of the m3 groups of p-dimensional initial variables by running the system containing the multiple objectives; each group of initial variables yields values for the q optimization objectives, and each group's p initial variables combined with its q objective values form a new (p+q)-dimensional initial sample; each time a group of initial variables is evaluated and combined with its objectives, the evaluation count e increases by 1, and after traversing all m3 groups e has increased by m3, giving m3 new initial samples of dimension p+q; then execute steps (3) to (5) again;
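Steps (3)-(8) form an outer loop driven by the evaluation budget E. The sketch below shows only that control flow; the GAN, crossover, and mutation stages are replaced by trivial placeholders, and the toy evaluator and all names are illustrative assumptions:

```python
import random

def evaluate(variables, counter):
    """Toy stand-in for running the real system: maps one group of p
    variables to q = 2 objective values and counts one evaluation."""
    counter["e"] += 1
    return [sum(variables), max(variables)]

def optimize(p=3, E=50, m=10):
    counter = {"e": 0}
    # Step (1): m random initial samples, variables drawn from [0, 1].
    samples = []
    for _ in range(m):
        v = [random.random() for _ in range(p)]
        samples.append(v + evaluate(v, counter))  # (p+q)-dimensional sample
    # Steps (3)-(8): keep producing and evaluating candidates until the
    # evaluation budget E is exhausted (the check of step (5)).
    while counter["e"] < E:
        candidates = [s[:p] for s in samples]  # placeholder for GAN + crossover stages
        new = []
        for v in candidates:
            if counter["e"] >= E:              # never exceed the budget
                break
            # placeholder for the polynomial-mutation stage
            w = [min(max(x + random.gauss(0.0, 0.1), 0.0), 1.0) for x in v]
            new.append(w + evaluate(w, counter))
        samples = new or samples
    return samples, counter["e"]

random.seed(0)
final_set, used = optimize()
print(used <= 50)  # the budget is respected
```

The important invariant is that every call to the real system increments e and that e is checked before each evaluation, so the whole process completes within the budget fixed in step (1b).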
(9) verifying the optimization effect:
(9a) optimize the p optimization variables selected in step (1) with other existing optimization algorithms to obtain the corresponding comparison result sets;
(9b) verify the optimization effect of the invention by comparing its final result set with the comparison result sets of the other optimization algorithms.
Multi-objective optimization is a problem frequently encountered and solved in engineering, and many existing methods have various limitations and defects: some results are not good enough, some methods are hard to train or need too many samples, and some take too long and cost too much. In addition, each algorithm has its own limitations: NSGA2's solution process is flawed on high-dimensional problems and the diversity of its solutions is unsatisfactory; PESA2 can select only one individual at a time, consumes much time, and has poor solution diversity; SPEA2's clustering procedure takes a long time to preserve diversity and runs inefficiently; and MOPSO suffers from high computational complexity, limited generality, and poor convergence.
Compared with the prior art, the invention has the following advantages:
Two networks are used for optimization via adversarial training, with good results: the method of the invention optimizes through adversarial training between two networks, breaking with the conventional approach to solving multi-objective optimization problems. One network simulates and generates the characteristic variables while the other judges their quality, and the two processes alternate iteratively to perform the optimization; the results are good, and since both networks are three-layer fully-connected networks, training is easy.
The required number of training samples is reduced: by randomly selecting half of the sample features of the experimental samples each time, the invention ensures the diversity and randomness of the training samples as well as their quality, while avoiding the need to obtain a large number of samples through extensive experiments, thereby saving time cost to the greatest extent.
The sample quantity is effectively enlarged, saving time cost: the invention borrows the crossover idea from genetic algorithms; applying simulated binary crossover to the generated samples enlarges the sample pool, searches the sample space more effectively, finds the data distribution of the Pareto solutions faster, and greatly reduces time cost.
Falling into local optima is avoided: the invention borrows the mutation idea from genetic algorithms; applying polynomial mutation to the generated samples effectively avoids getting stuck in local optima and makes the global optimum easier to find.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a sub-flow diagram of the internal logic of the generative adversarial network of the present invention;
FIG. 3 is a block diagram of a discrimination network D and a generation network G in the present invention;
FIG. 4 is a graph of the optimization results of the present invention and other optimization algorithms for the ZDT1 function;
FIG. 5 is a graph of the optimization results of the present invention and other optimization algorithms for the ZDT6 function.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
Example 1
Multi-objective optimization is a problem frequently encountered in industrial production; many problems are composed of multiple objectives that conflict with and affect one another, for example the conflicting performance objectives of throughput versus latency, or of power versus stability in circuit design, all of which are multi-objective optimization problems. Such problems touch every aspect of industrial production, and the associated solution methods play a crucial role in planning and decision problems in politics, finance, the military, the environment, manufacturing, social security, and beyond; they are a key technical problem in modern industrial systems engineering. Although many methods have tried to solve the multi-objective optimization problem, they have various problems and defects: some results are not good enough, some methods are hard to train or need too many samples, and some take too long and cost too much. In view of this situation, the invention presents a multi-objective optimization method based on a generative adversarial network.
The invention is a multi-objective optimization method based on a generative adversarial network; referring to FIG. 1, it comprises the following steps:
(1) obtaining an initial sample:
(1a) Selecting the optimization objectives: for a multi-objective optimization problem in some system, select and determine the objectives to be optimized; suppose there are q optimization objectives and that p of the system variables influence those q objectives, each system variable having its own data set and value range.
(1b) Setting the maximum number of evaluations: each group of p system variables is defined as a group of independent variables, the q objective values obtained from that group are defined as the dependent variables, and one evaluation is the process of obtaining the dependent variables from a group of independent variables; during optimization the evaluation count is denoted e, the maximum number of evaluations is set to E, and e is initialized to zero.
(1c) Determining the number of initial samples: determine the number of initial variable groups m from the maximum number of evaluations E, ensuring that the optimization process can complete within E evaluations.
(1d) Random sampling to obtain initial samples: randomly sample each system variable related to the optimization objectives to obtain m groups of p-dimensional initial variables; evaluate the quality of the m groups by running the system containing the multiple objectives, each group of initial variables yielding values for the q optimization objectives; each group of randomly sampled initial variables combined with its objective values forms a (p+q)-dimensional initial sample. Each time a group of initial variables is sampled, evaluated, and combined with its objectives, the evaluation count e increases by 1; after traversing all m groups, e has increased by m, giving m initial samples of dimension p+q.
The p system variables related to the q optimization objectives are determined through testing and then sampled randomly. For each sampled group, the values of the corresponding q optimization objectives are appended to the p-dimensional initial variables to form a (p+q)-dimensional initial sample, so that each initial sample contains both the initial-variable and the corresponding objective information; random sampling is broadly applicable and gives good results.
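The random sampling of step (1d) amounts to drawing each variable uniformly within its own value range. A minimal sketch (the uniform choice is an assumption; the patent only says "random"):

```python
import random

def sample_initial_variables(ranges, m):
    """Draw m groups of p-dimensional initial variables, each variable
    drawn uniformly at random within its (low, high) value range."""
    return [[random.uniform(lo, hi) for lo, hi in ranges] for _ in range(m)]

random.seed(0)
groups = sample_initial_variables([(0.0, 1.0), (10.0, 20.0), (-1.0, 1.0)], m=4)
print(len(groups), len(groups[0]))  # 4 groups, p = 3 variables each
```

Each group would then be evaluated by running the system, and the q objective values appended to give the (p+q)-dimensional initial samples.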
(2) Constructing the multi-objective optimization generative adversarial network (MOGAN): the network comprises a generation network G and a discrimination network D, both three-layer fully-connected neural networks; G and D are trained adversarially against each other, each driving the other's continual improvement, to form the multi-objective optimization generative adversarial network (MOGAN).
In the generative adversarial network, training samples are fed into the discrimination network D, which is trained until it can accurately judge whether a sample comes from the training samples or was produced by the generation network G; random noise is fed into the generation network G, which is trained until D can no longer make that judgment. After this adversarial training, the network as a whole outputs the samples produced by G.
Compared with other optimization algorithms, the method adopts a generative adversarial network based on the zero-sum-game idea from game theory: instead of a single network there are two different networks, a generation network and a discrimination network, and the adversarial training regime reduces the training difficulty.
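The two three-layer fully-connected networks can be sketched as below. The layer widths, activations (ReLU hidden layers, tanh output for G, sigmoid output for D), noise dimension, and the use of NumPy are all illustrative assumptions; the patent fixes only the three-layer fully-connected structure:

```python
import numpy as np

def init_mlp(sizes, rng):
    """Three-layer fully-connected network as a list of (weights, biases)."""
    return [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x, out_act):
    """ReLU hidden layers; out_act is e.g. tanh for G, sigmoid for D."""
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = params[-1]
    return out_act(x @ W + b)

rng = np.random.default_rng(0)
p, noise_dim = 4, 8
G = init_mlp([noise_dim, 16, 16, p], rng)   # generator: noise -> p-dim variables
D = init_mlp([p, 16, 16, 1], rng)           # discriminator: variables -> real/fake score
z = forward(G, rng.standard_normal((5, noise_dim)), np.tanh)       # 5 generated samples
score = forward(D, z, lambda t: 1.0 / (1.0 + np.exp(-t)))          # D's judgment in (0, 1)
print(z.shape, score.shape)  # (5, 4) (5, 1)
```

Training would then alternate gradient updates: D's weights are adjusted to separate training samples from G's output, then G's weights to fool D, as described in step (4e).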
(3) Selecting a training set from the initial samples: following the definition of Pareto solutions, select all (p+q)-dimensional Pareto solutions from the initial samples, then remove the q objective values from each Pareto solution, and take the resulting p-dimensional Pareto solutions as the training set.
(4) Training the constructed generative adversarial network (MOGAN) for multi-objective optimization:
(4a) Determining the training samples: half of the samples in the training set are randomly selected as training samples, and data-standardization preprocessing is applied to them to obtain the preprocessed training samples x. Since only half of the training set is selected, the number of samples is small compared with the number required by other optimization algorithms.
(4b) Inputting training samples into the MOGAN: the preprocessed training samples x are input into a discriminant network D of the generative confrontation network.
(4c) Generating a sample: in a generative countermeasure network for multiobjective optimization, a generative network G is used to generate p-dimensional generative samples z.
(4d) And obtaining a judgment result: and inputting the generated sample z and the training sample x into a discrimination network D, and outputting a discrimination result.
(4e) Training to generate a network G and a discrimination network D: according to the judgment result, fixing the generated network G, training the judgment network D, and continuously optimizing until the judgment network D can accurately judge whether a sample is from a training sample or a sample generated by the generated network G; and fixing the discrimination network D according to the discrimination result, training the generation network G, and continuously optimizing until the discrimination network D can not judge whether a sample is from the training sample or the sample generated by the generation network G.
(4f) Obtaining a generated sample set after multiple iterations of training: performing steps (4a)-(4e) once completes one round of training of the generative adversarial network. Then judge whether the designed number of iterations has been reached; if not, repeat steps (4a)-(4e) and continue training. If the designed number of iterations has been reached, use the generation network G to generate m1 groups of p-dimensional initial variables, evaluate the quality of these m1 groups of samples, compute the values of the q-dimensional optimization targets, and add m1 to the evaluation count e; then combine the m1 groups of p-dimensional initial variables with the q-dimensional optimization targets to obtain m1 generated samples of dimension p + q.
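The outer iteration logic of step (4f) might be sketched as below; every callable is a hypothetical stand-in for the corresponding sub-step, not the patent's implementation:

```python
import random

def mogan_round(train_step, generate, evaluate, iters, m1, counter):
    """Step (4f) sketch: run `iters` training iterations (steps (4a)-(4e)),
    then let the trained generator produce m1 groups of p-dimensional
    variables, evaluate each group to append its q target values, and
    increase the evaluation count e by m1."""
    for _ in range(iters):
        train_step()                               # one adversarial update (4a)-(4e)
    produced = [generate() for _ in range(m1)]     # m1 groups of p variables
    counter["e"] += m1                             # evaluation count e += m1
    return [x + evaluate(x) for x in produced]     # (p+q)-dim generated samples

# toy stand-ins: p = 2 variables, q = 1 target
e = {"e": 0}
out = mogan_round(train_step=lambda: None,
                  generate=lambda: [random.random(), random.random()],
                  evaluate=lambda x: [x[0] + x[1]],
                  iters=3, m1=4, counter=e)
```

The returned set feeds either the final result (step (5)) or the crossover of step (6), depending on the evaluation count.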
(5) Judging the evaluation count: if the evaluation count e has reached the maximum evaluation count E, take the generated sample set obtained in step (4f) as the final result set and then execute step (9) to further verify the optimization effect; otherwise, execute step (6).
(6) Obtaining a crossover result set: merge the m1 generated samples of dimension p + q with the training samples x, select all p + q-dimensional Pareto solutions from the merged sample set, remove the q optimization-target values of each Pareto solution to obtain Pareto solutions of p-dimensional initial variables, and perform simulated binary crossover on these p-dimensional Pareto solutions to obtain a crossover result set of m2 groups of p-dimensional initial variables. By crossing, the invention generates new samples, enlarges the sample count, and searches the sample space more effectively, greatly saving time and reducing cost.
(7) Obtaining a mutation result set: perform polynomial mutation on the crossover result set of m2 groups of p-dimensional initial variables to obtain a mutation result set of m3 groups of p-dimensional initial variables.
(8) Evaluating to obtain new initial samples: using the running method of the system where the multiple targets are located, evaluate the quality of the m3 groups of p-dimensional initial variables; each group of initial variables yields the values of the corresponding q optimization targets, and each group's p initial variables combined with its q optimization-target values forms a new p + q-dimensional initial sample. Each time a group of initial variables is evaluated and combined with its optimization targets, the evaluation count e increases by 1; after traversing all m3 groups of p-dimensional initial variables, the final evaluation count e increases by m3, yielding m3 new initial samples of dimension p + q. Steps (3) to (5) are then performed again: the training set is reselected and a new round of training and judgment is carried out.
(9) And (3) verifying the optimization effect:
(9a) optimizing the p optimized variables selected in the step (1) by using other existing optimization algorithms to obtain a corresponding comparison result set;
(9b) the optimization effect of the invention is verified by comparing the final result set of the invention with the comparison result sets of other optimization algorithms. Experimental results show that the optimization effect of the method is better than that of other optimization algorithms.
Aiming at the multi-objective optimization problems frequently encountered in industrial production, the invention provides a multi-objective optimization method based on a generative adversarial network that yields better results, is easy to train, and optimizes faster. In the method, training samples are first obtained through random sampling and processing; a generative adversarial network is then constructed and trained on these samples to obtain a generated result; crossover and mutation are applied to the generated result to obtain more samples; the network is then trained again, until the set number of evaluations is reached, at which point the generated samples are the final result of the method.
The method provided by the invention optimizes by means of adversarial training between two networks, breaking with the conventional approach to solving multi-objective optimization problems: one network simulates and generates characteristic variables while the other network judges their quality, and the two processes alternate iteratively to optimize. The results are good and, since both networks use three-layer fully-connected structures, training is easy.
Example 2
The multi-objective optimization method based on the generative adversarial network is the same as in embodiment 1. In step (2), a deep-learning-based generative adversarial network (GAN) model is designed, which links function values and variables, so that the method can learn the potential characteristics of Pareto solutions with a generative model G based on the selected training samples, while another discrimination network D judges the results, computes errors, and continuously optimizes. Both the discrimination network D and the generation network G use a classical three-layer fully-connected network structure, wherein:
the invention constructs a multi-objective optimized generative adversarial network in which the generation network G is a three-layer fully-connected network comprising an input layer, a hidden layer, and an output layer. The input layer contains 5 nodes, each a random number in the range [-1, 1]; the hidden layer has 128 nodes, each connected to the input layer by weights initialized to random numbers in [-1, 1]; the output layer contains n nodes, each with a relu activation function, where n is the number of variables of the specific function (i.e., the number of system variables);
the discrimination network D is a three-layer fully-connected network comprising an input layer, a hidden layer, and an output layer. The input layer contains n nodes, where n is the same number of variables; the hidden layer has 128 nodes, each connected to the input layer by weights also initialized to random numbers in [-1, 1], each with a sigmoid activation function; the output layer contains 1 node, representing the probability that the input sample is real, with a tanh activation function.
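Under the architecture just described, and omitting bias terms (which the text does not mention), the two forward passes can be sketched in NumPy; the n = 11 value is taken from the Kafka example later in the document:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 11  # number of system variables (as in the Kafka example)

# weights initialized uniformly in [-1, 1], as described
G_W1 = rng.uniform(-1, 1, (5, 128)); G_W2 = rng.uniform(-1, 1, (128, n))
D_W1 = rng.uniform(-1, 1, (n, 128)); D_W2 = rng.uniform(-1, 1, (128, 1))

def generator(noise):                      # noise: 5 random numbers in [-1, 1]
    h = noise @ G_W1                       # hidden layer, 128 nodes
    return np.maximum(h @ G_W2, 0)         # relu on the n output nodes

def discriminator(x):                      # x: an n-dimensional sample
    h = 1 / (1 + np.exp(-(x @ D_W1)))      # sigmoid on the 128 hidden nodes
    return np.tanh(h @ D_W2)               # tanh on the single output node

z = generator(rng.uniform(-1, 1, 5))
prob = discriminator(z)
```

This is only the untrained forward structure; training updates the four weight matrices via the losses given in embodiment 5.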
The invention optimizes by means of adversarial training between two networks, breaking with the conventional approach to solving multi-objective optimization problems: one network simulates and generates characteristic variables while the other network judges their quality, and the two processes alternate iteratively to optimize. The results are good and, since both networks use three-layer fully-connected structures, training is easy.
Example 3
The multi-objective optimization method based on the generative adversarial network is the same as in embodiments 1-2. The merging of the generated sample set with the training samples and the simulated binary crossover described in step (6), which produce the crossover result set of m2 groups of p-dimensional initial variables, proceed as follows: adjacent pairs of samples in the merged set are taken in order and, for each pair, a probability test is performed; if the drawn probability is smaller than the set crossover rate, the two solutions are crossed to generate two new solutions. The new solutions inherit part of the characteristics of both parents and are therefore likely to be close to Pareto solutions. The specific crossover follows the simulated binary crossover formulas:
$$x'_{1j}(t) = 0.5\big[(1+\gamma_j)\,x_{1j}(t) + (1-\gamma_j)\,x_{2j}(t)\big]$$
$$x'_{2j}(t) = 0.5\big[(1-\gamma_j)\,x_{1j}(t) + (1+\gamma_j)\,x_{2j}(t)\big]$$

wherein x'_{1j}(t) and x'_{2j}(t) are the two new samples after crossing the jth pair of adjacent samples, x_{1j}(t) and x_{2j}(t) are the jth pair of adjacent samples before crossing, t represents the tth generation in the genetic algorithm of simulated binary crossover, j represents the jth crossover operation, and γ_j is the crossover coefficient of the jth pair:

$$\gamma_j = \begin{cases} (2u_j)^{\frac{1}{\eta+1}}, & u_j \le 0.5 \\[4pt] \left(\dfrac{1}{2(1-u_j)}\right)^{\frac{1}{\eta+1}}, & u_j > 0.5 \end{cases}$$

where u_j is a random number with u_j ∼ U(0, 1), and η is the distribution index, η > 0.
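A sketch of the simulated binary crossover operator described above (one γ per gene; the default η is illustrative, not specified by the patent):

```python
import random

def sbx_pair(x1, x2, eta=15.0):
    """Simulated binary crossover of two parent vectors: for each gene,
    draw u in (0, 1), compute gamma from the distribution index eta, and
    produce two children symmetric about the parents' midpoint."""
    c1, c2 = [], []
    for a, b in zip(x1, x2):
        u = random.random()
        if u <= 0.5:
            gamma = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            gamma = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1.append(0.5 * ((1 + gamma) * a + (1 - gamma) * b))
        c2.append(0.5 * ((1 - gamma) * a + (1 + gamma) * b))
    return c1, c2

random.seed(1)
child1, child2 = sbx_pair([0.2, 0.8], [0.6, 0.4])
```

A useful sanity check on the formulas: for every gene, the two children average to the same value as the two parents, so the crossover preserves the pair's centroid.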
The invention introduces the crossing idea in the genetic algorithm, generates new samples by simulating binary crossing on the sample set combined by the generation sample set and the training sample, enlarges the sample quantity, and searches the sample space more effectively, so that the generation type countermeasure network finds the data distribution of the Pareto solution more quickly, thereby being beneficial to generating better samples and greatly saving the time cost.
Example 4
The multi-objective optimization method based on the generative adversarial network is the same as in embodiments 1-3. The polynomial mutation performed on the crossover result set in step (7) to obtain the mutation result set of m3 groups of p-dimensional initial variables proceeds as follows: a mutation operation from the genetic algorithm is applied to each sample in the crossover result set. For each sample, a probability test is first performed; if the drawn probability is smaller than the set mutation rate, a small part of the sample is randomly varied, the specific variation being governed by the mutation operator, and a mutated sample of p-dimensional initial variables is generated. The mutation operator realizing the polynomial mutation has the form:
$$v'_k = v_k + \delta \cdot (u_k - l_k)$$

where

$$\delta = \begin{cases} \big[2u + (1-2u)(1-\delta_1)^{\eta_m+1}\big]^{\frac{1}{\eta_m+1}} - 1, & u < 0.5 \\[4pt] 1 - \big[2(1-u) + (2u-1)(1-\delta_2)^{\eta_m+1}\big]^{\frac{1}{\eta_m+1}}, & u \ge 0.5 \end{cases}$$

In the formula, v_k denotes a parent individual and v'_k a child individual; u_k and l_k represent the upper and lower bounds of the value range of the kth of the p system variables; δ_1 = (v_k − l_k)/(u_k − l_k) and δ_2 = (u_k − v_k)/(u_k − l_k); k indexes the mutated variable in the polynomial-mutation genetic algorithm; u is a random number in the interval [0, 1]; and η_m is the distribution index.
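A sketch of the polynomial mutation operator in this form, matching the δ_1/δ_2 definitions above (the mutation rate and η_m defaults are illustrative assumptions):

```python
import random

def polynomial_mutation(v, lower, upper, eta_m=20.0, pm=0.5):
    """Polynomial mutation: each variable v_k mutates with probability pm;
    delta is drawn from the polynomial distribution with index eta_m, and
    the result is kept within the value range [l_k, u_k]."""
    out = []
    for vk, lk, uk in zip(v, lower, upper):
        if random.random() < pm:
            u = random.random()
            d1 = (vk - lk) / (uk - lk)         # delta_1
            d2 = (uk - vk) / (uk - lk)         # delta_2
            if u < 0.5:
                delta = (2*u + (1 - 2*u) * (1 - d1) ** (eta_m + 1)) ** (1 / (eta_m + 1)) - 1
            else:
                delta = 1 - (2*(1 - u) + (2*u - 1) * (1 - d2) ** (eta_m + 1)) ** (1 / (eta_m + 1))
            vk = vk + delta * (uk - lk)        # v'_k = v_k + delta * (u_k - l_k)
        out.append(min(max(vk, lk), uk))       # clip to the value range
    return out

random.seed(2)
mutant = polynomial_mutation([0.5, 0.5, 0.5], [0.0] * 3, [1.0] * 3)
```

With this δ, a mutated variable can reach its lower bound (u → 0) or upper bound (u → 1) but never leaves the value range.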
The method introduces a variation thought in a genetic algorithm, effectively avoids the situation of falling into local optimum by performing polynomial variation on the generated samples, and is easier to find global optimum.
The invention not only carries out cross mutation operation, but also uses the newly generated sample as a new initial sample, thus being capable of searching in the sample space more quickly when training again, and being beneficial to finding better results more quickly.
A more detailed example is given below to further illustrate the present invention.
Example 5
The multi-objective optimization method based on the generative countermeasure network is the same as the embodiments 1-4, referring to fig. 1, and comprises the following steps:
(1) obtaining an initial sample:
(1a) selecting an optimization target: aiming at the multi-objective optimization problem existing in a certain system, a plurality of targets needing to be optimized are selected and determined, q optimization targets are assumed, p system variables in all system variables influence the q optimization targets, and each system variable has a data set and a value range.
(1b) Setting the maximum evaluation times: each group of p system variables is defined as a group of independent variables, q optimization target values obtained by the group of independent variables are defined as dependent variables, and one-time evaluation refers to a process of obtaining corresponding dependent variables by the group of independent variables; setting the evaluation times as E in the optimization process, setting the maximum evaluation times as E, and initializing the evaluation times as zero;
(1c) determining the initial sample number: determining the number of the initial variables as m groups according to the maximum evaluation times E, and ensuring that the optimization process can be completed within the maximum evaluation times E;
(1d) Random sampling to obtain initial samples: randomly sample each system variable related to an optimization target to obtain m groups of p-dimensional initial variables; evaluate the quality of the m groups of initial variables using the running method of the system where the multiple targets are located, each group of initial variables yielding the values of the corresponding q optimization targets; each group of randomly sampled initial variables together with its optimization-target values forms a p + q-dimensional initial sample. Each time a group of initial variables is sampled, evaluated, and combined with its optimization targets, the evaluation count e increases by 1; after traversing all m groups of initial variables, the final evaluation count e increases by m, yielding m initial samples of dimension p + q;
(2) constructing a multi-objective optimized generative adversarial network (MOGAN): the network comprises a generation network G and a discrimination network D, both of which adopt three-layer fully-connected neural networks; G and D are trained against each other, each driving the continuous improvement of the other, to construct the multi-objective optimized generative adversarial network (MOGAN);
the method is characterized in that a deep-learning-based generative adversarial network (GAN) model links the function values and the variables, so that, based on the selected training samples, the generative model G can learn the potential characteristics of Pareto solutions, while another discrimination network D judges the results, computes the errors, and continuously optimizes.
(3) Selecting a training set from the initial samples: according to the definition of the Pareto solutions, selecting all p + q dimensional Pareto solutions from an initial sample, then removing q optimization target values in each group of Pareto solutions, and taking the p dimensional Pareto solutions with the optimization target values removed as a training set;
(4) training the constructed generative adversarial network (MOGAN) for the multi-objective optimization problem:
(4a) determining a training sample: randomly selecting one half of the training samples from the training set as training samples, and then carrying out data standardization preprocessing on the training samples to obtain preprocessed training samples x;
Before each iteration, half of the samples in the training set selected in the previous step are randomly chosen and randomly arranged to serve as the training samples x for that iteration, then input into the generative adversarial network, ensuring the diversity and reliability of the training samples.
The sample data are standardized so that each feature follows a standard normal distribution N(0, 1), producing structured data suitable for model training; this makes the generative adversarial network more stable and the Euclidean distances between features more reasonable to compute.
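The standardization described here is ordinary column-wise z-score scaling; a minimal sketch:

```python
import numpy as np

def standardize(X):
    """Scale each feature column to zero mean and unit variance, so its
    values follow (approximately) a standard normal distribution N(0, 1)."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma = np.where(sigma == 0, 1.0, sigma)   # guard against constant features
    return (X - mu) / sigma

X = np.array([[1.0, 10.0], [3.0, 20.0], [5.0, 30.0]])
Z = standardize(X)
```

After scaling, each column of Z has mean 0 and standard deviation 1, which is what keeps features on comparable scales when computing Euclidean distances.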
(4b) Inputting training samples into the MOGAN: inputting the preprocessed training sample x into a discrimination network D of a generative countermeasure network;
(4c) Generating samples: in the generative adversarial network of the multi-objective optimization problem, the generation network G is used to generate a p-dimensional generated sample z;
From the random numbers in the input layer of the generation network G, which always lie in the range [-1, 1], the hidden-layer node values are computed through the weights between the input layer and the hidden layer, then transmitted to the output layer; the output-layer node values are computed through the relu function, yielding a generated sample z consistent in form with the training sample x.
(4d) And obtaining a judgment result: inputting the generated sample z and the training sample x into a discrimination network D, and outputting a discrimination result;
The hidden-layer node values of the network are computed through the weight relation with the hidden layer, passed through the sigmoid function, and transmitted to the output layer; the output-layer node value is computed through the tanh function, yielding the probabilities with which the discrimination network D judges the authenticity of the two groups of samples.
(4e) Training to generate a network G and a discrimination network D: according to the judgment result, fixing the generated network G, training the judgment network D, and continuously optimizing until the judgment network D can accurately judge whether a sample is from a training sample or a sample generated by the generated network G; fixing a discrimination network D according to a discrimination result, training the generation network G, and continuously optimizing until the discrimination network D can not judge whether a sample is from a training sample or a sample generated by the generation network G;
The target formula used is expressed as follows:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_r(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_n(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

wherein V represents the difference between the generated samples and the training samples, G represents the generation network, D represents the discrimination network, x ∼ p_r(x) denotes the distribution of the training-sample features x, z ∼ p_n(z) denotes the distribution of the input noise z, and E denotes the expectation;
(4e1) fixedly generating a network G, and optimizing the discrimination network D through a loss formula of D;
When the discrimination network D is optimized, the sum of the means of the two probabilities needs to be maximized; accordingly, following the usual deep-learning practice, the loss function of the discrimination network is:

$$D\_loss = -\Big(\mathbb{E}_{x \sim p_r(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_n(z)}\big[\log\big(1 - D(G(z))\big)\big]\Big)$$
and (4) substituting the two probabilities obtained in the step (4D) into a loss function D _ loss of the discrimination network, and optimizing the weight between different layers of nodes of the discrimination network D by continuously minimizing the loss function.
(4e2) The fixed discrimination network D optimizes the generated network G through a loss formula of G;
When the generation network G is optimized, the term corresponding to the generated samples needs to be minimized; accordingly, the loss function of the generation network is:

$$G\_loss = \mathbb{E}_{z \sim p_n(z)}\big[\log\big(1 - D(G(z))\big)\big]$$
and (4) substituting the two probabilities obtained in the step (4d) into a loss function G _ loss of the generated network, and optimizing the weight between different layers of nodes of the generated network G by continuously minimizing the loss function.
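Assuming the discriminator outputs are read as probabilities in (0, 1), the two loss functions used in steps (4e1) and (4e2) can be sketched numerically as follows (a small eps avoids log(0); these are the standard GAN losses consistent with the objective above):

```python
import numpy as np

def d_loss(d_real, d_fake, eps=1e-8):
    """Discriminator loss: the negative of mean log D(x) + mean log(1 - D(G(z))),
    so minimizing it maximizes the sum of the two probability means."""
    return -(np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps)))

def g_loss(d_fake, eps=1e-8):
    """Generator loss: mean log(1 - D(G(z))); minimizing it pushes D(G(z)) up."""
    return np.mean(np.log(1.0 - d_fake + eps))

# a discriminator that separates real from fake well gives a smaller d_loss
good = d_loss(np.array([0.9, 0.8]), np.array([0.1, 0.2]))
bad = d_loss(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```

In training, D's weights are updated to decrease d_loss with G fixed, then G's weights are updated to decrease g_loss with D fixed, alternating as steps (4e1) and (4e2) describe.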
(4f) Obtaining a generated sample set after multiple iterations of training: performing steps (4a)-(4e) once completes one round of training of the generative adversarial network. After each round, judge whether the designed number of iterations has been reached; if not, repeat steps (4a)-(4e) and continue training. If it has been reached, use the generation network G to generate m1 groups of p-dimensional initial variables, evaluate the quality of these m1 groups of samples, compute the values of the q-dimensional optimization targets, and add m1 to the evaluation count e; then combine the m1 groups of p-dimensional initial variables with the q-dimensional optimization targets to obtain m1 generated samples of dimension p + q;
(5) judging the evaluation times: judging the evaluation times, if the evaluation times E reach the maximum evaluation times E, taking the generated sample set obtained in the step (4f) as a final result set, then executing the step (9) to further verify the optimization effect, otherwise, executing the step (6);
(6) Obtaining a crossover result set: merge the m1 generated samples of dimension p + q with the training samples x, select all p + q-dimensional Pareto solutions from the merged sample set, remove the q optimization-target values of each Pareto solution to obtain Pareto solutions of p-dimensional initial variables, and perform simulated binary crossover on these to obtain a crossover result set of m2 groups of p-dimensional initial variables;
(7) Obtaining a mutation result set: perform polynomial mutation on the crossover result set of m2 groups of p-dimensional initial variables to obtain a mutation result set of m3 groups of p-dimensional initial variables;
(8) Evaluating to obtain new initial samples: using the running method of the system where the multiple targets are located, evaluate the quality of the m3 groups of p-dimensional initial variables; each group of initial variables yields the values of the corresponding q optimization targets, and each group's p initial variables combined with its q optimization-target values forms a new p + q-dimensional initial sample. Each time a group of initial variables is evaluated and combined with its optimization targets, the evaluation count e increases by 1; after traversing all m3 groups of p-dimensional initial variables, the final evaluation count e increases by m3, yielding m3 new initial samples of dimension p + q. Steps (3) to (5) are then performed again: the training set is reselected and a new round of training and judgment is carried out;
(9) and (3) verifying the optimization effect:
(9a) optimizing the p optimized variables selected in the step (1) by using other existing optimization algorithms to obtain a corresponding comparison result set;
(9b) the optimization effect of the invention is verified by comparing the final result set of the invention with the comparison result sets of other optimization algorithms.
The method provided by the invention optimizes by means of adversarial training between two networks, breaking with the conventional approach to solving multi-objective optimization problems: one network simulates and generates characteristic variables while the other network judges their quality, and the two processes alternate iteratively to optimize. The results are good and, since both networks use three-layer fully-connected structures, training is easy.
The present invention and its technical effects will be described below with reference to specific application examples.
Example 6
The multi-objective optimization method based on the generative countermeasure network is the same as the embodiments 1 to 5,
application example: the invention is used for optimizing the kafka system, and two optimization targets of the system, namely the optimization results of maximum throughput and minimum delay, are obtained.
The Kafka system is a distributed, publish/subscribe-based messaging system written in Scala; its horizontal scalability and high throughput have led many companies of different types to adopt it for multiple kinds of data pipelines and messaging systems.
Step 1, obtaining an initial sample through random sampling;
(1a) Selecting optimization targets: the optimization targets chosen in this example are two performance indexes of the distributed system Kafka, throughput and latency. According to the official documentation, 11 system configuration variables x1, x2, ..., x11 are determined, so the number of system variables is p = 11, as shown in Table 1:
TABLE 1 System variables of Kafka

| Numbering | Variable name | Description | Value (default) |
| --- | --- | --- | --- |
| 1 | num.network.threads | Number of threads to process network requests | Integer (3) |
| 2 | num.io.threads | Number of threads to process io | Integer (8) |
| 3 | queued.max.requests | Maximum number of requests | Integer (500) |
| 4 | num.replica.fetchers | Number of threads for synchronizing replicas | Integer (1) |
| 5 | socket.receive.buffer.bytes | Socket data receive buffer size in bytes | Integer (102400) |
| 6 | socket.send.buffer.bytes | Socket data send buffer size in bytes | Integer (102400) |
| 7 | socket.request.max.bytes | Maximum number of bytes in a socket request | Integer (104857600) |
| 8 | buffer.memory | Number of memory bytes for caching message records | Integer (33554432) |
| 9 | batch.size | Number of bytes processed in batches | Integer (16384) |
| 10 | linger.ms | Number of milliseconds to delay before sending a record | Integer (0) |
| 11 | compression.type | Type of compression algorithm | Enumeration (none) |
(1b) Setting the maximum number of evaluations: the maximum evaluation number E in this example is set to 300. One evaluation means changing the 11 system configuration variables x1, x2, ..., x11 of Kafka and then running the system once to obtain the throughput and latency. The evaluation count e is initialized to 0;
(1c) determining the initial sample number: determining the number of random samples to be 100 according to the maximum evaluation time E of 300;
(1d) Random sampling to obtain initial samples: the 11 independent variables x1, x2, ..., x11 are randomly sampled within their value ranges to obtain 100 groups of 11-dimensional random initial variables. For each group, the 11 system configuration variables of Kafka are changed accordingly and the system is run once to obtain the throughput and latency; each run adds 1 to the evaluation count e, and x1, x2, ..., x11 are combined with the throughput and latency into a 13-dimensional initial sample. The final evaluation count e increases by 100, yielding 100 groups of 13-dimensional initial samples.
The derivation of the initial samples is closely tied to the system variables and optimization targets of Kafka: in this step, not all of the more than one hundred system variables are sampled randomly; instead, the 11 system variables related to throughput and latency are determined from actual testing, which significantly saves time.
Step 2, constructing a generative adversarial network (MOGAN) for the multi-objective optimization problem: the network comprises a generation network G and a discrimination network D, both adopting three-layer fully-connected neural networks; G and D are trained against each other, each driving the continuous improvement of the other, to construct the generative adversarial network (MOGAN) of the multi-objective optimization problem.
The multi-objective optimization of system software performance belongs to multivariate data processing, so the mutual influence among features must be considered during optimization. In this embodiment, a deep-learning-based generative adversarial network (GAN) model is designed that links Kafka's performance to its features, so that, based on the selected training samples, the generative model G can learn the potential characteristics of configurations with good Kafka performance, while another discrimination network D judges the results, computes errors, and continuously optimizes. Rather than the traditional approach of searching for weight relations between different features, the model uses the fitting ability of the network to explore the relation between performance and features, continuously optimizes it, and directly produces optimized configuration parameters. The results show that the invention can obtain feature configurations with better performance by exploring the relations among different features in the configuration space.
As shown in fig. 3, the generative countermeasure network of the present invention comprises: distinguishing a network model D and generating a network model G, wherein the two networks both use a classic three-layer fully-connected network structure, and the method comprises the following steps:
the generated network model G of the present invention, as shown in fig. 3(b), is a three-layer fully-connected network including an input layer, a hidden layer and an output layer, where the input layer includes 5 nodes, and each node is a random number in the range of [ -1,1 ]; the hidden layer has 128 nodes, and each node has a weight relation with the input layer, and the initialization weight is a random number in the range of [ -1,1 ]; the output layer contains n nodes, and each node contains an activation function relu, where the value of n is the number of variables of a specific function, in this example, the number of variables n of kafka is 11.
The discriminant network model D of the present invention, as shown in fig. 3(a), is a three-layer fully-connected network including an input layer, a hidden layer, and an output layer, where the input layer includes n nodes, i.e., 11 nodes; the hidden layer has 128 nodes, each node has a weight relation with the input layer, the initialization weight is also a random number in the range of [ -1,1], and each node contains an activation function sigmoid; the output layer contains 1 node representing the probability of input sample authenticity, and each node contains an activation function tanh.
Step 3, selecting a training set from the initial samples;
Since better-performing configurations are likely to lie near the features of configurations that already perform well, the best samples should be selected for training. This example selects the Pareto solutions with respect to throughput and delay from the samples according to actual needs, and takes 32 of them (or all, if fewer than 32) as the training set for iterative training.
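The Pareto selection described here can be sketched as a plain non-dominated filter. A sketch only: the patent does not give this code, and it assumes throughput is to be maximized and latency (delay) minimized; the sample labels are hypothetical.

```python
def pareto_front(samples):
    """Return the samples not dominated by any other sample.

    Each sample is (label, throughput, latency). Sample a dominates b if
    a has >= throughput and <= latency, and is strictly better in one.
    """
    def dominates(a, b):
        return (a[1] >= b[1] and a[2] <= b[2]
                and (a[1] > b[1] or a[2] < b[2]))
    return [s for s in samples
            if not any(dominates(t, s) for t in samples if t is not s)]

samples = [
    ("A", 900,  12.0),   # dominated by B (lower throughput, higher delay)
    ("B", 950,   8.0),   # non-dominated
    ("C", 990,   9.5),   # non-dominated
    ("D", 980,  11.0),   # dominated by C
]
front = pareto_front(samples)
```

The training set of step 3 would then be up to 32 such non-dominated samples, with the two objective values stripped off.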
Step 4, training the constructed generative adversarial network for multi-objective optimization (MOGAN);
referring to fig. 2, the specific implementation of this step is as follows:
(4a) Randomly select half of the samples in the training set as training samples, then apply data-standardization preprocessing to obtain the preprocessed training samples x.
In each iteration, half of the samples in the training set selected in step 3 are chosen at random and randomly permuted to serve as the training samples x for that iteration — 16 samples in this example — and input into the generative adversarial network, which ensures both the diversity and the reliability of the training samples. Half is chosen as a compromise: with too few samples the latent characteristics are hard to learn; with too many, sample diversity cannot be guaranteed.
Traverse the selected system variables and first check whether each is an enumerated variable. If not, it is input into the generative adversarial network directly; if it is, it must be one-hot encoded: an N-bit state register encodes N states, each state has its own independent register bit, and only one bit is active at any time, so the one-hot code represents the categorical variable as a binary vector. One-hot encoding maps the values of an enumerated variable into Euclidean space — each value corresponds to a point — and discretizes the variable into a combination of several variables that the generative adversarial network can process directly, making the Euclidean distances between variables more meaningful.
The variable values are then standardized so that each variable follows a (0,1) normal distribution, producing structured data suitable for model training; this makes the generative adversarial network more stable and the Euclidean distances between variables more reasonable. The reason is that in most machine-learning and deep-learning algorithms the computation of distance or similarity between variables is crucial — here it is similarity in Euclidean space — and a generative adversarial network, as a deep-learning algorithm, requires standardized inputs to improve its stability and robustness.
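The two preprocessing steps can be sketched as follows. The enumerated Kafka option and its values are hypothetical examples, not parameters named in the patent.

```python
import numpy as np

def one_hot(value, categories):
    """Encode an enumerated variable as a binary vector with one bit set."""
    vec = [0] * len(categories)
    vec[categories.index(value)] = 1
    return vec

def standardize(X):
    """Column-wise zero-mean, unit-variance scaling of a sample matrix."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Hypothetical enumerated option with three possible values:
codecs = ["none", "gzip", "snappy"]
encoded = one_hot("gzip", codecs)          # -> a 3-bit binary vector

# Standardize two numeric variables across three samples:
X = [[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]]
S = standardize(X)                         # each column: mean 0, std 1
```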
(4b) Input the preprocessed training samples x into the generative adversarial network;
(4c) Use the generator network G to produce generated samples z with the same dimensionality as the training samples x;
The input layer of the generator network G always holds random numbers in the range [-1, 1]. The hidden-layer node values are computed from these through the input-to-hidden weights and passed to the output layer, whose node values are computed through the ReLU function, yielding 11-dimensional generated samples z in the same form as the training samples x.
(4d) Input the generated samples z and the training samples x from (4b) into the discriminator network D: the hidden-layer node values are computed through the input-to-hidden weights and the sigmoid function, then forwarded to the output layer, whose node value is computed through the tanh function, yielding D's probability judgments of the authenticity of the two groups of samples.
(4e) Optimize the generative adversarial network according to the objective formula, expressed as follows:

min_G max_D V(D, G) = E_{x~p_r(x)}[log D(x)] + E_{z~p_n(z)}[log(1 − D(z))]

where V represents the difference between the generated samples z and the training samples x, G is the generator network, D is the discriminator network, x~p_r(x) denotes the data distribution of the training samples x (r being the number of variables of a training sample), z~p_n(z) denotes the data distribution of the generated samples z (n being the number of variables of a generated sample), E_{x~p_r(x)}[·] denotes the mean over the data distribution of the training samples x, and E_{z~p_n(z)}[·] denotes the mean over the data distribution of the generated samples z.
(4e1) Optimizing the discriminator network D:
As the objective formula shows, optimizing the discriminator network D requires maximizing the sum of the means of the two probabilities; following standard deep-learning practice, the loss function of the discriminator network D is therefore:

D_loss = −( E_{x~p_r(x)}[log D(x)] + E_{z~p_n(z)}[log(1 − D(z))] )
Substitute the two probabilities obtained in (4d) into the discriminator loss D_loss and minimize it continually by adjusting the weights between the nodes of the different layers, i.e. w in fig. 3(a), thereby optimizing the discriminator network D.
(4e2) Optimizing the generator network G:
As the objective formula shows, optimizing the generator network G requires minimizing the mean probability assigned to the generated samples; the loss function of the generator network G is therefore:

G_loss = E_{z~p_n(z)}[log(1 − D(z))]
Substitute the probability obtained in (4d) into the generator loss G_loss and minimize it continually by adjusting the weights between the nodes of the different layers, i.e. w in fig. 3(b), thereby optimizing the generator network G.
Through these two alternating processes, the generator network's ability to produce realistic samples improves, and so does the discriminator network's ability to judge sample authenticity.
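The two losses described in (4e1) and (4e2) can be sketched numerically. A sketch under an assumption: D's output is treated here as a probability in (0, 1) (the text's tanh output would first need mapping into that range), and the score arrays are illustrative values, not patent data.

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator loss: the negative of the objective it maximizes,
    mean log D(x) + mean log(1 - D(z))."""
    return -(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

def g_loss(d_fake):
    """Generator loss: mean log(1 - D(z)); minimizing it pushes D(z) up."""
    return np.mean(np.log(1.0 - d_fake))

d_real = np.array([0.9, 0.8, 0.95])  # D's scores on real training samples
d_fake = np.array([0.1, 0.2, 0.05])  # D's scores on generated samples

# An accurate D (high d_real, low d_fake) yields a small D_loss;
# a G that fools D (high d_fake) yields a smaller G_loss.
```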
(4f) Repeat (4a) to (4e) until the set number of iterations is reached, then save a number of final generated samples z' from the trained generator network, according to actual needs, as the final optimization result — in this example, 100 groups of 11-dimensional initial variables. For each group, set Kafka's 11 system configuration variables x1, x2, …, x11 accordingly and run the system once to obtain the throughput and delay, then combine these with the initial variables to obtain 100 groups of 13-dimensional generated samples.
The number of iterations is set empirically; in this example it is 300,000.
Step 5, judging the evaluation times;
In the steps above, each change-and-run of the 11-dimensional system variables of the Kafka system counts as one evaluation. If the number of evaluations has reached the limit, the generated sample set from step 4 is taken as the final result set and step 9 is executed to further verify the optimization result; otherwise step 6 is executed and the optimized generative adversarial network is trained again.
Step 6, simulating binary system cross operation;
Combine the 100 groups of 13-dimensional generated samples with the training samples x from step 4, select all 13-dimensional Pareto solutions from the combined sample set, and remove the values of the 2 optimization targets to obtain Pareto solutions over the 11-dimensional initial variables. Perform simulated binary crossover on these to obtain m2 groups of 11-dimensional initial variables, the crossover result set. m2, the size of the crossover result set, is not fixed, since some sample pairs are crossed and others are not.
Crossover is a key operation in genetic algorithms that simulates reproduction in nature: two paired chromosomes exchange part of their genes in some way, forming two new individuals. In the invention it refers to crossing the data of two generated solutions to produce a new solution that inherits characteristics of both and may therefore lie very close to a Pareto solution. The purpose of this step is to enlarge the number of samples and search the whole sample space more thoroughly.
In this example, two groups of 11-dimensional system variables are crossed to yield two new groups that inherit some characteristics of the two originals, and may therefore closely approach a Pareto solution with respect to throughput and delay.
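The simulated binary crossover (SBX) of this step, whose formula appears in claim 3, can be sketched as follows. The distribution index and crossover rate values are illustrative defaults, not values specified by the patent.

```python
import random

def sbx_pair(x1, x2, eta=15.0, crossover_rate=0.9):
    """Simulated binary crossover on two parent vectors.

    eta is the distribution index; larger eta keeps children nearer
    their parents. Pairs failing the probability test are left uncrossed.
    """
    if random.random() > crossover_rate:      # some pairs are not crossed
        return list(x1), list(x2)
    c1, c2 = [], []
    for a, b in zip(x1, x2):
        u = random.random()
        if u <= 0.5:
            gamma = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            gamma = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1.append(0.5 * ((1 + gamma) * a + (1 - gamma) * b))
        c2.append(0.5 * ((1 - gamma) * a + (1 + gamma) * b))
    return c1, c2

random.seed(1)
# Cross two 11-dimensional parent configurations:
child1, child2 = sbx_pair([0.2] * 11, [0.8] * 11, crossover_rate=1.0)
```

A useful property of SBX is that each child pair preserves the per-gene parent sum, so children straddle their parents symmetrically.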
Step 7, polynomial mutation;
Perform polynomial mutation on the m2 groups of the crossover result set from step 6 to obtain m3 groups of 11-dimensional initial variables, the mutation result set. Because of the mutation rate, some samples mutate and others do not during this operation, so m3 is not fixed.
Mutation is another key operation of genetic algorithms, simulating gene mutation in nature: offspring may, with small probability, make copying errors when replicating the parent's genes, and the mutation produces new chromosomes expressing new traits. In the invention, applying small random changes to generated solutions to produce new ones serves to avoid becoming trapped in a local optimum and makes the global optimum easier to find.
In this example, the data of a certain set of 11-dimensional system variables is randomly changed.
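The polynomial mutation of this step, whose operator appears in claim 4, can be sketched as follows. The distribution index and the 1/p default mutation rate are common choices assumed here, not values the patent specifies.

```python
import random

def polynomial_mutation(v, lower, upper, eta_m=20.0, mutation_rate=None):
    """Polynomial mutation: a small bounded perturbation per gene."""
    if mutation_rate is None:
        mutation_rate = 1.0 / len(v)          # common default: 1/p
    out = []
    for vk, lk, uk in zip(v, lower, upper):
        if random.random() < mutation_rate:   # most genes are unchanged
            u = random.random()
            d1 = (vk - lk) / (uk - lk)
            d2 = (uk - vk) / (uk - lk)
            if u <= 0.5:
                delta = (2 * u + (1 - 2 * u) * (1 - d1) ** (eta_m + 1)) \
                        ** (1 / (eta_m + 1)) - 1
            else:
                delta = 1 - (2 * (1 - u) + 2 * (u - 0.5) * (1 - d2)
                             ** (eta_m + 1)) ** (1 / (eta_m + 1))
            vk = vk + delta * (uk - lk)
        out.append(min(max(vk, lk), uk))      # clamp to the value range
    return out

random.seed(2)
# Mutate one 11-dimensional configuration bounded to [0, 1] per gene:
mutant = polynomial_mutation([0.5] * 11, [0.0] * 11, [1.0] * 11)
```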
Step 8, evaluating to obtain a new initial sample;
Run the system on the m3 groups of the mutation result set of 11-dimensional initial variables from step 7 to obtain their throughput and delay, adding 1 to the evaluation count e per run, and combine x1, x2, …, x11 with the throughput and delay into a 13-dimensional initial sample. After traversing all m3 groups, the evaluation count e has increased by m3 and m3 new 13-dimensional initial samples are obtained, with which steps 2 to 5 train the optimized generative adversarial network again.
Step 9, verifying the optimization effect;
The test was run on Kafka again using the final result set generated by the generative adversarial network, yielding the HV values in Table 2 below. HV, the hypervolume indicator, measures the volume of the region in objective space enclosed by the non-dominated solution set obtained by a multi-objective optimization algorithm and a reference point; the larger the HV value, the better the optimization result.
Table 2: HV values for the invention and other optimization algorithms
Experimental data \ optimization method | The invention | NSGAII | IBEA | MOEA/D | MOPSO | MOEA/D-EGO | K-RVEA | PAREGO | CSEA
---|---|---|---|---|---|---|---|---|---
First run | 0.9742 | 0.9547 | 0.9559 | 0.9578 | 0.9566 | 0.9657 | 0.9584 | 0.9665 | 0.9676
Second run | 0.9705 | 0.9658 | 0.9659 | 0.9653 | 0.9642 | 0.9682 | 0.9587 | 0.9677 | 0.9665
As Table 2 shows, the MOGAN algorithm of the invention achieves a larger HV value than the other algorithms, indicating that the invention outperforms the existing optimization algorithms and verifying the validity and reasonableness of the generative-adversarial-network-based multi-objective optimization method on the Kafka performance-optimization problem.
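The HV indicator itself can be sketched for the two-objective case. A sketch under assumptions: both objectives are treated as minimized and the front and reference point below are hypothetical normalized values — the patent does not state its reference point or normalization.

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimization front: the total area
    dominated by the front and bounded above by the reference point."""
    pts = sorted(front)                  # sort by the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:                   # sum disjoint rectangular strips
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Hypothetical non-dominated points in [0,1]^2 with reference point (1, 1):
front = [(0.2, 0.7), (0.5, 0.4), (0.8, 0.1)]
hv = hypervolume_2d(front, ref=(1.0, 1.0))
```

A front that pushes closer to the origin dominates more area, so its HV grows — which is why the larger values in Table 2 indicate better results.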
Example 7
The multi-objective optimization method based on the generative adversarial network is similar to embodiments 1–6; this embodiment is an application example of solving the multi-objective optimization problem ZDT1.
Application example: using the present invention to find the minimum function values f1 and f2 of ZDT1
ZDT is a suite of multi-objective optimization test functions comprising six problems, ZDT1, ZDT2, …, ZDT6. This example takes ZDT1 to describe the whole optimization process and its results. ZDT1 is defined as:

f1(x) = x1
g(x) = 1 + 9·(Σ_{i=2}^{n} x_i)/(n − 1)
f2(x) = g(x)·(1 − √(f1(x)/g(x)))
0 ≤ xi ≤ 1, i = 1, …, n
In this example there are 30 system variables between 0 and 1, denoted x1, x2, …, x30; these are the independent variables that influence the optimization targets, i.e. p = 30. The optimization targets are the two function values f1 and f2 of ZDT1, the dependent variables, i.e. q = 2. The optimization goal is to minimize f1 and f2; they are not changed directly but only through changes to the independent variables x. As before, the maximum evaluation count E is set to 300, and the initial samples are 100 groups of 32-dimensional samples.
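The ZDT1 evaluation used to score each 30-dimensional generated sample can be written directly from the formula above:

```python
import math

def zdt1(x):
    """ZDT1 test function: returns (f1, f2) for x in [0,1]^n, n >= 2."""
    n = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (n - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2

# On the Pareto front of ZDT1, x2..xn = 0, so g = 1 and f2 = 1 - sqrt(f1):
f1, f2 = zdt1([0.25] + [0.0] * 29)
```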
Following the same procedure as in example 6, after continual training and optimization of the generative adversarial network, a fixed number of 30-dimensional generated samples is obtained (containing only the 30 independent variables, not f1 and f2). Their f1 and f2 values are computed from the ZDT1 formula, the function curves of the optimized f1 and f2 are plotted, and the function values of the invention are compared with those of the other algorithms. The result is shown in fig. 4, which plots the optimization results of the invention and the other optimization algorithms on ZDT1, with f1 on the abscissa and f2 on the ordinate. Six optimization algorithms are shown — MOGAN together with K-RVEA, MOEA/D, NSGA-II, MOPSO, and IBEA — and the curve of the invention is the solid line without symbols.
Since the objective in this example is to make f1 and f2 as small as possible, and fig. 4 shows that under the ZDT1 function the MOGAN curve of the invention lies well below the other algorithms for f1 in the range 0–0.5 and f2 in the range 2.3–4.3, the MOGAN method of the invention outperforms the existing optimization algorithms, again verifying the effectiveness and rationality of the generative-adversarial-network-based multi-objective optimization method of the invention.
Example 8
The multi-objective optimization method based on the generative adversarial network is similar to embodiments 1–7; this embodiment is an application example of solving the multi-objective optimization problem ZDT6.
Application example: using the present invention to find the minimum function values f1 and f2 of ZDT6
ZDT is a common suite of multi-objective optimization test functions comprising six problems, ZDT1, ZDT2, …, ZDT6. This example takes ZDT6 to describe the whole optimization process and its results. ZDT6 is defined as:

f1(x) = 1 − exp(−4x1)·sin^6(6πx1)
g(x) = 1 + 9·[(Σ_{i=2}^{n} x_i)/(n − 1)]^{0.25}
f2(x) = g(x)·(1 − (f1(x)/g(x))^2)
0 ≤ xi ≤ 1, i = 1, …, n
In this example there are 30 system variables between 0 and 1, denoted x1, x2, …, x30; these are the independent variables that influence the optimization targets, i.e. p = 30. The optimization targets are the two function values f1 and f2 of ZDT6, the dependent variables, i.e. q = 2. The optimization goal is to minimize f1 and f2; they are not changed directly but only through changes to the independent variables x. As before, the maximum evaluation count E is set to 300, and the initial samples are 100 groups of 32-dimensional samples.
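The ZDT6 evaluation follows the formula above in the same way:

```python
import math

def zdt6(x):
    """ZDT6 test function: returns (f1, f2) for x in [0,1]^n, n >= 2."""
    n = len(x)
    f1 = 1.0 - math.exp(-4.0 * x[0]) * math.sin(6.0 * math.pi * x[0]) ** 6
    g = 1.0 + 9.0 * (sum(x[1:]) / (n - 1)) ** 0.25
    f2 = g * (1.0 - (f1 / g) ** 2)
    return f1, f2

# With x2..xn = 0 we have g = 1, so f2 = 1 - f1**2 on the Pareto front:
f1, f2 = zdt6([0.3] + [0.0] * 29)
```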
Following the same procedure as in example 6, after continual training and optimization of the generative adversarial network, a fixed number of 30-dimensional generated samples is obtained (containing only the 30 independent variables, not f1 and f2). Their f1 and f2 values are computed from the ZDT6 formula, the function curves of the optimized f1 and f2 are plotted, and the function values of the invention are compared with those of the other algorithms. The result is shown in fig. 5, which plots the optimization results of the invention and the other optimization algorithms on ZDT6, with f1 on the abscissa and f2 on the ordinate. Six optimization algorithms are shown — MOGAN together with K-RVEA, MOEA/D, NSGA-II, MOPSO, and IBEA — and the curve of the invention is the solid line without symbols.
Since the objective in this example is to make f1 and f2 as small as possible, and fig. 5 shows that under the ZDT6 function the MOGAN curve of the invention lies below the other algorithms, the MOGAN method of the invention outperforms the existing optimization algorithms, again verifying the effectiveness and rationality of the generative-adversarial-network-based multi-objective optimization method of the invention.
Example 9
The multi-objective optimization method based on the generative adversarial network is the same as in embodiments 1 to 8.
application example: the invention is used for optimizing the maximum detection rate and the maximum detection precision of the image detection method
The implementation steps are the same as in embodiment 6. In image detection, increasing the detection rate and increasing the detection accuracy are two conflicting targets, i.e. the number of optimization targets q = 2: when the detection rate rises, the detection accuracy falls, and vice versa. After optimization by the invention, a set of optimal solutions for the two targets of recognition rate and recognition accuracy is finally obtained, from which a combination with both a high detection rate and high detection accuracy can be selected for specific practical applications.
In short, the invention discloses a multi-objective optimization method based on a generative adversarial network (MOGAN), which mainly addresses the high time cost, excessive training difficulty, and easy training collapse of the prior art. The implementation scheme is: (1) obtain initial samples; (2) construct a multi-objective optimization generative adversarial network; (3) select a training set from the initial samples; (4) train the constructed network; (5) check the evaluation count; (6) obtain a crossover result set; (7) obtain a mutation result set; (8) evaluate to obtain new initial samples; (9) verify the optimization effect. The invention reduces time cost, improves network robustness and stability, and has a marked optimization effect; it can be used for resource allocation across multiple targets, production scheduling of multiple products, optimization of multiple performance metrics of software systems, and the like.
Claims (4)
1. A multi-objective optimization method based on a generative adversarial network, characterized by comprising the following steps:
(1) obtaining an initial sample:
(1a) selecting optimization targets: for a multi-objective optimization problem of a given system, select and determine the targets to be optimized; suppose there are q optimization targets and that p of all system variables influence those q targets, each system variable having a data set and a value range;
(1b) setting the maximum number of evaluations: each group of p system variables is defined as a group of independent variables, and the q optimization-target values obtained from that group are defined as dependent variables; one evaluation is the process of obtaining the corresponding dependent variables from a group of independent variables; the number of evaluations during optimization is denoted e, the maximum number of evaluations is set to E, and e is initialized to zero;
(1c) determining the number of initial samples: determine the number of groups of initial variables to be m according to the maximum number of evaluations E, ensuring that the optimization process can finish within E evaluations;
(1d) random sampling to obtain initial samples: randomly sample each system variable related to the optimization targets to obtain m groups of p-dimensional initial variables; evaluate the quality of the m groups by running the system in which the multiple targets reside, each group yielding values for the q optimization targets; each randomly sampled group of initial variables together with its optimization-target values forms a (p+q)-dimensional initial sample; each sampling, evaluation, and combination with the optimization targets adds 1 to the evaluation count e, so after traversing all m groups of initial variables the evaluation count e has increased by m, and m initial samples of dimension p+q are obtained;
(2) constructing a multi-objective optimization generative adversarial network, comprising a generator network G and a discriminator network D, both three-layer fully-connected neural networks; G and D are trained against each other, driving each other's continual optimization, to construct the multi-objective optimization generative adversarial network;
(3) selecting a training set from the initial samples: according to the definition of Pareto solutions, select all (p+q)-dimensional Pareto solutions from the initial samples, then remove the q optimization-target values from each, and take the resulting p-dimensional Pareto solutions as the training set;
(4) training the constructed generative adversarial network for multi-objective optimization:
(4a) determining training samples: randomly select half of the samples in the training set as training samples, then apply data-standardization preprocessing to obtain preprocessed training samples x;
(4b) inputting training samples into the MOGAN: input the preprocessed training samples x into the discriminator network D of the generative adversarial network;
(4c) generating samples: in the multi-objective optimization generative adversarial network, generate p-dimensional samples z with the generator network G;
(4d) obtaining the discrimination result: input the generated samples z and the training samples x into the discriminator network D and output the discrimination result;
(4e) training the generator network G and the discriminator network D: according to the discrimination result, fix the generator network G and train the discriminator network D, optimizing continually until D can accurately judge whether a sample comes from the training samples or was produced by G; then, according to the discrimination result, fix the discriminator network D and train the generator network G, optimizing continually until D can no longer judge whether a sample comes from the training samples or was produced by G;
(4f) obtaining a generated sample set after multiple training iterations: executing steps (4a)–(4e) once completes one round of training of the generative adversarial network; check whether the designed number of iterations has been reached, and if not, repeat (4a)–(4e) and continue training; if it has, use the generator network G to generate m1 groups of p-dimensional initial variables, evaluate the quality of these m1 groups by computing the values of the q-dimensional optimization targets, add m1 to the evaluation count e, and then combine the m1 groups of p-dimensional initial variables with the q-dimensional optimization targets to obtain m1 generated samples of dimension p+q;
(5) judging the number of evaluations: if the evaluation count e has reached the maximum E, take the generated sample set obtained in step (4f) as the final result set and execute step (9) to further verify the optimization effect; otherwise execute step (6);
(6) obtaining a crossover result set: combine the m1 (p+q)-dimensional generated samples with the training samples x, select all (p+q)-dimensional Pareto solutions from the combined sample set, remove their q optimization-target values to obtain Pareto solutions of p-dimensional initial variables, and perform simulated binary crossover on these to obtain a crossover result set of m2 groups of p-dimensional initial variables;
(7) obtaining a mutation result set: perform polynomial mutation on the crossover result set of m2 groups of p-dimensional initial variables to obtain a mutation result set of m3 groups of p-dimensional initial variables;
(8) evaluating to obtain new initial samples: evaluate the quality of the m3 groups of p-dimensional initial variables by running the system in which the multiple targets reside, each group yielding values for the q optimization targets; each group of p initial variables combined with its q optimization-target values forms a new (p+q)-dimensional initial sample; each evaluation and combination adds 1 to the evaluation count e, so after traversing all m3 groups the evaluation count e has increased by m3 and m3 new initial samples of dimension p+q are obtained; then execute steps (3) to (5) again;
(9) and (3) verifying the optimization effect:
(9a) optimize the p optimization variables selected in step (1) with other existing optimization algorithms to obtain the corresponding comparison result sets;
(9b) verify the optimization effect of the invention by comparing its final result set with the comparison result sets of the other optimization algorithms.
2. The multi-objective optimization method based on a generative adversarial network according to claim 1, wherein the generator network G of the multi-objective optimization generative adversarial network of step (2) is a three-layer fully-connected network comprising an input layer, a hidden layer, and an output layer; the input layer comprises 5 nodes, each a random number in the range [-1, 1]; the hidden layer has 128 nodes, each connected to the input layer with weights initialized to random numbers in [-1, 1]; the output layer comprises p nodes, where p is the number of system variables, each with a ReLU activation;
the discriminator network D is a three-layer fully-connected network comprising an input layer, a hidden layer, and an output layer; the input layer comprises p nodes, where p is the number of system variables; the hidden layer has 128 nodes, each connected to the input layer with weights initialized to random numbers in [-1, 1] and each with a sigmoid activation; the output layer comprises 1 node with a tanh activation, representing the probability that the input sample of network D is authentic.
3. The multi-objective optimization method based on a generative adversarial network according to claim 1, wherein performing the simulated binary crossover on the generated sample set in step (6) to obtain the crossover result set of m2 groups of p-dimensional initial variables specifically comprises: taking adjacent pairs of samples in the generated sample set in order, making a probability judgment for each pair, and, if the probability is smaller than the set crossover rate, crossing the pair to generate two new solutions according to the simulated binary crossover formula:

x'_1j(t) = 0.5 × [(1 + γ_j)·x_1j(t) + (1 − γ_j)·x_2j(t)]
x'_2j(t) = 0.5 × [(1 − γ_j)·x_1j(t) + (1 + γ_j)·x_2j(t)]

where x'_1j(t) and x'_2j(t) are the two new samples after crossing the jth pair of adjacent samples, x_1j(t) and x_2j(t) are the jth pair of adjacent samples before crossing, t denotes the tth generation of the simulated-binary-crossover genetic algorithm, j denotes the jth crossover operation, and γ_j is the jth crossover coefficient:

γ_j = (2u_j)^{1/(η+1)},              if u_j ≤ 0.5
γ_j = (1 / (2(1 − u_j)))^{1/(η+1)},  if u_j > 0.5

where u_j is a random number with u_j ~ U(0, 1), and η > 0 is the distribution index.
4. The multi-objective optimization method based on a generative adversarial network according to claim 1, wherein performing the polynomial mutation on the generated sample set in step (7) to obtain the mutation result set of m3 groups of p-dimensional initial variables specifically comprises: applying the mutation operation of the genetic algorithm to each sample in the generated sample set, making a probability judgment for each sample, and, if the probability is smaller than the set mutation rate, making a minimal random change to the sample; the specific change is governed by the mutation operator, which produces a mutated sample of p-dimensional initial variables. The mutation operator realizing the polynomial mutation has the form:

v'_k = v_k + δ·(u_k − l_k), where

δ = [2u + (1 − 2u)(1 − δ_1)^{η_m+1}]^{1/(η_m+1)} − 1,          if u ≤ 0.5
δ = 1 − [2(1 − u) + 2(u − 0.5)(1 − δ_2)^{η_m+1}]^{1/(η_m+1)},  if u > 0.5

in which v_k denotes the parent individual, v'_k the child individual, u_k the upper bound and l_k the lower bound of the value range of the p system variables, δ_1 = (v_k − l_k)/(u_k − l_k), δ_2 = (u_k − v_k)/(u_k − l_k), k denotes the kth generation of the polynomial-mutation genetic algorithm, u is a random number in the interval [0, 1], and η_m is the distribution index.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910688044.0A CN110533221A (en) | 2019-07-29 | 2019-07-29 | Multipurpose Optimal Method based on production confrontation network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910688044.0A CN110533221A (en) | 2019-07-29 | 2019-07-29 | Multipurpose Optimal Method based on production confrontation network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110533221A (en) | 2019-12-03 |
Family
ID=68661960
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910688044.0A Pending CN110533221A (en) | 2019-07-29 | 2019-07-29 | Multipurpose Optimal Method based on production confrontation network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110533221A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021128805A1 (en) * | 2019-12-24 | 2021-07-01 | 浙江大学 | Wireless network resource allocation method employing generative adversarial reinforcement learning |
CN111481935A (en) * | 2020-04-08 | 2020-08-04 | 网易(杭州)网络有限公司 | Configuration method, device, equipment and medium for AI models of games with different styles |
CN111481935B (en) * | 2020-04-08 | 2023-04-18 | 网易(杭州)网络有限公司 | Configuration method, device, equipment and medium for AI models of games with different styles |
CN112488486A (en) * | 2020-11-25 | 2021-03-12 | 吉林大学 | Multi-criterion decision method based on zero sum game |
CN112488486B (en) * | 2020-11-25 | 2022-04-15 | 吉林大学 | Multi-criterion decision method based on zero sum game |
CN112508093A (en) * | 2020-12-03 | 2021-03-16 | 北京百度网讯科技有限公司 | Self-training method and device, electronic equipment and readable storage medium |
CN112508093B (en) * | 2020-12-03 | 2022-01-28 | 北京百度网讯科技有限公司 | Self-training method and device, electronic equipment and readable storage medium |
CN113434459A (en) * | 2021-06-30 | 2021-09-24 | 电子科技大学 | Network-on-chip task mapping method based on generation of countermeasure network |
CN118024445A (en) * | 2024-04-11 | 2024-05-14 | 苏州顶材新材料有限公司 | Modification optimization method and system for blending type interpenetrating network thermoplastic elastomer |
CN118024445B (en) * | 2024-04-11 | 2024-06-21 | 苏州顶材新材料有限公司 | Modification optimization method and system for blending type interpenetrating network thermoplastic elastomer |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110533221A (en) | Multipurpose Optimal Method based on production confrontation network | |
Guendouz et al. | A discrete modified fireworks algorithm for community detection in complex networks | |
Zhang et al. | Hybrid fuzzy clustering method based on FCM and enhanced logarithmical PSO (ELPSO) | |
CN108805193A (en) | A kind of power loss data filling method based on mixed strategy | |
CN106934722A (en) | Multi-objective community detection method based on k node updates Yu similarity matrix | |
CN112270398B (en) | Cluster behavior learning method based on gene programming | |
Wang et al. | Ppisb: a novel network-based algorithm of predicting protein-protein interactions with mixed membership stochastic blockmodel | |
Zarei et al. | Detecting community structure in complex networks using genetic algorithm based on object migrating automata | |
Orouskhani et al. | Multi-objective evolutionary clustering with complex networks | |
CN114143210A (en) | Deep learning-based command control network key node identification method | |
Yu et al. | Unsupervised euclidean distance attack on network embedding | |
Zhang et al. | Hierarchical community detection based on partial matrix convergence using random walks | |
CN115481727A (en) | Intention recognition neural network generation and optimization method based on evolutionary computation | |
Song et al. | Importance weighted expectation-maximization for protein sequence design | |
CN115423008A (en) | Method, system and medium for cleaning operation data of power grid equipment | |
Guang et al. | Benchmark datasets for stochastic Petri net learning | |
CN118069868A (en) | Error correction method for knowledge graph fused with LLM large model | |
CN113704570B (en) | Large-scale complex network community detection method based on self-supervision learning type evolution | |
Liu et al. | A weight-incorporated similarity-based clustering ensemble method | |
Van Someren et al. | Searching for limited connectivity in genetic network models | |
CN117093885A (en) | Federal learning multi-objective optimization method integrating hierarchical clustering and particle swarm | |
Dave et al. | Stability Analysis of Various Symbolic Rule Extraction Methods from Recurrent Neural Network | |
Lu et al. | Quantum Wolf Pack Evolutionary Algorithm of Weight Decision‐Making Based on Fuzzy Control | |
Chen et al. | Clustering without prior knowledge based on gene expression programming | |
Hu et al. | Apenas: An asynchronous parallel evolution based multi-objective neural architecture search |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20191203 |