CN104460594A - Dispatching optimization method based on two-layer nest structure - Google Patents

Dispatching optimization method based on two-layer nest structure

Info

Publication number
CN104460594A
Authority
CN
China
Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number
CN201410601666.2A
Other languages
Chinese (zh)
Inventor
Jiang Yongheng (江永亨)
Fu Xiaoxin (付骁鑫)
Wang Jingchun (王京春)
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201410601666.2A priority Critical patent/CN104460594A/en
Publication of CN104460594A publication Critical patent/CN104460594A/en


Classifications

    • G — Physics
    • G05B — Control or regulating systems in general; functional elements of such systems; monitoring or testing arrangements for such systems or elements
    • G05B 19/418 — Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G05B 19/41865 — Total factory control characterised by job scheduling, process planning, material flow
    • Y02P 90/02 — Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to a dispatching optimization method based on a two-layer nested structure and belongs to the technical field of industrial automation. For the mixed-integer non-linear programming (MINLP) problems that arise in the production scheduling of most manufacturing enterprises, a nested optimization algorithm structure is adopted: a global search is performed by continuously updating multiple candidate solutions, without requiring gradient information of the solution space. A feasibility coarse-judgment model is constructed and added so that the feasibility of a solution is judged roughly and quickly before the solution is evaluated. The method thereby obtains a true satisfactory solution of the scheduling model with high probability, while keeping the amount of computation small, the optimization process simple, and the range of application wide.

Description

Scheduling optimization method based on two-layer nested structure
Technical Field
The invention relates to a scheduling optimization method based on a two-layer nested structure, which is suitable for solving the mixed-integer non-linear programming (MINLP) problems encountered in production scheduling, and belongs to the technical field of industrial production automation.
Background
As the production scale of the manufacturing industry keeps expanding, production processes grow increasingly complex and market competition increasingly fierce; production scheduling has become an important tool for improving enterprise management and obtaining greater economic benefits. The generalized production scheduling problem asks how, for a decomposable production flow and subject to the constraint conditions, to arrange the raw-material resources, the processing control variables, and the processing time and sequence occupied by each decomposed sub-flow, so as to maximize the production benefit (a comprehensive evaluation of production cost and product quality). The scheduling optimization problem resembles a general optimization problem but has new characteristics, such as large problem scale, complex description of the production process, and constraint conditions and objective functions that are difficult to handle. The mathematical model of the scheduling optimization problem is mainly given by mathematical programming: discrete variables represent discrete decision states such as sequencing and production-scheme selection, continuous variables represent continuous operating conditions, and algebraic equations or inequalities describe the objective function and the constraint conditions. The problem is thus abstracted into an MINLP model; this description is intuitive, easy to understand, and helpful for measuring the complexity of the model.
The two-layer optimization method is a special class of algorithms for the MINLP problem that exploits the type characteristics of the optimization variables by solving with a two-layer structure. Two-layer optimization algorithms can be divided into interval-approximation algorithms and nested optimization algorithms. The former iteratively solves a series of MILP master problems and NLP subproblems to obtain lower and upper bounds of the original MINLP optimization problem, until the gap between the two bounds is smaller than a set tolerance; hence the name interval-approximation algorithm. In the latter, for each fixed discrete-variable candidate solution the model simplifies to an NLP model, so the NLP optimization can be treated as a sub-optimization problem once the discrete-variable values are determined, and the optimal value of that subproblem serves as the evaluation of the discrete-variable solution and as the basis for optimizing the discrete variables. The original MINLP optimization problem is thus converted into a nested optimization problem with an outer-layer MIP model and an inner-layer NLP model, and the method is therefore called a nested optimization algorithm. How to improve solution efficiency is the main problem faced by nested optimization algorithms, because each discrete-variable candidate solution in the outer-layer combinatorial optimization corresponds to an inner-layer NLP optimization problem, and obtaining the exact optimal value of every such NLP model would consume a large amount of solving time. Most current nested optimization algorithms therefore adopt a random–exact two-layer structure: when the inner-layer NLP model is a convex optimization problem, an exact search algorithm has an obvious efficiency advantage over a random search algorithm.
Disclosure of Invention
The invention aims to provide a scheduling optimization method based on a two-layer nested structure which, for the MINLP problems arising in the production scheduling of most manufacturing enterprises, obtains a true satisfactory solution of the scheduling model with high probability, with a small amount of computation, a simple optimization process, and a wide range of application.
The scheduling optimization method of the nested structure provided by the invention comprises the following steps:
Step 1) Initialize parameters:
set the parameters of the outer-layer GA algorithm and the inner-layer PSO algorithm, including the population sizes N_s and N_f, the PSO velocity-update weighting coefficients w, φ_p and φ_g, the GA crossover probability p_c and mutation probability p_m, and the maximum numbers of iteration steps;
Step 2) probability optimization stage;
step 2.1 setting outer optimization iteration step number ksInitializing a discrete variable candidate solution s (1, k) when equal to 0s),…,s(ns,ks),…,s(Ns,ks);
Step 2.2 if the outer layer optimizes the iteration step number ksNot exceedFor s (n)s,ks)(ns=1,...,Ns) Corresponding to NsNLP optimization submodelIterative computation to by inner layer optimization algorithm (PSO algorithm)Step and record historical optimization data <math> <mrow> <mi>e</mi> <mrow> <mo>(</mo> <msub> <mi>k</mi> <mi>f</mi> </msub> <mo>)</mo> </mrow> <msub> <mo>|</mo> <mrow> <mi>s</mi> <mrow> <mo>(</mo> <msub> <mi>n</mi> <mi>s</mi> </msub> <mo>,</mo> <msub> <mi>k</mi> <mi>s</mi> </msub> <mo>)</mo> </mrow> </mrow> </msub> <mo>,</mo> <msup> <mi>p</mi> <mo>&prime;</mo> </msup> <mrow> <mo>(</mo> <msub> <mi>k</mi> <mi>f</mi> </msub> <mo>)</mo> </mrow> <msub> <mo>|</mo> <mrow> <mi>s</mi> <mrow> <mo>(</mo> <msub> <mi>n</mi> <mi>s</mi> </msub> <mo>,</mo> <msub> <mi>k</mi> <mi>s</mi> </msub> <mo>)</mo> </mrow> </mrow> </msub> <mrow> <mo>(</mo> <msub> <mi>k</mi> <mi>f</mi> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo>.</mo> <mo>.</mo> <mo>.</mo> <mo>,</mo> <msubsup> <mi>K</mi> <mi>f</mi> <mi>FM</mi> </msubsup> <mo>)</mo> </mrow> <mo>;</mo> </mrow> </math>
Step 2.3: for each s(n_s,k_s) (n_s = 1, …, N_s), use its historical minimum penalty value p'(k_f)|_{s(n_s,k_s)} and the feasibility coarse-judgment model to estimate p'(∞)|_{s(n_s,k_s)}, and then determine the feasibility of s(n_s,k_s);
Step 2.4: if s(n_s,k_s) is infeasible, keep the estimated p'(∞)|_{s(n_s,k_s)} as its coarse penalty value. If s(n_s,k_s) is feasible, continue iterating the submodel min_{f∈F} J(f, s̄)|_{s̄ = s(n_s,k_s)} with the inner-layer optimization algorithm (PSO) for the prescribed number of rough-evaluation steps, record the historical optimization data e(k_f)|_{s(n_s,k_s)}, estimate ê(∞)|_{s(n_s,k_s)} with the coarse evaluation model, and keep this estimate as its coarse evaluation value;
Step 2.5: according to the obtained coarse evaluation values and coarse penalty values, update the coarse elite set with the outer-layer optimization algorithm (GA) and the feasibility-rule method.
In this step, the feasibility coarse-judgment model of a discrete-variable candidate solution quickly yields the coarse penalty value and feasibility estimate corresponding to the discrete-variable combination s, from which it is decided whether to continue evaluating s; if s is estimated to be feasible, a rough evaluation is further performed to obtain its coarse evaluation value. Based on these evaluation results, the outer-layer random optimization algorithm solves the combinatorial optimization problem: the discrete-variable candidate solutions that are estimated to be feasible and best are obtained by the outer-layer random optimization algorithm and stored in the coarse elite set. The elite-keeping update operation of the outer-layer random optimization algorithm under the coarse-evaluation condition is executed by the feasibility-rule method;
Step 2.6: increment the outer-layer iteration counter: k_s = k_s + 1. If k_s is less than K_s^max, return to Step 2.2; once k_s exceeds K_s^max, proceed to Step 3);
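Steps 2.1–2.6 can be sketched as an outer screening loop. The following is a minimal illustration under stated assumptions, not the patented implementation: the toy objective `evaluate`, the stand-in `coarse_inner_eval` (a short random search in place of the inner PSO plus curve-fit prediction), and the crude pool refill in place of the GA update are all assumptions introduced for clarity.

```python
import random

random.seed(0)

# Toy inner model: J(f, s) = (f - s/2)**2 + s, with constraint f >= 1
# handled by a quadratic penalty. Discrete s in {0,...,7}; continuous f in [0, 5].
def evaluate(f, s):
    objective = (f - s / 2.0) ** 2 + s
    penalty = max(0.0, 1.0 - f) ** 2          # violated when f < 1
    return objective + penalty, penalty

def coarse_inner_eval(s, steps=30):
    """Rough stand-in for the inner PSO + convergence prediction: a short
    random search returning (coarse evaluation, historical minimum penalty)."""
    best_e, min_p = float("inf"), float("inf")
    for _ in range(steps):
        f = random.uniform(0.0, 5.0)
        e, p = evaluate(f, s)
        best_e = min(best_e, e)
        min_p = min(min_p, p)                 # p'(k_f): running minimum penalty
    return best_e, min_p

def probabilistic_stage(candidates, K_s_max=20, elite_size=3):
    """Outer loop of Steps 2.1-2.6: screen candidates by coarse penalty and
    coarse evaluation, keeping a small elite set instead of a single best."""
    pool = list(candidates)
    elite = pool[:elite_size]
    for _ in range(K_s_max):
        scored = []
        for s in pool:
            e_hat, p_hat = coarse_inner_eval(s)
            feasible = p_hat < 1e-3           # feasibility coarse judgment
            # feasibility rule: feasible solutions ranked by evaluation,
            # infeasible ones ranked after them by penalty
            key = (0, e_hat) if feasible else (1, p_hat)
            scored.append((key, s))
        scored.sort(key=lambda t: t[0])
        elite = [s for _, s in scored[:elite_size]]
        # crude stand-in for the GA update: keep elites, refill randomly
        pool = elite + [random.randrange(8) for _ in range(len(pool) - elite_size)]
    return elite

elite_set = probabilistic_stage([random.randrange(8) for _ in range(6)])
```

The structural points match the text: feasibility is judged from the minimum penalty before a candidate's objective estimate is trusted, and a small elite set, not a single best solution, survives each outer iteration.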
Step 3) Accurate evaluation stage:
solve the N_s NLP optimization submodels min_{f∈F} J(f, s̄)|_{s̄ = s(n_s,K_s)} corresponding to s(n_s,K_s) (n_s = 1, …, N_s) by iterating the inner-layer optimization algorithm (PSO) for the full prescribed number of steps, obtaining the accurate evaluation values e*|_{s(n_s,K_s)} and the corresponding optimal continuous-variable solutions f*|_{s(n_s,K_s)} (n_s = 1, …, N_s);
Step 4) Determine the final satisfactory solution:
compare the accurate evaluation values e*|_{s(n_s,K_s)} of the NLP optimization submodels and take the best group as the satisfactory solution of the scheduling model according to the feasibility-rule method:

    f_g = f*|_{s_g}    (9)

The final satisfactory solution is s_g = s(G, K_s^max), f_g = f*|_{s(G, K_s^max)}, where G = arg min_{n_s ∈ {1, …, N_s}} e*|_{s(n_s, K_s)}.
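The final selection rule, an argmin over the accurate evaluation values of the elite candidates, can be written directly. A minimal sketch; the candidate list and the illustrative evaluator passed in are assumptions, not the patent's scheduling model:

```python
def pick_satisfactory(candidates, accurate_eval):
    """candidates: list of discrete solutions s.
    accurate_eval(s) -> (e_star, f_star): accurate evaluation value and the
    corresponding optimal continuous solution for s.
    Returns the satisfactory solution (s_g, f_g, e_g) by argmin over e*|s."""
    results = [(accurate_eval(s), s) for s in candidates]
    (e_g, f_g), s_g = min(results, key=lambda r: r[0][0])
    return s_g, f_g, e_g

# Illustrative accurate evaluator: e*(s) = (s - 3)**2, f*(s) = s / 2.
s_g, f_g, e_g = pick_satisfactory([1, 2, 3, 4], lambda s: ((s - 3) ** 2, s / 2))
```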
3. The scheduling optimization method based on the two-layer nested structure as claimed in claim 1, wherein in step 2.2) a key step of the inner-layer optimization is to screen and evaluate solutions with the feasibility coarse-judgment model and the coarse evaluation model. In essence, this predicts the convergence results of the penalty values and evaluation values: the numerical information obtained from a finite number of iterations (the inner-layer optimization history) is used to estimate the exact convergence value that would be reached after infinitely many iterations. The concrete means is to fit the iterative descent of the penalty value (or evaluation value) with a negative-power-function curve and read off the final convergence result. The specific steps are as follows:
A.1 Acquisition of the inner-layer optimization history
The process of obtaining the inner-layer optimization history is the iterative calculation of the inner-layer optimization algorithm. The invention adopts the particle swarm optimization (PSO) algorithm to solve min_{f∈F} J(f, s̄)|_{s̄=s}; the cost of any individual f in the population is defined as J(f, s̄)|_{s̄=s}. The algorithm proceeds as follows:
where K_f is the maximum number of inner-layer optimization iteration steps used for collecting the historical data, and w, φ_p, φ_g are the velocity-update weighting coefficients of the PSO algorithm;
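The inner-layer PSO iteration of A.1 uses the standard velocity-and-position update with inertia weight w and acceleration coefficients φ_p, φ_g. A minimal one-dimensional sketch on an illustrative cost (the cost function, box bounds, and parameter values are assumptions, not the patent's scheduling submodel):

```python
import random

random.seed(1)

def pso_minimize(cost, lo, hi, n_particles=20, K_f=60, w=0.7, phi_p=1.5, phi_g=1.5):
    """Standard PSO on a 1-D box [lo, hi]. Records e(k_f): the best cost seen
    after each iteration, i.e. the descent history later used for curve fitting."""
    x = [random.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = list(x)
    pbest_cost = [cost(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g], pbest_cost[g]
    history = []                                   # e(k_f)|s for k_f = 1..K_f
    for _ in range(K_f):
        for i in range(n_particles):
            rp, rg = random.random(), random.random()
            v[i] = (w * v[i] + phi_p * rp * (pbest[i] - x[i])
                    + phi_g * rg * (gbest - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))   # clamp to the box
            c = cost(x[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = x[i], c
                if c < gbest_cost:
                    gbest, gbest_cost = x[i], c
        history.append(gbest_cost)
    return gbest, gbest_cost, history

# Example inner submodel for a fixed discrete s, here J(f) = (f - 2)**2 + 1.
f_star, e_star, hist = pso_minimize(lambda f: (f - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

The recorded `hist` is non-increasing by construction, which is what makes the negative-power extrapolation of A.2 applicable.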
A.2 Solution of the coarse evaluation model
Because a negative power function describes and predicts the descent of e(k_f)|_s well, the optimization history e(k_f)|_s, k_f = 1, 2, …, K_f, is used to fit a negative power function and thereby estimate e(∞)|_s:

    ê(θ, k_f)|_s = A · k_f^(−B) + C,  k_f = 1, …, ∞,  θ = (A, B, C)    (10)
A recursive fitting algorithm is further adopted: each time the inner-layer random optimization algorithm produces a new datum e(k_f)|_s, the fitting parameter θ is updated analytically with minimal computation through the recursions (11) and (12), until the inner-layer algorithm has iterated to step K_f, so that the amount of data satisfies the fitting accuracy:
where the first quantity denotes the weight coefficient, and the second denotes, at iteration step k_f, the mean square error between the fitted function value ê(θ, k_f)|_s and the observed value e(k_f)|_s:
The initial values of the recursive algorithm are set as follows:

    θ_0 = (A_0, B_0, C_0) = (0, 0.5, e(1)|_s)    (14)

    { ∂²J_0(θ)/∂θ² |_{θ_0} }^(−1) = ε · I    (15)

where ε denotes a very small constant;
Using the optimization history e(k_f)|_s, k_f = 1, 2, …, K_f, the current estimate of e(∞)|_s is obtained as:

    ê(θ_{K_f}, ∞)|_s = C_{K_f}    (16)
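The convergence prediction of A.2 fits ê(k_f) = A·k_f^(−B) + C to the recorded history and reads off C as the estimate of e(∞), per equation (16). The sketch below does not reproduce the patent's recursive least-squares updates (11)–(12); instead it uses a simple grid search over B with a linear least-squares solve for A and C, an assumed substitute that yields the same fitted form:

```python
import numpy as np

def fit_negative_power(history):
    """Fit e_hat(k) = A * k**(-B) + C to a descent history by grid-searching B
    and solving the linear least-squares problem in (A, C). Returns (A, B, C);
    C approximates e(infinity), the predicted converged value."""
    k = np.arange(1, len(history) + 1, dtype=float)
    e = np.asarray(history, dtype=float)
    best = None
    for B in np.linspace(0.05, 3.0, 60):
        X = np.column_stack([k ** (-B), np.ones_like(k)])  # columns for A and C
        coef, _, _, _ = np.linalg.lstsq(X, e, rcond=None)
        sse = float(np.sum((X @ coef - e) ** 2))
        if best is None or sse < best[0]:
            best = (sse, coef[0], B, coef[1])
    _, A, B, C = best
    return A, B, C

# Synthetic descent history with known limit C = 1.0:
kf = np.arange(1, 41)
hist_data = 2.0 * kf ** -0.5 + 1.0
A, B, C = fit_negative_power(hist_data)
```

With a clean synthetic history the recovered C matches the true limit closely; on noisy PSO histories the estimate carries the deviation that ordinal optimization is designed to tolerate.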
The rough evaluation process for a discrete-variable candidate solution s ∈ S to be evaluated at the outer layer can thus be summarized as follows: solve the corresponding inner-layer submodel min_{f∈F} J(f, s̄)|_{s̄=s} with the PSO algorithm up to step K_f to obtain the optimization history e(k_f)|_s; fit the parameters θ of the negative power function (10) through the recursions (11) and (12); and take the fitted ê(θ_{K_f}, ∞)|_s = C_{K_f} as the final evaluation value of s.
4. The scheduling optimization method based on the two-layer nested structure as claimed in claim 1, wherein in step 2.3) the key step is to compute the historical minimum penalty value p'(k_f)|_s, use the feasibility coarse-judgment model to estimate p'(∞)|_s, and then determine the feasibility of s(n_s,k_s). The specific implementation steps are as follows:
B. Determination of the feasibility coarse-judgment model:
Let e(k_f)|_s denote the evaluation value obtained when the inner-layer random optimization algorithm solves the submodel min_{f∈F} J(f, s̄)|_{s̄=s} under the fixed discrete variable s ∈ S and iterates to the k_f-th step. It decomposes into the sum of a target value and a penalty value:

    e(k_f)|_s = o(k_f)|_s + p(k_f)|_s    (17)

where o(k_f)|_s and p(k_f)|_s respectively denote the objective function value and the penalty value at the k_f-th step of the inner-layer random optimization iteration.
When the parameters are chosen properly and the number of iteration steps tends to infinity, the random optimization algorithm converges to the global optimal solution with probability 1. Thus e(∞)|_s serves as the optimal target value of the submodel, i.e. the accurate evaluation value e*|_s of the discrete variable s.
The historical minimum penalty value p'(k_f)|_s, k_f = 1, 2, …, K_f, is defined as the minimum of the penalty values over the first k_f steps:

    p'(k_f)|_s = min_{κ = 1, …, k_f} p(κ)|_s

The limiting historical minimum penalty value of the discrete variable s is p'(∞)|_s. By solving for p'(∞)|_s, the feasibility of the optimal solution in the solution subspace corresponding to the discrete variable s can be judged; this is the working principle of the feasibility coarse-judgment model. The calculation of p'(∞)|_s is similar to that of e(∞)|_s, except that the evaluation value in expression (13) is replaced by the historical minimum penalty value.
The feasibility coarse-judgment process for a discrete-variable candidate solution s ∈ S to be evaluated at the outer layer can thus be summarized as follows: solve the corresponding inner-layer submodel min_{f∈F} J(f, s̄)|_{s̄=s} with the PSO algorithm up to the prescribed step to obtain the historical minimum penalty data p'(k_f)|_s; fit the parameters θ of the negative power function (10) through the recursions (11) and (12); take the fitted limit as the final historical minimum penalty value p'(∞)|_s of s; and then determine its feasibility.
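The historical minimum penalty p'(k_f) is simply the running minimum of the per-step penalties, and the feasibility judgment checks whether its predicted limit is (near) zero. A minimal sketch; the zero-tolerance `tol` is an assumption (the patent leaves the threshold unspecified), and the limit is approximated here by the last running-minimum value rather than the patent's negative-power extrapolation:

```python
import numpy as np

def historical_min_penalty(penalties):
    """p'(k_f)|s = min over the first k_f steps of p(k_f)|s (running minimum)."""
    return np.minimum.accumulate(np.asarray(penalties, dtype=float))

def coarse_feasible(penalties, tol=1e-3):
    """Judge feasibility from the limit of p'(k_f): if the minimum penalty is
    predicted to reach (near) zero, some solution in the subspace of this
    discrete variable satisfies the constraints."""
    p_hist = historical_min_penalty(penalties)
    return bool(p_hist[-1] < tol)

p_hist = historical_min_penalty([4.0, 2.5, 3.0, 0.5, 0.7])
```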
5. The scheduling optimization method based on the two-layer nested structure as claimed in claim 1, wherein in step 2.5) the key step of the outer-layer optimization is to update the coarse elite set with the outer-layer optimization algorithm (GA) and the feasibility-rule method. The specific steps are as follows:
C. Calculation of the outer-layer optimization
The outer-layer optimization uses the inner-layer optimization as the evaluation tool for the discrete-variable candidate solutions and adopts a basic genetic algorithm (GA) to solve min_{s∈S} J(s); the cost of any individual s in the population is the corresponding coarse evaluation value obtained from the inner layer. The population is continuously updated through a fixed number of iterations, finally yielding high-quality discrete-variable candidate solutions. The algorithm proceeds as follows:
where K_s is the maximum number of outer-layer optimization iteration steps, and p_c, p_m are the crossover probability and mutation probability of the GA algorithm.
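The outer-layer GA of section C operates on the discrete variables with crossover probability p_c and mutation probability p_m. A minimal bit-string sketch on an illustrative discrete cost; the encoding, tournament selection, and elitism are assumptions introduced for a runnable example, since the patent only specifies a basic GA:

```python
import random

random.seed(2)

def ga_minimize(cost, n_bits=8, pop_size=20, K_s=40, p_c=0.8, p_m=0.02):
    """Basic GA: binary-tournament selection, one-point crossover with
    probability p_c, bit-flip mutation with probability p_m per bit."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best, best_cost = None, float("inf")
    for _ in range(K_s):
        scored = [(cost(ind), ind) for ind in pop]
        for c, ind in scored:
            if c < best_cost:
                best, best_cost = list(ind), c

        def select():
            a, b = random.sample(scored, 2)        # binary tournament
            return list(min(a, b, key=lambda t: t[0])[1])

        nxt = [list(best)]                         # elitism: keep the best
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            if random.random() < p_c:              # one-point crossover
                cut = random.randrange(1, n_bits)
                p1 = p1[:cut] + p2[cut:]
            nxt.append([b ^ 1 if random.random() < p_m else b for b in p1])
        pop = nxt
    return best, best_cost

# Illustrative discrete cost: number of 1-bits (optimum: all zeros, cost 0).
best_s, best_e = ga_minimize(lambda bits: sum(bits))
```

In the patented method the `cost` callback would be the inner-layer coarse evaluation rather than a closed-form function, and elite keeping would follow the feasibility-rule method.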
6. The scheduling optimization method based on the two-layer nested structure as claimed in claim 1, wherein in step 3) the key step is to solve the N_s NLP optimization submodels min_{f∈F} J(f, s̄)|_{s̄ = s(n_s, K_s)} corresponding to s(n_s, K_s) (n_s = 1, …, N_s). The specific steps are as follows:
D. Accurate evaluation solving algorithm
In the accurate evaluation stage, the particle swarm optimization (PSO) algorithm is adopted to solve min_{f∈F} J(f, s̄)|_{s̄=s}; the cost of any individual f in the population is defined as J(f, s̄)|_{s̄=s}. The algorithm proceeds as follows:
where the first symbol denotes the maximum number of iteration steps of the accurate evaluation stage, and w, φ_p, φ_g are the velocity-update weighting coefficients of the PSO algorithm;
Using this algorithm, the NLP submodels corresponding to the current coarse elite set are solved accurately, yielding the continuous-variable solutions f*|_s and the corresponding optimal evaluation values e*|_s.
The invention provides a scheduling optimization method based on a two-layer nested structure, which has the following characteristics and advantages:
(1) The method adopts a two-layer nested structure: the outer-layer combinatorial optimization problem is solved with a discrete GA algorithm, and the inner-layer non-convex optimization problem with a continuous PSO algorithm.
(2) The method employs two stages. In the rough evaluation stage, the outer-layer discrete GA algorithm finds several optimal candidate solutions in the discrete-variable solution space based on the evaluation values produced by the inner-layer coarse evaluation model and stores them in the coarse elite set. In the accurate evaluation stage, the inner-layer continuous PSO algorithm evaluates each candidate solution in the coarse elite set accurately through sufficient iteration, and the group with the best evaluation value is taken as the final solution of the problem.
(3) The method requires the rough model to evaluate quickly, and its results are allowed a certain range of noise or deviation. The estimation, by the recursive fitting algorithm, of the optimal target value of an inner-layer submodel from the finite iteration data of the inner-layer continuous PSO algorithm can therefore be regarded as a rough evaluation of the corresponding outer-layer discrete-variable candidate solution: the more iteration data, the more accurate the evaluation but the lower the evaluation efficiency.
(4) Keeping a coarse elite set, instead of the single optimal solution kept by conventional optimization algorithms, improves the probability of obtaining a satisfactory solution. The larger the elite set, the higher that probability, but also the more computation the accurate evaluation stage requires.
(5) By adopting the feasibility coarse-judgment model, the method can quickly eliminate infeasible solutions with high probability; combined with the coarse "evaluation" and "elite-keeping" means based on the idea of ordinal optimization, it ensures that a true satisfactory solution of the scheduling model is obtained with high probability.
Drawings
Fig. 1 is a flowchart of a scheduling optimization method based on a two-layer nested structure according to the present invention.
Detailed Description
The proposed two-layer structured optimization method is one of the nested optimization algorithms. It solves the MIP problem at the outer layer with a random optimization method, whose characteristic is to search globally by continuously updating multiple solutions without relying on gradient information of the solution space. The solution-update process depends on the optimal values obtained by the inner-layer structure solving the NLP problems, which serve as the basis for screening old solutions and generating new ones. In essence, the goal of the screening process is to find the better individuals in the target group; the exact evaluation value of each individual is not of concern, only the correctness of the screening result. The invention therefore introduces the idea of ordinal optimization and uses rough evaluation to obtain a satisfactory screening result merely with high probability. Ordinal optimization is a soft optimization method proposed for stochastic simulation optimization problems whose evaluation is very time-consuming; its purpose is to obtain a satisfactory solution with high probability under rough evaluation conditions. The design idea of an ordinal optimization algorithm is to construct a rough evaluation model, evaluate candidate solutions sampled from the solution space inaccurately but quickly, and, based on comparing the order of the rough evaluation values, select the estimated satisfactory solutions ranked in the first few places as candidate solutions. Since the "order" converges exponentially and is robust to bias and noise, with high probability at least one true satisfactory solution will be among the candidate solutions.
In the method provided by the invention, only the optimal target value of the NLP problem in the continuous variables of the original problem is estimated, rather than its global optimal solution being computed; the estimate serves as the rough evaluation of the corresponding discrete-variable values of the original problem, and it is only ensured that a satisfactory discrete-variable candidate solution is obtained with high probability. After a certain number of iteration steps, an accurate evaluation is then performed to obtain the final satisfactory solution. In addition, constrained ordinal optimization is proposed as the constraint-handling means; its core idea is to add a feasibility coarse-judgment model that roughly and quickly judges the feasibility of a solution before the solution is evaluated, and solutions estimated to be infeasible are not evaluated further. The method can thus eliminate infeasible solutions in a probabilistic sense and further save computation.
For ease of presentation, the general form of the MINLP model of the scheduling optimization problem addressed by the invention is first described:

    min_{x,y} Z = f(x, y)
    s.t. g_p(x, y) ≤ 0,  p = 1, …, P    (1)
         h_q(x, y) = 0,  q = 1, …, Q
         x ∈ X,  y ∈ Y

where x ∈ X denotes the continuous variables, y ∈ Y denotes the discrete variables, g_p(·) are the inequality-constraint expressions, h_q(·) are the equality-constraint expressions, and at least one of f(·), g_p(·) and h_q(·) is a non-linear function. For brevity, the MINLP model is abbreviated as

    min_{f∈F, s∈S} J(f, s)    (2)
where the optimization variables are the discrete quantities s = [s_1, …, s_t, …, s_T] and the continuous quantities f = [f_1, …, f_p, …, f_P], and S and F are the solution spaces of the two groups of variables, respectively. For this model, the two-layer nested optimization structure is

    min_{f∈F, s∈S} J(f, s)  ⇔  min_{s∈S} ( min_{f∈F} J(f, s̄)|_{s̄=s} )    (3)
With the discrete variable s fixed, the original model degenerates into a relatively simple inner-layer non-convex NLP submodel min_{f∈F} J(f, s̄)|_{s̄=s}, in which only the continuous variable f remains as the decision variable to be optimized. Solving this submodel, its optimal target value is the extreme optimal value corresponding to the discrete variable s:

    e*|_s = min_{f∈F} J(f, s̄)|_{s̄=s}    (4)
Through the inner-layer optimization, every discrete-variable candidate solution in the outer-layer discrete space obtains an extreme optimal value. Solving the outer-layer combinatorial optimization problem over these extreme optimal values with a discrete random optimization algorithm yields the global optimal solution s*:

    s* = arg min_{s∈S} (e*|_s)    (5)

The optimal solution f*|_{s*} of the corresponding submodel is the global optimum of the continuous variables.
However, solving the inner-layer sub-model to optimality is time-consuming, and every discrete candidate solution corresponds to its own inner sub-model, so solving the two-layer optimization problem directly is not practically operable. If the inner continuous-variable optimization problem is instead regarded as the evaluation problem of the outer discrete variable, the scheduling optimization problem becomes an inner evaluation problem nested inside an outer discrete optimization problem, and, following the idea of ordinal optimization, the optimization efficiency can be improved by appropriately lowering the evaluation accuracy. Specifically, a rough evaluation model consumes only a small amount of computing resources to obtain an estimate of the optimal value of a sub-problem, i.e. a rough "evaluation". Comparing the rough evaluations of different discrete candidate solutions then performs the "preference" among candidates. Further, to handle constraints, a feasibility judgment model is established: the feasibility of each discrete candidate solution is judged roughly and quickly before evaluation, and candidates estimated to be infeasible are not evaluated further. Infeasible candidates can thus be eliminated in a probabilistic sense, saving additional computation.
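To make the cost of the direct approach concrete, the nested decomposition of equations (3)-(5) can be sketched as a brute-force enumeration; the toy model and the closed-form inner solver below are hypothetical illustrations, not part of the invention:

```python
def solve_nested(discrete_space, solve_inner):
    """Direct two-layer nested solve of equations (3)-(5): for every
    discrete candidate s, the inner solver returns (f*, e*|s) for the
    sub-model min_f J(f, s); the outer layer keeps the best candidate."""
    best = None
    for s in discrete_space:              # outer combinatorial search
        f_opt, e_opt = solve_inner(s)     # inner NLP sub-model, exact solve
        if best is None or e_opt < best[2]:
            best = (s, f_opt, e_opt)
    return best                           # (s*, f*|s*, e*|s*)

# Toy model: J(f, s) = (f - s)^2 + 0.1*s, with inner optimum f = s,
# so e*|s = 0.1*s and the global optimum sits at s = 0.
s_star, f_star, e_star = solve_nested(range(4), lambda s: (s, 0.1 * s))
```

Since the inner solve repeats once per discrete candidate, this scales with $|S|$ times the cost of one NLP solve, which is exactly what the rough-evaluation scheme below avoids.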
The optimization algorithm based on the two-layer nested optimization structure provided by the invention comprises two main stages: probabilistic preference and accurate evaluation. The probabilistic preference stage is the key innovation of the method; it contains an inner-layer optimization and an outer-layer optimization. The inner layer roughly evaluates and screens each discrete candidate solution, while the outer layer continuously updates the set of discrete candidate solutions. The two layers run nested until the number of outer-layer update iterations reaches a preset value, after which the algorithm enters the accurate evaluation stage. In that stage a common continuous optimization method performs continuous-variable optimization on the finite set of retained discrete candidate solutions, accurately solving the NLP sub-problem corresponding to each candidate, and the final solution of the scheduling optimization problem is selected from the optimal solutions so obtained.
The flow chart of the optimization algorithm based on the two-layer nested structure is shown in figure 1.
The method comprises the following specific steps:
step 1) initializing parameters.
Set the parameters of the outer GA algorithm and the inner PSO algorithm, including the population sizes $N_s, N_f$, the PSO velocity-update weighting factors $\omega, \varphi_p, \varphi_g$, the GA crossover probability $p_c$, the mutation probability $p_m$, and the maximum iteration step numbers of each stage.
Step 2) probability preference stage.
Step 2.1: set the outer-layer iteration step number $k_s = 0$ and initialize the discrete candidate solutions $s(1, k_s), \ldots, s(n_s, k_s), \ldots, s(N_s, k_s)$.
Step 2.2: while the outer-layer iteration step number $k_s$ does not exceed its maximum value, for each $s(n_s, k_s)$ $(n_s = 1, \ldots, N_s)$ and its corresponding NLP optimization sub-model, run the inner-layer optimization algorithm (PSO algorithm) for $K_f^{FM}$ iteration steps and record the historical optimization data $e(k_f)|_{s(n_s,k_s)},\; p'(k_f)|_{s(n_s,k_s)}$ $(k_f = 1, \ldots, K_f^{FM})$.
Step 2.3: for each $s(n_s, k_s)$ $(n_s = 1, \ldots, N_s)$, use its historical minimum penalty values $p'(k_f)|_{s(n_s,k_s)}$ and the feasibility rough-judgment model to estimate the converged penalty $\hat{p}'(\infty)|_{s(n_s,k_s)}$, and thereby judge the feasibility of $s(n_s, k_s)$.
Step 2.4: if $s(n_s, k_s)$ is infeasible, retain the estimated converged penalty $\hat{p}'(\infty)|_{s(n_s,k_s)}$ as its rough penalty value. If $s(n_s, k_s)$ is feasible, continue iterating its optimization sub-model with the inner-layer optimization algorithm (PSO algorithm) up to the coarse-evaluation step count, record the historical optimization data $e(k_f)|_{s(n_s,k_s)}$, estimate the converged value with the coarse evaluation model, and retain the estimate $\hat{e}(\infty)|_{s(n_s,k_s)}$ as the rough evaluation value.
Step 2.5: according to the obtained rough evaluation values and rough penalty values, update the rough keep-good set using the outer-layer optimization algorithm (GA algorithm) together with the feasibility rule method.
In this step, the feasibility rough-judgment model quickly produces, for a discrete variable combination $s$, the rough penalty value $\hat{p}'(\infty)|_s$ and a feasibility estimate, which decide whether the evaluation of $s$ continues. If $s$ is estimated to be feasible, the rough evaluation is carried out to obtain the rough evaluation value $\hat{e}(\infty)|_s$. On the basis of these evaluation results, the outer-layer stochastic optimization algorithm solves the following combinatorial optimization problem:

$$\min_{s \in S} \hat{e}(\infty) \big|_s \qquad (6)$$
The outer-layer stochastic optimization algorithm obtains the top discrete candidate solutions that are estimated to be feasible and best, and stores them in the rough keep-good set. The keep-good update operation of the outer-layer stochastic optimization algorithm under rough evaluation is executed by the feasibility rule method.
Step 2.6: increment the outer-layer iteration step number, $k_s = k_s + 1$. If $k_s$ is below its maximum value, return to step 2.2; once $k_s$ exceeds the maximum, proceed to step 3).
Step 3) accurate evaluation stage.

Solve the $N_s$ NLP optimization sub-models corresponding to $s(n_s, K_s)$ $(n_s = 1, \ldots, N_s)$: iterate the inner-layer optimization algorithm (PSO algorithm) for the maximum number of accurate-evaluation steps to obtain the accurate evaluation values $e^*|_{s(n_s, K_s)}$ and the corresponding optimal continuous-variable solutions $f^*|_{s(n_s, K_s)}$ $(n_s = 1, \ldots, N_s)$.
Step 4) determining the final satisfactory solution.

Compare the accurate evaluation values $e^*|_{s(n_s, K_s)}$ of the NLP optimization sub-models and take the optimal group as the satisfactory solution of the scheduling model according to the feasibility rule method:

$$s_g = s(G, K_s), \qquad f_g = f^* \big|_{s_g} \qquad (9)$$

where $G = \arg\min_{n_s \in \{1, \ldots, N_s\}} e^* |_{s(n_s, K_s)}$.
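Steps 1)-4) can be outlined as a compact skeleton; every callable (`pso_run`, `feasible_rough`, `coarse_eval`, `ga_update`, `exact_eval`) is a hypothetical stand-in for the models detailed below, and the toy instantiation is illustrative only:

```python
def schedule_optimize(candidates, K_s_max, pso_run, feasible_rough,
                      coarse_eval, ga_update, exact_eval):
    """Skeleton of steps 1-4: a probabilistic-preference loop (rough
    screening and evaluation, outer update of the candidate set),
    followed by accurate evaluation of the surviving candidates."""
    for _ in range(K_s_max):                          # step 2: outer iterations
        scored = []
        for s in candidates:
            hist = pso_run(s)                         # step 2.2: short inner PSO run
            if not feasible_rough(hist):              # steps 2.3-2.4: rough screening
                scored.append((s, float("inf")))      # keep only a rough penalty
            else:
                scored.append((s, coarse_eval(hist))) # rough evaluation value
        candidates = ga_update(scored)                # step 2.5: update keep-good set
    finals = [(s,) + exact_eval(s) for s in candidates]  # step 3: accurate solve
    return min(finals, key=lambda t: t[2])            # step 4: best (s_g, f_g, e_g)

# Toy instantiation (all functions hypothetical): J(f, s) = (f - s)^2 + s.
pso_run = lambda s: [s + 1.0 / k for k in range(1, 6)]        # mock descent history
feasible_rough = lambda hist: hist[0] < 5.2                   # mock screen: s = 5 fails
coarse_eval = lambda hist: hist[-1]                           # last value as estimate
ga_update = lambda scored: [s for s, v in sorted(scored, key=lambda t: t[1])[:2]]
exact_eval = lambda s: (float(s), float(s))                   # here (f*, e*) = (s, s)
result = schedule_optimize([1, 2, 3, 4, 5], 3, pso_run,
                           feasible_rough, coarse_eval, ga_update, exact_eval)
```

Only the two surviving candidates reach the expensive exact solve, which is the point of the rough screening stage.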
In step 2.2), the key of the inner-layer optimization is to screen and evaluate solutions with the feasibility rough-judgment model and the coarse evaluation model, which in essence predict the convergence results of the penalty value and the evaluation value: the numerical information obtained by a finite number of iterations (the inner-layer optimization history) is used to estimate the accurate convergence value that infinitely many iterations would produce. Concretely, the iterative descent of the penalty value (or evaluation value) is fitted with a negative-power-function curve whose asymptote gives the final convergence result. The steps are as follows:
a.1 acquisition of inner-layer optimization historical data
The inner-layer optimization history is produced by the iterative calculation of the inner-layer optimization algorithm. The invention adopts the particle swarm optimization (PSO) algorithm to solve the sub-model; the cost of any individual $f$ in the population is defined as $J(f, \bar{s})|_{\bar{s}=s}$. The algorithm proceeds as follows:

where $K_f$ is the maximum number of inner-layer iteration steps for obtaining the historical data, and $w, \varphi_p, \varphi_g$ are the velocity-update weighting coefficients of the PSO algorithm.
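A minimal sketch of such an inner-layer PSO run, recording the best-so-far history $e(k)|_s$ that the later models consume, might look as follows (the quadratic toy cost and all parameter values are assumptions for illustration):

```python
import random

def pso_minimize(cost, dim, bounds, n_particles=20, k_max=100,
                 w=0.7, phi_p=1.5, phi_g=1.5, seed=0):
    """Minimal PSO run for an inner sub-model min_f J(f, s_bar) with s_bar
    fixed; returns the best particle, its cost, and the best-so-far history
    e(k)|s. Velocity update: v <- w*v + phi_p*r1*(p - x) + phi_g*r2*(g - x)."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pcost = [cost(x) for x in xs]
    g_idx = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g_idx][:], pcost[g_idx]
    history = []                                   # e(k)|s, k = 1..k_max
    for _ in range(k_max):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + phi_p * r1 * (pbest[i][d] - xs[i][d])
                            + phi_g * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            c = cost(xs[i])
            if c < pcost[i]:                       # update personal best
                pbest[i], pcost[i] = xs[i][:], c
                if c < gcost:                      # update global best
                    gbest, gcost = xs[i][:], c
        history.append(gcost)                      # best-so-far after step k
    return gbest, gcost, history

# Toy inner sub-model with s_bar = 2: J(f, s_bar) = (f - 2)^2 + 1.
gbest, gcost, hist = pso_minimize(lambda f: (f[0] - 2.0) ** 2 + 1.0,
                                  dim=1, bounds=(-5.0, 5.0))
```

The returned `history` is by construction non-increasing, which is what makes the negative-power extrapolation of the next subsection applicable.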
A.2 solving of the coarse evaluation model
Because a negative power function describes and predicts the descent of $e(k_f)|_s$ well, the invention fits a negative power function to the optimization history $e(k_f)|_s,\; k_f = 1, 2, \ldots, K_f$ in order to estimate $e(\infty)|_s$:

$$\hat{e}(\theta, k_f) \,\big|_s = A \cdot k_f^{-B} + C, \quad k_f = 1, \ldots, \infty, \qquad \theta = (A, B, C) \qquad (10)$$
A recursive fitting algorithm is further adopted: each iteration of the inner-layer stochastic optimization algorithm yields a new datum $e(k_f)|_s$, after which the fitting parameters $\theta$ are updated analytically at minimal cost by the recursions (11) and (12), until the inner-layer algorithm reaches step $K_f$ and the amount of data satisfies the fitting accuracy:

where the recursions involve a weighting coefficient and the mean-square-error function $J_{k_f}(\theta)$, i.e. the mean square error between the fitted value $\hat{e}(\theta, k_f)|_s$ and the observed value $e(k_f)|_s$ after the algorithm has iterated to step $k_f$:
the initial value of the recursion algorithm is set according to the following method:
$$\theta_0 = (A_0, B_0, C_0) = \bigl(0,\; 0.5,\; e(1)|_s\bigr) \qquad (14)$$

$$\Bigl\{ \frac{\partial^2}{\partial \theta^2} J_0(\theta) \Big|_{\theta_0} \Bigr\}^{-1} = \varepsilon \cdot I \qquad (15)$$

where $\varepsilon$ denotes a very small constant.
Using the optimization history $e(k_f)|_s,\; k_f = 1, 2, \ldots, K_f$, the current estimate of $e(\infty)|_s$ is obtained as:

$$\hat{e}(\theta_{K_f}, \infty) \,\big|_s = C_{K_f} \qquad (16)$$
the rough evaluation process aiming at the discrete variable candidate solution S belonging to S to be evaluated at the outer layer can be summarized as that the corresponding inner layer submodel is solved by utilizing the PSO algorithmTo the firstStep by stepTo obtain historical optimized data e (k)f)|s,Fitting the negative power function (10) recurrently using equations (11) and (12)Parameter theta, andfinal evaluation value as s
In step 2.3), the key is to compute the historical minimum penalty value $p'(k_f)|_s$, estimate its limit with the feasibility rough-judgment model, and thereby judge the feasibility of $s(n_s, k_s)$. The specific implementation is as follows.
B. Determination of a feasibility rough decision model
Let $e(k_f)|_s$ denote the evaluation value obtained when the inner-layer stochastic optimization algorithm solves the sub-model, with the discrete variable fixed at $s \in S$, and iterates to step $k_f$. $e(k_f)|_s$ decomposes into the sum of a target value and a penalty value:

$$e(k_f)\big|_s = o(k_f)\big|_s + p(k_f)\big|_s \qquad (17)$$

where $o(k_f)|_s$ and $p(k_f)|_s$ are, respectively, the objective function value and the penalty value at step $k_f$ of the inner-layer stochastic optimization.
When the parameters are chosen properly and the number of iterations tends to infinity, the stochastic optimization algorithm converges to the global optimal solution with probability 1. Hence $e(\infty)|_s$ serves as the optimal target value of the sub-model, i.e. the accurate evaluation value $e^*|_s$ of the discrete variable $s$.
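The decomposition $e = o + p$ of equation (17) can be illustrated with a simple quadratic-penalty evaluation; the constraint form $g(f, s) \le 0$ and the weight `rho` are hypothetical:

```python
def penalized_cost(f, s, objective, constraints, rho=100.0):
    """Evaluation value e = o + p of equation (17): the objective o(f, s)
    plus a quadratic penalty p on violations of constraints g_i(f, s) <= 0.
    The penalty weight rho and the constraint form are assumptions."""
    o = objective(f, s)
    p = rho * sum(max(0.0, g(f, s)) ** 2 for g in constraints)
    return o + p, o, p

# Toy sub-model: minimize (f - 3)^2 subject to f <= 2, i.e. g(f) = f - 2 <= 0.
obj = lambda f, s: (f - 3.0) ** 2
cons = [lambda f, s: f - 2.0]
e_feas, o1, p1 = penalized_cost(1.5, None, obj, cons)  # feasible: zero penalty
e_infe, o2, p2 = penalized_cost(2.5, None, obj, cons)  # violates the constraint
```

A candidate whose penalty term $p$ stays bounded away from zero as the iterations proceed is the signature of infeasibility that the model below detects.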
Define the historical minimum penalty value $p'(k_f)|_s,\; k_f = 1, 2, \ldots, K_f$, as the minimum penalty value observed over the first $k_f$ iterations:
The limiting historical minimum penalty value of the discrete variable $s$ is $p'(\infty)|_s$; estimating $p'(\infty)|_s$ permits a feasibility judgment on the optimal solution within the solution subspace corresponding to $s$, which is exactly the working process of the feasibility judgment model. The rough calculation of $p'(\infty)|_s$ is analogous to that of $e(\infty)|_s$, except that the evaluation value in the expressions is replaced by the historical minimum penalty value; the modified expression (13) becomes:
the feasibility rough judgment process aiming at the discrete variable candidate solution S belonging to S to be evaluated at the outer layer can be summarized as that the corresponding inner layer sub-model is solved by utilizing the PSO algorithmTo the firstStep by stepTo obtain historical minimum penalty value data p' (k)f)|s,Fitting parameters theta of the negative power function (10) by recursion of an equation (11) and an equation (12), and performing the fittingFinal history minimum penalty as sPenalty p' (∞) luminancesThen, the feasibility of the test piece is determined.
In step 2.5), the key of the outer-layer optimization is to update the rough keep-good set using the outer-layer optimization algorithm (GA algorithm) and the feasibility rule method, as follows:
C. Calculation of the outer-layer optimization
The outer-layer optimization uses the inner-layer optimization as the evaluation tool for discrete candidate solutions and adopts a basic genetic algorithm (GA) to solve $\min_{s \in S} J(s)$; the cost of any individual $s$ in the population is defined as its rough evaluation value. Iterating for a fixed number of steps continuously updates the population and finally yields high-quality discrete candidate solutions. The algorithm proceeds as follows:

where $K_s$ is the maximum number of outer-layer iteration steps, and $p_c, p_m$ are the crossover and mutation probabilities of the GA algorithm.
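One outer-layer GA update can be sketched as follows; the toy cost (`sum` of the genes standing in for the rough evaluation value) and all operator settings are illustrative assumptions:

```python
import random

def ga_step(population, fitness, p_c=0.8, p_m=0.1, n_vals=4, rng=None):
    """One outer-layer GA update over discrete candidate schedules:
    tournament selection, one-point crossover (probability p_c), per-gene
    mutation (probability p_m), with the best individual kept elitistly,
    mirroring the keep-good update of the outer layer."""
    rng = rng or random.Random(0)
    elite = min(population, key=fitness)          # keep-good individual

    def tournament():
        a, b = rng.choice(population), rng.choice(population)
        return a if fitness(a) <= fitness(b) else b

    children = [list(elite)]
    while len(children) < len(population):
        p1, p2 = tournament(), tournament()
        if rng.random() < p_c and len(p1) > 1:    # one-point crossover
            cut = rng.randrange(1, len(p1))
            child = p1[:cut] + p2[cut:]
        else:
            child = list(p1)
        child = [rng.randrange(n_vals) if rng.random() < p_m else g
                 for g in child]                  # per-gene mutation
        children.append(child)
    return children

# Toy outer problem: the rough evaluation of a schedule s is just sum(s),
# so the GA should drive the population toward the all-zero schedule.
rng = random.Random(1)
pop = [[rng.randrange(4) for _ in range(6)] for _ in range(20)]
cost = sum
init_best = min(cost(s) for s in pop)
for _ in range(40):
    pop = ga_step(pop, cost, rng=rng)
best_cost = min(cost(s) for s in pop)
```

Elitism guarantees that the best retained candidate never worsens across outer iterations, which is the property the rough keep-good set relies on.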
In step 3), the key is to solve the $N_s$ NLP optimization sub-models $\min_{f \in F} J(f, \bar{s})|_{\bar{s} = s(n_s, K_s)}$ corresponding to $s(n_s, K_s)$ $(n_s = 1, \ldots, N_s)$, with the following specific steps:
D. precise evaluation solving algorithm
In the accurate evaluation stage the particle swarm optimization (PSO) algorithm solves each sub-model; the cost of any individual $f$ in the population is defined as $J(f, \bar{s})|_{\bar{s}=s}$. The algorithm proceeds as follows:

where the first parameter is the maximum number of iteration steps of the accurate evaluation stage, and $w, \varphi_p, \varphi_g$ are the velocity-update weighting coefficients of the PSO algorithm.
This algorithm accurately solves the NLP sub-models of the candidates in the current rough keep-good set, yielding the continuous-variable solutions $f^*|_s$ and the corresponding optimal evaluation values $e^*|_s$.

Claims (5)

1. A scheduling optimization method based on a two-layer nested structure, characterized by comprising the following steps:
step 1) initializing parameters:
setting the parameters of the outer GA algorithm and the inner PSO algorithm, including the population sizes $N_s, N_f$, the PSO velocity-update weighting factors $\omega, \varphi_p, \varphi_g$, the GA crossover probability $p_c$, the mutation probability $p_m$, and the maximum iteration step numbers including $K_s$;
Step 2) probability optimization stage;
step 2.1: setting the outer-layer iteration step number $k_s = 0$ and initializing the discrete candidate solutions $s(1, k_s), \ldots, s(n_s, k_s), \ldots, s(N_s, k_s)$;
Step 2.2: while the outer-layer iteration step number $k_s$ does not exceed its maximum value, for each $s(n_s, k_s)$ $(n_s = 1, \ldots, N_s)$ and its corresponding NLP optimization sub-model, running the inner-layer optimization algorithm (PSO algorithm) for $K_f^{FM}$ iteration steps and recording the historical optimization data $e(k_f)|_{s(n_s,k_s)}$ and $p'(k_f)|_{s(n_s,k_s)}$;
Step 2.3: for each $s(n_s, k_s)$ $(n_s = 1, \ldots, N_s)$, using its historical minimum penalty values and the feasibility rough-judgment model to estimate the converged penalty, and thereby judging the feasibility of $s(n_s, k_s)$;
Step 2.4: if $s(n_s, k_s)$ is infeasible, retaining the estimated converged penalty as its rough penalty value; if $s(n_s, k_s)$ is feasible, continuing to iterate its optimization sub-model with the inner-layer optimization algorithm (PSO algorithm) up to the coarse-evaluation step count, recording the historical optimization data, estimating the converged value with the coarse evaluation model, and retaining the estimate as the rough evaluation value;
Step 2.5: according to the obtained rough evaluation values and rough penalty values, updating the rough keep-good set using the outer-layer optimization algorithm (GA algorithm) and the feasibility rule method;

in this step, the feasibility rough-judgment model quickly produces, for a discrete variable combination $s$, the rough penalty value and a feasibility estimate, which decide whether the evaluation of $s$ continues; if $s$ is estimated to be feasible, the rough evaluation is carried out to obtain the rough evaluation value; on the basis of these evaluation results, the outer-layer stochastic optimization algorithm solves the combinatorial optimization problem (6);

the outer-layer stochastic optimization algorithm obtains the discrete candidate solutions estimated to be feasible and best, as in (7), and stores them in the rough keep-good set; the keep-good update operation of the outer-layer stochastic optimization algorithm under rough evaluation is executed by the feasibility rule method;
step 2.6: incrementing the outer-layer iteration step number, $k_s = k_s + 1$; if $k_s$ is below its maximum value, returning to step 2.2; once $k_s$ exceeds the maximum, proceeding to step 3);
step 3) accurate evaluation stage:
solving the $N_s$ NLP optimization sub-models corresponding to $s(n_s, K_s)$ $(n_s = 1, \ldots, N_s)$: iterating the inner-layer optimization algorithm (PSO algorithm) for the maximum number of accurate-evaluation steps to obtain the accurate evaluation values $e^*|_{s(n_s, K_s)}$ and the corresponding optimal continuous-variable solutions $f^*|_{s(n_s, K_s)}$;
Step 4) determining a final satisfactory solution:
comparing the accurate evaluation values $e^*|_{s(n_s, K_s)}$ of the NLP optimization sub-models and taking the optimal group as the satisfactory solution of the scheduling model according to the feasibility rule method (8): the final satisfactory solution is $s_g = s(G, K_s)$, $f_g = f^*|_{s_g}$, where $G = \arg\min_{n_s \in \{1, \ldots, N_s\}} e^*|_{s(n_s, K_s)}$.
2. The scheduling optimization method based on the two-layer nested structure as claimed in claim 1, wherein in step 2.2) the key of the inner-layer optimization is to screen and evaluate solutions with the feasibility rough-judgment model and the coarse evaluation model, which in essence predict the convergence results of the penalty value and the evaluation value: the numerical information obtained by a finite number of iterations (the inner-layer optimization history) is used to estimate the accurate convergence value that infinitely many iterations would produce; concretely, the iterative descent of the penalty value (or evaluation value) is fitted with a negative-power-function curve whose asymptote gives the final convergence result, with the following specific steps:
a.1 acquisition of inner-layer optimization historical data
The inner-layer optimization history is produced by the iterative calculation of the inner-layer optimization algorithm; the particle swarm optimization (PSO) algorithm is adopted to solve the sub-model, the cost of any individual $f$ in the population being defined as $J(f, \bar{s})|_{\bar{s}=s}$; the algorithm proceeds as follows:

where $K_f$ is the maximum number of inner-layer iteration steps for obtaining the historical data, and $w, \varphi_p, \varphi_g$ are the velocity-update weighting coefficients of the PSO algorithm;
a.2 solving of the coarse evaluation model
because the negative power function describes and predicts $e(k_f)|_s$ well, the optimization history $e(k_f)|_s,\; k_f = 1, 2, \ldots, K_f$ is fitted with a negative power function to estimate $e(\infty)|_s$:

$$\hat{e}(\theta, k_f) \,\big|_s = A \cdot k_f^{-B} + C, \quad k_f = 1, \ldots, \infty, \qquad \theta = (A, B, C) \qquad (10)$$
a recursive fitting algorithm is further adopted: each iteration of the inner-layer stochastic optimization algorithm yields a new datum $e(k_f)|_s$, after which the fitting parameters $\theta$ are updated analytically at minimal cost by the recursions (11) and (12), $k_f = 1, 2, \ldots, K_f$, until the inner-layer algorithm reaches step $K_f$ and the amount of data satisfies the fitting accuracy:
where the recursions involve a weighting coefficient and the mean-square-error function $J_{k_f}(\theta)$, i.e. the mean square error between the fitted value $\hat{e}(\theta, k_f)|_s$ and the observed value $e(k_f)|_s$ after the algorithm has iterated to step $k_f$:
the initial value of the recursion algorithm is set as follows:

$$\theta_0 = (A_0, B_0, C_0) = \bigl(0,\; 0.5,\; e(1)|_s\bigr) \qquad (14)$$

where $\varepsilon$ in equation (15) denotes a very small constant;
using the optimization history $e(k_f)|_s,\; k_f = 1, 2, \ldots, K_f$, the current estimate of $e(\infty)|_s$ is obtained as $\hat{e}(\theta_{K_f}, \infty)|_s = C_{K_f}$;
the rough evaluation process for an outer-layer discrete candidate solution $s \in S$ can be summarized as follows: solve the corresponding inner-layer sub-model with the PSO algorithm up to the coarse-evaluation step count to obtain the optimization history $e(k_f)|_s$, fit the parameters $\theta$ of the negative power function (10) recursively with equations (11) and (12), and take the fitted asymptote $\hat{e}(\theta_{K_f}, \infty)|_s$ as the final evaluation value of $s$.
3. The scheduling optimization method based on the two-layer nested structure as claimed in claim 1, wherein in step 2.3) the key is to compute the historical minimum penalty value $p'(k_f)|_s$, estimate its limit with the feasibility rough-judgment model, and thereby judge the feasibility of $s(n_s, k_s)$, with the following specific implementation steps:
B. determination of feasibility rough decision model:
let $e(k_f)|_s$ denote the evaluation value obtained when the inner-layer stochastic optimization algorithm solves the sub-model, with the discrete variable fixed at $s \in S$, and iterates to step $k_f$; $e(k_f)|_s$ decomposes into the sum of a target value and a penalty value:

$$e(k_f)\big|_s = o(k_f)\big|_s + p(k_f)\big|_s \qquad (17)$$

where $o(k_f)|_s$ and $p(k_f)|_s$ are, respectively, the objective function value and the penalty value at step $k_f$ of the inner-layer stochastic optimization; when the parameters are chosen properly and the number of iterations tends to infinity, the stochastic optimization algorithm converges to the global optimal solution with probability 1, so $e(\infty)|_s$ serves as the optimal target value of the sub-model, i.e. the accurate evaluation value $e^*|_s$ of the discrete variable $s$;
defining the historical minimum penalty value $p'(k_f)|_s,\; k_f = 1, 2, \ldots, K_f$, as the minimum penalty value observed over the first $k_f$ iterations:
the limiting historical minimum penalty value of the discrete variable $s$ is $p'(\infty)|_s$; estimating $p'(\infty)|_s$ permits a feasibility judgment on the optimal solution within the solution subspace corresponding to $s$, which is the working process of the feasibility judgment model; the rough calculation of $p'(\infty)|_s$ is analogous to that of $e(\infty)|_s$, except that the evaluation value in the expressions is replaced by the historical minimum penalty value; the modified expression (13) becomes:
the feasibility rough-judgment process for an outer-layer discrete candidate solution $s \in S$ can be summarized as follows: solve the corresponding inner-layer sub-model with the PSO algorithm up to the feasibility-model step count to obtain the historical minimum penalty data $p'(k_f)|_s$, fit the parameters $\theta$ of the negative power function (10) recursively with equations (11) and (12), take the fitted asymptote as the final historical minimum penalty value $p'(\infty)|_s$, and then judge feasibility from it.
4. The scheduling optimization method based on the two-layer nested structure as claimed in claim 1, wherein in step 2.5) the key of the outer-layer optimization is to update the rough keep-good set using the outer-layer optimization algorithm (GA algorithm) and the feasibility rule method, with the following specific steps:
C. calculation of the outer-layer optimization
The outer-layer optimization uses the inner-layer optimization as the evaluation tool for discrete candidate solutions and adopts a basic genetic algorithm (GA) to solve $\min_{s \in S} J(s)$; the cost of any individual $s$ in the population is defined as its rough evaluation value; iterating for a fixed number of steps continuously updates the population and finally yields high-quality discrete candidate solutions; the algorithm proceeds as follows:

where $K_s$ is the maximum number of outer-layer iteration steps, and $p_c, p_m$ are the crossover and mutation probabilities of the GA algorithm.
5. The scheduling optimization method based on the two-layer nested structure as claimed in claim 1, wherein in step 3) the key is to solve the $N_s$ NLP optimization sub-models corresponding to $s(n_s, K_s)$ $(n_s = 1, \ldots, N_s)$, with the following specific steps:
D. precise evaluation solving algorithm
In the accurate evaluation stage the particle swarm optimization (PSO) algorithm solves each sub-model; the cost of any individual $f$ in the population is defined as $J(f, \bar{s})|_{\bar{s}=s}$; the algorithm proceeds as follows:

where the first parameter is the maximum number of iteration steps of the accurate evaluation stage, and $w, \varphi_p, \varphi_g$ are the velocity-update weighting coefficients of the PSO algorithm;
the algorithm accurately solves the NLP sub-models of the candidates in the current rough keep-good set, yielding the continuous-variable solutions $f^*|_s$ and the corresponding optimal evaluation values $e^*|_s$.
CN201410601666.2A 2014-10-30 2014-10-30 Dispatching optimization method based on two-layer nest structure Pending CN104460594A (en)


Publications (1)

Publication Number Publication Date
CN104460594A 2015-03-25



Similar Documents

Publication Publication Date Title
CN104460594A (en) Dispatching optimization method based on two-layer nest structure
CN101551884B (en) A fast CVR electric load forecast method for large samples
WO2015166637A1 (en) Maintenance period determination device, deterioration estimation system, deterioration estimation method, and recording medium
CN106056127A (en) GPR (gaussian process regression) online soft measurement method with model updating
CN112184391A (en) Recommendation model training method, medium, electronic device and recommendation model
JP2004086896A (en) Method and system for constructing adaptive prediction model
CN116526450A (en) Error compensation-based two-stage short-term power load combination prediction method
CN106600001B (en) Glass furnace temperature forecasting method based on Gaussian mixture relational learning machine
JP6451735B2 (en) Energy amount estimation device, energy amount estimation method, and energy amount estimation program
JP6477703B2 (en) CM planning support system and sales forecast support system
WO2015145979A1 (en) Price estimation device, price estimation method, and recording medium
Gadakh et al. Selection of cutting parameters in side milling operation using graph theory and matrix approach
CN114004065A (en) Transformer substation engineering multi-objective optimization method based on intelligent algorithm and environmental constraints
Parwita et al. Optimization of COCOMO II coefficients using Cuckoo optimization algorithm to improve the accuracy of effort estimation
CN114564787B (en) Bayesian optimization method, device and storage medium for target related airfoil design
CN110807508A (en) Bus peak load prediction method considering complex meteorological influence
CN117829322A (en) Associated data prediction method based on periodic time sequence and multiple dimensions
CN112288187A (en) Big data-based electricity sales amount prediction method
CN112801356A (en) Power load prediction method based on MA-LSSVM
CN107909202A (en) Integrated prediction method for oil well produced fluid volume based on time series
Naik et al. Project cost and duration optimization using soft computing techniques
CN116681157A (en) Power load multi-step interval prediction method based on prediction interval neural network
Batkovsky et al. Simulation of strategy development production in defense-industrial complex
CN109376957A (en) Prediction method for thermal power plant's load
Alajlan et al. Optimization of COCOMO-II Model for Effort and Development Time Estimation using Genetic Algorithms

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150325