CN104460594A - Dispatching optimization method based on two-layer nest structure - Google Patents


Info

Publication number
CN104460594A
CN104460594A (application CN201410601666.2A)
Authority
CN
China
Prior art keywords
optimization
algorithm
evaluation
rough
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410601666.2A
Other languages
Chinese (zh)
Inventor
江永亨
付骁鑫
王京春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201410601666.2A
Publication of CN104460594A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 - Programme-control systems
    • G05B19/02 - Programme-control systems electric
    • G05B19/418 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G05B19/41865 - Total factory control characterised by job scheduling, process planning, material flow
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to a scheduling optimization method based on a two-layer nested structure and belongs to the technical field of industrial automation. For the mixed integer non-linear programming (MINLP) problems that arise in the production scheduling of most manufacturing enterprises, a nested optimization algorithm structure is adopted: a global search is performed by continuously updating multiple solutions, without using gradient information of the solution space, and a feasibility rough judgment model is additionally constructed so that the feasibility of a solution can be judged roughly and quickly before the solution is evaluated. This ensures that a true satisfactory solution of the scheduling model is obtained with high probability, while the amount of computation is small, the optimization process is simple, and the range of application is wide.

Description

A Scheduling Optimization Method Based on a Two-Layer Nested Structure

Technical Field

The invention relates to a scheduling optimization method based on a two-layer nested structure, which is suitable for solving the mixed integer non-linear programming (MINLP) problems encountered in production scheduling, and belongs to the technical field of industrial production automation.

Background

With the growing production scale of the manufacturing industry, increasingly complex production processes and increasingly fierce market competition, production scheduling has become an important tool for improving the management level of an enterprise and obtaining greater economic benefits. The general production scheduling problem concerns a decomposable production process: subject to the constraint conditions, how should the raw material resources, processing control variables, processing times and processing order occupied by each sub-process be arranged so as to maximize the production benefit (a comprehensive evaluation of production cost and product quality)? Scheduling optimization problems are similar to general optimization problems, but they also have distinctive characteristics, such as large problem scale, complex descriptions of the production process, and constraints and objective functions that are difficult to handle. The mathematical model of a scheduling optimization problem is mainly stated in the language of mathematical programming: discrete variables represent discrete decision states such as processing order and choice of production plan, continuous variables represent continuous operating conditions, and algebraic equalities or inequalities describe the objective function and the constraints. The problem is thus abstracted into an MINLP model. This description is intuitive and easy to understand, and it also makes it easier to measure the complexity of the model.

Two-level optimization methods are a special class of algorithms for solving MINLP problems: they exploit the type structure of the optimization variables and solve the problem with a two-level structure. Two-level optimization algorithms can be divided into "interval approximation" algorithms and "nested optimization" algorithms. The former obtains lower and upper bounds on the original MINLP problem by iteratively solving a series of MILP master problems and NLP subproblems until the gap between the two bounds is smaller than a preset range, hence the name "interval approximation". The latter simplifies the model into an NLP model for each fixed candidate value of the discrete variables, so that the NLP optimization can be treated as a sub-optimization problem in which the discrete variables have already been fixed; the optimal value of this subproblem serves as the evaluation of the discrete-variable solution and as the basis for searching over the discrete variables. In this way the original MINLP problem is transformed into a nested optimization problem with an outer MIP model and an inner NLP model, hence the name "nested optimization". How to improve solution efficiency is the main problem faced by "nested optimization" algorithms, because every discrete-variable candidate solution in the outer combinatorial optimization corresponds to an inner NLP optimization problem, and solving all of these NLP models to optimality consumes a large amount of time. Most current "nested optimization" algorithms adopt a stochastic-exact two-level structure, because when the inner NLP model is a convex optimization problem, exact search algorithms have a clear efficiency advantage over stochastic search algorithms.

Summary of the Invention

The purpose of the present invention is to propose a scheduling optimization method based on a two-layer nested structure. For the MINLP problems existing in the production scheduling of most manufacturing enterprises, the method can ensure that a true satisfactory solution of the scheduling model is obtained with high probability, while the amount of computation is small, the optimization process is simple, and the range of application is wide.

The scheduling optimization method with a nested structure proposed by the invention comprises the following steps:

Step 1) Initialize parameters:

Set the parameters of the outer GA algorithm and the inner PSO algorithm, including the population sizes $N_s$ and $N_f$, the PSO velocity-update weighting coefficients $\omega$, $\phi_p$, $\phi_g$, the GA crossover probability $p_c$ and mutation probability $p_m$, and the maximum numbers of iteration steps for the outer layer and for the rough and precise inner-layer stages.
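For concreteness, these parameters can be collected in a single configuration object. The following sketch is purely illustrative: all names and numeric defaults are assumptions made for the example, not values prescribed by the invention.

```python
from dataclasses import dataclass

@dataclass
class NestedSchedulerConfig:
    # Outer GA population size (N_s) and inner PSO swarm size (N_f)
    n_outer: int = 30
    n_inner: int = 40
    # PSO velocity-update weighting coefficients (omega, phi_p, phi_g)
    omega: float = 0.7
    phi_p: float = 1.5
    phi_g: float = 1.5
    # GA crossover probability (p_c) and mutation probability (p_m)
    p_crossover: float = 0.8
    p_mutation: float = 0.05
    # Iteration budgets: outer-layer limit (K_s^max), rough inner budget
    # (K_f^FM) and the full inner budget of the precise-evaluation stage
    max_outer_iters: int = 50
    rough_inner_iters: int = 20
    precise_inner_iters: int = 500
    # Size of the rough elite set kept for precise evaluation
    elite_size: int = 5
```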

Step 2) Probabilistic selection stage:

Step 2.1: Set the outer-layer iteration counter $k_s = 0$ and initialize the discrete-variable candidate solutions $s(1,k_s),\dots,s(n_s,k_s),\dots,s(N_s,k_s)$.

Step 2.2: If the outer-layer iteration count $k_s$ has not exceeded $K_s^{\max}$, then for the $N_s$ NLP optimization sub-models corresponding to $s(n_s,k_s)$ ($n_s=1,\dots,N_s$), run the inner-layer optimization algorithm (PSO) for $K_f^{FM}$ iterations and record the historical optimization data $e(k_f)|_{s(n_s,k_s)}$ and $p'(k_f)|_{s(n_s,k_s)}$ ($k_f=1,\dots,K_f^{FM}$).

Step 2.3: For each $s(n_s,k_s)$ ($n_s=1,\dots,N_s$), use its historical minimum penalty value and the feasibility rough judgment model to estimate $p'(\infty)|_{s(n_s,k_s)}$, and thereby determine the feasibility of $s(n_s,k_s)$.

Step 2.4: If $s(n_s,k_s)$ is infeasible, keep the estimate as its rough penalty value. If $s(n_s,k_s)$ is feasible, continue to iterate the inner-layer optimization algorithm (PSO) on its sub-model for the rough-evaluation budget, record the historical optimization data, estimate $e(\infty)|_{s(n_s,k_s)}$ with the rough evaluation model, and keep this estimate as the rough evaluation value.

Step 2.5: Based on the rough evaluation values and rough penalty values obtained, update the rough elite set using the outer-layer optimization algorithm (GA) and the feasibility rule method.

In this step, the feasibility rough judgment model of the discrete-variable candidate solutions is used to quickly obtain the rough penalty value and the feasibility estimate of the corresponding discrete-variable combination $s$, and thereby to judge whether the evaluation of $s$ should continue; if $s$ is judged feasible, a rough evaluation is further performed to obtain its rough evaluation value. Based on these evaluation results, the outer-layer stochastic optimization algorithm solves the combinatorial optimization problem (6) of minimizing the rough evaluation value over the feasible discrete candidates.

Through the outer-layer stochastic optimization algorithm, the top estimated-optimal and feasible discrete-variable candidate solutions (7) are obtained and stored in the rough elite set. The elite-preserving update operation of the outer-layer stochastic optimization algorithm is carried out under rough-evaluation conditions by means of the feasibility rule method, as sketched below.
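The patent does not spell out the feasibility rule in this section, so the sketch below assumes one common form of it: feasible candidates always rank ahead of infeasible ones, feasible candidates are ordered by their rough evaluation value, and infeasible candidates are ordered by their rough penalty value. The dictionary keys are illustrative names.

```python
def rank_key(cand):
    """Ranking key implementing a common form of the feasibility rule.

    cand is a dict with keys 'feasible' (bool, from the rough feasibility
    judgment), 'value' (rough evaluation value) and 'penalty' (rough
    penalty value).
    """
    if cand["feasible"]:
        return (0, cand["value"])    # feasible: smaller rough value is better
    return (1, cand["penalty"])      # infeasible: smaller rough penalty is better


def update_elite_set(elite, candidates, elite_size):
    """Merge newly evaluated candidates into the rough elite set."""
    pool = elite + candidates
    pool.sort(key=rank_key)
    return pool[:elite_size]
```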

Step 2.6: Set $k_s = k_s + 1$. If the outer-layer iteration count $k_s$ is less than $K_s^{\max}$, return to Step 2.2; otherwise go to Step 3).

Step 3) Precise evaluation stage:

Solve the $N_s$ NLP optimization sub-models corresponding to $s(n_s,K_s)$ ($n_s=1,\dots,N_s$): iterate the inner-layer optimization algorithm (PSO) up to the precise-evaluation iteration limit to obtain the precise evaluation values $e^*|_{s(n_s,K_s)}$ and the corresponding optimal continuous-variable solutions $f^*|_{s(n_s,K_s)}$ ($n_s=1,\dots,N_s$).

Step 4) Determine the final satisfactory solution:

Compare the precise evaluation values $e^*|_{s(n_s,K_s)}$ of the NLP optimization sub-models and, according to the following feasibility rule, take the best group among them as the satisfactory solution of the scheduling model:

$$f^g = f^*\big|_{s^g} \qquad (9)$$

The final satisfactory solution is $s^g = s(G, K_s^{\max})$, $f^g = f^*|_{s(G,K_s^{\max})}$, where $G = \arg\min_{n_s\in\{1,\dots,N_s\}} e^*|_{s(n_s,K_s)}$.
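As a small illustration of this selection step, the sketch below picks the candidate with the smallest precise evaluation value from the precisely evaluated elite set; the dictionary keys are illustrative names, not notation from the patent.

```python
def select_final_solution(elite_precise):
    """Pick the satisfactory solution (s^g, f^g) from the precisely evaluated set.

    elite_precise: list of dicts with keys 's' (discrete solution),
    'f' (optimal continuous solution from the full-budget inner run) and
    'e' (precise evaluation value e*|_s).
    """
    best = min(elite_precise, key=lambda cand: cand["e"])
    return best["s"], best["f"], best["e"]
```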

3. In the scheduling optimization method based on a two-layer nested structure according to claim 1, the key step of the inner-layer optimization in step 2.2) is to screen and evaluate solutions using the feasibility rough judgment model and the rough evaluation model. In essence, this predicts the convergence results of the penalty value and the evaluation value: the numerical information obtained from a finite number of iterations (the inner-layer optimization history) is used to estimate the precise convergence value that would be obtained after infinitely many iterations. Concretely, a negative-power-function curve is fitted to the iterative descent of the penalty value (or evaluation value) to obtain the final convergence result. The specific steps are as follows:

A.1 Acquisition of the inner-layer optimization history

The acquisition of the inner-layer optimization history is the iterative computation process of the inner-layer optimization algorithm. The invention uses the particle swarm optimization algorithm (PSO) to solve the sub-model; the cost of any individual $f$ in the swarm is defined as $J(f,\bar s)|_{\bar s = s}$. The computation procedure of the algorithm is as follows:

where $K_f$ is the maximum number of inner-layer iterations used to acquire the history data, and $w$, $\phi_p$, $\phi_g$ are the velocity-update weighting coefficients of the PSO algorithm;

A.2 Solution of the rough evaluation model

Since a negative power function describes and predicts the trend of $e(k_f)|_s$ well, the optimization history $e(k_f)|_s$, $k_f = 1,2,\dots,K_f$, is used to fit a negative power function, from which $e(\infty)|_s$ is then estimated:

$$\hat e(\theta, k_f)\big|_s = A\cdot k_f^{-B} + C,\quad k_f = 1,\dots,\infty,\qquad \theta = (A,B,C) \qquad (10)$$

A recursive fitting algorithm is further adopted: each time the inner-layer stochastic optimization algorithm advances one iteration and obtains a new data point $e(k_f)|_s$, the fitting parameters $\theta$ are updated analytically, with minimal computation, through the recursions (11) and (12), until the inner-layer algorithm reaches iteration $K_f$, so that the amount of data satisfies the fitting accuracy:

where the recursion uses a weighting coefficient, and the mean square error between the fitted function values $\hat e(\theta,k_f)|_s$ and the observed values $e(k_f)|_s$ after $k_f$ iterations is defined as:

The initial values of the recursive algorithm are set as follows:

$$\theta_0 = (A_0, B_0, C_0) = (0,\ 0.5,\ e(1)|_s) \qquad (14)$$

$$\Big\{\frac{\partial^2}{\partial \theta^2} J_0(\theta)\Big|_{\theta_0}\Big\}^{-1} = \varepsilon\cdot I \qquad (15)$$

where $\varepsilon$ is a very small constant;

Using the optimization history $e(k_f)|_s$, $k_f = 1,2,\dots,K_f$, the estimate of $e(\infty)|_s$ is obtained as:

$$\hat e(\theta_{K_f}, \infty)\big|_s = C_{K_f} \qquad (16)$$

The rough evaluation process for an outer-layer discrete-variable candidate solution $s\in S$ can therefore be summarized as follows: solve its corresponding inner-layer sub-model with the PSO algorithm up to the rough-evaluation iteration limit to obtain the optimization history $e(k_f)|_s$, then recursively fit the parameters $\theta$ of the negative power function (10) via (11) and (12), and take $\hat e(\theta_{K_f},\infty)|_s$ as the final evaluation value of $s$.

4. In the scheduling optimization method based on a two-layer nested structure according to claim 1, the key step in step 2.3) is to compute the historical minimum penalty value $p'(k_f)|_s$ and to use the feasibility rough judgment model to compute $p'(\infty)|_s$, thereby determining the feasibility of $s(n_s,k_s)$. The specific implementation steps are as follows:

B. Determination of the feasibility rough judgment model:

Let $e(k_f)|_s$ denote the evaluation value obtained when, with the discrete variable $s\in S$ fixed, the inner-layer stochastic optimization algorithm solves the sub-model and iterates to step $k_f$. Decompose $e(k_f)|_s$ into the sum of an objective value and a penalty value:

$$e(k_f)\big|_s = o(k_f)\big|_s + p(k_f)\big|_s \qquad (17)$$

where $o(k_f)|_s$ and $p(k_f)|_s$ denote, respectively, the objective function value and the penalty value of the inner-layer stochastic optimization at iteration $k_f$.

When the parameters are chosen appropriately and the number of iterations tends to infinity, the stochastic optimization algorithm converges to the global optimal solution with probability 1. Therefore $e(\infty)|_s$ is the optimal objective value of the sub-model, i.e. the precise evaluation value $e^*|_s$ of the discrete variable $s$:

Define the historical minimum penalty value $p'(k_f)|_s$, $k_f = 1,2,\dots,K_f$, as:

The limiting historical minimum penalty value of the discrete variable $s$ is $p'(\infty)|_s$. By estimating $p'(\infty)|_s$, the feasibility of the optimal solution within the solution subspace corresponding to $s$ can be determined; this is the working process of the feasibility judgment model. The computation of $p'(\infty)|_s$ is similar to that of $e(\infty)|_s$, except that the evaluation value in the expression is replaced by the historical minimum penalty value, so that equation (13) is modified to:

The feasibility rough judgment process for an outer-layer discrete-variable candidate solution $s\in S$ can therefore be summarized as follows: solve its corresponding inner-layer sub-model with the PSO algorithm up to the feasibility-judgment iteration limit to obtain the historical minimum penalty data $p'(k_f)|_s$, then recursively fit the parameters $\theta$ of the negative power function (10) via (11) and (12), take the resulting estimate as the final historical minimum penalty value $p'(\infty)|_s$ of $s$, and then judge its feasibility.

5. In the scheduling optimization method based on a two-layer nested structure according to claim 1, the key step of the outer-layer optimization in step 2.5) is to update the rough elite set using the outer-layer optimization algorithm (GA) and the feasibility rule method. The specific steps are as follows:

C. Computation of the outer-layer optimization

The outer-layer optimization uses the inner-layer optimization as the evaluation tool for discrete-variable candidate solutions, and uses a basic genetic algorithm (GA) to solve $\min_{s\in S} J(s)$; the cost of any individual $s$ in the population is defined by its rough evaluation value. The population is updated continuously through a fixed number of iterations, and a set of high-quality discrete-variable candidate solutions is finally obtained. The computation procedure of the algorithm is as follows:

where $K_s$ is the maximum number of outer-layer iterations, and $p_c$, $p_m$ are the crossover probability and mutation probability of the GA algorithm.

6. In the scheduling optimization method based on a two-layer nested structure according to claim 1, the key step in step 3) is to solve the $N_s$ NLP optimization sub-models $\min_{f\in F} J(f,\bar s)|_{\bar s = s(n_s,K_s)}$ corresponding to $s(n_s,K_s)$ ($n_s = 1,\dots,N_s$). The specific steps are as follows:

D. Solution algorithm for the precise evaluation

In the precise evaluation stage, the particle swarm optimization algorithm (PSO) is used to solve the sub-models; the cost of any individual $f$ in the swarm is defined as $J(f,\bar s)|_{\bar s = s}$. The computation procedure of the algorithm is as follows:

where the iteration limit is the maximum number of optimization iterations of the precise evaluation stage, and $w$, $\phi_p$, $\phi_g$ are the velocity-update weighting coefficients of the PSO algorithm;

The above algorithm is used to solve precisely the NLP sub-models in the current rough elite set, yielding the corresponding continuous-variable solutions and their optimal evaluation values.

The scheduling optimization method based on a two-layer nested structure proposed by the invention has the following characteristics and advantages:

(1) The method adopts a two-layer nested structure: the outer-layer combinatorial optimization problem is solved with a discrete GA algorithm, and the inner-layer non-convex optimization problem is solved with a continuous PSO algorithm.

(2) The method proceeds in two stages. In the rough evaluation stage, the outer-layer discrete GA uses the evaluation values produced by the inner-layer rough evaluation model to find a number of estimated-optimal candidate solutions in the discrete-variable solution space and stores them in the rough elite set. In the precise evaluation stage, the inner-layer continuous PSO, through sufficient iterations, evaluates every candidate solution in the rough elite set precisely, and the group with the best evaluation value is taken as the final solution of the problem.

(3) The method requires the rough model to evaluate quickly, and its evaluation results are allowed to contain a certain range of noise or deviation. The estimation, by the recursive fitting algorithm, of the optimal objective value of an inner-layer sub-model from the limited iteration data of the inner-layer continuous PSO can therefore be regarded as a rough evaluation of the corresponding outer-layer discrete-variable candidate solution; the more iteration data are used, the more accurate the evaluation but the lower the evaluation efficiency.

(4) Retaining a rough elite set, instead of the single best solution retained by conventional optimization algorithms, raises the probability of obtaining a satisfactory solution. The larger the elite set, the higher this probability, but at the same time the amount of computation in the precise evaluation stage increases.

(5) The feasibility rough judgment model can quickly eliminate infeasible solutions with very high probability. Combined with the rough "evaluation" and "selection" mechanisms based on the idea of ordinal optimization, the proposed two-layer optimization algorithm ensures that a true satisfactory solution of the scheduling model is obtained with high probability.

Brief Description of the Drawings

Figure 1 is a flow chart of the scheduling optimization method based on a two-layer nested structure proposed by the invention.

Detailed Description

The proposed two-layer structural optimization method belongs to the class of "nested optimization" algorithms. The outer layer of such methods solves an MIP problem, for which stochastic optimization methods are well suited: they perform a global search by continuously updating multiple solutions and do not rely on gradient information of the solution space. The update of the solutions depends on the optimal values obtained by the inner layer when solving the NLP problems, which serve as the basis for screening old solutions and generating new ones. In essence, the goal of the screening process is to find the better individuals in the target population; the precise evaluation value of each individual does not matter, provided the screening result is correct. The invention therefore introduces the idea of ordinal optimization and uses rough evaluation, obtaining satisfactory screening results only with high probability. Ordinal optimization is a soft optimization method proposed for stochastic simulation optimization problems whose evaluations are very time-consuming; its goal is to obtain a satisfactory solution with high probability under rough evaluation conditions. The design idea of an ordinal optimization algorithm is to construct a rough evaluation model that evaluates candidate solutions sampled from the solution space imprecisely but quickly and, based on the rough evaluation values and the idea of comparing "order", to select the top-ranked estimated satisfactory solutions as candidate solutions. Since "order" converges at an exponential rate and is robust to deviation and noise, at least one true satisfactory solution exists among the candidates with very high probability.

In the method proposed by the invention, only the optimal objective value of the NLP problem over the continuous variables of the original problem is estimated, instead of computing its global optimal solution, and this estimate serves as the rough evaluation of the corresponding discrete-variable values of the original problem; it is only guaranteed that at least one satisfactory discrete-variable candidate solution is obtained with high probability. After a certain number of iterations, a precise evaluation is then carried out to obtain the final satisfactory solution. In addition, constrained ordinal optimization is introduced as the constraint-handling mechanism. Its core idea is to additionally construct a feasibility rough judgment model: before a solution is evaluated, its feasibility is first judged roughly but quickly, and solutions estimated to be infeasible are not evaluated further. The method can therefore eliminate infeasible solutions in a probabilistic sense, further saving computation.

For ease of exposition, the general form of the MINLP model of the scheduling optimization problem addressed by the invention is stated first:

$$
\begin{aligned}
\min_{x,y}\ & Z = f(x,y) \\
\text{s.t.}\ & g_p(x,y) \le 0,\quad p = 1,\dots,P \\
& h_q(x,y) = 0,\quad q = 1,\dots,Q \\
& x \in X,\ y \in Y
\end{aligned}
\qquad (1)
$$

where $x\in X$ denotes the continuous variables, $y\in Y$ denotes the discrete variables, $g_p(\cdot)$ are the inequality-constraint expressions, $h_q(\cdot)$ are the equality-constraint expressions, and at least one of $f(\cdot)$, $g_p(\cdot)$ and $h_q(\cdot)$ is a nonlinear function. For ease of explanation, the MINLP model is abbreviated as

$$\min_{f\in F,\ s\in S} J(f,s) \qquad (2)$$

where the optimization variables are the discrete quantities $s = [s_1,\dots,s_t,\dots,s_T]$ and the continuous quantities $f = [f_1,\dots,f_p,\dots,f_P]$, and $S$ and $F$ are the solution spaces of the two groups of variables, respectively. For the above model, the two-layer nested optimization structure is

$$\min_{f\in F,\ s\in S} J(f,s) \;\Longleftrightarrow\; \min_{s\in S}\Big(\min_{f\in F} J(f,\bar s)\big|_{\bar s = s}\Big) \qquad (3)$$

With the discrete variables $s$ fixed, the original model degenerates into a relatively simple inner-layer non-convex NLP sub-model in which only the continuous variables $f$ remain as decision variables to be optimized. Solving this sub-model, its optimal objective value is the limit optimal value corresponding to the discrete variables $s$:

$$e^*\big|_s = \min_{f\in F} J(f,\bar s)\big|_{\bar s = s} \qquad (4)$$

Through the inner-layer optimization, every discrete-variable candidate solution in the outer discrete space obtains a limit optimal value. Solving the outer combinatorial optimization problem with a discrete stochastic optimization algorithm on the basis of these limit optimal values then yields the global optimal solution $s^*$:

$$s^* = \arg\min_{s\in S}\ e^*\big|_s \qquad (5)$$

The optimal solution of the corresponding sub-model is then the global optimal value of the continuous variables.
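To make the nesting of equations (3) to (5) concrete, the sketch below evaluates a fixed discrete decision s by approximately minimizing J(f, s) over the continuous variables. Plain random search over box bounds stands in for the inner solver purely for brevity; the patent uses PSO for this role, and all names are illustrative assumptions.

```python
import numpy as np

def inner_evaluate(J, s, bounds, n_samples=2000, rng=None):
    """Approximate the limit optimal value e*|_s = min_f J(f, s) for fixed s.

    bounds: array-like of shape (dim, 2) with lower/upper limits for f.
    Returns the best cost found and the corresponding continuous solution.
    """
    rng = np.random.default_rng() if rng is None else rng
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    best_f, best_e = None, np.inf
    for _ in range(n_samples):
        f = lo + rng.random(lo.shape) * (hi - lo)   # sample a continuous point
        e = J(f, s)
        if e < best_e:
            best_f, best_e = f, e
    return best_e, best_f
```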

However, computing the optimal solution of an inner-layer sub-model is very time-consuming, and every discrete-variable candidate solution corresponds to its own inner-layer sub-model, so solving the two-layer optimization problem directly is not practical. If the inner-layer continuous-variable optimization problem is regarded as the evaluation problem of the outer-layer discrete variables, the scheduling optimization problem can be converted into an inner-layer "evaluation" problem and an outer-layer "selection" problem over the discrete variables, and the idea of ordinal optimization can be used to improve the "selection" efficiency by appropriately lowering the "evaluation" accuracy. Concretely, a rough evaluation model is used, consuming only a small amount of computation to obtain an estimate of the optimal value of the sub-optimization problem, i.e. a rough "evaluation". Candidate solutions are then "selected" by comparing the rough evaluations of different discrete-variable candidates. Furthermore, to handle constraints, a feasibility judgment model is established: before a discrete-variable candidate solution is evaluated, its feasibility is judged roughly but quickly, and candidates estimated to be infeasible are not evaluated further. Infeasible candidate solutions can therefore be eliminated in a probabilistic sense, further saving computation.

The optimization algorithm based on the two-layer nested optimization structure proposed by the invention comprises two main stages: probabilistic selection and precise evaluation. The probabilistic selection stage is the key innovation of the invention. In this stage, an inner and an outer optimization layer are set up: the inner layer performs rough evaluation and screening of each discrete-variable candidate solution, while the outer layer continuously updates the set of discrete-variable candidate solutions. The inner and outer layers of the algorithm are computed in a nested fashion until the number of outer-layer update computations reaches the preset value, at which point the algorithm enters the precise evaluation stage. The precise evaluation stage uses a common continuous optimization method: on the basis of the finite set of determined discrete-variable candidate solutions, the continuous variables are optimized and the NLP subproblem corresponding to each discrete-variable candidate is solved precisely, and the final solution of the scheduling optimization problem is then chosen from the optimal solutions obtained for the individual subproblems.
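The overall flow of the two stages can be sketched as the driver below. It is a structural sketch only: the callables rough_judge, rough_eval, outer_update and precise_eval are placeholders for the feasibility rough judgment model, the rough evaluation model, the GA update and the full-budget inner PSO run, and the configuration fields follow the illustrative NestedSchedulerConfig shown earlier.

```python
def two_stage_nested_schedule(config, init_candidates,
                              rough_judge, rough_eval,
                              outer_update, precise_eval):
    """Skeleton of the probabilistic-selection + precise-evaluation flow.

    rough_judge(s)  -> (feasible, rough_penalty)
    rough_eval(s)   -> rough evaluation value (estimate of e(inf)|_s)
    outer_update(population, ranked) -> next population (GA-style update)
    precise_eval(s) -> (e_star, f_star) from a full-budget inner run
    """
    population = list(init_candidates)
    elite = []

    # Stage 1: probabilistic selection with rough models.
    for _ in range(config.max_outer_iters):
        ranked = []
        for s in population:
            feasible, penalty = rough_judge(s)
            value = rough_eval(s) if feasible else float("inf")
            ranked.append({"s": s, "feasible": feasible,
                           "penalty": penalty, "value": value})
        # Feasibility-rule merge of the rough elite set.
        pool = elite + ranked
        pool.sort(key=lambda c: (0, c["value"]) if c["feasible"]
                  else (1, c["penalty"]))
        elite = pool[:config.elite_size]
        population = outer_update(population, ranked)

    # Stage 2: precise evaluation of the retained candidates.
    results = []
    for cand in elite:
        e_star, f_star = precise_eval(cand["s"])
        results.append({"s": cand["s"], "f": f_star, "e": e_star})
    best = min(results, key=lambda r: r["e"])
    return best["s"], best["f"], best["e"]
```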

The flow chart of the optimization algorithm based on the two-layer nested structure proposed by the invention is shown in Figure 1.

The specific steps are as follows:

Step 1) Initialize parameters.

Set the parameters of the outer GA algorithm and the inner PSO algorithm, including the population sizes $N_s$ and $N_f$, the PSO velocity-update weighting coefficients $\omega$, $\phi_p$, $\phi_g$, the GA crossover probability $p_c$ and mutation probability $p_m$, and the maximum numbers of iteration steps.

Step 2) Probabilistic selection stage.

Step 2.1: Set the outer-layer iteration counter $k_s = 0$ and initialize the discrete-variable candidate solutions $s(1,k_s),\dots,s(n_s,k_s),\dots,s(N_s,k_s)$.

Step 2.2: If the outer-layer iteration count $k_s$ has not exceeded $K_s^{\max}$, then for the $N_s$ NLP optimization sub-models corresponding to $s(n_s,k_s)$ ($n_s=1,\dots,N_s$), run the inner-layer optimization algorithm (PSO) for $K_f^{FM}$ iterations and record the historical optimization data $e(k_f)|_{s(n_s,k_s)}$ and $p'(k_f)|_{s(n_s,k_s)}$ ($k_f=1,\dots,K_f^{FM}$).

Step 2.3: For each $s(n_s,k_s)$ ($n_s=1,\dots,N_s$), use its historical minimum penalty value and the feasibility rough judgment model to estimate $p'(\infty)|_{s(n_s,k_s)}$, and thereby determine the feasibility of $s(n_s,k_s)$.

Step 2.4: If $s(n_s,k_s)$ is infeasible, keep the estimate as its rough penalty value. If $s(n_s,k_s)$ is feasible, continue to iterate the inner-layer optimization algorithm (PSO) on its sub-model for the rough-evaluation budget, record the historical optimization data, estimate $e(\infty)|_{s(n_s,k_s)}$ with the rough evaluation model, and keep this estimate as the rough evaluation value.

Step 2.5: Based on the rough evaluation values and rough penalty values obtained, update the rough elite set using the outer-layer optimization algorithm (GA) and the feasibility rule method.

In this step, the feasibility rough judgment model of the discrete-variable candidate solutions is used to quickly obtain the rough penalty value and the feasibility estimate of the corresponding discrete-variable combination $s$, and thereby to judge whether the evaluation of $s$ should continue. If $s$ is judged feasible, a rough evaluation is further performed to obtain its rough evaluation value. Based on these evaluation results, the outer-layer stochastic optimization algorithm solves the combinatorial optimization problem (6) of minimizing the rough evaluation value over the feasible discrete candidates.

Through the outer-layer stochastic optimization algorithm, the top estimated-optimal and feasible discrete-variable candidate solutions (7) are obtained and stored in the rough elite set. The elite-preserving update operation of the outer-layer stochastic optimization algorithm is carried out under rough-evaluation conditions by means of the feasibility rule method.

Step 2.6: Set $k_s = k_s + 1$. If the outer-layer iteration count $k_s$ is less than $K_s^{\max}$, return to Step 2.2; otherwise go to Step 3).

Step 3) Precise evaluation stage.

Solve the $N_s$ NLP optimization sub-models corresponding to $s(n_s,K_s)$ ($n_s=1,\dots,N_s$): iterate the inner-layer optimization algorithm (PSO) up to the precise-evaluation iteration limit to obtain the precise evaluation values $e^*|_{s(n_s,K_s)}$ and the corresponding optimal continuous-variable solutions $f^*|_{s(n_s,K_s)}$ ($n_s=1,\dots,N_s$).

Step 4) Determine the final satisfactory solution.

Compare the precise evaluation values $e^*|_{s(n_s,K_s)}$ of the NLP optimization sub-models and, according to the following feasibility rule, take the best group among them as the satisfactory solution of the scheduling model:

$$f^g = f^*\big|_{s^g} \qquad (9)$$

The final satisfactory solution is $s^g = s(G, K_s^{\max})$, $f^g = f^*|_{s(G,K_s^{\max})}$, where $G = \arg\min_{n_s\in\{1,\dots,N_s\}} e^*|_{s(n_s,K_s)}$.

In step 2.2), the key step of the inner-layer optimization is to screen and evaluate solutions using the feasibility rough judgment model and the rough evaluation model. In essence, this predicts the convergence results of the penalty value and the evaluation value: the numerical information obtained from a finite number of iterations (the inner-layer optimization history) is used to estimate the precise convergence value that would be obtained after infinitely many iterations. Concretely, a negative-power-function curve is fitted to the iterative descent of the penalty value (or evaluation value) to obtain the final convergence result. The steps are as follows:

A.1 Acquisition of the inner-layer optimization history

The acquisition of the inner-layer optimization history is the iterative computation process of the inner-layer optimization algorithm. The invention uses the particle swarm optimization algorithm (PSO) to solve the sub-model; the cost of any individual $f$ in the swarm is defined as $J(f,\bar s)|_{\bar s = s}$. The computation procedure of the algorithm is as follows:

where $K_f$ is the maximum number of inner-layer iterations used to acquire the history data, and $w$, $\phi_p$, $\phi_g$ are the velocity-update weighting coefficients of the PSO algorithm.
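As a concrete reference for this procedure, the sketch below gives a minimal, self-contained PSO loop for the inner sub-model with s fixed; it also records the per-iteration best cost, which is the history $e(k_f)|_s$ consumed by the rough models. Default values and names are illustrative assumptions, not values prescribed by the patent.

```python
import numpy as np

def pso_inner(J, s, bounds, n_particles=30, n_iters=100,
              omega=0.7, phi_p=1.5, phi_g=1.5, seed=None):
    """Minimal PSO sketch for the inner sub-model min_f J(f, s), s fixed.

    Returns the best continuous solution found, its cost, and the
    per-iteration best-cost history e(k_f)|_s.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = lo.size

    pos = lo + rng.random((n_particles, dim)) * (hi - lo)
    vel = np.zeros((n_particles, dim))
    cost = np.array([J(p, s) for p in pos])

    pbest_pos, pbest_cost = pos.copy(), cost.copy()
    g = int(np.argmin(pbest_cost))
    gbest_pos, gbest_cost = pbest_pos[g].copy(), pbest_cost[g]

    history = []                               # e(k_f)|_s, k_f = 1, ..., n_iters
    for _ in range(n_iters):
        rp = rng.random((n_particles, dim))
        rg = rng.random((n_particles, dim))
        vel = (omega * vel
               + phi_p * rp * (pbest_pos - pos)
               + phi_g * rg * (gbest_pos - pos))
        pos = np.clip(pos + vel, lo, hi)
        cost = np.array([J(p, s) for p in pos])

        improved = cost < pbest_cost           # update personal bests
        pbest_pos[improved] = pos[improved]
        pbest_cost[improved] = cost[improved]
        g = int(np.argmin(pbest_cost))         # update global best
        if pbest_cost[g] < gbest_cost:
            gbest_pos, gbest_cost = pbest_pos[g].copy(), pbest_cost[g]
        history.append(float(gbest_cost))

    return gbest_pos, gbest_cost, history
```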

A.2 Solution of the rough evaluation model

Since a negative power function describes and predicts the trend of $e(k_f)|_s$ well, the invention uses the optimization history $e(k_f)|_s$, $k_f = 1,2,\dots,K_f$, to fit a negative power function, from which $e(\infty)|_s$ is then estimated:

$$\hat e(\theta, k_f)\big|_s = A\cdot k_f^{-B} + C,\quad k_f = 1,\dots,\infty,\qquad \theta = (A,B,C) \qquad (10)$$

A recursive fitting algorithm is further adopted: each time the inner-layer stochastic optimization algorithm advances one iteration and obtains a new data point $e(k_f)|_s$, the fitting parameters $\theta$ are updated analytically, with minimal computation, through the recursions (11) and (12), until the inner-layer algorithm reaches iteration $K_f$, so that the amount of data satisfies the fitting accuracy:

where the recursion uses a weighting coefficient, and the mean square error between the fitted function values $\hat e(\theta,k_f)|_s$ and the observed values $e(k_f)|_s$ after $k_f$ iterations is defined as:

The initial values of the recursive algorithm are set as follows:

$$\theta_0 = (A_0, B_0, C_0) = (0,\ 0.5,\ e(1)|_s) \qquad (14)$$

$$\Big\{\frac{\partial^2}{\partial \theta^2} J_0(\theta)\Big|_{\theta_0}\Big\}^{-1} = \varepsilon\cdot I \qquad (15)$$

where $\varepsilon$ is a very small constant.

Using the optimization history $e(k_f)|_s$, $k_f = 1,2,\dots,K_f$, the estimate of $e(\infty)|_s$ is obtained as:

$$\hat e(\theta_{K_f}, \infty)\big|_s = C_{K_f} \qquad (16)$$

The rough evaluation process for an outer-layer discrete-variable candidate solution $s\in S$ can therefore be summarized as follows: solve its corresponding inner-layer sub-model with the PSO algorithm up to the rough-evaluation iteration limit to obtain the optimization history $e(k_f)|_s$, then recursively fit the parameters $\theta$ of the negative power function (10) via (11) and (12), and take $\hat e(\theta_{K_f},\infty)|_s$ as the final evaluation value of $s$.
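As a stand-in for the analytic recursive update of equations (11) and (12), the sketch below uses a batch nonlinear least-squares fit of the model in equation (10) to the recorded history, with the initial guess of equation (14), and returns the fitted asymptote C as the rough evaluation value. It is a sketch of the idea, not the patent's recursive estimator; scipy.optimize.curve_fit is an assumed tool choice.

```python
import numpy as np
from scipy.optimize import curve_fit

def rough_evaluation(history):
    """Estimate e(inf)|_s from a finite inner-optimization history e(k_f)|_s.

    Fits e_hat(k_f) = A * k_f**(-B) + C (equation (10)) to the history and
    returns C, the predicted convergence value.
    """
    e = np.asarray(history, dtype=float)
    k = np.arange(1, e.size + 1, dtype=float)

    def model(kf, A, B, C):
        return A * kf ** (-B) + C

    p0 = (0.0, 0.5, e[0])                 # theta_0 as in equation (14)
    try:
        params, _ = curve_fit(model, k, e, p0=p0, maxfev=5000)
    except RuntimeError:                  # fit did not converge: fall back
        return float(e[-1])
    return float(params[2])               # C, the estimate of e(inf)|_s
```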

In step 2.3), the key step is to compute the historical minimum penalty value $p'(k_f)|_s$ and to use the feasibility rough judgment model to compute $p'(\infty)|_s$, thereby determining the feasibility of $s(n_s,k_s)$. The specific implementation steps are as follows.

B. Determination of the feasibility rough judgment model

Let $e(k_f)|_s$ denote the evaluation value obtained when, with the discrete variable $s\in S$ fixed, the inner-layer stochastic optimization algorithm solves the sub-model and iterates to step $k_f$. Decompose $e(k_f)|_s$ into the sum of an objective value and a penalty value:

$$e(k_f)\big|_s = o(k_f)\big|_s + p(k_f)\big|_s \qquad (17)$$

where $o(k_f)|_s$ and $p(k_f)|_s$ denote, respectively, the objective function value and the penalty value of the inner-layer stochastic optimization at iteration $k_f$.

When the parameters are chosen appropriately and the number of iterations tends to infinity, the stochastic optimization algorithm converges to the global optimal solution with probability 1. Therefore $e(\infty)|_s$ is the optimal objective value of the sub-model, i.e. the precise evaluation value $e^*|_s$ of the discrete variable $s$.

Define the historical minimum penalty value $p'(k_f)|_s$, $k_f = 1,2,\dots,K_f$, as:

The limiting historical minimum penalty value of the discrete variable $s$ is $p'(\infty)|_s$. By estimating $p'(\infty)|_s$, the feasibility of the optimal solution within the solution subspace corresponding to $s$ can be determined; this is the working process of the feasibility judgment model. The computation of $p'(\infty)|_s$ is similar to that of $e(\infty)|_s$, except that the evaluation value in the expression is replaced by the historical minimum penalty value, so that equation (13) is modified to:

The feasibility rough judgment process for an outer-layer discrete-variable candidate solution $s\in S$ can therefore be summarized as follows: solve its corresponding inner-layer sub-model with the PSO algorithm up to the feasibility-judgment iteration limit to obtain the historical minimum penalty data $p'(k_f)|_s$, then recursively fit the parameters $\theta$ of the negative power function (10) via (11) and (12), take the resulting estimate as the final historical minimum penalty value $p'(\infty)|_s$ of $s$, and then judge its feasibility.
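A minimal sketch of this judgment is given below. It forms the running minimum of the recorded penalty values, extrapolates it to its limit by reusing the rough_evaluation curve-fit sketch above, and declares the candidate feasible when the predicted limit is numerically zero. The tolerance is an assumption for the example, not a value given in the patent.

```python
import numpy as np

def historical_min_penalty(penalties):
    """p'(k_f)|_s: running minimum of the recorded penalty values p(k_f)|_s."""
    return np.minimum.accumulate(np.asarray(penalties, dtype=float))

def rough_feasibility(penalties, tol=1e-6):
    """Rough feasibility judgment for a discrete candidate s.

    Returns (feasible, estimated limit p'(inf)|_s). Reuses rough_evaluation
    from the curve-fit sketch above as the extrapolator.
    """
    p_hist = historical_min_penalty(penalties)
    p_limit = rough_evaluation(p_hist)
    return p_limit <= tol, p_limit
```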

In step 2.5, the key step of the outer-layer optimization is to update the rough elite set using the outer-layer optimization algorithm (GA) and the feasibility rule method. The specific steps are as follows:

C. Computation of the outer-layer optimization

The outer-layer optimization uses the inner-layer optimization as the evaluation tool for discrete-variable candidate solutions, and uses a basic genetic algorithm (GA) to solve $\min_{s\in S} J(s)$; the cost of any individual $s$ in the population is defined by its rough evaluation value. The population is updated continuously through a fixed number of iterations, and a set of high-quality discrete-variable candidate solutions is finally obtained. The computation procedure is as follows:

where $K_s$ is the maximum number of outer-layer iterations, and $p_c$, $p_m$ are the crossover probability and mutation probability of the GA algorithm.
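As a concrete reference, the sketch below is a minimal discrete GA over integer-coded candidates with one-point crossover, uniform mutation and elitism. Candidates are ranked through a user-supplied evaluate(s) callable, which is where the rough feasibility judgment and rough evaluation (for instance the rank_key tuple from the earlier sketch) would plug in. All names and operator choices are assumptions for illustration, not the patent's exact GA.

```python
import numpy as np

def ga_outer(evaluate, init_population, value_ranges, n_iters=50,
             p_crossover=0.8, p_mutation=0.05, seed=None):
    """Minimal GA sketch for the outer combinatorial layer min_s J(s).

    evaluate(s) must return a sortable ranking key (smaller is better).
    value_ranges[i] is the number of admissible values of gene i; genes
    take integer values in [0, value_ranges[i]).
    """
    rng = np.random.default_rng(seed)
    pop = [np.array(s, dtype=int) for s in init_population]
    n, dim = len(pop), pop[0].size

    for _ in range(n_iters):
        keys = [evaluate(s) for s in pop]
        order = sorted(range(n), key=lambda i: keys[i])
        pop = [pop[i] for i in order]                  # best individuals first

        children = []
        while len(children) < n:
            a, b = rng.integers(0, n, size=2)          # rank-biased tournament
            p1 = pop[min(a, b)]
            a, b = rng.integers(0, n, size=2)
            p2 = pop[min(a, b)]
            child = p1.copy()
            if rng.random() < p_crossover:             # one-point crossover
                cut = rng.integers(1, dim) if dim > 1 else 0
                child[cut:] = p2[cut:]
            for i in range(dim):                       # uniform mutation
                if rng.random() < p_mutation:
                    child[i] = rng.integers(0, value_ranges[i])
            children.append(child)
        pop = [pop[0]] + children[: n - 1]             # keep the current best

    keys = [evaluate(s) for s in pop]
    best = min(range(n), key=lambda i: keys[i])
    return pop[best]
```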

In step 3), the key step is to solve the $N_s$ NLP optimization sub-models $\min_{f\in F} J(f,\bar s)|_{\bar s = s(n_s,K_s)}$ corresponding to $s(n_s,K_s)$ ($n_s = 1,\dots,N_s$). The specific steps are as follows:

D. Solution algorithm for the precise evaluation

In the precise evaluation stage, the particle swarm optimization algorithm (PSO) is used to solve the sub-models; the cost of any individual $f$ in the swarm is defined as $J(f,\bar s)|_{\bar s = s}$. The computation procedure of the algorithm is as follows:

where the iteration limit is the maximum number of optimization iterations of the precise evaluation stage, and $w$, $\phi_p$, $\phi_g$ are the velocity-update weighting coefficients of the PSO algorithm.

The above algorithm is used to solve precisely the NLP sub-models in the current rough elite set, yielding the corresponding continuous-variable solutions and their optimal evaluation values.

Claims (5)

1. A scheduling optimization method with a nested structure, characterized by comprising the following steps:
step 1) initializing parameters: setting the parameters of an outer GA algorithm and an inner PSO algorithm, including the population sizes $N_s$ and $N_f$, the PSO velocity-update weighting coefficients $\omega$, $\phi_p$, $\phi_g$, the GA crossover probability $p_c$ and mutation probability $p_m$, and the maximum numbers of iteration steps;
step 2) probabilistic selection stage:
step 2.1 setting the outer-layer iteration counter $k_s = 0$ and initializing the discrete-variable candidate solutions $s(1,k_s),\dots,s(n_s,k_s),\dots,s(N_s,k_s)$;
step 2.2 if the outer-layer iteration count $k_s$ has not exceeded $K_s^{\max}$, iterating the inner-layer optimization algorithm (PSO) on the $N_s$ NLP optimization sub-models corresponding to $s(n_s,k_s)$ ($n_s=1,\dots,N_s$) for $K_f^{FM}$ steps and recording the historical optimization data $e(k_f)|_{s(n_s,k_s)}$ and $p'(k_f)|_{s(n_s,k_s)}$ ($k_f=1,\dots,K_f^{FM}$);
step 2.3 for $s(n_s,k_s)$ ($n_s=1,\dots,N_s$), using its historical minimum penalty value and the feasibility rough judgment model to estimate $p'(\infty)|_{s(n_s,k_s)}$ and thereby determining the feasibility of $s(n_s,k_s)$;
step 2.4 if $s(n_s,k_s)$ is infeasible, keeping the estimate as its rough penalty value; if $s(n_s,k_s)$ is feasible, continuing to iterate the inner-layer optimization algorithm (PSO) on the sub-model, recording the historical optimization data, estimating $e(\infty)|_{s(n_s,k_s)}$ with the rough evaluation model, and keeping the estimate as the rough evaluation value;
step 2.5 updating the rough elite set according to the obtained rough evaluation values and rough penalty values, using the outer-layer optimization algorithm (GA) and the feasibility rule method;
in this step, the feasibility rough judgment model of the discrete-variable candidate solutions is used to quickly obtain the rough penalty value and the feasibility estimate of the corresponding discrete-variable combination $s$, and thereby to judge whether the evaluation of $s$ should continue; if it is judged feasible, a rough evaluation is further performed to obtain the rough evaluation value; based on these evaluation results, the outer-layer stochastic optimization algorithm solves the combinatorial optimization problem (6), obtains the top estimated-optimal and feasible discrete-variable candidate solutions (7) and stores them in the rough elite set; the elite-preserving update operation of the outer-layer stochastic optimization algorithm is performed under rough-evaluation conditions by means of the feasibility rule method;
step 2.6 setting $k_s = k_s + 1$; if $k_s$ is less than $K_s^{\max}$, returning to step 2.2; if $k_s$ exceeds $K_s^{\max}$, entering step 3);
step 3) precise evaluation stage: solving the $N_s$ NLP optimization sub-models corresponding to $s(n_s,K_s)$ ($n_s=1,\dots,N_s$) by iterating the inner-layer optimization algorithm (PSO) to the precise-evaluation iteration limit, and obtaining the precise evaluation values $e^*|_{s(n_s,K_s)}$ and the corresponding optimal continuous-variable solutions $f^*|_{s(n_s,K_s)}$;
step 4) determining the final satisfactory solution: comparing the precise evaluation values of the NLP optimization sub-models and, according to the feasibility rule (8), taking the best group among them as the satisfactory solution of the scheduling model; the final satisfactory solution is $s^g = s(G,K_s^{\max})$, $f^g = f^*|_{s(G,K_s^{\max})}$, where $G = \arg\min_{n_s\in\{1,\dots,N_s\}} e^*|_{s(n_s,K_s)}$.
2. The scheduling optimization method based on the two-layer nested structure as claimed in claim 1, wherein in step 2.2) the key step of the inner-layer optimization is to screen and evaluate solutions using the feasibility rough judgment model and the rough evaluation model, which in essence predicts the convergence results of the penalty value and the evaluation value: the numerical information obtained from a finite number of iterations (the inner-layer optimization history) is used to estimate the precise convergence value that would be obtained after infinitely many iterations, by fitting a negative-power-function curve to the iterative descent of the penalty value (evaluation value); the specific steps are as follows:
A.1 acquisition of the inner-layer optimization history
the acquisition of the inner-layer optimization history is the iterative computation of the inner-layer optimization algorithm; the invention uses the particle swarm optimization algorithm (PSO), the cost of any individual $f$ in the swarm being defined as $J(f,\bar s)|_{\bar s=s}$, where $K_f$ is the maximum number of inner-layer iterations used to acquire the history data and $w$, $\phi_p$, $\phi_g$ are the velocity-update weighting coefficients of the PSO algorithm;
A.2 solution of the rough evaluation model
since a negative power function describes and predicts the trend of $e(k_f)|_s$ well, the optimization history $e(k_f)|_s$, $k_f=1,2,\dots,K_f$, is used to fit the negative power function
$$\hat e(\theta,k_f)\big|_s = A\cdot k_f^{-B}+C,\quad k_f=1,\dots,\infty,\qquad \theta=(A,B,C) \qquad (10)$$
from which $e(\infty)|_s$ is estimated; a recursive fitting algorithm is further adopted: each time the inner-layer stochastic optimization algorithm obtains a new data point $e(k_f)|_s$, the fitting parameters $\theta$ are updated analytically with minimal computation through the recursions (11) and (12), until the inner-layer algorithm reaches iteration $K_f$, so that the amount of data satisfies the fitting accuracy, the recursion using a weighting coefficient and the mean square error between the fitted function values $\hat e(\theta,k_f)|_s$ and the observed values $e(k_f)|_s$ at step $k_f$; the initial values of the recursion are set as
$$\theta_0=(A_0,B_0,C_0)=(0,\ 0.5,\ e(1)|_s) \qquad (14)$$
$$\Big\{\frac{\partial^2}{\partial\theta^2}J_0(\theta)\Big|_{\theta_0}\Big\}^{-1}=\varepsilon\cdot I \qquad (15)$$
where $\varepsilon$ is a very small constant; using the optimization history $e(k_f)|_s$, $k_f=1,2,\dots,K_f$, the estimate of $e(\infty)|_s$ is obtained as
$$\hat e(\theta_{K_f},\infty)\big|_s = C_{K_f} \qquad (16)$$
the rough evaluation process for an outer-layer discrete-variable candidate solution $s\in S$ to be evaluated can thus be summarized as: solving its corresponding inner-layer sub-model with the PSO algorithm up to the rough-evaluation iteration limit to obtain the optimization history $e(k_f)|_s$, recursively fitting the parameters $\theta$ of the negative power function (10) via (11) and (12), and taking $\hat e(\theta_{K_f},\infty)|_s$ as the final evaluation value of $s$.
3. The scheduling optimization method based on the two-layer nested structure as claimed in claim 1, wherein in step 2.3) the key step is to compute the historical minimum penalty value p'(k_f)|s, estimate its limit with the feasibility rough judgment model, and then determine the feasibility of s(n_s, k_s). The specific implementation steps are as follows:
B. Determination of the feasibility rough judgment model
Let e(k_f)|s denote the evaluation value obtained when the inner-layer stochastic optimization algorithm solves the sub-model with the discrete variable fixed at s ∈ S and iterates to step k_f. This evaluation value is decomposed into the sum of an objective value and a penalty value:
e(k_f)|s = o(k_f)|s + p(k_f)|s    (17)
wherein o(k_f)|s and p(k_f)|s respectively denote the objective function value and the penalty value when the inner-layer stochastic optimization has iterated to step k_f.
When the parameters are properly chosen and the number of iteration steps tends to infinity, the stochastic optimization algorithm converges to the global optimal solution with probability 1. Therefore e(∞)|s is the optimal objective value of the sub-model, i.e. the accurate evaluation value e*|s of the discrete variable s.
The historical minimum penalty value p'(k_f)|s, k_f = 1, 2, ..., K_f, is defined as the smallest penalty value observed up to step k_f, i.e. p'(k_f)|s = min{ p(j)|s : j = 1, ..., k_f }.
The limiting historical minimum penalty value of the discrete variable s is p'(∞)|s. By estimating p'(∞)|s, the feasibility of the optimal solution in the solution subspace corresponding to the discrete variable s can be judged; this is the working process of the feasibility rough judgment model. The calculation of p'(∞)|s is analogous to that of e(∞)|s, except that the evaluation values in the expressions are replaced by the historical minimum penalty values (yielding the modified expression (13)).
the feasibility rough judgment process aiming at the discrete variable candidate solution S belonging to S to be evaluated at the outer layer can be summarized as that the corresponding inner layer sub-model is solved by utilizing the PSO algorithmTo the firstStep by stepTo obtain historical minimum penalty value dataFitting parameters theta of the negative power function (10) by recursion of an equation (11) and an equation (12), and performing the fittingFinal historical minimum penalty value p' (∞) as ssThen, the feasibility of the test piece is determined.
4. The scheduling optimization method based on the two-layer nested structure as claimed in claim 1, wherein in step 2.5) the key step of the outer-layer optimization is to update the rough optimization-preserving set by using the outer-layer optimization algorithm (GA algorithm) and the feasibility rule method. The specific steps are as follows:
C. Calculation of the outer-layer optimization
The outer-layer optimization uses the inner-layer optimization as the evaluation tool for the discrete-variable candidate solutions. A basic genetic algorithm (GA) is adopted to solve min_{s∈S} J(s), with the cost of any individual s in the population defined as its rough evaluation value. The population is updated continuously through a fixed number of iterations, finally yielding high-quality discrete-variable candidate solutions. The algorithm calculation process is as follows:
wherein K_s is the maximum number of outer-layer optimization iteration steps, and p_c, p_m are the crossover probability and mutation probability of the GA algorithm.
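A minimal sketch of such a basic GA loop is shown below, assuming binary-coded individuals and a cost callback J(s) that returns the rough evaluation value (for example, produced by the fitting sketches above); tournament selection, one-point crossover, bit-flip mutation and all parameter defaults are illustrative choices rather than the patent's exact operators.

```python
import numpy as np

def ga_outer(J, n_bits, pop_size=20, K_s=30, p_c=0.8, p_m=0.05, seed=0):
    """Basic GA minimizing J(s) over binary-coded discrete candidates s."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    cost = np.array([J(ind) for ind in pop])
    for _ in range(K_s):
        # tournament selection of parents
        a = rng.integers(pop_size, size=pop_size)
        b = rng.integers(pop_size, size=pop_size)
        parents = np.where((cost[a] < cost[b])[:, None], pop[a], pop[b])
        # one-point crossover with probability p_c
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_c:
                cut = rng.integers(1, n_bits)
                children[i, cut:] = parents[i + 1, cut:]
                children[i + 1, cut:] = parents[i, cut:]
        # bit-flip mutation with probability p_m per bit
        flip = rng.random(children.shape) < p_m
        children = np.where(flip, 1 - children, children)
        child_cost = np.array([J(ind) for ind in children])
        # elitism: carry the best individual found so far into the new population
        best, worst = np.argmin(cost), np.argmax(child_cost)
        children[worst], child_cost[worst] = pop[best], cost[best]
        pop, cost = children, child_cost
    best = np.argmin(cost)
    return pop[best], cost[best]

# Example: minimize the number of 1-bits as a stand-in for a rough cost J(s).
# s_best, j_best = ga_outer(lambda s: float(s.sum()), n_bits=16)
```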
5. The scheduling optimization method based on the two-layer nested structure as claimed in claim 1, wherein in step 3) the key step is to solve the N_s NLP optimization sub-models corresponding to s(n_s, K_s) (n_s = 1, ..., N_s). The specific steps are as follows:
D. Precise evaluation solving algorithm
In the accurate evaluation stage, the particle swarm optimization (PSO) algorithm is adopted to solve the NLP sub-model, with the cost of any individual f in the population defined as its evaluation value. The algorithm calculation process is as follows:
wherein the first parameter is the maximum number of iteration steps of the precise evaluation stage, and w, φ_p, φ_g are the velocity-update weighting coefficients of the PSO algorithm;
This algorithm is used to accurately solve the NLP sub-models corresponding to the current rough optimization-preserving set, thereby obtaining the optimal continuous-variable solution and the corresponding optimum evaluation value e*|s for each retained candidate.
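Purely as an illustration, the precise evaluation stage can be sketched as re-running the inner-layer solver with a much larger iteration budget over every retained candidate; the snippet reuses the pso_history sketch from earlier, and precise_evaluation, make_objective, make_penalty and K_precise are hypothetical names, not the patent's API.

```python
# Assumes the pso_history sketch shown earlier in this document; make_objective
# and make_penalty are hypothetical builders returning the continuous objective
# and penalty for a fixed discrete candidate s.
def precise_evaluation(retained, make_objective, make_penalty, lb, ub,
                       K_precise=500):
    """Re-solve every candidate of the rough optimization-preserving set with a
    much larger PSO iteration budget and return (s, f*, e*|s, final penalty)."""
    results = []
    for s in retained:
        f_best, e_hist, p_hist = pso_history(
            make_objective(s), make_penalty(s), lb, ub, K_f=K_precise)
        results.append((s, f_best, e_hist[-1], p_hist[-1]))
    return results
```

The resulting (s, f*, e*|s, penalty) tuples are exactly what the feasibility-rule selection sketched under step 4) consumes.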
CN201410601666.2A 2014-10-30 2014-10-30 Dispatching optimization method based on two-layer nest structure Pending CN104460594A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410601666.2A CN104460594A (en) 2014-10-30 2014-10-30 Dispatching optimization method based on two-layer nest structure

Publications (1)

Publication Number Publication Date
CN104460594A true CN104460594A (en) 2015-03-25

Family

ID=52906819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410601666.2A Pending CN104460594A (en) 2014-10-30 2014-10-30 Dispatching optimization method based on two-layer nest structure

Country Status (1)

Country Link
CN (1) CN104460594A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018503884A (en) * 2015-08-11 2018-02-08 サビック グローバル テクノロジーズ ビー.ブイ. Multi-ply laminated composites with low unit area weight
CN105629937A (en) * 2016-02-19 2016-06-01 浙江大学 Variable load dispatching method for combined production air separation unit
CN105629937B (en) * 2016-02-19 2018-05-25 浙江大学 A kind of method for the scheduling of coproduction air separation unit varying duty
CN107133694A (en) * 2017-04-27 2017-09-05 浙江大学 Tower type solar thermo-power station mirror optimization method dispatching cycle
CN107133694B (en) * 2017-04-27 2020-07-21 浙江大学 Optimization method of mirror field scheduling period for tower solar thermal power station
CN107864071A (en) * 2017-11-02 2018-03-30 江苏物联网研究发展中心 A kind of dynamic measuring method, apparatus and system towards active safety
CN109118011A (en) * 2018-08-28 2019-01-01 摩佰尔(天津)大数据科技有限公司 The intelligent dispatching method and system of pier storage yard
CN117669930A (en) * 2023-11-21 2024-03-08 清华大学 Collaborative scheduling methods, devices, equipment, storage media and program products
CN118365013A (en) * 2024-02-20 2024-07-19 西安电子科技大学广州研究院 A multi-objective intelligent optimization method for construction organization design based on comprehensive multi-factors

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150325