CN108564213A - Parallel reservoir group flood control optimal scheduling method based on GPU acceleration

Info

Publication number
CN108564213A
CN108564213A (application CN201810314891.6A)
Authority
CN
China
Prior art keywords
reservoir
particle
gpu
parallel
iteration
Prior art date
Legal status
Granted
Application number
CN201810314891.6A
Other languages
Chinese (zh)
Other versions
CN108564213B (en)
Inventor
曾志强
雷晓辉
杨明祥
蒋云钟
王浩
权锦
刘珂
田雨
张梦婕
Current Assignee
China Institute of Water Resources and Hydropower Research
Original Assignee
China Institute of Water Resources and Hydropower Research
Priority date
Filing date
Publication date
Application filed by China Institute of Water Resources and Hydropower Research
Priority to CN201810314891.6A
Publication of CN108564213A
Application granted
Publication of CN108564213B
Legal status: Expired - Fee Related (anticipated expiration)

Classifications

    • G06Q10/04 — Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem" (G PHYSICS; G06 COMPUTING; G06Q ICT specially adapted for administrative, commercial, financial, managerial or supervisory purposes)
    • G06Q10/0631 — Resource planning, allocation, distributing or scheduling for enterprises or organisations (G06Q10/06 Resources, workflows, human or project management; G06Q10/063 Operations research, analysis or management)
    • G06Q50/06 — Energy or water supply (G06Q50/00 ICT specially adapted for implementation of business processes of specific business sectors)
    • Y02A10/40 — Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping (Y02A Technologies for adaptation to climate change at coastal zones; at river basins)


Abstract

The invention discloses a GPU-acceleration-based flood control optimal scheduling method for a parallel reservoir group, and relates to the technical field of reservoir scheduling. The method first formulates the optimal scheduling problem of the parallel reservoir group by determining the optimization variables, the number of optimization variables, the constraint conditions and the flood control scheduling objective function; it then optimizes the reservoir group scheduling process with a particle swarm optimization (PSO) algorithm, using CUDA (Compute Unified Device Architecture) as the programming framework and a GPU (graphics processing unit) to accelerate the solution of the PSO algorithm. The optimal scheduling method provided by the embodiments of the invention therefore not only addresses the 'curse of dimensionality' and scheduling-efficiency problems, but also avoids the overhead of communication and complex management among a large number of processes, and greatly reduces hardware cost.

Description

A GPU-accelerated flood control optimal scheduling method for a parallel reservoir group
Technical field
The present invention relates to the technical field of reservoir operation, and more particularly to a GPU-accelerated flood control optimal scheduling method for a parallel reservoir group.
Background technology
Reservoir operation is an important non-engineering measure for flood control and disaster mitigation, and efficient reservoir operation has a positive effect on flood control, navigation, power generation, water supply and other uses. With the growing number of reservoirs and the pressing demand for flood control, joint operation of reservoir groups has become a trend. Joint operation of a reservoir group is not only the basis for sustainable utilization of water resources, but also a necessary condition for realizing hydrological compensation, storage compensation, electric power compensation and comprehensive utilization benefits within a basin. However, the sharp increase in the number of reservoirs also brings the 'curse of dimensionality' to reservoir group scheduling: the number of decision variables in the model grows sharply and the model becomes much harder to solve. At the same time, efficient flood control optimal scheduling of a reservoir group can gain precious time for flood defence.
At present, the 'curse of dimensionality' and scheduling-efficiency problems are mainly addressed from two directions: algorithm improvement and hardware upgrades. Algorithm improvement relies mainly on parallelizing the algorithm, while hardware upgrades rely mainly on enhancing CPU performance. However, for a complex scheduling model, meeting the requirements of parallel computation may require launching thousands of threads, and launching so many threads on a traditional CPU-based parallel platform leads to heavy inter-process communication and complex management overhead; moreover, the hardware cost required by traditional parallel computing is too high.
Summary of the invention
The purpose of the present invention is to provide a GPU-accelerated flood control optimal scheduling method for a parallel reservoir group, so as to solve the aforementioned problems existing in the prior art.
To achieve the above purpose, the technical solution adopted by the present invention is as follows:
A GPU-accelerated flood control optimal scheduling method for a parallel reservoir group, the method comprising:
S1, determining the parallel reservoir group;
S2, obtaining the constraint conditions, the optimization variables, the number of optimization variables and the flood control scheduling objective function of each reservoir in the parallel reservoir group;
S3, solving the flood control scheduling objective function with the PSO algorithm according to the constraint conditions, the optimization variables and the number of optimization variables, to obtain the maximum downstream flood flow of the parallel reservoir group, wherein the particle swarm optimization algorithm is accelerated on a GPU.
Preferably, in S2, the flood control scheduling objective function is formula (1),
where the objective value is the maximum flow at the common downstream flood control point of the parallel reservoir group; Qn(t) is the average flow reaching the downstream flood control point, obtained by routing the release of the n-th reservoir at the end of period t through the river channel; N is the number of reservoirs in the parallel reservoir group; and T is the total number of scheduling periods.
Preferably, in S2, the optimization variables are the end-of-period water levels Zn(t) (t = 1, 2, ..., T; n = 1, 2, ..., N) of each reservoir of the parallel reservoir group over the scheduling horizon, and the number of optimization variables is the product of the number of reservoirs and the total number of scheduling periods, i.e. the product NT of the number of reservoirs N and the total number of scheduling periods T.
Preferably, in S2, the constraint conditions include:
I. The reservoir water level-storage capacity relation shown in the following formula:
V = Sn(z) (3)
where z is the water level of the n-th reservoir and V is the corresponding storage of the n-th reservoir;
II. The reservoir water balance constraint (formula (4)),
where Vn(t-1) and Vn(t) denote the storage of the n-th reservoir at the beginning and at the end of period t respectively, the inflow term denotes the inflow of the n-th reservoir at the end of period t, the release term denotes the release of the n-th reservoir at the end of period t, and Δt is the length of the calculation period;
III. The reservoir storage constraint,
where the bounds denote the minimum allowable storage and the maximum allowable storage of the n-th reservoir at the end of period t;
IV. The reservoir discharge capacity constraint,
where the upper bound is the maximum possible release of the n-th reservoir when its storage is Vn;
V. The boundary conditions,
where Vn(0) denotes the storage corresponding to the initial water level of the n-th reservoir at the start of the scheduling horizon and Vn(T) denotes the storage corresponding to the water level of the n-th reservoir at the end of the scheduling horizon; both are fixed values that need to be given according to the actual conditions of the reservoir;
VI. The release variation constraint,
where Δqn denotes the maximum allowed variation of the release of the n-th reservoir, and the other term is the release of the n-th reservoir at the beginning of period t;
VII. The downstream river safe discharge constraint,
where Qn(t) denotes the flow at the flood control point at the end of period t obtained by routing the release of the n-th reservoir through the river channel, and qsafe denotes the safe discharge at the flood control point;
VIII. The channel flood routing constraint (formula (10)),
where the coefficients are the routing parameters of the k-th Muskingum reach.
Preferably, S3 includes the following steps:
S31, giving the PSO algorithm the following mathematical description in combination with the flood control optimal scheduling problem of the parallel reservoir group:
taking the combination of all optimization variables as the decision variable sequence, and setting a one-to-one correspondence between the elements of the decision variable sequence and the elements of the position vector of a particle in the PSO algorithm;
setting a one-to-one correspondence between the rate of change of the end-of-period water level of each reservoir in the parallel reservoir group and the elements of the velocity vector of a particle in the PSO algorithm;
setting the number of optimization variables in correspondence with the dimension of the search space of the PSO algorithm;
calculating the fitness value of the PSO algorithm by formula (1);
wherein: D is the dimension of the search space of the PSO algorithm, numerically equal to the number of optimization variables NT; m is the population size, i.e. the total number of particles; K is the maximum number of iterations of the algorithm; Umax is the maximum allowed velocity of a particle; the position vector and the velocity vector of particle i (i = 1, 2, ..., m) in the j-th iteration (j = 1, 2, ..., K) are defined accordingly; Pbest(i, j) is the best position experienced by particle i up to the j-th iteration, called the individual best; Gbest(j) is the best position among all particles up to the j-th iteration, called the global best; and f(i, j) is the fitness value of particle i in the j-th iteration calculated by formula (1);
S32, when j = 0,
performing the following initialization of the PSO algorithm on the CPU side: the population size, the number of iterations and the maximum allowed particle velocity, and randomly assigning the initial position and velocity of each particle i within the range satisfying the constraint conditions;
initializing the individual best and the global best of each particle on the CPU side;
creating on the GPU side a number of threads equal to the total number of particles, allocating an independent computation space for each particle, and treating the computation of the particle on each thread as one computation task, i.e. setting a one-to-one correspondence between particles and threads;
transferring the particle information on the CPU side to the GPU memory, obtaining multiple tasks that need to be computed in parallel;
launching, through a function call on the CPU side, the tasks to be computed in parallel on the GPU;
S33, increasing j by 1, and judging whether j+1 is less than K; if so, executing S34, otherwise returning to S31;
S34, performing the iterative computation of the particles in parallel on the GPU side, to obtain the maximum downstream flood flow of the parallel reservoir group in the current iteration;
S35, judging whether the maximum downstream flood flow of the parallel reservoir group obtained in the current iteration is less than the safe flow Qsafe of the downstream flood control section of the parallel reservoir group; if so, the iteration ends and the optimal global best is obtained, i.e. the optimized reservoir water level hydrographs, and the method proceeds to S36; otherwise, returning to S33 until the optimal global best is obtained;
S36, transferring the information on the GPU side back to the CPU side, and releasing the variable space allocated on the CPU side and the GPU side, thereby completing the solution of the flood control scheduling objective function by the GPU-accelerated PSO method.
Preferably, S34 includes the following steps:
S341, updating the velocity and position of particle i in the j-th iteration according to formulas (10), (11) and (12),
where ω(j) is the inertia coefficient of the j-th iteration; C1 and C2 denote learning factors, which are constants and can be taken as 2; R1 and R2 are random numbers in [0, 1]; and rand() is a random function generating random numbers in [0, 1];
S342, substituting the position vector of any particle i in the j-th iteration into formula (1), and at the same time checking the constraint conditions for particle i; if all constraint conditions are satisfied, calculating the fitness value f(i, j) of particle i in the j-th iteration; if any one of the constraint conditions is not satisfied, setting the fitness value f(i, j) of particle i in the j-th iteration to 0;
S343, updating the individual best of the particle as follows:
comparing the fitness value f(i, j) of particle i obtained in the j-th iteration in S342 with the individual best Pbest(i, j-1) of the particle in the (j-1)-th iteration; if f(i, j) > Pbest(i, j-1), the individual best Pbest(i, j) of particle i in the j-th iteration is numerically equal to f(i, j); if f(i, j) ≤ Pbest(i, j-1), the individual best Pbest(i, j) of particle i in the j-th iteration is numerically equal to Pbest(i, j-1);
S344, updating the global best as follows:
comparing the individual best Pbest(i, j) of particle i in the j-th iteration with the global best Gbest(j-1) of the (j-1)-th iteration; if Pbest(i, j) > Gbest(j-1), the global best Gbest(j) of the j-th iteration is numerically equal to Pbest(i, j); if Pbest(i, j) ≤ Gbest(j-1), the global best Gbest(j) of the j-th iteration is numerically equal to Gbest(j-1).
Preferably, in S36, transferring the information on the GPU side back to the CPU side is specifically realized by calling the cudaMemcpy() function on the CPU side.
Preferably, in S36, releasing the variable space allocated on the CPU side and the GPU side is specifically realized by calling the free() function and the cudaFree() function on the CPU side to release the variable space allocated on the CPU side and the GPU side.
The beneficial effects of the invention are as follows: the GPU-accelerated flood control optimal scheduling method for a parallel reservoir group provided by the embodiments of the invention first constructs the optimal scheduling problem of the parallel reservoir group and determines the optimization variables, the number of optimization variables, the constraint conditions and the flood control scheduling objective function; it then optimizes the reservoir group scheduling process with the PSO algorithm, using CUDA as the programming framework and a GPU to accelerate the solution of the PSO algorithm. Therefore, the optimal scheduling method provided by the embodiments of the invention not only solves the 'curse of dimensionality' and scheduling-efficiency problems, but also avoids the communication and complex management overhead among a large number of processes, and greatly reduces hardware cost.
Brief description of the drawings
Fig. 1 is a flow chart of the GPU-accelerated flood control optimal scheduling method for a parallel reservoir group provided by an embodiment of the present invention.
Detailed description of the embodiments
In order to make the purpose, technical solution and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawing. It should be understood that the specific embodiments described herein are only used to explain the present invention and are not intended to limit it.
With the development of GPU technology, GPUs now possess very strong parallel computing capability; their floating-point performance can reach ten times or more that of a contemporary CPU. In addition, compared with computer clusters, multi-core CPUs or other dedicated parallel devices, GPU-based parallel acceleration of an algorithm has the significant advantage of low hardware cost. In particular, NVIDIA introduced the Compute Unified Device Architecture (CUDA) in 2007, which gives GPUs good programmability. Moreover, the particle swarm optimization (PSO) algorithm is a classic fine-grained parallel computation model that is well suited to implementation on a GPU. The present invention therefore provides an optimal scheduling method that optimizes the reservoir group scheduling process with the PSO algorithm, uses CUDA as the programming framework and accelerates the PSO algorithm on a GPU; it not only solves the 'curse of dimensionality' and scheduling-efficiency problems, but also avoids heavy inter-process communication and complex management overhead, and greatly reduces hardware cost.
As shown in Fig. 1, an embodiment of the present invention provides a GPU-accelerated flood control optimal scheduling method for a parallel reservoir group, the method comprising:
S1, determining the parallel reservoir group;
S2, obtaining the constraint conditions, the optimization variables, the number of optimization variables and the flood control scheduling objective function of each reservoir in the parallel reservoir group;
S3, solving the flood control scheduling objective function with the PSO algorithm according to the constraint conditions, the optimization variables and the number of optimization variables, to obtain the maximum downstream flood flow of the parallel reservoir group, wherein the particle swarm optimization algorithm is accelerated on a GPU.
The above method can be implemented according to the following steps.
Step 1: construct the flood control optimal scheduling problem of the parallel reservoir group, which specifically includes S1 and S2.
S1 specifically consists in forming the parallel reservoir group from the reservoirs that require flood control optimal scheduling; once the parallel reservoir group is determined, the constraint conditions, optimization variables, number of optimization variables and flood control scheduling objective function of each reservoir in the parallel reservoir group are obtained.
In S2, the flood control scheduling objective function is formula (1),
where the objective value is the maximum flow at the common downstream flood control point of the parallel reservoir group; Qn(t) is the average flow reaching the downstream flood control point, obtained by routing the release of the n-th reservoir at the end of period t through the river channel; N is the number of reservoirs in the parallel reservoir group; and T is the total number of scheduling periods.
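Formula (1) itself is not reproduced in this text. Based only on the definitions above (minimizing the peak combined flow that reaches the common downstream flood control point), a plausible min-max form, given here purely as an assumption rather than as the patent's exact notation, is:

% assumed reconstruction of objective (1): minimise the largest combined
% routed flow reaching the common downstream flood control point
f \;=\; \min \; \max_{1 \le t \le T} \; \sum_{n=1}^{N} Q_n(t)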
The optimization variables are the end-of-period water levels Zn(t) (t = 1, 2, ..., T; n = 1, 2, ..., N) of each reservoir of the parallel reservoir group over the scheduling horizon, and the number of optimization variables is the product of the number of reservoirs and the total number of scheduling periods, i.e. the product NT of the number of reservoirs N and the total number of scheduling periods T.
The constraint conditions include:
I. The reservoir water level-storage capacity relation shown in the following formula:
V = Sn(z) (3)
where z is the water level of the n-th reservoir and V is the corresponding storage of the n-th reservoir;
II. The reservoir water balance constraint (formula (4)),
where Vn(t-1) and Vn(t) denote the storage of the n-th reservoir at the beginning and at the end of period t respectively, the inflow term denotes the inflow of the n-th reservoir at the end of period t, the release term denotes the release of the n-th reservoir at the end of period t, and Δt is the length of the calculation period;
III. The reservoir storage constraint,
where the bounds denote the minimum allowable storage and the maximum allowable storage of the n-th reservoir at the end of period t;
IV. The reservoir discharge capacity constraint,
where the upper bound is the maximum possible release of the n-th reservoir when its storage is Vn;
V. The boundary conditions,
where Vn(0) denotes the storage corresponding to the initial water level of the n-th reservoir at the start of the scheduling horizon and Vn(T) denotes the storage corresponding to the water level of the n-th reservoir at the end of the scheduling horizon; both are fixed values that need to be given according to the actual conditions of the reservoir;
VI. The release variation constraint,
where Δqn denotes the maximum allowed variation of the release of the n-th reservoir, and the other term is the release of the n-th reservoir at the beginning of period t;
VII. The downstream river safe discharge constraint,
where Qn(t) denotes the flow at the flood control point at the end of period t obtained by routing the release of the n-th reservoir through the river channel, and qsafe denotes the safe discharge at the flood control point;
VIII. The channel flood routing constraint (formula (10)),
where the coefficients are the routing parameters of the k-th Muskingum reach.
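The constraint formulas themselves are likewise not reproduced in this text. Purely as an illustration, the water balance constraint II and the Muskingum routing constraint VIII are commonly written in the following standard forms; the symbols below are assumptions consistent with the surrounding description, not the patent's own notation:

% standard reservoir water balance over period t for reservoir n,
% with inflow I_n(t), release q_n(t) and period length \Delta t
V_n(t) = V_n(t-1) + \bigl( I_n(t) - q_n(t) \bigr)\, \Delta t

% standard Muskingum routing for reach k, relating upstream inflow I_k and
% downstream outflow O_k through the routing parameters C_{0,k}, C_{1,k}, C_{2,k}
O_k(t) = C_{0,k}\, I_k(t) + C_{1,k}\, I_k(t-1) + C_{2,k}\, O_k(t-1),
\qquad C_{0,k} + C_{1,k} + C_{2,k} = 1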
In the embodiment of the present invention, the objective function value can be obtained from the optimization variables and the constraint conditions. Assuming that the optimization variables are known, i.e. the end-of-period water levels Zn(t) (t = 1, 2, ..., T; n = 1, 2, ..., N) of each reservoir of the parallel reservoir group over the scheduling horizon are known, the objective function value is calculated as follows:
1. Given the water level of reservoir n at the end of period t, the corresponding storage Vn(t) is calculated with formula (3); in the same way, the storage Vn(t) (t = 1, 2, ..., T; n = 1, 2, ..., N) of any reservoir at the end of any period in the scheduling horizon can be calculated.
2. Given the storage of reservoir n at the end of period t, the corresponding release at the end of period t is calculated with formula (4); in the same way, the release of any reservoir at the end of any period in the scheduling horizon can be calculated.
3. Given the release of reservoir n at period t, the average flow Qn(t) reaching the downstream flood control point is calculated with formula (10) by routing the release of the n-th reservoir at the end of period t through the river channel; in the same way, the routed flow Qn(t) (t = 1, 2, ..., T; n = 1, 2, ..., N) of any reservoir at the end of any period in the scheduling horizon can be calculated.
4. Substituting Qn(t) (t = 1, 2, ..., T; n = 1, 2, ..., N) into formula (1) gives f(t).
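The four steps above can be read as a single evaluation chain per candidate schedule. The following CUDA device function is a minimal sketch of that chain; the ReservoirData struct, the linear level-storage relation and the omission of Muskingum routing in step 3 (the routed flow is taken equal to the release) are simplifying assumptions for illustration only, not the patent's implementation.

struct ReservoirData {
    const double* inflow;     // inflow at the end of each period, length T+1
    double storagePerMetre;   // crude linear level->storage slope (placeholder)
};

// placeholder for the level-capacity curve V = Sn(z) of formula (3)
__device__ double levelToStorage(const ReservoirData& r, double z)
{
    return r.storagePerMetre * z;
}

// levels[n*(T+1) + t] holds the water level of reservoir n at the end of
// period t (t = 0 is the fixed initial level); the layout is an assumption.
__device__ double peakDownstreamFlow(const double* levels, int N, int T,
                                     const ReservoirData* res, double dt)
{
    double peak = 0.0;                      // running maximum combined flow
    for (int t = 1; t <= T; ++t) {
        double combined = 0.0;              // flow reaching the control point
        for (int n = 0; n < N; ++n) {
            // step 1: end-of-period water levels -> storages
            double Vprev = levelToStorage(res[n], levels[n * (T + 1) + t - 1]);
            double Vcur  = levelToStorage(res[n], levels[n * (T + 1) + t]);
            // step 2: release from the water balance, q = I - (Vcur - Vprev)/dt
            double q = res[n].inflow[t] - (Vcur - Vprev) / dt;
            // step 3 (simplified): a full implementation would route q through
            // the Muskingum reach of constraint VIII before summing
            combined += q;
        }
        // step 4: the objective tracks the peak combined flow over the horizon
        if (combined > peak) peak = combined;
    }
    return peak;
}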
Step 2: solve the flood control optimal scheduling problem of the parallel reservoir group, which includes S3. The present invention uses the particle swarm optimization algorithm (Particle Swarm Optimization, hereinafter the PSO algorithm) to solve the optimal scheduling problem, and accelerates the PSO algorithm on a GPU, so as to efficiently obtain the flood control optimal scheduling result of the parallel reservoir group and thereby provide decision support for reservoir group scheduling.
The PSO algorithm is an evolutionary computation method inspired by the foraging behaviour of bird flocks. Each candidate solution of the optimization problem is a 'bird' in the search space: the bird flies through the search space at a certain velocity, and this velocity is dynamically adjusted according to the flying experience of the bird itself and of its companions. The bird is abstracted as a particle without mass or volume, whose spatial position is represented by a position vector and whose flying speed is represented by a velocity vector. Each particle has a fitness value determined by the objective function being optimized, and it knows the best position it has found so far as well as its current position; this is the particle's own flying experience. In addition, each particle knows the best position found so far by all particles in the whole swarm; this is the flying experience of the particle's companions.
In the embodiment of the present invention, S3 may include the following steps.
S31, giving the PSO algorithm the following mathematical description in combination with the flood control optimal scheduling problem of the parallel reservoir group:
taking the combination of all optimization variables as the decision variable sequence, and setting a one-to-one correspondence between the elements of the decision variable sequence and the elements of the position vector of a particle in the PSO algorithm;
setting a one-to-one correspondence between the rate of change of the end-of-period water level of each reservoir in the parallel reservoir group and the elements of the velocity vector of a particle in the PSO algorithm;
setting the number of optimization variables in correspondence with the dimension of the search space of the PSO algorithm;
calculating the fitness value of the PSO algorithm by formula (1);
wherein: D is the dimension of the search space of the PSO algorithm, numerically equal to the number of optimization variables NT; m is the population size, i.e. the total number of particles; K is the maximum number of iterations of the algorithm; Umax is the maximum allowed velocity of a particle; the position vector and the velocity vector of particle i (i = 1, 2, ..., m) in the j-th iteration (j = 1, 2, ..., K) are defined accordingly; Pbest(i, j) is the best position experienced by particle i up to the j-th iteration, called the individual best; Gbest(j) is the best position among all particles up to the j-th iteration, called the global best; and f(i, j) is the fitness value of particle i in the j-th iteration calculated by formula (1);
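One possible way to lay these quantities out in GPU memory for m particles of dimension D = NT is a structure of flat arrays, so that thread i can index its own slice. This layout is an assumption for illustration; the patent does not prescribe a particular data structure.

// hypothetical structure-of-arrays layout for the swarm on the GPU
struct SwarmDeviceData {
    double* position;   // m * D values: position[i*D + d] for particle i
    double* velocity;   // m * D values: velocity[i*D + d]
    double* pbestPos;   // m * D values: best position found by particle i
    double* pbestVal;   // m values: individual best Pbest(i, j)
    double* gbestPos;   // D values: global best position
    double* gbestVal;   // 1 value: global best Gbest(j)
    int m;              // population size (total number of particles)
    int D;              // dimension = number of reservoirs * number of periods
};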
S32, when j = 0,
performing the following initialization of the PSO algorithm on the CPU side: the population size, the number of iterations and the maximum allowed particle velocity, and randomly assigning the initial position and velocity of each particle i within the range satisfying the constraint conditions;
initializing the individual best and the global best of each particle on the CPU side;
creating on the GPU side a number of threads equal to the total number of particles, allocating an independent computation space for each particle, and treating the computation of the particle on each thread as one computation task, i.e. setting a one-to-one correspondence between particles and threads; for example, if the total number of particles is set to M, M threads need to be created on the GPU side, and the variable space is allocated for each particle by calling the cudaMalloc() function;
transferring the particle information on the CPU side to the GPU memory, obtaining multiple tasks that need to be computed in parallel;
launching, through a function call on the CPU side, the tasks to be computed in parallel on the GPU;
S33, increasing j by 1, and judging whether j+1 is less than K; if so, executing S34, otherwise returning to S31;
S34, performing the iterative computation of the particles in parallel on the GPU side, to obtain the maximum downstream flood flow of the parallel reservoir group in the current iteration;
S35, judging whether the maximum downstream flood flow of the parallel reservoir group obtained in the current iteration is less than the safe flow Qsafe of the downstream flood control section of the parallel reservoir group; if so, the iteration ends and the optimal global best is obtained, i.e. the optimized reservoir water level hydrographs, and the method proceeds to S36; otherwise, returning to S33 until the optimal global best is obtained;
S36, transferring the information on the GPU side back to the CPU side, and releasing the variable space allocated on the CPU side and the GPU side, thereby completing the solution of the flood control scheduling objective function by the GPU-accelerated PSO method.
It can be seen that, through the above steps S31 to S36, the solution of the flood control scheduling objective function by the GPU-accelerated PSO method is completed and the optimal global best is obtained, i.e. the optimized reservoir water level hydrographs, which constitute the flood control optimal scheduling result of the parallel reservoir group.
In practical operation, in S31, in combination with the actual flood control optimal scheduling problem of the parallel reservoir group, the optimization variables of the problem are the water level hydrographs formed by the end-of-period water levels of all reservoirs over the scheduling horizon, and the combination of all optimization variables is taken as the decision variable sequence. In S32, if the total number of particles is set to M, M threads need to be created on the GPU side and the variable space is allocated for each particle by calling the cudaMalloc() function; the particle information prepared on the CPU side is transferred to the GPU with the cudaMemcpy() function; and the parallel computation tasks on the GPU are started from the CPU side with a kernel function, as sketched below.
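A minimal host-side sketch of this allocation, transfer and launch sequence is given below; the kernel name psoIterationKernel, the array sizes and the launch configuration are assumptions for illustration only (a fuller kernel signature is sketched after step S344), not the patent's actual implementation.

#include <cuda_runtime.h>

// forward declaration of a hypothetical per-particle iteration kernel
__global__ void psoIterationKernel(double* pos, double* vel,
                                   double* pbestVal, double* pbestPos,
                                   int m, int D);

void setupAndLaunch(const double* hostPos, const double* hostVel, int m, int D)
{
    double *dPos, *dVel, *dPbestVal, *dPbestPos;
    size_t vecBytes = (size_t)m * D * sizeof(double);

    // S32: allocate an independent variable space for each particle on the GPU
    cudaMalloc(&dPos,      vecBytes);
    cudaMalloc(&dVel,      vecBytes);
    cudaMalloc(&dPbestPos, vecBytes);
    cudaMalloc(&dPbestVal, (size_t)m * sizeof(double));

    // transfer the particle information prepared on the CPU side to GPU memory
    cudaMemcpy(dPos, hostPos, vecBytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dVel, hostVel, vecBytes, cudaMemcpyHostToDevice);

    // launch one thread per particle (particles and threads correspond one-to-one)
    int threadsPerBlock = 128;
    int blocks = (m + threadsPerBlock - 1) / threadsPerBlock;
    psoIterationKernel<<<blocks, threadsPerBlock>>>(dPos, dVel,
                                                    dPbestVal, dPbestPos, m, D);
    cudaDeviceSynchronize();
}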
In a preferred embodiment of the invention, S34 may include the following steps.
S341, updating the velocity and position of particle i in the j-th iteration according to formulas (10), (11) and (12),
where ω(j) is the inertia coefficient of the j-th iteration; C1 and C2 denote learning factors, which are constants and can be taken as 2; R1 and R2 are random numbers in [0, 1]; and rand() is a random function generating random numbers in [0, 1];
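Formulas (10), (11) and (12) are not reproduced in this text. The quantities named in the description (inertia coefficient, learning factors, random numbers, maximum velocity) match the standard PSO update, which is given below only as an assumed form with assumed symbols U (velocity) and X (position):

% assumed standard PSO update for particle i, dimension d, iteration j
U_{i,d}(j) = \omega(j)\, U_{i,d}(j-1)
           + C_1 R_1 \bigl( P_{best,i,d} - X_{i,d}(j-1) \bigr)
           + C_2 R_2 \bigl( G_{best,d} - X_{i,d}(j-1) \bigr)

X_{i,d}(j) = X_{i,d}(j-1) + U_{i,d}(j), \qquad |U_{i,d}(j)| \le U_{max}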
S342, substituting the position vector of any particle i in the j-th iteration into formula (1), and at the same time checking the constraint conditions for particle i; if all constraint conditions are satisfied, calculating the fitness value f(i, j) of particle i in the j-th iteration; if any one of the constraint conditions is not satisfied, setting the fitness value f(i, j) of particle i in the j-th iteration to 0;
S343, updating the individual best of the particle as follows:
comparing the fitness value f(i, j) of particle i obtained in the j-th iteration in S342 with the individual best Pbest(i, j-1) of the particle in the (j-1)-th iteration; if f(i, j) > Pbest(i, j-1), the individual best Pbest(i, j) of particle i in the j-th iteration is numerically equal to f(i, j); if f(i, j) ≤ Pbest(i, j-1), the individual best Pbest(i, j) of particle i in the j-th iteration is numerically equal to Pbest(i, j-1);
S344, updating the global best as follows:
comparing the individual best Pbest(i, j) of particle i in the j-th iteration with the global best Gbest(j-1) of the (j-1)-th iteration; if Pbest(i, j) > Gbest(j-1), the global best Gbest(j) of the j-th iteration is numerically equal to Pbest(i, j); if Pbest(i, j) ≤ Gbest(j-1), the global best Gbest(j) of the j-th iteration is numerically equal to Gbest(j-1).
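The global-best update of S344 compares the individual bests of all particles, which on the GPU amounts to a reduction. The patent does not specify how this reduction is performed; one simple possibility, shown only as a sketch, is to let Thrust find the best particle and copy its record into the global best:

#include <thrust/extrema.h>
#include <thrust/execution_policy.h>
#include <cuda_runtime.h>

// S344 as a device-side reduction: find the particle with the largest
// individual best and, if it beats the current global best, adopt it.
void updateGlobalBest(const double* dPbestVal, const double* dPbestPos,
                      double* dGbestVal, double* dGbestPos, int m, int D)
{
    const double* best = thrust::max_element(thrust::device,
                                             dPbestVal, dPbestVal + m);
    int i = (int)(best - dPbestVal);         // index of the best particle

    double candidate, current;
    cudaMemcpy(&candidate, best,      sizeof(double), cudaMemcpyDeviceToHost);
    cudaMemcpy(&current,   dGbestVal, sizeof(double), cudaMemcpyDeviceToHost);

    if (candidate > current) {
        cudaMemcpy(dGbestVal, best, sizeof(double), cudaMemcpyDeviceToDevice);
        cudaMemcpy(dGbestPos, dPbestPos + (size_t)i * D,
                   (size_t)D * sizeof(double), cudaMemcpyDeviceToDevice);
    }
}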
It can be seen that, in the above method, the velocity and position of any particle i in the j-th iteration are updated first; the updated velocity and position are then used to calculate the fitness value; the individual best of the particle is updated according to the fitness value; and finally the global best is updated from the individual bests.
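A sketch of one parallel iteration step (S341 to S343) as a CUDA kernel is shown below: each thread owns one particle, updates its velocity and position with the standard PSO rule, evaluates the fitness, and updates its individual best. The helper functions satisfiesConstraints() and fitnessOf(), the cuRAND state array and the learning-factor values are assumptions for illustration, not the patent's implementation.

#include <curand_kernel.h>

// hypothetical device helpers, assumed to be defined elsewhere
__device__ bool   satisfiesConstraints(const double* x, int D);
__device__ double fitnessOf(const double* x, int D);

__global__ void psoIterationKernel(double* pos, double* vel,
                                   double* pbestVal, double* pbestPos,
                                   const double* gbestPos, curandState* rng,
                                   int m, int D, double w, double umax)
{
    const double C1 = 2.0, C2 = 2.0;         // learning factors, taken as 2
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= m) return;

    // S341: update velocity and position of particle i in every dimension
    for (int d = 0; d < D; ++d) {
        double r1 = curand_uniform_double(&rng[i]);
        double r2 = curand_uniform_double(&rng[i]);
        double v = w * vel[i * D + d]
                 + C1 * r1 * (pbestPos[i * D + d] - pos[i * D + d])
                 + C2 * r2 * (gbestPos[d] - pos[i * D + d]);
        if (v >  umax) v =  umax;            // clamp to the allowed maximum
        if (v < -umax) v = -umax;
        vel[i * D + d] = v;
        pos[i * D + d] += v;
    }

    // S342: fitness of the new position; set to 0 if any constraint fails
    double f = satisfiesConstraints(&pos[i * D], D) ? fitnessOf(&pos[i * D], D)
                                                    : 0.0;

    // S343: individual-best update (a larger fitness value is kept, as in the text)
    if (f > pbestVal[i]) {
        pbestVal[i] = f;
        for (int d = 0; d < D; ++d) pbestPos[i * D + d] = pos[i * D + d];
    }
}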
In the embodiment of the present invention, in S36, transferring the information on the GPU side back to the CPU side may specifically be realized by calling the cudaMemcpy() function on the CPU side.
Releasing the variable space allocated on the CPU side and the GPU side may specifically be realized by calling the free() function and the cudaFree() function on the CPU side to release the variable space allocated on the CPU side and the GPU side respectively.
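A corresponding minimal sketch of S36, under the same assumed variable names as in the earlier sketches, is:

#include <cuda_runtime.h>
#include <stdlib.h>

// S36: copy the optimised result back to the host and release all memory
void finishAndCleanup(double* hostGbestPos, double* dGbestPos,
                      double* dPos, double* dVel,
                      double* dPbestVal, double* dPbestPos,
                      double* hostBuffers, int D)
{
    // transfer the information on the GPU side back to the CPU side
    cudaMemcpy(hostGbestPos, dGbestPos, (size_t)D * sizeof(double),
               cudaMemcpyDeviceToHost);

    // release the variable space allocated on the GPU side ...
    cudaFree(dPos);
    cudaFree(dVel);
    cudaFree(dPbestVal);
    cudaFree(dPbestPos);
    cudaFree(dGbestPos);

    // ... and on the CPU side
    free(hostBuffers);
}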
By adopting the above technical solution disclosed by the invention, the following beneficial effects are obtained. The GPU-accelerated flood control optimal scheduling method for a parallel reservoir group provided by the embodiments of the invention first constructs the optimal scheduling problem of the parallel reservoir group and determines the optimization variables, the number of optimization variables, the constraint conditions and the flood control scheduling objective function; it then optimizes the reservoir group scheduling process with the PSO algorithm, using CUDA as the programming framework and a GPU to accelerate the solution of the PSO algorithm. Therefore, the optimal scheduling method provided by the embodiments of the invention not only solves the 'curse of dimensionality' and scheduling-efficiency problems, but also avoids the communication and complex management overhead among a large number of processes, and greatly reduces hardware cost.
The above is only a preferred embodiment of the present invention. It should be noted that a person of ordinary skill in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications shall also be regarded as falling within the protection scope of the present invention.

Claims (8)

1. A GPU-accelerated flood control optimal scheduling method for a parallel reservoir group, characterized in that the method comprises:
S1, determining the parallel reservoir group;
S2, obtaining the constraint conditions, the optimization variables, the number of optimization variables and the flood control scheduling objective function of each reservoir in the parallel reservoir group;
S3, solving the flood control scheduling objective function with the PSO algorithm according to the constraint conditions, the optimization variables and the number of optimization variables, to obtain the maximum downstream flood flow of the parallel reservoir group, wherein the particle swarm optimization algorithm is accelerated on a GPU.
2. The GPU-accelerated flood control optimal scheduling method for a parallel reservoir group according to claim 1, characterized in that, in S2, the flood control scheduling objective function is formula (1),
where the objective value is the maximum flow at the common downstream flood control point of the parallel reservoir group; Qn(t) is the average flow reaching the downstream flood control point, obtained by routing the release of the n-th reservoir at the end of period t through the river channel; N is the number of reservoirs in the parallel reservoir group; and T is the total number of scheduling periods.
3. The GPU-accelerated flood control optimal scheduling method for a parallel reservoir group according to claim 2, characterized in that, in S2, the optimization variables are the end-of-period water levels Zn(t) (t = 1, 2, ..., T; n = 1, 2, ..., N) of each reservoir of the parallel reservoir group over the scheduling horizon, and the number of optimization variables is the product of the number of reservoirs and the total number of scheduling periods, i.e. the product NT of the number of reservoirs N and the total number of scheduling periods T.
4. The GPU-accelerated flood control optimal scheduling method for a parallel reservoir group according to claim 3, characterized in that, in S2, the constraint conditions include:
I. The reservoir water level-storage capacity relation shown in the following formula:
V = Sn(z) (3)
where z is the water level of the n-th reservoir and V is the corresponding storage of the n-th reservoir;
II. The reservoir water balance constraint (formula (4)),
where Vn(t-1) denotes the storage of the n-th reservoir at the beginning of period t, the other storage term denotes the storage of the n-th reservoir at the end of period t, the inflow term denotes the inflow of the n-th reservoir at the end of period t, the release term denotes the release of the n-th reservoir at the end of period t, and Δt is the length of the calculation period;
III. The reservoir storage constraint,
where the bounds denote the minimum allowable storage and the maximum allowable storage of the n-th reservoir at the end of period t;
IV. The reservoir discharge capacity constraint,
where the upper bound is the maximum possible release of the n-th reservoir when its storage is Vn;
V. The boundary conditions,
where Vn(0) denotes the storage corresponding to the initial water level of the n-th reservoir at the start of the scheduling horizon and Vn(T) denotes the storage corresponding to the water level of the n-th reservoir at the end of the scheduling horizon; both are fixed values that need to be given according to the actual conditions of the reservoir;
VI. The release variation constraint,
where Δqn denotes the maximum allowed variation of the release of the n-th reservoir, and the other term is the release of the n-th reservoir at the beginning of period t;
VII. The downstream river safe discharge constraint,
where Qn(t) denotes the flow at the flood control point at the end of period t obtained by routing the release of the n-th reservoir through the river channel, and qsafe denotes the safe discharge at the flood control point;
VIII. The channel flood routing constraint (formula (10)),
where the coefficients are the routing parameters of the k-th Muskingum reach.
5. The GPU-accelerated flood control optimal scheduling method for a parallel reservoir group according to claim 4, characterized in that S3 includes the following steps:
S31, giving the PSO algorithm the following mathematical description in combination with the flood control optimal scheduling problem of the parallel reservoir group:
taking the combination of all optimization variables as the decision variable sequence, and setting a one-to-one correspondence between the elements of the decision variable sequence and the elements of the position vector of a particle in the PSO algorithm;
setting a one-to-one correspondence between the rate of change of the end-of-period water level of each reservoir in the parallel reservoir group and the elements of the velocity vector of a particle in the PSO algorithm;
setting the number of optimization variables in correspondence with the dimension of the search space of the PSO algorithm;
calculating the fitness value of the PSO algorithm by formula (1);
wherein: D is the dimension of the search space of the PSO algorithm, numerically equal to the number of optimization variables NT; m is the population size, i.e. the total number of particles; K is the maximum number of iterations of the algorithm; Umax is the maximum allowed velocity of a particle; the position vector and the velocity vector of particle i (i = 1, 2, ..., m) in the j-th iteration (j = 1, 2, ..., K) are defined accordingly; Pbest(i, j) is the best position experienced by particle i up to the j-th iteration, called the individual best; Gbest(j) is the best position among all particles up to the j-th iteration, called the global best; and f(i, j) is the fitness value of particle i in the j-th iteration calculated by formula (1);
S32, when j = 0,
performing the following initialization of the PSO algorithm on the CPU side: the population size, the number of iterations and the maximum allowed particle velocity, and randomly assigning the initial position and velocity of each particle i within the range satisfying the constraint conditions;
initializing the individual best and the global best of each particle on the CPU side;
creating on the GPU side a number of threads equal to the total number of particles, allocating an independent computation space for each particle, and treating the computation of the particle on each thread as one computation task, i.e. setting a one-to-one correspondence between particles and threads;
transferring the particle information on the CPU side to the GPU memory, obtaining multiple tasks that need to be computed in parallel;
launching, through a function call on the CPU side, the tasks to be computed in parallel on the GPU;
S33, increasing j by 1, and judging whether j+1 is less than K; if so, executing S34, otherwise returning to S31;
S34, performing the iterative computation of the particles in parallel on the GPU side, to obtain the maximum downstream flood flow of the parallel reservoir group in the current iteration;
S35, judging whether the maximum downstream flood flow of the parallel reservoir group obtained in the current iteration is less than the safe flow Qsafe of the downstream flood control section of the parallel reservoir group; if so, the iteration ends and the optimal global best is obtained, i.e. the optimized reservoir water level hydrographs, and the method proceeds to S36; otherwise, returning to S33 until the optimal global best is obtained;
S36, transferring the information on the GPU side back to the CPU side, and releasing the variable space allocated on the CPU side and the GPU side, thereby completing the solution of the flood control scheduling objective function by the GPU-accelerated PSO method.
6. The GPU-accelerated flood control optimal scheduling method for a parallel reservoir group according to claim 5, characterized in that S34 includes the following steps:
S341, updating the velocity and position of particle i in the j-th iteration according to formulas (10), (11) and (12),
where ω(j) is the inertia coefficient of the j-th iteration; C1 and C2 denote learning factors, which are constants and can be taken as 2; R1 and R2 are random numbers in [0, 1]; and rand() is a random function generating random numbers in [0, 1];
S342, substituting the position vector of any particle i in the j-th iteration into formula (1), and at the same time checking the constraint conditions for particle i; if all constraint conditions are satisfied, calculating the fitness value f(i, j) of particle i in the j-th iteration; if any one of the constraint conditions is not satisfied, setting the fitness value f(i, j) of particle i in the j-th iteration to 0;
S343, updating the individual best of the particle as follows:
comparing the fitness value f(i, j) of particle i obtained in the j-th iteration in S342 with the individual best Pbest(i, j-1) of the particle in the (j-1)-th iteration; if f(i, j) > Pbest(i, j-1), the individual best Pbest(i, j) of particle i in the j-th iteration is numerically equal to f(i, j); if f(i, j) ≤ Pbest(i, j-1), the individual best Pbest(i, j) of particle i in the j-th iteration is numerically equal to Pbest(i, j-1);
S344, updating the global best as follows:
comparing the individual best Pbest(i, j) of particle i in the j-th iteration with the global best Gbest(j-1) of the (j-1)-th iteration; if Pbest(i, j) > Gbest(j-1), the global best Gbest(j) of the j-th iteration is numerically equal to Pbest(i, j); if Pbest(i, j) ≤ Gbest(j-1), the global best Gbest(j) of the j-th iteration is numerically equal to Gbest(j-1).
7. The GPU-accelerated flood control optimal scheduling method for a parallel reservoir group according to claim 5, characterized in that, in S36, transferring the information on the GPU side back to the CPU side is specifically realized by calling the cudaMemcpy() function on the CPU side.
8. The GPU-accelerated flood control optimal scheduling method for a parallel reservoir group according to claim 5, characterized in that, in S36, releasing the variable space allocated on the CPU side and the GPU side is specifically realized by calling the free() function and the cudaFree() function on the CPU side to release the variable space allocated on the CPU side and the GPU side.
CN201810314891.6A 2018-04-10 2018-04-10 GPU acceleration-based parallel reservoir group flood control optimal scheduling method Expired - Fee Related CN108564213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810314891.6A CN108564213B (en) 2018-04-10 2018-04-10 GPU acceleration-based parallel reservoir group flood control optimal scheduling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810314891.6A CN108564213B (en) 2018-04-10 2018-04-10 GPU acceleration-based parallel reservoir group flood control optimal scheduling method

Publications (2)

Publication Number Publication Date
CN108564213A true CN108564213A (en) 2018-09-21
CN108564213B CN108564213B (en) 2022-05-13

Family

ID=63534546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810314891.6A Expired - Fee Related CN108564213B (en) 2018-04-10 2018-04-10 GPU acceleration-based parallel reservoir group flood control optimal scheduling method

Country Status (1)

Country Link
CN (1) CN108564213B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214726A (en) * 2018-11-27 2019-01-15 中国水利水电科学研究院 More reservoir water dispatching methods, terminal, storage medium and heterogeneous computing system
CN109214110A (en) * 2018-09-27 2019-01-15 中国水利水电科学研究院 A kind of long range water lift engineering optimization dispatching method
CN111969602A (en) * 2020-08-14 2020-11-20 山东大学 Day-ahead random optimization scheduling method and device for comprehensive energy system
CN112966445A (en) * 2021-03-15 2021-06-15 河海大学 Reservoir flood control optimal scheduling method based on reinforcement learning model FQI
CN113704520A (en) * 2021-10-27 2021-11-26 天津(滨海)人工智能军民融合创新中心 Method and device for accelerating Anchor-based data processing by using cuda in parallel and electronic equipment
CN113807667A (en) * 2021-08-30 2021-12-17 南昌大学 Reservoir flood control forecast optimal scheduling method for downstream flood control point
CN113971362A (en) * 2021-10-26 2022-01-25 中国水利水电科学研究院 Hybrid reservoir flood control optimal scheduling scheme generation method based on energy criterion
CN116227800A (en) * 2022-11-09 2023-06-06 中国水利水电科学研究院 Parallel reservoir group flood control optimal scheduling scheme generation method based on flood control pressure value
CN116757446A (en) * 2023-08-14 2023-09-15 华中科技大学 Cascade hydropower station scheduling method and system based on improved particle swarm optimization

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999756A (en) * 2012-11-09 2013-03-27 重庆邮电大学 Method for recognizing road signs by PSO-SVM (particle swarm optimization-support vector machine) based on GPU (graphics processing unit)
CN105427052A (en) * 2015-12-08 2016-03-23 国家电网公司 Reference line-based parallel reservoir certainty optimized dispatching method
CN105718998A (en) * 2016-01-21 2016-06-29 上海斐讯数据通信技术有限公司 Particle swarm optimization method based on mobile terminal GPU operation and system thereof
CN106056267A (en) * 2016-05-12 2016-10-26 中国水利水电科学研究院 Parallel reservoir group optimal scheduling method
CN107015852A (en) * 2016-06-15 2017-08-04 珠江水利委员会珠江水利科学研究院 A kind of extensive Hydropower Stations multi-core parallel concurrent Optimization Scheduling

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999756A (en) * 2012-11-09 2013-03-27 重庆邮电大学 Method for recognizing road signs by PSO-SVM (particle swarm optimization-support vector machine) based on GPU (graphics processing unit)
CN105427052A (en) * 2015-12-08 2016-03-23 国家电网公司 Reference line-based parallel reservoir certainty optimized dispatching method
CN105718998A (en) * 2016-01-21 2016-06-29 上海斐讯数据通信技术有限公司 Particle swarm optimization method based on mobile terminal GPU operation and system thereof
CN106056267A (en) * 2016-05-12 2016-10-26 中国水利水电科学研究院 Parallel reservoir group optimal scheduling method
CN107015852A (en) * 2016-06-15 2017-08-04 珠江水利委员会珠江水利科学研究院 A kind of extensive Hydropower Stations multi-core parallel concurrent Optimization Scheduling

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吴松: "Application of an improved particle swarm optimization algorithm to joint flood control optimal operation of a parallel reservoir group", China Master's Theses Full-text Database, Engineering Science and Technology II *
覃金帛 et al.: "A review of the application of GPU parallel optimization techniques in water resources computation", Computer Engineering and Applications *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214110B (en) * 2018-09-27 2022-12-27 中国水利水电科学研究院 Long-distance water lifting project optimized scheduling method
CN109214110A (en) * 2018-09-27 2019-01-15 中国水利水电科学研究院 A kind of long range water lift engineering optimization dispatching method
CN109214726A (en) * 2018-11-27 2019-01-15 中国水利水电科学研究院 More reservoir water dispatching methods, terminal, storage medium and heterogeneous computing system
CN111969602A (en) * 2020-08-14 2020-11-20 山东大学 Day-ahead random optimization scheduling method and device for comprehensive energy system
CN112966445A (en) * 2021-03-15 2021-06-15 河海大学 Reservoir flood control optimal scheduling method based on reinforcement learning model FQI
CN112966445B (en) * 2021-03-15 2022-10-14 河海大学 Reservoir flood control optimal scheduling method based on reinforcement learning model FQI
CN113807667A (en) * 2021-08-30 2021-12-17 南昌大学 Reservoir flood control forecast optimal scheduling method for downstream flood control point
CN113807667B (en) * 2021-08-30 2024-06-07 南昌大学 Reservoir flood control forecast optimal scheduling method for downstream flood control points
CN113971362A (en) * 2021-10-26 2022-01-25 中国水利水电科学研究院 Hybrid reservoir flood control optimal scheduling scheme generation method based on energy criterion
CN113971362B (en) * 2021-10-26 2022-05-10 中国水利水电科学研究院 Hybrid reservoir group flood control optimal scheduling scheme generation method based on energy criterion
CN113704520A (en) * 2021-10-27 2021-11-26 天津(滨海)人工智能军民融合创新中心 Method and device for accelerating Anchor-based data processing by using cuda in parallel and electronic equipment
CN116227800A (en) * 2022-11-09 2023-06-06 中国水利水电科学研究院 Parallel reservoir group flood control optimal scheduling scheme generation method based on flood control pressure value
CN116227800B (en) * 2022-11-09 2023-12-22 中国水利水电科学研究院 Parallel reservoir group flood control optimal scheduling scheme generation method based on flood control pressure value
CN116757446A (en) * 2023-08-14 2023-09-15 华中科技大学 Cascade hydropower station scheduling method and system based on improved particle swarm optimization
CN116757446B (en) * 2023-08-14 2023-10-31 华中科技大学 Cascade hydropower station scheduling method and system based on improved particle swarm optimization

Also Published As

Publication number Publication date
CN108564213B (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN108564213A (en) Parallel reservoir group flood control optimal scheduling method based on GPU acceleration
Seide et al. On parallelizability of stochastic gradient descent for speech DNNs
Babaeizadeh et al. Reinforcement learning through asynchronous advantage actor-critic on a gpu
Sun et al. An improved vector particle swarm optimization for constrained optimization problems
Yassen et al. Meta-harmony search algorithm for the vehicle routing problem with time windows
CN105975342A (en) Improved cuckoo search algorithm based cloud computing task scheduling method and system
CN110515735A (en) A kind of multiple target cloud resource dispatching method based on improvement Q learning algorithm
CN106650925A (en) Deep learning framework Caffe system and algorithm based on MIC cluster
CN108446789A (en) A kind of intelligent optimization method towards cascade pumping station group's daily optimal dispatch
CN109976901A (en) A kind of resource regulating method, device, server and readable storage medium storing program for executing
CN112132469B (en) Reservoir group scheduling method and system based on multiple group cooperation particle swarm algorithm
CN114675975B (en) Job scheduling method, device and equipment based on reinforcement learning
CN106502632A (en) A kind of GPU parallel particle swarm optimization methods based on self-adaptive thread beam
CN115085202A (en) Power grid multi-region intelligent power collaborative optimization method, device, equipment and medium
CN111507474A (en) Neural network distributed training method for dynamically adjusting Batch-size
CN114217974A (en) Resource management method and system in cloud computing environment
Yabo et al. A control-theory approach for cluster autonomic management: maximizing usage while avoiding overload
CN115714820A (en) Distributed micro-service scheduling optimization method
CN115271437A (en) Water resource configuration method and system based on multi-decision-making main body
Chan et al. Learning network architectures of deep CNNs under resource constraints
CN104537224B (en) Multi-state System Reliability analysis method and system based on adaptive learning algorithm
Vahidipour et al. Priority assignment in queuing systems with unknown characteristics using learning automata and adaptive stochastic Petri nets
CN109936141A (en) A kind of Economic Dispatch method and system
Bendavid et al. Predetermined intervals for start times of activities in the stochastic project scheduling problem
CN114489966A (en) Job scheduling method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220513