CN107015861A - A multi-core parallel computing design method, based on the Fork/Join framework, for the optimized operation of cascade reservoirs - Google Patents
- Publication number: CN107015861A (application CN201611005315.0A)
- Authority: CN (China)
- Legal status: Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5066—Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/503—Resource availability
Abstract
The invention discloses a multi-core parallel computing design method, based on the Fork/Join framework, for the optimized operation of cascade reservoirs, comprising the following steps: (1) constructing the Fork/Join parallel framework; (2) implementing the Fork/Join parallel framework; (3) parallel design of typical intelligent algorithms under the coarse-grained pattern; (4) parallel design of typical dynamic-programming methods under the fine-grained pattern. Example test results of the PSCWAGA, PAHPSO, PDP and PDDDP methods show that the Fork/Join multi-core parallel framework fully exploits the parallel performance of multi-core CPUs, markedly reduces computation time, and significantly improves algorithmic efficiency. The larger the problem scale handled by a parallel method, the greater the time saved and the more pronounced the advantage of parallel computation; moreover, as the problem scale gradually grows, the parallel efficiency increases steadily and the speed-up ratio approaches the ideal speed-up ratio.
Description
Technical field
The present invention relates to the field of refined scheduling of cascade reservoirs, and in particular to a multi-core parallel computing design method, based on the Fork/Join framework, for the optimal scheduling of cascade reservoir systems.
Background technology
In recent years, hydropower in China has developed very rapidly. By the end of 2015, the national installed hydropower capacity had reached 320 million kW, about one quarter of the global total, making China the undisputed leading hydropower country. China's hydropower resources are concentrated in thirteen large hydropower bases, whose development is gradually forming a large number of giant main-stem cascade reservoir systems, typically characterized by many cascade stations and large installed capacity; in the water-rich river basins of the southwest in particular, cascades of ten or even dozens of stations are common. For example, 11 stations are planned on the Wujiang River, of which 9 are built; 15 cascade stations are planned on the Lancang River, 6 built; 10 on the Hongshui River, 9 built; 12 on the middle and lower Jinsha River, 6 built; and 28 on the Dadu River, more than 10 completed. For joint optimal operation problems of cascade reservoirs at such a large scale, the solution difficulties are mainly reflected in the following three aspects:
(1) Large scale and high dimensionality: high dimensionality is the main difficulty restricting the solution of large-scale hydropower-station group optimal scheduling problems. As the computational scale of the hydropower system grows, each additional cascade station rapidly increases the number of solution evaluations, the memory footprint and the computation time; for some methods the growth is exponential, making the problem extremely hard to solve, sharply degrading computational efficiency and severely limiting algorithm scalability. In dynamic programming this problem is known as the "curse of dimensionality". In other solution algorithms, the limitation imposed by system scale likewise exists to varying degrees, and it has become the main bottleneck restricting the engineering application of hydropower optimal scheduling.
(2) Multi-stage dynamic optimization: optimal scheduling of hydropower-station groups is a multi-stage optimization problem. The horizon is usually discretized in time into multiple periods; between periods, coupling constraints such as reservoir water balance and limits on water-level or output variation must be satisfied, and the operating mode in earlier periods directly affects the generating head and energy production of later periods, so the stages are tightly linked. Meanwhile, global equality control conditions such as the end-of-horizon water level and limits on the system's total output make system decomposition and the relaxation of multi-variable coupling constraints very difficult.
(3) Complex hydraulic and electrical coupling: optimal scheduling of hydropower-station groups must account for the hydraulic connections between stations. Upstream stations with better regulation capability regulate the natural inflow and thereby change the inflow of the downstream stations. When an upstream station releases water for generation, the inflow of the downstream station is the sum of the upstream release and the interval (local) inflow between the two stations, and this inflow, as the main input of optimal scheduling, directly affects the scheduling decisions of the current station. The inflow of each station is therefore directly influenced by the scheduling of its upstream stations, and if a station has multiple upstream stations the optimization problem becomes still more complex. In addition, because hydropower undertakes very important scheduling tasks in the power system, the optimal scheduling of hydropower-station groups must also consider power-system constraints, such as the system minimum load to be met when balancing electric load and the safe transmission limits of transmission cross-sections; these induce close electrical coupling among the stations and further increase the solution difficulty of hydropower-station group optimal scheduling.
Faced with joint scheduling of reservoir groups at such a scale in China's hydropower system, traditional optimization methods exhibit clear limitations and cannot meet the refined scheduling requirements of power grids, hydropower plants and basin-level cascade operation; exploring reasonable and efficient solution algorithms is therefore an urgent scientific issue in hydropower dispatching.
At present, dynamic-programming methods and emerging intelligent algorithms are the two main classes of methods applied in the field of cascade-reservoir optimal operation. Dynamic programming, however, is limited by problem size: large-scale problems easily trigger the curse of dimensionality, with the computational scale growing exponentially in the number of stations. For population-based algorithms there is a trade-off between solution accuracy and computational scale: the larger the population size or the iteration count, the higher the probability of finding the global optimum in the search space and the higher the accuracy, but also the longer the computation and the lower the solution efficiency; when applied to China's current large-scale cascade hydropower groups, the drop in efficiency is especially pronounced. Therefore, to make the solution of large-scale hydropower-group optimal scheduling problems feasible within finite time while guaranteeing the quality of the results, research on improving the computational efficiency of these two commonly used classes of optimization methods has important scientific significance.
Parallel computing has always been a research hotspot of computer science. Multi-core parallel technology based on multi-core processors, with its unique advantages of easy parallel implementation, stable running environment and low cost, is widely used in practical engineering. For hydropower systems, seeking the connection points between the optimal scheduling problem, the solution algorithm and parallel technology, and designing suitable coarse-grained or fine-grained parallel optimization schemes, is a feasible way to improve solution efficiency. At present there are many research results on parallel optimization methods in the field of cascade hydropower scheduling. Method parallelization is mainly realized in two patterns, coarse-grained design and fine-grained design, and existing results generally rely on mature parallel schemes or frameworks such as OpenMP, MPI or .NET, which conveniently parallelize serial methods developed in languages such as Fortran, C++ and C#. These frameworks, however, cannot effectively support serial methods developed in Java (one of today's mainstream programming languages), and are hard to apply to the parallelization and transformation of Java programs. The Fork/Join parallel framework is a multi-core parallel framework developed on Java source code; it has been integrated, as a standard component, into the concurrency package of Java 7, can greatly simplify the programming work of developers, and facilitates the parallel transformation of serial methods coded in Java. Because the framework was proposed relatively late, there are so far few results applying it to the parallel computation of cascade-reservoir optimal scheduling, and no systematic exposition of its implementation or of the factors affecting its performance. The applicant has therefore taken Java as the development language and applied the Fork/Join multi-core parallel framework, adopting the coarse-grained and fine-grained design patterns respectively, to realize the parallel computation of multiple methods, and has invented a multi-core parallel computing design method, based on the Fork/Join framework, for the optimized operation of cascade reservoirs.
The content of the invention
It is an object of the present invention to provide a multi-core parallel computing design method, based on the Fork/Join framework, for the optimized operation of cascade reservoirs; the technique can give full play to the acceleration capability of a multi-core configuration without changing the existing computer configuration, significantly reduce computation time and improve solution efficiency.
To achieve the above object, the present invention provides following technical scheme:
A multi-core parallel computing design method, based on the Fork/Join framework, for the optimized operation of cascade reservoirs, comprising the following steps:
(1) Constructing the Fork/Join parallel framework: the core of the Fork/Join parallel framework inherits the "divide and conquer" principle. By recursively subdividing the original problem, several subproblems are formed that are smaller in scale, mutually independent and computable in parallel; after each subproblem has been computed independently in parallel, the sub-results are combined into the final result of the original problem. The Fork/Join framework provides a dedicated thread-pool design: when a program starts executing, it creates by default a number of active threads equal to the number of available processors. During the divide-and-conquer process, a freely settable threshold controls the scale of the subproblems and acts as their upper bound: once a subproblem's scale is less than or equal to the threshold, the division stops, and the subproblems are distributed evenly across the different threads and computed in parallel. In addition, during subproblem computation, Fork/Join uses a unique double-ended queue (deque) model together with a "work stealing" algorithm: when the task queue of one thread becomes empty, it "steals" tasks from the tail of the queue of another still-working thread;
(2) Implementing the Fork/Join parallel framework: 1) the class implementing the algorithm must extend one of the Fork/Join API classes java.util.concurrent.RecursiveAction or java.util.concurrent.RecursiveTask; 2) choose the threshold for dividing the task; 3) implement the compute() method of the Fork/Join base class; 4) set the task-division scheme;
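The four implementation steps above can be sketched in Java as follows; the task computed here (summing an array) and all identifiers are illustrative placeholders, not the patent's scheduling code:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Step 1: extend RecursiveTask (returns a result; RecursiveAction would be used
// when no return value is needed).
class SumTask extends RecursiveTask<Long> {
    static final int THRESHOLD = 1_000;   // Step 2: the division threshold
    final long[] data;
    final int lo, hi;

    SumTask(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    // Step 3: implement compute().
    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {        // small enough: solve directly
            long s = 0;
            for (int i = lo; i < hi; i++) s += data[i];
            return s;
        }
        // Step 4: the task-division scheme — split the index range in half.
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                        // run the left half asynchronously
        long r = right.compute();           // compute the right half in this thread
        return r + left.join();             // combine the sub-results
    }
}

public class ForkJoinSkeleton {
    public static long parallelSum(long[] data) {
        ForkJoinPool pool = new ForkJoinPool(); // defaults to #available processors
        try {
            return pool.invoke(new SumTask(data, 0, data.length));
        } finally {
            pool.shutdown();                    // destroy the thread pool
        }
    }

    public static void main(String[] args) {
        long[] a = new long[10_000];
        for (int i = 0; i < a.length; i++) a[i] = i;
        System.out.println(parallelSum(a));
    }
}
```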
(3) Parallel design of typical intelligent algorithms under the coarse-grained pattern: 1) parallel self-adaptive chaos whole-annealing genetic algorithm (PSCWAGA):
Step 1 Parameter initialization. Set the population size m, the number of chaos sequences d, the maximum iteration count Kmax, the initial temperature T0 and the adaptive parameters Pc1, Pc2, Pm1 and Pm2;
Step 2 Population initialization. According to the Logistic mapping equation, n groups of chaos sequences are generated randomly in the chaotic space and mapped into the solution space to produce m distinct individuals forming the population; each individual consists of the water-level values of the power stations in the different periods (zi1, zi2, ..., zin);
Step 3 Create the thread pool; the number of worker threads of the default pool equals the number of logical CPU threads; at the same time, set the Fork/Join computation threshold;
Step 4 Start the parallel computation flow;
Parallel Step ① According to the set computation threshold, the parent population is divided by the recursive scheme into multiple smaller sub-populations;
Parallel Step ② The divided sub-populations are distributed evenly to the different logical threads; to keep the CPU load balanced, each logical thread is assigned the same number of subtasks;
Parallel Step ③ The sub-population assigned to each thread is computed independently; the main computation steps are as follows:
A Evaluate the fitness of the individuals;
B Selection: using the whole-annealing selection mechanism, parents are allowed to compete and the next generation of individuals is selected;
C Crossover: the crossover operation is performed by the arithmetic crossover method;
D Mutation: the mutation operation is performed in the non-uniform mutation manner;
E Compare parent and offspring fitness: when a parent individual Xi produces an offspring X′i by crossover and mutation, if f(X′i) > f(Xi), X′i replaces Xi; otherwise Xi is retained with probability exp[(f(X′i) − f(Xi))/Tk];
F Parameter update: iteration count k = k + 1, temperature Tk = 1/ln(k/T0 + 1);
G Judge whether the sub-population iteration has finished, taking the annealing temperature Tk or the maximum iteration count as the convergence condition: when either reaches the initially set convergence condition, the sub-population computation converges and ends; otherwise, return to A;
Parallel Step ④ The optimized solutions of the sub-populations are collected and merged into a result set and returned to the main thread.
Step 5 Filter the optimal solution from the result set; the computation ends and the thread pool is destroyed.
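Steps E and F above (the annealing acceptance rule and the cooling schedule) can be sketched as follows; the fitness function f and all identifiers are illustrative placeholders:

```java
import java.util.Random;
import java.util.function.DoubleUnaryOperator;

public class AnnealingAcceptance {
    // Step E: a fitter offspring always replaces the parent; otherwise the
    // parent Xi is retained with probability exp[(f(X') - f(Xi)) / Tk].
    static double accept(double parent, double child,
                         DoubleUnaryOperator f, double tk, Random rnd) {
        double fp = f.applyAsDouble(parent);
        double fc = f.applyAsDouble(child);
        if (fc > fp) return child;
        return rnd.nextDouble() < Math.exp((fc - fp) / tk) ? parent : child;
    }

    // Step F: cooling schedule Tk = 1 / ln(k / T0 + 1), after k = k + 1.
    static double temperature(int k, double t0) {
        return 1.0 / Math.log((double) k / t0 + 1.0);
    }
}
```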
2) Parallel adaptive hybrid particle swarm optimization (PAHPSO):
Step 1 Parameter initialization;
Step 2 Population initialization;
Step 3 Create the thread pool; the number of worker threads of the default pool equals the number of logical CPU threads; at the same time, set the Fork/Join computation threshold;
Step 4 Start the parallel computation flow;
Parallel Step ① According to the set computation threshold, the parent population is divided by the recursive scheme into multiple smaller sub-populations;
Parallel Step ② The divided sub-populations are distributed evenly to the different logical threads; to keep the CPU load balanced, each logical thread is assigned the same number of subtasks;
Parallel Step ③ The sub-population assigned to each thread is computed independently; the main computation steps are as follows:
A Compute the particle fitness, each particle's individual best solution and the population's global best solution. The particle fitness is compared with its individual best solution: if better, the current particle position becomes the personal best position. The particle fitness is compared with the population's global best solution: if better, the current particle position becomes the population's global best position;
B Compute the particle energy and the particle-energy threshold; if the particle energy is below the current threshold, a mutation operation is performed on the particle's current position and velocity;
C Compute the particle similarity and the particle-similarity threshold; if the similarity of two adjacent particles is below the current threshold, a mutation operation is performed on the history best position of the poorer particle;
D Introduce the neighbourhood-based greedy random search strategy to update the particle's personal best position: if a position found in the neighbourhood has better fitness than the particle had before the search, it replaces the particle's previous position; the post-search particle position is then compared with the particle's history best position and with the population's global best position, and the history best position and the population best position are updated accordingly;
E Update the velocities and positions of the particle population;
F Judge whether the sub-population iteration has finished: if the current iteration count is below the maximum iteration count, return to A; otherwise, the sub-population computation converges and ends;
Parallel Step ④ The computation results of the sub-populations are collected and merged into a result set and returned to the main thread;
Step 5 Filter the optimal solution from the result set; the computation ends and the thread pool is destroyed;
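Step E above can be sketched with the standard PSO velocity and position update; the patent does not spell out the formula, so the textbook form v = w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x) is assumed, and all identifiers are illustrative:

```java
import java.util.Random;

public class PsoUpdate {
    // Standard PSO update (assumed form): new velocity blends inertia, a pull
    // toward the personal best, and a pull toward the global best; the position
    // then moves by the new velocity.
    static void step(double[] x, double[] v, double[] pbest, double[] gbest,
                     double w, double c1, double c2, Random rnd) {
        for (int j = 0; j < x.length; j++) {
            v[j] = w * v[j]
                 + c1 * rnd.nextDouble() * (pbest[j] - x[j])
                 + c2 * rnd.nextDouble() * (gbest[j] - x[j]);
            x[j] += v[j];
        }
    }
}
```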
3) Fork/Join coarse-grained parallel design pattern and method:
From the computation flows of the PSCWAGA and PAHPSO methods, the Fork/Join coarse-grained parallel design pattern and method are summarized as follows:
Step 1 Initialize the algorithm parameters and the population size;
Step 2 Create the thread pool and set the computation threshold;
Step 3 Using the Fork/Join parallel framework, divide the parent population by the recursive scheme into many sub-populations and distribute them evenly to the sub-threads;
Step 4 Optimize each sub-population by the original serial method flow until each sub-population's iteration finishes;
Step 5 Collect and merge the computation results of the sub-populations into a result set and return it to the main thread;
Step 6 Filter the optimal solution from the result set; the computation ends and the thread pool is destroyed;
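The coarse-grained pattern of Steps 1-6 can be sketched as follows; the serial optimizer run on each sub-population is a stand-in for any serial method (GA, PSO, etc.), and all identifiers are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;
import java.util.function.Function;

// Coarse-grained pattern: recursively split the parent population into
// sub-populations of at most `threshold` individuals, run the (unchanged)
// serial optimizer on each sub-population, then merge the sub-results.
class SubPopTask extends RecursiveTask<List<double[]>> {
    final List<double[]> population;
    final int threshold;
    final Function<List<double[]>, List<double[]>> serialOptimizer;

    SubPopTask(List<double[]> population, int threshold,
               Function<List<double[]>, List<double[]>> serialOptimizer) {
        this.population = population;
        this.threshold = threshold;
        this.serialOptimizer = serialOptimizer;
    }

    @Override
    protected List<double[]> compute() {
        if (population.size() <= threshold) {
            return serialOptimizer.apply(population);   // Step 4: serial flow
        }
        int mid = population.size() / 2;                // Step 3: recursive split
        SubPopTask left = new SubPopTask(population.subList(0, mid),
                                         threshold, serialOptimizer);
        SubPopTask right = new SubPopTask(population.subList(mid, population.size()),
                                          threshold, serialOptimizer);
        left.fork();
        List<double[]> merged = new ArrayList<>(right.compute());
        merged.addAll(left.join());                     // Step 5: merge results
        return merged;
    }
}

public class CoarseGrained {
    public static List<double[]> optimize(List<double[]> pop, int threshold,
            Function<List<double[]>, List<double[]>> opt) {
        ForkJoinPool pool = new ForkJoinPool();         // Step 2
        try {
            return pool.invoke(new SubPopTask(pop, threshold, opt));
        } finally {
            pool.shutdown();                            // Step 6
        }
    }
}
```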
(4) Parallel design of typical dynamic-programming methods under the fine-grained pattern: 1) parallel dynamic programming (PDP):
Step 1 Data preparation. Obtain the basic attribute values and characteristic curves of each power station needed for the computation, including characteristic water levels, output coefficients, water level-storage curves and discharge-tailwater-level curves, and determine the discrete state variables St from the storage bounds and discretization count of each reservoir in each period;
Step 2 Create the thread pool; the number of worker threads of the default pool equals the number of logical CPU threads; at the same time, set the Fork/Join computation threshold;
Step 3 Start the parallel computation flow;
Parallel Step ① Build the parent task of the parallel computation: the computation of the solutions of Bt(St, It, Nt) over all storage-state-variable combinations in the iteration cycle is taken as the parent task, and storage space is created for the indices of the computation, such as water level, output and energy production;
Parallel Step ② According to the set computation threshold, the parent task is divided by the recursive scheme into multiple smaller subtasks;
Parallel Step ③ The set of divided subtasks is distributed evenly to the different logical threads; to keep the CPU load balanced, each logical thread is assigned the same number of subtasks;
Parallel Step ④ The subtask assigned to each thread is computed independently, i.e. all discrete state combinations within the subtask are solved in the recurrence formula;
Parallel Step ⑤ The computation results of the subtasks are collected and merged into a result set and returned to the main thread;
Step 4 Filter the optimal solution from the result set; the computation ends and the thread pool is destroyed;
2) Parallel discrete differential dynamic programming (PDDDP):
Step 1 Data preparation. Obtain the basic attribute values and characteristic curves of each power station needed for the computation, including characteristic water levels, output coefficients, water level-storage curves and discharge-tailwater-level curves, and determine the discrete state variables St from the storage bounds and discretization count of each reservoir in each period;
Step 2 Set the number of iteration corridors and generate an initial feasible solution as the initial trajectory;
Step 3 Create the thread pool; the number of worker threads of the default pool equals the number of logical CPU threads; at the same time, set the Fork/Join computation threshold;
Step 4 Choose the maximum corridor width as the current corridor;
Step 5 Start the parallel computation flow;
Parallel Step ① Build the parent task of the parallel computation within the current corridor: the computation of the solutions of Bt(St, It, Nt) over all storage-state-variable combinations in the corridor is taken as the parent task, and storage space is created for the indices of the computation, such as water level, output and energy production;
Parallel Step ② According to the set computation threshold, the parent task is divided by the recursive scheme into multiple smaller subtasks;
Parallel Step ③ The set of divided subtasks is distributed evenly to the different logical threads; to keep the CPU load balanced, each logical thread is assigned the same number of subtasks;
Parallel Step ④ The subtask assigned to each thread is computed independently, i.e. all discrete state combinations within the subtask are solved in the recurrence formula;
Parallel Step ⑤ The computation results of the subtasks are collected and merged into a result set and returned to the main thread;
Step 6 Output the current optimal trajectory from the result set, and judge whether it is identical to the initial trajectory; if identical, go to Step 7; if different, go to Step 8;
Step 7 Judge whether the current corridor is the last corridor; if not, set the next smaller corridor width as the current corridor width and go to Step 8; if so, the computation ends, the optimal trajectory is output as the optimal solution and the thread pool is destroyed;
Step 8 Set the current optimal trajectory as the initial trajectory of the next iteration and return to Step 5;
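The outer corridor-refinement loop of the PDDDP steps above can be sketched abstractly as follows; the DP pass inside the current corridor (which would itself be the Fork/Join parallel computation) is a placeholder function, and all identifiers are illustrative:

```java
import java.util.Arrays;
import java.util.function.BiFunction;

public class DdpCorridorLoop {
    // Outer DDDP loop: repeat the (parallel) DP pass inside a corridor of the
    // given width around the current trajectory; when the trajectory stops
    // changing, shrink the corridor; stop after the smallest corridor converges.
    static double[] solve(double[] initialTrajectory, double[] corridorWidths,
                          BiFunction<double[], Double, double[]> dpPassInCorridor) {
        double[] trajectory = initialTrajectory.clone();
        for (double width : corridorWidths) {               // largest width first
            while (true) {
                double[] next = dpPassInCorridor.apply(trajectory, width);
                if (Arrays.equals(next, trajectory)) break; // converged at this width
                trajectory = next;                          // re-center the corridor
            }
        }
        return trajectory;
    }
}
```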
3) Fork/Join fine-grained parallel design pattern and method:
From the computation flows of the PDP and PDDDP methods, the Fork/Join fine-grained parallel design pattern and method are summarized as follows:
Step 1 Data preparation, including parameter initialization and setting the state-discretization count;
Step 2 Create the thread pool and set the computation threshold;
Step 3 Execute the serial flow until the return values of the discrete-state combinations are to be computed, then enter the parallel computation;
Step 4 Using the Fork/Join parallel framework, divide the parent task by the recursive scheme into multiple subtasks and distribute them evenly to the sub-threads;
Step 5 In each subtask, derive the dynamic-programming recurrence over its discrete-state combinations until every subtask's computation finishes;
Step 6 Collect and merge the computation results of the subtasks into a result set and return them to the main thread;
Step 7 Filter the current optimal solution from the result set, continue with the serial flow of the method until the computation ends, and destroy the thread pool.
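The fine-grained pattern — parallelizing the evaluation of the DP recurrence over the discrete states of one stage — can be sketched as follows; the per-state recurrence function and all identifiers are illustrative stand-ins, not the patent's recurrence Bt(St, It, Nt):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
import java.util.function.IntToDoubleFunction;

// Fine-grained pattern: the parent task is "evaluate the recurrence for every
// discrete state of the current stage"; it is split recursively into index
// ranges, and each thread fills its own slice of the shared value array.
class StageTask extends RecursiveAction {
    final double[] value;                 // value[s] = stage objective for state s
    final IntToDoubleFunction recurrence; // stage recurrence for one state index
    final int lo, hi, threshold;

    StageTask(double[] value, IntToDoubleFunction recurrence,
              int lo, int hi, int threshold) {
        this.value = value; this.recurrence = recurrence;
        this.lo = lo; this.hi = hi; this.threshold = threshold;
    }

    @Override
    protected void compute() {
        if (hi - lo <= threshold) {       // small enough: evaluate directly
            for (int s = lo; s < hi; s++) value[s] = recurrence.applyAsDouble(s);
            return;
        }
        int mid = (lo + hi) >>> 1;        // recursive split of the state range
        invokeAll(new StageTask(value, recurrence, lo, mid, threshold),
                  new StageTask(value, recurrence, mid, hi, threshold));
    }
}

public class FineGrained {
    public static double[] evaluateStage(int nStates, int threshold,
                                         IntToDoubleFunction recurrence) {
        double[] value = new double[nStates];
        ForkJoinPool pool = new ForkJoinPool();
        try {
            pool.invoke(new StageTask(value, recurrence, 0, nStates, threshold));
        } finally {
            pool.shutdown();
        }
        return value;
    }
}
```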
As a further scheme of the invention: in step (2), the threshold governs the task division. Specifically, when the threshold is set too small, the subproblems are small in scale and numerous, easily causing excessive resource-management overhead; when the threshold is set too large, the subproblems are large and few, and when the number of subproblems is even smaller than the number of active threads, part of the worker threads are easily left idle. Therefore, to guarantee that every worker thread is assigned a subproblem, the general threshold-setting formula (I) is as follows:

μ = ⌈Sc / W⌉ (I)

In formula (I): ⌈·⌉ rounds the result up to an integer; μ is the scale-control threshold; Sc is the scale of the original problem; and W is the number of logical threads of the multi-core processor.
As a further scheme of the invention: in step (2), setting the task-division scheme specifically means that each division may split the parent task into multiple subtasks, with the number of subtasks per division realized in code by the developer.
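Formula (I) and the default pool sizing can be sketched as follows (identifiers are illustrative):

```java
public class ThresholdSetting {
    // Formula (I): threshold = ceil(problemScale / logicalThreads), so that
    // dividing until every subproblem is <= the threshold yields at least one
    // subproblem per worker thread.
    static int threshold(int problemScale, int logicalThreads) {
        return (problemScale + logicalThreads - 1) / logicalThreads; // integer ceil
    }

    public static void main(String[] args) {
        int w = Runtime.getRuntime().availableProcessors(); // default pool size
        System.out.println("logical threads = " + w
                + ", threshold for scale 1000 = " + threshold(1000, w));
    }
}
```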
Compared with the prior art, the beneficial effects of the invention are as follows:
Example test results of the PSCWAGA, PAHPSO, PDP and PDDDP methods show that, by using the Fork/Join multi-core parallel framework, the invention fully exploits the parallel performance of multi-core CPUs, markedly reduces computation time, and significantly improves algorithmic efficiency. The larger the problem scale handled by a parallel method, the greater the time saved and the more obvious the advantage of parallel computation; and as the problem scale gradually grows, the parallel efficiency increases steadily and the speed-up ratio approaches the ideal speed-up ratio.
Brief description of the drawings
Fig. 1 is a schematic diagram of "divide and conquer";
Fig. 2 is a schematic diagram of the threshold control mode;
Fig. 3 is a schematic diagram of the "work stealing" algorithm;
Fig. 4 is a schematic diagram of Fork/Join pseudocode;
Fig. 5 shows the Fork/Join coarse-grained parallel design pattern and method;
Fig. 6 shows the Fork/Join fine-grained parallel design pattern and method;
Fig. 7 is a topology diagram of the Hongshui River cascade reservoirs;
Fig. 8 is a schematic diagram of a corridor with 3 discrete state points.
Embodiment
The technical schemes in the embodiments of the present invention are described below clearly and completely in conjunction with the embodiments. Obviously, the described embodiments are only a part of the embodiments of the invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the scope of protection of the invention.
1. Optimal operation model of cascade hydropower stations
(1) Objective function
Take the model maximizing the total energy production of the cascade reservoirs as an example. The model can be described as follows: given the inflow process of each power station over the scheduling horizon, and considering the various constraints of water balance, storage, discharge and output while taking into account the comprehensive requirements of flood control, irrigation and navigation, formulate the optimal scheduling decision process that maximizes the energy production of the reservoir group over the horizon. The mathematical expression of the objective function is:

E = max Σ(t=1..T) Σ(i=1..N) P(i,t) · Δt

where E is the total energy production of the hydropower-station group (10⁸ kWh); N is the number of power stations; i is the station index; T is the number of periods; t is the period index; P(i,t) is the average output of station i in period t; and Δt is the number of hours in period t.
(2) Constraints
Water balance constraint: V_i^{t+1} = V_i^t + 3600·(Q_i^t − O_i^t)·Δt, with Q_i^t = q_i^t + Σ_{k=1}^{K_i} O_k^t
Reservoir storage constraint: V_i^{t,min} ≤ V_i^t ≤ V_i^{t,max}
Generating flow constraint: Qg_i^{t,min} ≤ Qg_i^t ≤ Qg_i^{t,max}
Storage outflow constraint: O_i^{t,min} ≤ O_i^t ≤ O_i^{t,max}
Output constraint: N_i^{t,min} ≤ N_i^t ≤ N_i^{t,max}
Boundary water level constraint: S_i^0 = SB_i, S_i^T = SE_i
Hydropower system total output constraint: N_t^{min} ≤ Σ_{i=1}^{N} N_i^t ≤ N_t^{max}
where V_i^t is the storage of station i at the beginning of period t (m³); Q_i^t is the inflow of station i in period t (m³/s); O_i^t is the outflow of station i in period t (m³/s), the sum of the generating flow Qg_i^t and the abandoned (spill) flow Qs_i^t (m³/s); q_i^t is the interval (local) inflow of station i in period t (m³/s); K_i is the number of stations upstream of station i, k the upstream-station index, and O_k^t the outflow of upstream reservoir k in period t (m³/s); V_i^{t,min} and V_i^{t,max} are the storage bounds of station i in period t (m³); Qg_i^{t,min} and Qg_i^{t,max} the generating flow bounds (m³/s); O_i^{t,min} and O_i^{t,max} the outflow bounds (m³/s); N_i^{t,min} and N_i^{t,max} the output bounds (MW); S_i^0 and S_i^T the storage states of station i at the initial time and at time T; SB_i and SE_i the specified initial storage state of station i and the expected storage state at the end of the period; and N_t^{min} and N_t^{max} the total system output bounds in period t (MW).
2. The Fork/Join parallel framework
The interface classes and implementation classes of the Fork/Join parallel framework are packaged in java.util.concurrent. Although the parallel efficiency of Fork/Join is not optimal compared with other parallel frameworks, its great advantage is that it provides developers with a very simple application-programming interface. Developers no longer need to handle parallel-programming concerns such as synchronization and communication, and can avoid hard-to-debug errors such as deadlock and data races; writing only a small amount of code against the fixed interfaces completes the docking of the algorithm with the framework. This greatly simplifies the tedious work of writing concurrent programs, saves a large amount of developer effort, and improves working efficiency. A pseudocode schematic of Fork/Join is shown in Fig. 4. For developers, parallelizing an algorithm with the Fork/Join parallel framework mainly involves the following four concerns.
1) The class implementing the algorithm must inherit the Fork/Join application interface class java.util.concurrent.RecursiveAction or java.util.concurrent.RecursiveTask.
2) Select a suitable threshold for dividing tasks. As the example in Fig. 2 shows, the size of the threshold directly affects the number of subproblems produced. When the threshold is set too small, the subproblems are small in scale and large in number, which easily causes excessive resource-management overhead; when the threshold is set too large, the subproblems are large and few, and when the number of subproblems is even less than the number of active threads, some worker threads are left idle. Therefore, to ensure that every worker thread can be assigned a subproblem, the threshold is generally set by the following formula:

μ = ⌈Sc / W⌉

where μ is the scale-control threshold (the result is rounded up to an integer); Sc is the scale of the original problem; and W is the number of logical threads of the multi-core processor (a processor with "hyper-threading" can simulate 2 logical threads per core). In addition, the framework's application interface classes provide method interfaces for dividing subproblems, so the developer need only supply a suitable threshold and implement the provided interfaces; how tasks are divided and threads are allocated is handled by the framework and need not concern the developer.
3) Implement the void compute() method of the Fork/Join interface class. This method interface carries the computation of each subtask, i.e., the parallelizable part of the algorithm is written into this method.
4) Set the task division mode. Each division can split a parent task into multiple subtasks, and the number of subtasks produced per division is determined by the developer's code. In general, halving the parent task at each division helps the hardware achieve load balancing.
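The four concerns above can be sketched in a minimal Fork/Join example. The class name and the array-summing workload below are illustrative stand-ins (not the patent's reservoir computation): the class extends RecursiveTask, the threshold is computed as μ = ⌈Sc/W⌉, compute() does the subtask work, and each division halves the parent task.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

/** Illustrative sketch of the four concerns: extend a Fork/Join task class,
 *  pick a threshold, implement compute(), and halve the parent task.
 *  Summing an array stands in for the per-subproblem optimization work. */
public class SumTask extends RecursiveTask<Long> {
    private final long[] data;
    private final int lo, hi, threshold;

    SumTask(long[] data, int lo, int hi, int threshold) {
        this.data = data; this.lo = lo; this.hi = hi; this.threshold = threshold;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= threshold) {           // subproblem small enough: solve directly
            long s = 0;
            for (int i = lo; i < hi; i++) s += data[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;            // halve the parent task
        SumTask left = new SumTask(data, lo, mid, threshold);
        SumTask right = new SumTask(data, mid, hi, threshold);
        left.fork();                          // run the left half asynchronously
        return right.compute() + left.join(); // compute the right half, then merge
    }

    public static long parallelSum(long[] data) {
        int w = Runtime.getRuntime().availableProcessors(); // logical thread count W
        int threshold = (data.length + w - 1) / w;          // mu = ceil(Sc / W)
        return new ForkJoinPool().invoke(new SumTask(data, 0, data.length, threshold));
    }
}
```

With this threshold every worker thread can receive a subproblem, matching the load-balancing goal stated above.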
3. Definition of the key parameters of the case methods
(1) Adaptive chaos whole-annealing genetic algorithm (PSCWAGA)
Logistic mapping equation: x_{n+1} = μ·x_n·(1 − x_n)   (3.1)
J_k(f_i) = exp(f_i / T_k)   (3.4)
where x_n is a random number on [0,1], the n-th iterate of the variable x; μ is the control parameter, μ ∈ [0,4]. When μ = 4 the Logistic map is fully chaotic; moreover, the initial value must not be 0, 0.25, 0.5, 0.75 or 1. Pc1, Pc2, Pm1 and Pm2 are control parameters on the interval (0,1); f′ is the larger fitness of the two crossing individuals; f is the fitness of the current individual; fmax and favg denote the maximum fitness and the average fitness of the population respectively; P(Xi) is the selection probability and Jk(fi) the fitness function; fi is the objective-function value of individual Xi; M is the population size; k is the genetic-algorithm iteration count; Tk is the annealing temperature, which gradually tends to 0 and is decreased by Tk = 1/ln(k/T0 + 1).
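The two closed-form pieces above — the Logistic map (3.1) and the temperature schedule Tk = 1/ln(k/T0 + 1) — can be sketched directly. The class and method names are illustrative.

```java
/** Sketch of the chaos-initialization and annealing-temperature formulas:
 *  x_{n+1} = mu * x_n * (1 - x_n) with mu = 4, and T_k = 1 / ln(k/T0 + 1). */
public class ChaosAnneal {
    // One logistic-map iterate; the map is fully chaotic when mu = 4
    static double logistic(double x, double mu) { return mu * x * (1 - x); }

    // Generate an n-point chaotic sequence from seed x0
    // (x0 must avoid 0, 0.25, 0.5, 0.75 and 1, per the text above)
    static double[] chaosSequence(double x0, int n) {
        double[] seq = new double[n];
        double x = x0;
        for (int i = 0; i < n; i++) { x = logistic(x, 4.0); seq[i] = x; }
        return seq;
    }

    // Annealing temperature at iteration k: Tk = 1 / ln(k/T0 + 1)
    static double temperature(int k, double t0) {
        return 1.0 / Math.log((double) k / t0 + 1.0);
    }
}
```

The sequence stays in [0,1], and the temperature decreases monotonically as k grows, as the annealing schedule requires.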
(2) Adaptive hybrid particle swarm optimization (PAHPSO)
Particle energy
where Pibest is the historical best position of particle xi; Pgbest is the global best position of the swarm; n is the dimension; Xi(k) is the particle position; and Vi(k) is the particle velocity. It can be seen that e(Pi) ∈ [0,1]. The particle energy describes the search capability of a particle and plays a key role in the adaptivity of the algorithm. From formula (3.6) it can be seen that the particle energy depends on the similarity of the particle's current position and velocity to its historical best position, and on the similarity of the historical best position to the swarm's global best position.
Particle energy threshold
where: speed(Pi(curG)) = Pibest(curG)/Pibest(curG−1)
and maxG is the maximum iteration count; curG is the current iteration count; e is a previously given constant that controls the variation trend of the energy threshold; eIni is the particle-energy upper limit; and eFin is the particle-energy lower limit.
The particle energy threshold is closely related to the degree and speed of population evolution. From formula (3.5) it can be seen that the particle energy threshold is related to the particle's best position and to the particle-energy bounds. The threshold changes continuously during iteration; when a particle's energy falls below the current threshold, mutation is applied to its velocity and position, see formulas (3.8) and (3.9).
Vi(k)=mutation (Vi(k)) (3.8)
Xi(k)=mutation (Xi(k)) (3.9)
Particle similarity
where Pibest is the historical best position of particle xi and Pjbest is the historical best position of particle xj. From formula (3.10) it can be seen that the similarity of adjacent particles is related to their corresponding historical best positions.
Particle similarity threshold
where sIni is the particle-similarity upper limit; sFin is the particle-similarity lower limit; and s is a constant controlling the variation amplitude of the similarity threshold.
(3) Dynamic programming (DP)
S_t = T_{t+1}(S_{t+1}, Q_t, N_t),  t = T, T−1, …, 1   (3.13)
where t and T are the period index and the number of periods respectively; St, Qt and Nt are the M-dimensional (M = number of stations) vectors of reservoir storage, inflow and output; f_t*(S_t) is the maximum system generation from period t to the last period when the state in period t is St (10^8 kW·h); Bt(St, Qt, Nt) is the system generation in period t when the initial storage state is St, the inflow is Qt and the output decision is Nt (10^8 kW·h); and T_{t+1}(S_{t+1}, Qt, Nt) is the state transition equation from period t+1 to t, usually the water-balance equation, see formula (1.2).
(4) Discrete differential dynamic programming (DDDP)
Corridor: at the start of each iterative search, a "corridor" of limited width is constructed around the current initial solution within the feasible state region, and the optimal trajectory is searched for within the current corridor. Only a small number of states is discretized in the corridor, usually 3 discrete state points, which substantially reduces the computation scale; see Fig. 8. In Fig. 8, Δ1 and Δ2 are the upper and lower corridor widths respectively, and the total corridor width is Δ = Δ1 + Δ2. For convenience of calculation, Δ1 = Δ2 can be set; in addition, the upper and lower corridor boundaries must not exceed the feasible region of the state variable.
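The 3-state-point corridor just described can be sketched in a few lines, assuming Δ1 = Δ2 and clamping the corridor to the feasible range; all names are illustrative.

```java
/** Sketch of DDDP corridor construction around one trajectory point:
 *  with delta1 = delta2 = d, discretize 3 state points {z-d, z, z+d}
 *  and clamp them to the feasible range [low, high]. */
public class Corridor {
    static double[] threePoints(double z, double d, double low, double high) {
        double[] pts = { z - d, z, z + d };
        for (int i = 0; i < pts.length; i++)
            pts[i] = Math.min(high, Math.max(low, pts[i])); // corridor stays feasible
        return pts;
    }
}
```

When the current point sits near a feasible-region boundary, the clamp keeps the corridor inside it, matching the requirement stated above.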
4. Implementation of the parallel design of typical intelligent methods under the coarse-grained pattern
Among the many emerging intelligent algorithms, the solution procedures of different optimization algorithms differ. For example, the main process of a genetic algorithm is to optimize through selection, crossover and mutation of population individuals; the main process of particle swarm optimization is to search for the global optimum by continually updating individual positions via the individual and global extreme points; the main process of ant colony optimization is to evolve toward the global optimum through the pheromones left by individuals. However, the optimization mechanism of these intelligent algorithms is generally to search from individual optima toward the global optimum, and their common feature is that when the algorithm starts executing, a population of a certain scale is first generated, each individual in the population is a single independent initial solution, and the iterative computations of individuals in the search process are mutually independent. Therefore, the coarse-grained pattern — decomposing the initial population into smaller sub-populations that are optimized separately and whose results are then collected — can be used for parallel design and computation. Taking the parallel design of two typical intelligent methods, the parallel adaptive chaos whole-annealing genetic algorithm and the parallel adaptive hybrid particle swarm algorithm, as examples, the present invention summarizes the coarse-grained parallel design method based on the Fork/Join multi-core parallel framework.
(1) Parallel adaptive chaos whole-annealing genetic algorithm (PSCWAGA)
Step 1: Parameter initialization. Set the population size m, the number of chaos sequences d, the maximum iteration count Kmax, the initial temperature T0 and the adaptive parameters Pc1, Pc2, Pm1, Pm2.
Step 2: Population initialization. According to the Logistic mapping equation, randomly generate n groups of chaos sequences in the chaotic space and map them into the solution space to produce m distinct individuals forming the population; each individual consists of the water-level values of each station in the different periods (zi1, zi2, …, zin).
Step 3: Create the thread pool; by default the number of worker threads of the generated pool equals the number of CPU logical threads. At the same time, set the Fork/Join calculation threshold.
Step 4: Start the parallel computation flow.
Parallel Step ①: according to the set calculation threshold, divide the parent population recursively into multiple smaller sub-populations.
Parallel Step ②: distribute the divided set of sub-populations evenly among the different logical threads. To keep the CPU load balanced, ensure that each logical thread is assigned the same number of subtasks.
Parallel Step ③: each thread independently runs the computation of its assigned sub-population; the main calculation procedure is as follows:
(1) Evaluate individual fitness. The objective function is used directly as the fitness function, and an elitist-preservation strategy is adopted: the best individual of each generation is recorded, does not take part in crossover or mutation, and after the computation ends replaces the worst individual of the generation, ensuring that it passes smoothly into the next generation.
(2) Selection. The whole-annealing selection mechanism is used: parents are allowed to compete, and each individual is filtered into the next generation through the relative fitness function formula (5) and the selection probability formula (6).
(3) Crossover. Crossover is performed by the arithmetic crossover method; the crossover operator is obtained from formula (2).
(4) Mutation. Mutation is performed in the non-uniform mutation mode; the mutation operator is obtained from formula (3).
(5) Compute parent and child fitness. Suppose parent Xi produces child X′i through crossover and mutation. If f(X′i) > f(Xi), Xi is replaced by X′i; otherwise, Xi is retained with probability exp[(f(X′i) − f(Xi))/Tk].
(6) Parameter update. Iteration count k = k + 1, temperature Tk = 1/ln(k/T0 + 1).
(7) Judge whether the sub-population iteration is finished. The annealing temperature Tk or the maximum iteration count serves as the convergence condition. If either has reached the initially set convergence condition, the sub-population computation converges and ends; otherwise, return to (1).
Parallel Step ④: collect and merge the optimization solutions of the sub-populations into a result set and return it to the main thread.
Step 5: Filter the optimal solution out of the result set; the computation ends and the thread pool is destroyed.
The PSCWAGA algorithm flow is shown in Fig. 5.
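The annealing acceptance rule in sub-step (5) above can be sketched as follows. The class name is illustrative, and the rule is transcribed as stated: a fitter child always replaces the parent; otherwise the parent is retained with probability exp[(f(X′) − f(X))/Tk].

```java
import java.util.Random;

/** Sketch of the annealing acceptance rule of sub-step (5). */
public class Anneal {
    // Probability of retaining the parent when the child is not fitter:
    // exp((f(child) - f(parent)) / Tk), per the rule in the text
    static double keepParentProbability(double fParent, double fChild, double tk) {
        return Math.exp((fChild - fParent) / tk);
    }

    // Returns true if the child replaces the parent at temperature tk
    static boolean acceptChild(double fParent, double fChild, double tk, Random rnd) {
        if (fChild > fParent) return true;  // a fitter child always replaces the parent
        return rnd.nextDouble() >= keepParentProbability(fParent, fChild, tk);
    }
}
```

Since fChild ≤ fParent in the probabilistic branch, the exponent is non-positive and the keep-probability lies in (0, 1].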
(2) Parallel adaptive hybrid particle swarm optimization (PAHPSO)
Step 1: Parameter initialization. Set the population size m, the number of chaos sequences d, the maximum iteration count Kmax, the flight accelerations c1 and c2, the inertia factor w, the constants e and s, the particle-energy upper limit eIni and lower limit eFin, and the particle-similarity upper limit sIni and lower limit sFin.
Step 2: Population initialization. According to the Logistic mapping equation, randomly initialize the particle positions (zi1, zi2, …, zin) and particle velocities (Vi1, Vi2, …, Vin) within the allowed water-level range of each period. The position elements are water levels and the velocity elements are water-level variation rates within a period.
Step 3: Create the thread pool; by default the number of worker threads of the generated pool equals the number of CPU logical threads. At the same time, set the Fork/Join calculation threshold.
Step 4: Start the parallel computation flow.
Parallel Step ①: according to the set calculation threshold, divide the parent population recursively into multiple smaller sub-populations.
Parallel Step ②: distribute the divided set of sub-populations evenly among the different logical threads. To keep the CPU load balanced, ensure that each logical thread is assigned the same number of subtasks.
Parallel Step ③: each thread independently runs the computation of its assigned sub-population; the main calculation procedure is as follows:
(1) Compute particle fitness, the individual best solutions and the swarm's global best solution. Each particle's fitness is compared with its individual best: if the fitness is better, the current position becomes the personal best position. The fitness is also compared with the swarm's global best: if the fitness is better, the current position becomes the swarm's global best position.
(2) Compute the particle energy and the particle energy threshold. If a particle's energy is below the current threshold, apply mutation to the particle's current position and velocity.
(3) Compute the particle similarity and the particle similarity threshold. If the similarity of two adjacent particles is below the current threshold, apply mutation to the historical best position of the worse particle.
(4) Introduce the neighborhood-based greedy random search strategy to update the personal best positions. If the position found by the neighborhood search has better fitness than the particle had before the search, it replaces the particle's position; the post-search position is then compared with the particle's historical best position and the swarm's global best position, updating the particle's historical best position and the swarm's best position.
(5) Update the velocities and positions of the particle swarm.
(6) Judge whether the sub-population iteration is finished. If the current iteration count is less than the maximum iteration count, return to (1); otherwise, the sub-population computation converges and ends.
Parallel Step ④: collect and merge the computation results of the sub-populations into a result set and return it to the main thread.
Step 5: Filter the optimal solution out of the result set; the computation ends and the thread pool is destroyed.
The PAHPSO algorithm flow is shown in Fig. 5.
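The velocity/position update of sub-step (5) is not written out in this section; the sketch below assumes the standard PSO update v = w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x), x = x + v, which may differ in detail from the patent's own formulas. All names are illustrative.

```java
import java.util.Random;

/** Sketch of a standard PSO velocity/position update (an assumption;
 *  the patent does not spell out its update formula in this section). */
public class PsoUpdate {
    static void update(double[] x, double[] v, double[] pbest, double[] gbest,
                       double w, double c1, double c2, Random rnd) {
        for (int j = 0; j < x.length; j++) {
            double r1 = rnd.nextDouble(), r2 = rnd.nextDouble();
            // inertia + cognitive pull toward pbest + social pull toward gbest
            v[j] = w * v[j] + c1 * r1 * (pbest[j] - x[j]) + c2 * r2 * (gbest[j] - x[j]);
            x[j] = x[j] + v[j];  // position element: stage water level
        }
    }
}
```

A particle already at rest on both its personal and the global best position stays put, which is the expected fixed point of this update.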
(3) Fork/Join coarse-grained parallel design pattern and method
From the calculation flows of the PSCWAGA and PAHPSO methods, the Fork/Join coarse-grained parallel design pattern and method can be summarized as follows:
Step 1: Initialize the algorithm parameters and the population size.
Step 2: Create the thread pool and set the calculation threshold.
Step 3: Using the Fork/Join parallel framework, divide the parent population recursively into many sub-populations and distribute them evenly among the sub-threads.
Step 4: Each sub-population is optimized by the original serial procedure until its iteration ends.
Step 5: Collect and merge the computation results of the sub-populations into a result set and return it to the main thread.
Step 6: Filter the optimal solution out of the result set; the computation ends and the thread pool is destroyed.
The Fork/Join coarse-grained parallel design pattern is shown in Fig. 5.
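The six steps above can be sketched as a RecursiveTask that halves the parent population down to the threshold, runs a serial optimization on each sub-population (reduced here to picking the fittest member, as a stand-in for a full GA/PSO run), and merges the sub-results upward. All names are illustrative.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;
import java.util.function.ToDoubleFunction;

/** Sketch of the coarse-grained pattern: halve the population, optimize
 *  each sub-population serially, merge the fittest result back up. */
public class CoarseGrained extends RecursiveTask<double[]> {
    private final double[][] population;   // one row per individual
    private final int lo, hi, threshold;
    private final ToDoubleFunction<double[]> fitness;

    CoarseGrained(double[][] pop, int lo, int hi, int threshold,
                  ToDoubleFunction<double[]> fitness) {
        this.population = pop; this.lo = lo; this.hi = hi;
        this.threshold = threshold; this.fitness = fitness;
    }

    @Override protected double[] compute() {
        if (hi - lo <= threshold) {        // sub-population: "serial optimization"
            double[] best = population[lo];
            for (int i = lo + 1; i < hi; i++)
                if (fitness.applyAsDouble(population[i]) > fitness.applyAsDouble(best))
                    best = population[i];
            return best;
        }
        int mid = (lo + hi) >>> 1;         // halve the parent population
        CoarseGrained left = new CoarseGrained(population, lo, mid, threshold, fitness);
        CoarseGrained right = new CoarseGrained(population, mid, hi, threshold, fitness);
        left.fork();
        double[] r = right.compute(), l = left.join();
        return fitness.applyAsDouble(l) >= fitness.applyAsDouble(r) ? l : r; // keep fitter
    }

    static double[] optimize(double[][] pop, int threshold, ToDoubleFunction<double[]> f) {
        return new ForkJoinPool().invoke(new CoarseGrained(pop, 0, pop.length, threshold, f));
    }
}
```

In a real PSCWAGA/PAHPSO run, the leaf branch would execute the full serial iteration loop of Parallel Step ③ instead of a single fitness scan.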
5. Implementation of the parallel design of typical dynamic programming methods under the fine-grained pattern
Classical dynamic-programming methods are among the most widely used optimization methods for multi-reservoir optimal scheduling, chiefly conventional dynamic programming, discrete differential dynamic programming and successive-approximation dynamic programming. The iterative optimization modes of these methods differ, but the core of each method is the optimization of different state discrete-point combinations in the dynamic programming recurrence formula, and the iterative computations of different state discrete-point combinations are mutually independent. Therefore, classical dynamic-programming methods are suited to parallel design under the fine-grained pattern. Taking the parallel design of two typical dynamic programming methods, parallel dynamic programming and parallel discrete differential dynamic programming, as examples, the present invention summarizes the fine-grained parallel design method based on the Fork/Join multi-core parallel framework.
(1) Parallel dynamic programming (PDP)
Step 1: Data preparation. Obtain the basic station characteristic values and characteristic curves needed for the computation, including characteristic water levels, output coefficients, water level-storage curves and discharge-tailwater level curves, and determine the discrete state variables St according to the storage bounds of the reservoir in each period and the number of discrete points.
Step 2: Create the thread pool; by default the number of worker threads of the generated pool equals the number of CPU logical threads. At the same time, set the Fork/Join calculation threshold.
Step 3: Start the parallel computation flow.
Parallel Step ①: build the parent task of the parallel computation. The solution of all storage-state-variable combinations in the iteration cycle in Bt(St, Qt, Nt) is taken as the parent task, and memory space is created for indicators such as water level, output and generation in the calculation process.
Parallel Step ②: according to the set calculation threshold, divide the parent task recursively into multiple smaller subtasks.
Parallel Step ③: distribute the divided set of subtasks evenly among the different logical threads. To keep the CPU load balanced, ensure that each logical thread is assigned the same number of subtasks.
Parallel Step ④: each thread independently runs its assigned subtasks, i.e., all state discrete-point combinations in the subtask are solved in the recurrence formula.
Parallel Step ⑤: collect and merge the computation results of the subtasks into a result set and return it to the main thread.
Step 4: Filter the optimal solution out of the result set; the computation ends and the thread pool is destroyed.
The PDP algorithm flow is shown in Fig. 6.
(2) Parallel discrete differential dynamic programming (PDDDP)
Step 1: Data preparation. Obtain the basic station characteristic values and characteristic curves needed for the computation, including characteristic water levels, output coefficients, water level-storage curves and discharge-tailwater level curves, and determine the discrete state variables St according to the storage bounds of the reservoir in each period and the number of discrete points (the number of discrete states of the DDDP method is usually chosen as 3).
Step 2: Set the number of iteration corridors and generate an initial feasible solution as the initial calculation trajectory.
Step 3: Create the thread pool; by default the number of worker threads of the generated pool equals the number of CPU logical threads. At the same time, set the Fork/Join calculation threshold.
Step 4: Choose the maximum corridor width as the current corridor.
Step 5: Start the parallel computation flow.
Parallel Step ①: build the parent task of the parallel computation in the current corridor. The solution of all storage-state-variable combinations in the corridor in Bt(St, Qt, Nt) is taken as the parent task, and memory space is created for indicators such as water level, output and generation in the calculation process.
Parallel Step ②: according to the set calculation threshold, divide the parent task recursively into multiple smaller subtasks.
Parallel Step ③: distribute the divided set of subtasks evenly among the different logical threads. To keep the CPU load balanced, ensure that each logical thread is assigned the same number of subtasks.
Parallel Step ④: each thread independently runs its assigned subtasks, i.e., all state discrete-point combinations in the subtask are solved in the recurrence formula.
Parallel Step ⑤: collect and merge the computation results of the subtasks into a result set and return it to the main thread.
Step 6: Output the current optimal trajectory according to the result set and judge whether it is identical to the initial trajectory. If identical, go to Step 7; if different, go to Step 8.
Step 7: Judge whether the current corridor is the last corridor. If not, set the next smaller corridor width as the current corridor width and go to Step 8; if so, the computation ends, the optimal trajectory is output as the optimal solution, and the thread pool is destroyed.
Step 8: Set the current optimal trajectory as the initial trajectory of the next iteration and return to Step 5.
The PDDDP algorithm flow is shown in Fig. 6.
(3) Fork/Join fine-grained parallel design pattern and method
From the calculation flows of the PDP and PDDDP methods, the Fork/Join fine-grained parallel design pattern and method can be summarized as follows:
Step 1: Data preparation, including parameter initialization and setting the number of discrete states.
Step 2: Create the thread pool and set the calculation threshold.
Step 3: Execute the serial flow until the computation of the return values of the state discrete-point combinations begins, then enter the parallel computation.
Step 4: Using the Fork/Join parallel framework, divide the parent task recursively into multiple subtasks and distribute them evenly among the sub-threads.
Step 5: The state discrete-point combinations in each subtask are solved by the dynamic programming recurrence formula until every subtask finishes.
Step 6: Collect and merge the computation results of the subtasks into a result set and return it to the main thread.
Step 7: Filter the current optimal solution out of the result set and continue executing the serial flow of the method until the computation ends, then destroy the thread pool.
The Fork/Join fine-grained parallel design pattern is shown in Fig. 6.
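Steps 4-6 above can be sketched as a RecursiveAction over the independent state-discrete-point combinations of one DP stage; the stage-value function stands in for the recurrence evaluation Bt(St, Qt, Nt) plus the tail value, and all names are illustrative.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

/** Sketch of the fine-grained pattern: evaluate the independent
 *  state-combination values of one DP stage in parallel, each result
 *  written to its own slot of the output array. */
public class FineGrained extends RecursiveAction {
    interface StageValue { double eval(int stateIndex); }

    private final double[] out;
    private final int lo, hi, threshold;
    private final StageValue f;

    FineGrained(double[] out, int lo, int hi, int threshold, StageValue f) {
        this.out = out; this.lo = lo; this.hi = hi; this.threshold = threshold; this.f = f;
    }

    @Override protected void compute() {
        if (hi - lo <= threshold) {
            // each state combination is independent, so no synchronization is needed
            for (int s = lo; s < hi; s++) out[s] = f.eval(s);
            return;
        }
        int mid = (lo + hi) >>> 1;         // halve the index range
        invokeAll(new FineGrained(out, lo, mid, threshold, f),
                  new FineGrained(out, mid, hi, threshold, f));
    }

    static double[] evaluateStage(int nStates, int threshold, StageValue f) {
        double[] out = new double[nStates];
        new ForkJoinPool().invoke(new FineGrained(out, 0, nStates, threshold, f));
        return out;
    }
}
```

The serial outer loop over periods (Step 3/Step 7) would call evaluateStage once per stage and then take the maximum over the filled array.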
(1) Basic information
To verify the parallel efficiency of the PSCWAGA, PAHPSO, PDP and PDDDP methods, the model maximizing the annual generation of the cascade reservoirs (see the Embodiment section) is used as the calculation example, and, according to the optimization characteristics of each method, multi-reservoir systems with different scheduling objects are selected for testing; the scheduling object for the tests is the Hongshui River cascade reservoir system. The multi-reservoir topology is shown in Fig. 7. The CPU of the test computer is an Intel Xeon E31245 @ 3.30 GHz (4 cores), and "hyper-threading" is turned off for the tests.
(2) Parallel performance indicators
Speed-up ratio Sp and efficiency Ep are two common indicators for measuring the performance of a parallel algorithm; their mathematical expressions are, respectively:
Sp = T1/Tp (1)
Ep = Sp/p (2)
where T1 is the execution time of the algorithm in the serial environment (s) and Tp is the execution time of the algorithm in the p-core environment (s).
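Formulas (1) and (2) can be checked against one of the reported results below: taking the first PSCWAGA scheme, the 4-core time Tp = 582 s together with the reported saving of 1583 s gives a serial time T1 = 582 + 1583 = 2165 s and hence Sp ≈ 3.72, matching Table 5. The class name is illustrative.

```java
/** Tiny check of formulas (1) and (2): Sp = T1/Tp, Ep = Sp/p. */
public class ParallelPerf {
    static double speedup(double t1, double tp) { return t1 / tp; }
    static double efficiency(double t1, double tp, int p) { return speedup(t1, tp) / p; }
}
```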
(3) Testing schemes
① PSCWAGA method
The reservoirs taking part in the optimization are the 6 stations with seasonal or better regulation capability: Lubuge, Yunpeng, Tianshengqiao-I, Guangzhao, Longtan and Yantan. Three groups of schemes with different population sizes are set up for simulation testing; the population size of each scheme is given in Table 1.
Table 1: Population sizes of the three PSCWAGA simulation schemes
② PAHPSO method
The reservoirs taking part in the optimization are the 4 stations with seasonal or better regulation capability: Tianshengqiao-I, Guangzhao, Longtan and Yantan. Three groups of schemes with different population sizes are set up for simulation testing; the population size of each scheme is given in Table 2.
Table 2: Population sizes of the three PAHPSO simulation schemes
③ PDP method
The reservoirs taking part in the optimization are the 2 stations with seasonal or better regulation capability: Longtan and Yantan. Three schemes with different numbers of discrete points are set up for simulation testing; the number of discrete points of each scheme is given in Table 3.
Table 3: Numbers of discrete points of the three PDP simulation schemes
④ PDDDP method
The reservoirs taking part in the optimization are the 3 stations with seasonal or better regulation capability: Tianshengqiao-I, Longtan and Yantan. Three schemes with different scheduling-period lengths (calculation scales) are set up for simulation testing; the scheduling period of each scheme is given in Table 4.
Table 4: Scheduling periods of the three PDDDP simulation schemes
(4) Parallel results of each scheme
① PSCWAGA method
The PSCWAGA test results are given in Table 5. The minimum elapsed times of the 3 schemes all occur under the 4-core environment: 582 s, 881 s and 1173 s respectively, which are 1583 s, 2425 s and 3355 s less than the serial times; the maximum speed-up ratios reach 3.72, 3.75 and 3.86 respectively. The acceleration of the multi-core CPU is given full play and the computational efficiency of the algorithm is significantly improved.
Table 5: Comparison of serial and parallel results of the PSCWAGA simulation schemes under different multi-core environments
② PAHPSO method
The PAHPSO test results are given in Table 6. The minimum elapsed times of the 3 schemes all occur under the 4-core environment: 139.7 s, 263.9 s and 382.7 s respectively, which are 363.4 s, 723.5 s and 1075.9 s less than the serial times; the maximum speed-up ratios reach 3.60, 3.74 and 3.81 respectively. The acceleration of the multi-core CPU is given full play and the computational efficiency of the algorithm is significantly improved.
Table 6: Comparison of serial and parallel results of the PAHPSO simulation schemes under different multi-core environments
③ PDP method
The PDP test results are given in Table 7. The minimum elapsed times of the 3 schemes all occur under the 4-core environment: 25.1 s, 91.5 s and 187.5 s respectively, which are 64.1 s, 251.7 s and 532.1 s less than the serial times; the maximum speed-up ratios reach 3.56, 3.75 and 3.84 respectively. The acceleration of the multi-core CPU is given full play and the computational efficiency of the algorithm is significantly improved.
Table 7: Comparison of serial and parallel results of the PDP simulation schemes under different multi-core environments
④ PDDDP method
The PDDDP test results are given in Table 8. The minimum elapsed times of the 3 schemes all occur under the 4-core environment: 15.1 s, 362.3 s and 928.4 s respectively, which are 6.7 s, 258.0 s and 669.7 s less than the serial times; the maximum speed-up ratios reach 1.80, 3.48 and 3.59 respectively. The acceleration of the multi-core CPU is given full play and the computational efficiency of the algorithm is significantly improved.
Table 8: Comparison of serial and parallel results of the PDDDP simulation schemes under different multi-core environments
(5) Brief summary
1) The example test results of the PSCWAGA, PAHPSO, PDP and PDDDP methods show that, using the Fork/Join multi-core parallel framework, the parallel performance of the multi-core CPU can be given full play, the computation time is substantially reduced, and the computational efficiency of the algorithms is significantly improved.
2) The larger the calculation scale of a parallel method, the greater the reduction in computation time and the more obvious the advantage of parallel computation; moreover, as the calculation scale gradually increases, the acceleration effect and parallel efficiency grow steadily, and the speed-up ratio approaches the ideal speed-up ratio.
It is obvious to those skilled in the art that the invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from the spirit or essential attributes of the invention. Therefore, from whatever point of view, the embodiments should be regarded as exemplary and non-restrictive; the scope of the present invention is defined by the appended claims rather than by the above description, and it is intended that all changes falling within the meaning and scope of equivalency of the claims be included in the present invention.
Moreover, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution; this manner of narration is adopted only for clarity. Those skilled in the art should take the specification as a whole, and the technical solutions in the various embodiments may also be suitably combined to form other embodiments understandable to those skilled in the art.
Claims (3)
1. A multi-core parallel computing design method for the optimized operation of cascade reservoirs based on the Fork/Join framework, characterized by comprising the following steps:
(1) Build the Fork/Join parallel framework: the core of the Fork/Join parallel framework inherits the divide-and-conquer principle. The original problem is recursively subdivided into several smaller, mutually independent subproblems that can be computed in parallel; after each subproblem has been computed independently in parallel, the sub-results are combined into the final result of the original problem. The Fork/Join framework provides a dedicated thread-pool design: when the program starts, it creates by default a number of active threads equal to the number of available processors. During the divide-and-conquer process, a freely configurable threshold controls the subproblem scale, defined as the upper limit of the subproblem size; when the size of a subproblem is less than or equal to the threshold, the subdivision terminates and the subproblems are evenly assigned to different threads for parallel computation. In addition, during subproblem computation, Fork/Join uses a double-ended queue (deque) ordering model together with a work-stealing algorithm: when the task queue of one thread becomes empty, that thread steals tasks from the tail of the queue of another working thread.
(2) Realize the Fork/Join parallel framework: 1) the class implementing the algorithm must extend the Fork/Join application interface class java.util.concurrent.RecursiveAction or java.util.concurrent.RecursiveTask; 2) select a threshold for dividing tasks; 3) implement the compute() method of the Fork/Join interface class (declared void compute() in RecursiveAction; in RecursiveTask, compute() returns the task's result); 4) set the task division mode.
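The four realization steps in step (2) can be sketched as follows. This is a minimal illustration using an array-summing task rather than the reservoir-scheduling objective of the invention; the class and field names are chosen for the example only.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Minimal sketch of the four realization steps: extend RecursiveTask,
// pick a threshold, implement compute(), and split tasks recursively.
public class SumTask extends RecursiveTask<Long> {
    static final int THRESHOLD = 1_000;   // step 2: task-division threshold
    final long[] data;
    final int lo, hi;

    public SumTask(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {            // step 3: compute() of the interface class
        if (hi - lo <= THRESHOLD) {       // small enough: solve directly
            long s = 0;
            for (int i = lo; i < hi; i++) s += data[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;        // step 4: divide into two subtasks
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                      // run the left half asynchronously
        return right.compute() + left.join(); // combine the sub-results
    }

    public static long parallelSum(long[] data) {
        // the common pool's default parallelism equals the available processors
        return ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
    }
}
```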
(3) Parallel design of typical intelligent methods under the coarse-grained pattern: 1) parallel self-adaptive chaos whole annealing genetic algorithm (PSCWAGA):
Step 1. Parameter initialization: set the population size m, the number of chaos sequences d, the maximum number of population iterations Kmax, the initial temperature T0 and the adaptive parameters Pc1, Pc2, Pm1, Pm2;
Step 2. Population initialization: according to the Logistic mapping equation, n groups of chaos sequences are generated at random in the chaotic space and mapped into the solution space to produce m different individuals that constitute the population; each individual consists of the water-level values of each power station in the different periods (zi1, zi2, ..., zin);
Step 3. Create the thread pool, whose default number of worker threads equals the number of CPU logical threads, and set the Fork/Join calculation threshold;
Step 4. Start the parallel computation flow:
Parallel step ①: according to the set calculation threshold, the parent population is divided by the recursive schema into multiple smaller sub-populations;
Parallel step ②: the set of sub-populations is evenly distributed to the different logical threads; to keep the CPU load balanced, each logical thread is assigned the same number of subtasks;
Parallel step ③: each thread computes its assigned sub-population independently; the main calculation procedure is as follows:
A. Evaluate the individual fitness;
B. Selection: using the whole-annealing selection mechanism, parents are allowed to take part in the competition and the next-generation individuals are selected;
C. Crossover: the crossover operation is performed by the arithmetic crossover method;
D. Mutation: the mutation operation is performed in the non-uniform mutation mode;
E. Compute the parent and offspring fitness: when a parent individual Xi produces an offspring X'i through crossover and mutation, if f(X'i) > f(Xi), Xi is replaced by X'i; otherwise X'i is accepted with probability exp[(f(X'i) − f(Xi))/Tk] (the Metropolis criterion), and Xi is retained if the offspring is not accepted;
F. Parameter update: iteration count k = k + 1, temperature Tk = 1/ln(k/T0 + 1);
G. Judge whether the sub-population iteration has finished, taking the annealing temperature Tk or the maximum number of iterations as the convergence condition: if neither has reached the initially set convergence condition, return to A; otherwise the sub-population computation converges and ends;
Parallel step ④: collect and merge the optimized solutions of each sub-population into a result set and return it to the main thread;
Step 5. Filter the optimal solution out of the result set; the computation ends and the thread pool is destroyed.
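The acceptance rule in sub-step E can be sketched as follows, assuming a maximization objective. The class name and the fixed random seed are illustrative; the fitness function and temperature schedule of the patent are not modeled here.

```java
import java.util.Random;

// Sketch of sub-step E: Metropolis-style acceptance of an offspring
// against its parent at annealing temperature tk (maximization).
public class Anneal {
    static final Random RNG = new Random(42);

    // Returns true if the offspring should replace the parent.
    public static boolean accept(double fParent, double fOffspring, double tk) {
        if (fOffspring > fParent) return true;             // better: always accept
        double p = Math.exp((fOffspring - fParent) / tk);  // worse: accept with prob p
        return RNG.nextDouble() < p;
    }
}
```

As the temperature Tk falls toward zero, the acceptance probability for worse offspring vanishes, so the search gradually shifts from exploration to pure hill-climbing.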
2) Parallel adaptive hybrid particle swarm optimization (PAHPSO):
Step 1. Parameter initialization;
Step 2. Population initialization;
Step 3. Create the thread pool, whose default number of worker threads equals the number of CPU logical threads, and set the Fork/Join calculation threshold;
Step 4. Start the parallel computation flow:
Parallel step ①: according to the set calculation threshold, the parent population is divided by the recursive schema into multiple smaller sub-populations;
Parallel step ②: the set of sub-populations is evenly distributed to the different logical threads; to keep the CPU load balanced, each logical thread is assigned the same number of subtasks;
Parallel step ③: each thread computes its assigned sub-population independently; the main calculation procedure is as follows:
A. Compute the particle fitness, each particle's individual optimal solution and the population's global optimal solution: the particle fitness is compared with the particle's individual optimal solution, and if it is better, the current particle position becomes the personal best position; the particle fitness is compared with the population's global optimal solution, and if it is better, the current particle position becomes the population's global best position;
B. Compute the particle energy and the particle energy threshold; if the particle energy is below the current particle energy threshold, a mutation operation is performed on the particle's current position and velocity;
C. Compute the particle similarity and the particle similarity threshold; if the similarity of two adjacent particles is below the current particle similarity threshold, a mutation operation is performed on the history best position of the poorer particle;
D. Introduce the neighborhood-based greedy random search strategy to update the personal best position of the particle: if the position found by the neighborhood search has a higher fitness than the particle had before the search, it replaces the particle's previous position; the particle's post-search position is then compared with the particle's history best position and the population's global best position, and both are updated accordingly;
E. Update the velocities and positions of the particle population;
F. Judge whether the sub-population iteration has finished: if the current iteration count is less than the maximum number of iterations, return to A; otherwise the sub-population computation converges and ends;
Parallel step ④: collect and merge the calculation results of each sub-population into a result set and return it to the main thread;
Step 5. Filter the optimal solution out of the result set; the computation ends and the thread pool is destroyed.
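Sub-step E above is the standard particle swarm velocity and position update. A minimal sketch follows; the symbols w, c1, c2 (inertia and acceleration coefficients) and r1, r2 (random factors, passed in here so the update is deterministic) are the conventional PSO names, not values specified by the patent.

```java
// Sketch of sub-step E: the standard PSO velocity and position update
// applied to one particle across all dimensions.
public class PsoUpdate {
    public static void step(double[] x, double[] v,
                            double[] pBest, double[] gBest,
                            double w, double c1, double c2,
                            double r1, double r2) {
        for (int d = 0; d < x.length; d++) {
            v[d] = w * v[d]
                 + c1 * r1 * (pBest[d] - x[d])   // pull toward the personal best
                 + c2 * r2 * (gBest[d] - x[d]);  // pull toward the global best
            x[d] += v[d];                        // move the particle
        }
    }
}
```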
3) Fork/Join framework coarse-grained parallel design pattern and method:
From the calculation flows of the PSCWAGA and PAHPSO methods, the coarse-grained parallel design pattern of the Fork/Join framework is summarized as follows:
Step 1. Initialize the algorithm parameters and the population size;
Step 2. Create the thread pool and set the calculation threshold;
Step 3. Using the Fork/Join parallel framework, divide the parent population by the recursive schema into many sub-populations and distribute them evenly to the sub-threads;
Step 4. Each sub-population is optimized by the original serial method flow until its iteration ends;
Step 5. Collect and merge the calculation results of each sub-population into a result set and return it to the main thread;
Step 6. Filter the optimal solution out of the result set; the computation ends and the thread pool is destroyed.
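The six steps of the coarse-grained pattern can be sketched as follows. The serial optimizer is passed in as a parameter, standing in for PSCWAGA or PAHPSO; representing the population as a flat double array and merging by keeping the larger fitness are assumed conventions of this sketch, not details fixed by the claim.

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;
import java.util.function.Function;

// Coarse-grained skeleton (Steps 1-6): the population is split recursively,
// each sub-population is optimized by the serial optimizer passed in, and
// the best sub-result is merged upward toward the main thread.
public class CoarseGrained extends RecursiveTask<Double> {
    final double[] population;
    final int lo, hi, threshold;
    final Function<double[], Double> optimizer; // serial optimizer on one sub-population

    CoarseGrained(double[] pop, int lo, int hi, int threshold,
                  Function<double[], Double> optimizer) {
        this.population = pop; this.lo = lo; this.hi = hi;
        this.threshold = threshold; this.optimizer = optimizer;
    }

    @Override
    protected Double compute() {
        if (hi - lo <= threshold) {                       // sub-population small enough
            return optimizer.apply(Arrays.copyOfRange(population, lo, hi));
        }
        int mid = (lo + hi) >>> 1;                        // recursive split
        CoarseGrained left = new CoarseGrained(population, lo, mid, threshold, optimizer);
        CoarseGrained right = new CoarseGrained(population, mid, hi, threshold, optimizer);
        left.fork();
        return Math.max(right.compute(), left.join());    // merge: keep the best result
    }

    public static double run(double[] pop, int threshold,
                             Function<double[], Double> optimizer) {
        return ForkJoinPool.commonPool()
                .invoke(new CoarseGrained(pop, 0, pop.length, threshold, optimizer));
    }
}
```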
(4) Parallel design of typical dynamic programming methods under the fine-grained pattern: 1) parallel dynamic programming (PDP) method:
Step 1. Data preparation: obtain the basic attribute characteristic values and characteristic curves of the power stations needed for the calculation, including the characteristic water levels, power output coefficients, water level-storage capacity curves and discharge-tailwater level curves, and determine the discrete state variables St according to the upper and lower water-storage bounds of the reservoir in each period and the number of discrete points;
Step 2. Create the thread pool, whose default number of worker threads equals the number of CPU logical threads, and set the Fork/Join calculation threshold;
Step 3. Start the parallel computation flow:
Parallel step ①: build the parent task of the parallel computation: the solution of all water-storage state variable combinations Bt(St, It, Nt) within the iteration period is taken as the parent task, and storage space is created for indices arising in the calculation, such as water level, power output and energy production;
Parallel step ②: according to the set calculation threshold, the parent task is divided by the recursive schema into multiple smaller subtasks;
Parallel step ③: the set of subtasks is evenly assigned to the different logical threads; to keep the CPU load balanced, each logical thread is assigned the same number of subtasks;
Parallel step ④: each thread computes its assigned subtasks independently, i.e. all discrete state combinations within a subtask are solved by the recurrence formula;
Parallel step ⑤: collect and merge the calculation results of each subtask into a result set and return it to the main thread;
Step 4. Filter the optimal solution out of the result set; the computation ends and the thread pool is destroyed.
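A minimal sketch of this fine-grained pattern: one stage of a DP recurrence is evaluated over all discrete state indices in parallel. The function stageValue is a placeholder for solving one state combination Bt(St, It, Nt) in the recurrence formula; the reservoir physics (water level, output, energy) is abstracted away.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
import java.util.function.IntToDoubleFunction;

// Fine-grained skeleton: the range of discrete state indices is split
// recursively, and each leaf fills its slice of the shared result array.
public class FineGrained extends RecursiveAction {
    final double[] out;
    final int lo, hi, threshold;
    final IntToDoubleFunction stageValue; // solves one state combination

    FineGrained(double[] out, int lo, int hi, int threshold, IntToDoubleFunction f) {
        this.out = out; this.lo = lo; this.hi = hi;
        this.threshold = threshold; this.stageValue = f;
    }

    @Override
    protected void compute() {
        if (hi - lo <= threshold) {           // subtask small enough: solve directly
            for (int i = lo; i < hi; i++) out[i] = stageValue.applyAsDouble(i);
            return;
        }
        int mid = (lo + hi) >>> 1;            // recursive split into two subtasks
        invokeAll(new FineGrained(out, lo, mid, threshold, stageValue),
                  new FineGrained(out, mid, hi, threshold, stageValue));
    }

    public static double[] evaluateStage(int nStates, int threshold, IntToDoubleFunction f) {
        double[] out = new double[nStates];
        ForkJoinPool.commonPool().invoke(new FineGrained(out, 0, nStates, threshold, f));
        return out;
    }
}
```

Because every index writes to a distinct slot of the shared array, the subtasks stay independent and no locking is needed.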
2) Parallel discrete differential dynamic programming (PDDDP) method:
Step 1. Data preparation: obtain the basic attribute characteristic values and characteristic curves of the power stations needed for the calculation, including the characteristic water levels, power output coefficients, water level-storage capacity curves and discharge-tailwater level curves, and determine the discrete state variables St according to the upper and lower water-storage bounds of the reservoir in each period and the number of discrete points;
Step 2. Set the number of iteration corridors and generate an initial feasible solution as the initial calculation trajectory;
Step 3. Create the thread pool, whose default number of worker threads equals the number of CPU logical threads, and set the Fork/Join calculation threshold;
Step 4. Choose the largest corridor width as the current corridor;
Step 5. Start the parallel computation flow:
Parallel step ①: build the parent task of the parallel computation within the current corridor: the solution of all water-storage state variable combinations Bt(St, It, Nt) within the corridor is taken as the parent task, and storage space is created for indices arising in the calculation, such as water level, power output and energy production;
Parallel step ②: according to the set calculation threshold, the parent task is divided by the recursive schema into multiple smaller subtasks;
Parallel step ③: the set of subtasks is evenly assigned to the different logical threads; to keep the CPU load balanced, each logical thread is assigned the same number of subtasks;
Parallel step ④: each thread computes its assigned subtasks independently, i.e. all discrete state combinations within a subtask are solved by the recurrence formula;
Parallel step ⑤: collect and merge the calculation results of each subtask into a result set and return it to the main thread;
Step 6. Output the current optimal trajectory from the result set and judge whether it is identical to the initial trajectory; if identical, go to Step 7; if different, go to Step 8;
Step 7. Judge whether the current corridor is the last corridor; if not, set the next smaller corridor width as the current corridor width and go to Step 8; if so, the computation ends, the optimal trajectory is output as the optimal solution and the thread pool is destroyed;
Step 8. Set the current optimal trajectory as the initial trajectory of the next iteration and return to Step 5.
3) Fork/Join framework fine-grained parallel design pattern and method:
From the calculation flows of the PDP and PDDDP methods, the fine-grained parallel design pattern of the Fork/Join framework is summarized as follows:
Step 1. Data preparation, including parameter initialization and setting the number of discrete states;
Step 2. Create the thread pool and set the calculation threshold;
Step 3. Execute the serial flow until the computation of the return values of the state discrete point combinations begins, then enter the parallel computation;
Step 4. Using the Fork/Join parallel framework, divide the parent task by the recursive schema into multiple subtasks and distribute them evenly to the sub-threads;
Step 5. In each subtask, the state discrete point combinations are solved by the dynamic programming recurrence formula until each subtask is finished;
Step 6. Collect and merge the calculation results of each subtask into a result set and return it to the main thread;
Step 7. Filter the current optimal solution out of the result set and continue executing the serial flow of the method until the computation ends; destroy the thread pool.
2. The multi-core parallel computing design method for the optimized operation of cascade reservoirs based on the Fork/Join framework according to claim 1, characterized in that the task-dividing threshold in step (2) is set as follows: when the threshold is set too small, the subproblems are smaller and more numerous, which easily causes excessive resource-management overhead; when the threshold is set too large, the subproblems are larger and fewer, and when the number of subproblems is even less than the number of active threads, some worker threads are left idle. Therefore, to ensure that every worker thread can be assigned a subproblem, the threshold is generally set by formula (I):
μ = ⌈Sc / W⌉    (I)
In formula (I), ⌈·⌉ denotes rounding the calculation result up to an integer; μ is the scale-control threshold; Sc is the original problem scale; and W is the number of logical threads of the multi-core processor.
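The formula (I) image is not reproduced in the text; assuming the ceiling-of-Sc/W form implied by the claim's variable definitions (the threshold splits a problem of scale Sc into at most W subproblems, one per logical thread), it can be computed with integer arithmetic:

```java
// Sketch of threshold formula (I) under the assumed form u = ceil(sc / w):
// the division result is rounded up so the problem of scale sc splits
// into at most w subproblems, one per logical thread.
public class Threshold {
    public static int scaleThreshold(int sc, int w) {
        return (sc + w - 1) / w;   // integer ceiling of sc / w
    }
}
```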
3. The multi-core parallel computing design method for the optimized operation of cascade reservoirs based on the Fork/Join framework according to claim 1, characterized in that setting the task division mode in step (2) specifically means that the parent task can be divided into multiple subtasks at each division, and the number of subtasks per division is implemented in code by the developer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611005315.0A CN107015861A (en) | 2016-11-07 | 2016-11-07 | A kind of Cascade Reservoirs Optimized Operation multi-core parallel concurrent based on Fork/Join frameworks calculates design method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107015861A true CN107015861A (en) | 2017-08-04 |
Family
ID=59439532
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611005315.0A Pending CN107015861A (en) | 2016-11-07 | 2016-11-07 | A kind of Cascade Reservoirs Optimized Operation multi-core parallel concurrent based on Fork/Join frameworks calculates design method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107015861A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103971174A (en) * | 2014-05-06 | 2014-08-06 | 大连理工大学 | Hydropower station group optimized dispatching method based on improved quantum-behaved particle swarm algorithm |
CN104182909A (en) * | 2014-08-21 | 2014-12-03 | 大连理工大学 | Multi-core parallel successive approximation method of hydropower system optimal scheduling |
CN105719091A (en) * | 2016-01-25 | 2016-06-29 | 大连理工大学 | Parallel multi-objective optimized scheduling method for cascaded hydropower station group |
Non-Patent Citations (1)
Title |
---|
Wang Sen: "Research on Hybrid Intelligent Algorithms and Parallel Methods for the Long-term Optimized Operation of Cascade Hydropower Station Groups", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II *
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107609679B (en) * | 2017-08-21 | 2019-04-12 | 华中科技大学 | A kind of preferred method for drafting of multi-parameter and system of annual-storage reservoir power generation dispatching figure |
CN107609679A (en) * | 2017-08-21 | 2018-01-19 | 华中科技大学 | The preferred method for drafting of multi-parameter and system of a kind of annual-storage reservoir power generation dispatching figure |
CN108108244A (en) * | 2017-12-15 | 2018-06-01 | 中南大学 | A kind of side slope strength reduction factor multithreads computing method |
CN108108244B (en) * | 2017-12-15 | 2021-09-28 | 中南大学 | Slope intensity reduction coefficient multi-thread parallel computing method |
CN108491260A (en) * | 2018-04-12 | 2018-09-04 | 迈普通信技术股份有限公司 | Communication equipment multitask test method and device |
CN108596504B (en) * | 2018-05-03 | 2019-11-08 | 中国水利水电科学研究院 | Consider the economically viable multi-reservoir schedule parallel dynamic programming method of computing resource |
CN108596504A (en) * | 2018-05-03 | 2018-09-28 | 中国水利水电科学研究院 | Consider the economically viable multi-reservoir schedule parallel dynamic programming method of computing resource |
CN109408214A (en) * | 2018-11-06 | 2019-03-01 | 北京字节跳动网络技术有限公司 | A kind of method for parallel processing of data, device, electronic equipment and readable medium |
CN109636004A (en) * | 2018-11-16 | 2019-04-16 | 华中科技大学 | A kind of hydroelectric system combined dispatching neighborhood search dimensionality reduction optimization method |
CN110222938A (en) * | 2019-05-10 | 2019-09-10 | 华中科技大学 | A kind of Hydropower Stations head relation cooperative optimization method and system |
CN110851987A (en) * | 2019-11-14 | 2020-02-28 | 上汽通用五菱汽车股份有限公司 | Method, apparatus and storage medium for predicting calculated duration based on acceleration ratio |
CN110851987B (en) * | 2019-11-14 | 2022-09-09 | 上汽通用五菱汽车股份有限公司 | Method, apparatus and storage medium for predicting calculated duration based on acceleration ratio |
CN111260500A (en) * | 2019-12-12 | 2020-06-09 | 浙江工业大学 | Hadoop-based distributed differential evolution scheduling method for small hydropower station |
CN111260500B (en) * | 2019-12-12 | 2021-12-07 | 浙江工业大学 | Hadoop-based distributed differential evolution scheduling method for small hydropower station |
CN111124690A (en) * | 2020-01-02 | 2020-05-08 | 哈尔滨理工大学 | Secure distribution method of E-mail server based on OpenMP thread optimization |
CN112949154A (en) * | 2021-03-19 | 2021-06-11 | 上海交通大学 | Parallel asynchronous particle swarm optimization method and system and electronic equipment |
CN112949154B (en) * | 2021-03-19 | 2023-02-17 | 上海交通大学 | Parallel asynchronous particle swarm optimization method and system and electronic equipment |
CN113467397A (en) * | 2021-07-06 | 2021-10-01 | 山东大学 | Multi-layer hierarchical control system and method for comprehensive energy system |
CN113608894A (en) * | 2021-08-04 | 2021-11-05 | 电子科技大学 | Fine granularity-oriented algorithm component operation method |
CN114338335A (en) * | 2021-12-15 | 2022-04-12 | 一汽资本控股有限公司 | Integrated monitoring system and method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107015861A (en) | A kind of Cascade Reservoirs Optimized Operation multi-core parallel concurrent based on Fork/Join frameworks calculates design method | |
Feng et al. | Optimization of hydropower reservoirs operation balancing generation benefit and ecological requirement with parallel multi-objective genetic algorithm | |
CN107015852A (en) | A kind of extensive Hydropower Stations multi-core parallel concurrent Optimization Scheduling | |
Zhang et al. | Use of parallel deterministic dynamic programming and hierarchical adaptive genetic algorithm for reservoir operation optimization | |
CN105719091B (en) | A kind of parallel Multiobjective Optimal Operation method of Hydropower Stations | |
CN105790266B (en) | A kind of parallel Multi-objective Robust Optimized Operation integrated control method of micro-capacitance sensor | |
Feng et al. | Peak operation of hydropower system with parallel technique and progressive optimality algorithm | |
CN105975342A (en) | Improved cuckoo search algorithm based cloud computing task scheduling method and system | |
CN103729694B (en) | The method that improvement GA based on polychromatic sets hierarchical structure solves Flexible workshop scheduling | |
Feng et al. | Multiobjective operation optimization of a cascaded hydropower system | |
Fang et al. | Evolutionary optimization using epsilon method for resource-constrained multi-robotic disassembly line balancing | |
Feng et al. | Scheduling of short-term hydrothermal energy system by parallel multi-objective differential evolution | |
CN109670650A (en) | The method for solving of Cascade Reservoirs scheduling model based on multi-objective optimization algorithm | |
Li et al. | Hierarchical multi-reservoir optimization modeling for real-world complexity with application to the Three Gorges system | |
CN101231720A (en) | Enterprise process model multi-target parameter optimizing method based on genetic algorithm | |
CN101593132A (en) | Multi-core parallel simulated annealing method based on thread constructing module | |
CN115085202A (en) | Power grid multi-region intelligent power collaborative optimization method, device, equipment and medium | |
CN115271437A (en) | Water resource configuration method and system based on multi-decision-making main body | |
Wang et al. | Application of hybrid artificial bee colony algorithm based on load balancing in aerospace composite material manufacturing | |
CN108769105A (en) | A kind of scheduling system of knowledge services multi-task scheduling optimization method and its structure under cloud environment | |
CN112148446B (en) | Evolutionary strategy method for multi-skill resource limited project scheduling | |
Hu et al. | A novel adaptive multi-objective particle swarm optimization based on decomposition and dominance for long-term generation scheduling of cascade hydropower system | |
Napalkova et al. | Multi-objective stochastic simulation-based optimisation applied to supply chain planning | |
CN103440540B (en) | A kind of parallel method of land utilization space layout artificial immunity Optimized model | |
CN113779842A (en) | Reinforcing rib structure layout optimization design method based on genetic algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170804 |