CN103034534A - Electric power system analysis parallel computing method and system based on grid computation - Google Patents


Info

Publication number
CN103034534A
CN103034534A · CN2011103087164A · CN201110308716A
Authority
CN
China
Prior art keywords
task
calculation
parallel
client computer
computation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011103087164A
Other languages
Chinese (zh)
Inventor
唐聪
周挺辉
杜浩
严正
李乃湖
景雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Grid Solutions SAS
Alstom Grid Inc
Original Assignee
Alstom Grid Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alstom Grid Inc filed Critical Alstom Grid Inc
Priority to CN2011103087164A priority Critical patent/CN103034534A/en
Publication of CN103034534A publication Critical patent/CN103034534A/en
Pending legal-status Critical Current

Landscapes

  • Multi Processors (AREA)

Abstract

The invention relates to a parallel computing method and system for power system analysis based on grid computation. With the method and system, the parallel computation tasks of power system analysis are executed in a networked computer environment comprising a server and a plurality of clients. The server obtains the computation tasks and the available computation resources of each client and builds a queue of tasks to be computed. Computation tasks and a computation mode are then sent, according to the queue and the available resources, to one or more available clients for computation. The computation mode is set so that, when there are few computation tasks, the multiple processors of a client execute a parallel program based on a shared-memory parallel method, and when there are many computation tasks, the processors of a client each execute a serial program, so that the tasks to be computed are completed in as little time as possible.

Description

Parallel computing method and system for power system analysis based on grid computing
Technical field
The present invention relates to computing systems for power system analysis, and in particular to a parallel computing method and system for power system analysis based on grid computing.
Background art
Existing power system calculation programs fall into four main categories: conventional serial programs; concurrent programs that run serial code on multiple computers to process multiple tasks in parallel; single-task parallel algorithms implemented on distributed-memory parallel systems; and single-task parallel algorithms implemented on shared-memory parallel systems. Each implementation is introduced in detail below.
Scheme one, the conventional serial program, is the most common. The widely used BPA, ETMSP from EPRI, PSS/E from Siemens PTI, the non-commercial software InterPSS, and the Matlab-based toolbox MatPower all adopt this approach. A serial program uses the computer to solve the mathematical problems of power system calculation step by step in a serial manner, and such a program can only use the resources of a single core on a multiprocessor computer. The performance of this class of programs can mainly be improved in two ways: the first is to use a computer with stronger single-threaded computing power, and the second is the method of executing multiple tasks in parallel described below (i.e. scheme two).
Scheme two, the first parallel scheme, runs independent serial programs on multiple computers or multiple processors to process multiple tasks in parallel, and is to a large extent built on scheme one. The idea is to assign the tasks to be calculated to multiple computers on which serial computing software is installed, execute them remotely and collect the results, which gives an obvious performance improvement when processing a large number of tasks of the same type. No commercial software implements this approach, but it has been studied: South China University of Technology implemented a similar scheme based on InterPSS (see reference [1]), Shanghai Jiao Tong University implemented a similar scheme based on PSS/E (see reference [2]), and the Electric Power Research Institute (EPRI) in the United States implemented a similar approach based on ETMSP (see reference [3]).
Scheme three, the second parallel scheme, is the single-task parallel algorithm implemented on a distributed-memory parallel system (such as cluster machines with the Beowulf structure and grid computing structures built from ordinary computing systems). It is the most complex of the currently available solutions, with a large number of research papers; the only commercial implementation is PSASP from the China Electric Power Research Institute. In this scheme, through algorithm research and programming, part of the computation process of the calculation program is extended so that it can run concurrently on many processors, where each processor uses an independent memory space, and processors communicate over network connections to synchronously refresh the data in their respective memory spaces. Among research papers, Hong Chao of the University of Hong Kong implemented parallel power flow and transient stability programs based on the BBDF form on an IBM cluster (see reference [4]), Huazhong University of Science and Technology has similar research (see reference [5]), and so does Shanghai Jiao Tong University (see reference [6]).
Scheme four, the third parallel scheme, is the single-task parallel algorithm implemented on a shared-memory parallel system (previously always realized on symmetric multiprocessors (SMP)). There is some theoretical research: for example, EPRI in the United States once implemented a shared-memory parallel program based on ETMSP on an SGI multi-core workstation (see reference [7]), and the research of Wu Junqiang obtained optimal performance with this kind of method (see reference [8]). This scheme is similar in principle to scheme three, but in scheme three the CPUs communicate over Ethernet or a high-speed bus, each CPU computes with its own independent memory space, and the contents of the memory spaces used by the CPUs must be coordinated through communication, whereas in this scheme all CPU cores share one set of memory.
The above four techniques have the following shortcomings:
Scheme one, the serial program scheme, suffers from insufficient performance in the face of the massive number of tasks now involved in online calculation for power systems. According to Moore's law, computer performance doubles roughly every 18 months, but this is still not enough to satisfy the calculation requirements of power systems.
Scheme two is based on the serial programs of scheme one and realizes multi-task parallel processing by deploying serial software on multiple computers and computing simultaneously. However, because existing commercial software is licensed through single-machine hardware encryption, the cost rises linearly when deploying on many computers, the cost is very high, and the audience is limited. When calculating a single intensive task, scheme two and scheme one are in fact both serial execution methods, their calculation performance is identical, and there is no improvement.
Scheme three, taking the PSASP software as an example, has problems in three respects:
1. The parallel algorithm that partitions by network under the distributed-memory parallel scheme is still a coarse-grained parallel method. The speed-up ratio under this kind of method is
$$S = \frac{T_s}{T_p} = \frac{T_{ps} + \sum_{i=1}^{p} T_{fi} + T_{selse}}{T_{ps} + \max_{i=1 \ldots p}(T_{fi}) + T_{t0} + T_t + T_{pelse}} \qquad \text{(formula 1)}$$
where S is the speed-up ratio, T_s is the serial execution time, T_p is the parallel execution time, p is the number of processors participating in the parallel procedure, T_ps is the execution time of the part that cannot be parallelized and is identical in the serial and parallel programs, T_fi is the time the i-th processor takes to execute its share of the parallelizable part, T_selse is the execution time of the remainder of the serial program, and T_pelse is the execution time of the remaining non-parallelizable part of the parallel program; the lengths of all of the above times scale linearly with processor performance. T_t0 is the start-up time of network communication (the time consumed by processes such as establishing connections) and T_t is the time consumed by network communication and data transmission; these two parts are independent of processor performance and depend only on how the network is connected. For an enterprise user the network connection is a long-term fixed investment whose performance stays constant over a long period. From the formula above it can be seen that the speed-up ratio decreases as processor performance increases, so the parallelization benefit of the program is on a declining trend in the long term. In particular, grid computing here specifically refers to connecting many computers over an ordinary civilian local area network, whose communication performance in both latency and bandwidth is far below that of the expensive dedicated external buses of large mainframes, so the impact on the speed-up ratio is even greater.
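As a purely illustrative calculation (the numbers below are assumptions, not test data from this application), suppose p = 4, T_ps = 1 s, each T_fi = 2 s, T_selse = T_pelse = 1 s and T_t0 + T_t = 2 s. Formula 1 then gives

$$S = \frac{1 + 4 \times 2 + 1}{1 + 2 + 2 + 1} = \frac{10}{6} \approx 1.67.$$

If processor performance doubles, every processor-bound time is halved while the network terms stay fixed, so

$$S = \frac{0.5 + 4 \times 1 + 0.5}{0.5 + 1 + 2 + 0.5} = \frac{5}{4} = 1.25,$$

which is exactly the decline of the speed-up ratio with rising processor performance described above.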
2. The combination of the two kinds of single-task parallelization with multi-task parallelization is not realized. The main reason is that the number of computing nodes in a cluster is limited (the total number of computing nodes is fixed), so it is difficult to distribute both the number of calculation tasks N and the number of network blocks Mi of each task evenly across the nodes at the same time.
3. This is a typical large-scale cluster application scheme: it requires dedicated computers, its field of application is narrow, its cost is very high, and it requires regular investment in complete hardware replacement.
Scheme four, because it shares memory, has minimal communication latency between threads and is suited to fine-grained parallelization. However, when used alone this scheme can only process a single task on a single multiprocessor computer and cannot use multiple computers in parallel when facing multiple tasks; at the same time, it can only complete calculation tasks of a single function and cannot satisfy the demand for multiple computing functions in power system analysis.
List of references:
[1] Hou Guanji, et al. A novel power grid computing platform based on open-source software. Automation of Electric Power Systems, 2009(1).
[2] Pan. Research on PSS/E transient stability strategies based on grid computing. Master's thesis, Shanghai Jiao Tong University.
[3] Zhang Pei, Jing Chaoyang. Online dynamic security assessment based on a grid computing system. Southern Power System Technology, 2009(2).
[4] Hong Chao, Shen Jun, et al. A parallel algorithm for solving large-scale sparse systems of linear equations and its application in a parallel power flow algorithm. Journal of Wuhan University of Hydraulic and Electric Engineering, 2000(8).
[5] Su Xinmin, et al. A parallel power flow algorithm for the block-bordered diagonal model. Power System Technology, 2002(6).
[6] Gu Xiaoxu, Wang Wei, et al. A parallel power flow calculation algorithm using a grid platform. 2009(3).
[7] University of Maryland. Utility Software Operation on Parallel Computers. EPRI TR-105920.
[8] Jun Qiang Wu. Parallel solution of large sparse matrix equations and parallel power flow. IEEE Transactions on Power Systems, Vol. 10, No. 3, August 1995.
Summary of the invention
The technical problem to be solved by the present invention is to provide a parallel computing method and system for power system analysis based on grid computing.
The technical scheme adopted by the present invention to solve the above technical problem is to propose a parallel computing method for power system analysis based on grid computing, used to execute the parallel computation tasks of power system analysis in a networked computer environment comprising a server and a plurality of clients, the method comprising the following steps: obtaining a calculation task, the calculation task comprising a data file to be calculated; obtaining the available computation resources of each client; adding the calculation task to a queue of tasks to be calculated; sending calculation tasks and a computation mode to one or more available clients according to the queue of tasks to be calculated and the available computation resources, wherein the computation mode is set to parallel when the available computation resources are determined to be abundant, and to serial when the available computation resources are determined to be strained; in the one or more available clients, selecting a calculation program according to the computation mode to execute the calculation task, and returning a calculation result output file, wherein when the computation mode is serial the client runs a number of serial programs equal to its number of processor cores, and when the computation mode is parallel the client runs a parallel program based on the shared-memory parallel method in a manner adapted to the particular hardware; and processing the calculation result output file and marking the calculation task as "completed".
In an embodiment of the invention, the calculation task further comprises a task priority and a task type.
In an embodiment of the invention, before the calculation task is added to the queue of tasks to be calculated, the method further comprises: partitioning a network data file in the calculation task into a network partition data file divided according to the electric power network.
In an embodiment of the invention, the calculation tasks in the queue of tasks to be calculated are arranged according to a first-in-first-out rule.
In an embodiment of the invention, the calculation tasks in the queue of tasks to be calculated are arranged according to a first-in-first-out rule and priority.
In an embodiment of the invention, the calculation task comprises a calculation data file and a task type.
In an embodiment of the invention, the available computation resources are determined to be abundant when the number of calculation tasks in the queue of tasks to be calculated is less than the total number of processing cores of the available clients, and are determined to be strained when the number of calculation tasks in the queue of tasks to be calculated is greater than or equal to the total number of processing cores of the available clients.
In an embodiment of the invention, after the calculation task is sent, the method further comprises: marking the calculation task as "executing" and attaching a timeout deadline; if the calculation task cannot be marked as "completed" before the timeout deadline, the calculation task is given the highest priority and placed back into the queue of tasks to be calculated.
In an embodiment of the invention, after the one or more clients return the calculation result output file, the method further comprises reporting that the client is in an available state.
In an embodiment of the invention, the calculation program executed on the client is distributed to the client by the server.
The present invention further proposes a parallel computing method for power system analysis based on grid computing, used to execute the parallel computation tasks of power system analysis in a networked computer environment comprising a server and a plurality of clients, the method comprising the following steps: obtaining a calculation task, the calculation task comprising a data file to be calculated; obtaining the available computation resources of each client; adding the calculation task to a queue of tasks to be calculated; sending calculation tasks and a computation mode to one or more available clients according to the queue of tasks to be calculated and the available computation resources, so as to cause the one or more available clients to execute the calculation task and return a calculation result output file, wherein the computation mode is set to parallel when the available computation resources are determined to be abundant, and to serial when the available computation resources are determined to be strained; when the computation mode is serial, causing the client to run a number of serial programs equal to its number of processor cores, and when the computation mode is parallel, causing the client to run a parallel program based on the shared-memory parallel method in a manner adapted to the particular hardware; and processing the returned calculation result output file and marking the calculation task as "completed".
The present invention also proposes a parallel computing system for power system analysis based on grid computing, used to execute the parallel computation tasks of power system analysis in a networked computer environment comprising a server and a plurality of clients, the system comprising: a task data acquisition module, a computing resource management module, a queue management module, a task scheduling module and a result data processing module. The task data acquisition module is used to obtain a calculation task, the calculation task comprising a data file to be calculated. The computing resource management module is used to obtain the available computation resources of each client. The queue management module is used to add the calculation task to a queue of tasks to be calculated. The task scheduling module sends calculation tasks and a computation mode to one or more available clients according to the queue of tasks to be calculated and the available computation resources, so as to cause the one or more available clients to execute the calculation task and return a calculation result output file, wherein the computation mode is set to parallel when the available computation resources are determined to be abundant, and to serial when the available computation resources are determined to be strained; when the computation mode is serial, the client is caused to run a number of serial programs equal to its number of processor cores, and when the computation mode is parallel, the client is caused to run a parallel program based on the shared-memory parallel method in a manner adapted to the particular hardware. The result data processing module processes the returned calculation result output file and marks the calculation task as "completed".
Because the present invention adopts the above technical scheme, it can effectively use an enterprise's existing computer resources through networked computing to improve the performance of power system analysis and calculation software without purchasing special equipment, and can, at a low incremental cost, better satisfy the increasingly urgent demands of power systems for calculation real-time performance and accuracy; the resources used for calculation are highly elastic and can be expanded on a large scale. In particular, compared with the cluster solution of PSASP, the present invention does not require dedicated computers, which significantly reduces hardware cost.
Description of drawings
So that the above objects, features and advantages of the present invention can be understood more clearly, specific embodiments of the present invention are described in detail below in conjunction with the accompanying drawings, in which:
Fig. 1 illustrates a hybrid parallel computing environment according to an embodiment of the invention.
Fig. 2 illustrates the flow of a hybrid parallel computing method according to an embodiment of the invention.
Embodiment
Broadly speaking, embodiments of the invention propose a hybrid parallel computation design for power system analysis based on grid computing.
Fig. 1 illustrates a hybrid parallel computing environment according to an embodiment of the invention. Referring to Fig. 1, the exemplary computing environment has a server-client structure comprising a grid computing server 100 and a plurality of grid computing clients 200-1, 200-2, 200-3, ..., 200-P, where P is a positive integer; the clients are collectively designated 200. The server 100 and each client 200 are networked through a computer network 300. In an embodiment of the present invention, the computer network 300 is preferably a local area network, but where communication latency permits, a wide area network or the Internet may also be used. The server 100 and each client 200 can all be built from personal computers, so the computing environment of the present embodiment can be constructed from many computers connected to the same network.
The server 100 runs a grid computing service program having the logic function modules shown in Fig. 1, comprising a task data module 101, a network partitioning module 102, a queue management module 103, a computing resource management module 104, a task scheduling module 105 and a result data processing module 106. The task data module 101 is connected to the input of tasks from outside the system, and on the server side is also connected to the network partitioning module 102 and the queue management module 103. The network partitioning module 102 and the queue management module 103 are both connected to the task scheduling module 105. The computing resource management module 104 is connected to the clients 200 and to the task scheduling module 105. The task scheduling module 105 is connected to the result data processing module 106 and to the clients 200. The result data processing module 106 is connected to the clients 200 and to the operator through a man-machine interface (not shown). In addition, although not shown in Fig. 1, the grid computing service program or its submodules may include communication middleware for the purpose of communication.
Each client 200 runs a grid computing client program comprising parallel computation software 201 and communication middleware 202. The parallel computation software 201 can be software installed in advance on each client 200, or an executable program distributed to and run on the clients 200 through the communication middleware 202, broadcast from the server 100 or other equipment.
On the server 100 side, the task data module 101 obtains the calculation tasks of power system analysis, including a data file to be calculated, a task priority and a task type. Different task types correspond to different power system analysis application types, such as power flow calculation, voltage stability calculation, electromagnetic transient stability calculation, and so on.
The data file to be calculated is, for example, a power flow data file or an electric power network data file. For an electric power network data file, the network partitioning module 102 can apply a network partitioning method to divide it into multiple sub-networks and save the result as a new network partition data file. Here, the network refers to the power system network: the core of all kinds of analysis calculations in the power field is solving systems of linear equations based on power system network parameters, and "partitioning by network" decomposes a large-scale power network into several sub-networks. A commonly used, well-known partitioning technique is the network partitioning method based on factor path trees proposed by K. W. Chan in 1995 (K. W. Chan. Efficient heuristic partitioning algorithm for parallel processing of large power systems network equations [J]. IEE Proceedings - Generation, Transmission and Distribution, 1995, 142(6): 625-630), which decomposes a large system of equations into several groups of small systems of linear equations that can be solved in parallel to increase speed. Parallel algorithms in the field of power system analysis calculation are usually based on this idea.
In the present invention, if multiple calculation tasks share one network data file, they share the corresponding network partition data file. That is to say, the second and subsequent calculation tasks that use the same network data file can skip the network partitioning step in the network partitioning module 102 and directly reuse the previously produced network partition data file.
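A minimal Python sketch of this reuse rule; the function names and the hash-based cache key below are illustrative assumptions, not taken from the application text.

```python
import hashlib

# Hypothetical cache: fingerprint of the network data file -> path of the
# partition data file produced by the network partitioning module.
_partition_cache = {}

def get_partition_file(network_file, partition_fn):
    """Return a network partition data file, reusing the previous result when
    several calculation tasks share the same network data file."""
    with open(network_file, "rb") as f:
        key = hashlib.sha256(f.read()).hexdigest()
    if key not in _partition_cache:
        # Only the first task that uses this network file pays the partitioning cost.
        _partition_cache[key] = partition_fn(network_file)
    return _partition_cache[key]
```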
The queue management module 103 forms the queue of tasks to be calculated according to queue entry time and priority. For example, the queue management module 103 generates the queue of tasks to be calculated for the task scheduling module 105 in a segmented first-in-first-out (FIFO) manner with priority, and each element in the queue carries four attributes: task number, data file, priority and task type.
The computing resource management module 104 can communicate with the clients 200 (for example with the communication middleware 202 running on each client) to obtain in real time the number, performance and availability of the clients connected to the computer network 300. The computing resource management module 104 can obtain this information continuously; for example, it can record in real time clients 200 joining or leaving the overall computing system during operation, and this information can be used by the task scheduling module 105.
The task scheduling module 105 dynamically assigns and sends calculation tasks and a computation mode through the communication middleware to each available client 200 in the computer network 300 according to the size of the calculation task queue. For example, whether a client 200 is available can be identified by a status flag: a client in the "idle" state is an available client, while a client that is computing an assigned calculation task is flagged as "busy". Here, the computation mode is determined according to how strained the computation resources are. The task scheduling module 105 can assess this according to the number of tasks to be calculated and the scale of the available computation resources (for example the number of available clients and the number of available processors). When the available computation resources are determined to be "abundant", the computation mode can be set to parallel; when they are determined to be "strained", the computation mode can be set to serial.
On each client 200 side, the communication middleware 202 can report the client's availability ("busy" or "idle") and performance (including the number of processor cores of the client and the single-threaded capability of each core). The communication middleware 202 also gives each client 200 and the server 100 two-way communication capability: the server 100 can send files such as data files and parallel computation executables to each client 200, and each client 200 can return files such as the output files of the calculation executables to the server 100.
A client 200 that receives a calculation task runs the calculation program according to the computation mode to complete the task. After the task computation finishes, the client 200 automatically returns the calculation result output file and the "task type" through the communication middleware 202 to the result data processing module 106 running on the server, and at the same time reports its "idle" state to the computing resource management module 104 running on the server.
In one embodiment, the parallel computation software 201 is power system analysis software that can be used on a single computer; according to the different power system analysis applications it may consist of multiple executable programs corresponding to multiple functions. In another embodiment, the parallel computation software 201 is a single executable program that includes multiple functions and selects among them by parameter; the computing function is selected through the "task type" parameter obtained by the communication middleware 202 running on the client 200.
In one embodiment, the parallel computation software 201 can be a parallel program based on the shared-memory parallel method with a flexible parallelization capability: it can adaptively use all the processor cores of a single computer according to the particular hardware, or use only one core to execute the program in single-threaded serial mode. In one embodiment, the choice between parallelized operation and single-core serial operation is made through the "computation mode" parameter obtained by the communication middleware 202.
Accordingly, a client 200 can choose to run multiple serial-mode calculation programs simultaneously on the same multi-core computer to solve multiple tasks, or to run a parallelized program based on shared memory on the same multi-core computer to quickly solve a single calculation task, as sketched below.
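A rough Python sketch of this client-side choice between the two run styles. The software described above is a shared-memory parallel program, so the use of multiprocessing here is only a stand-in to show the control flow, and all names are illustrative assumptions.

```python
import multiprocessing as mp

def run_tasks(tasks, mode, solve_serial, solve_parallel):
    """mode == "serial": one independent serial run per core, many tasks at once.
    mode == "parallel": all cores cooperate on one task at a time."""
    cores = mp.cpu_count()
    if mode == "serial":
        # As many independent serial solves as there are cores.
        with mp.Pool(processes=cores) as pool:
            return pool.map(solve_serial, tasks)
    # Parallel mode: a shared-memory solver uses every core for each task in turn.
    return [solve_parallel(task, threads=cores) for task in tasks]
```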
In addition, the result data processing module 106 running on the server can be designed for multiple power system analysis functions: according to the "task type" returned by each client 200, it calls a different sub-function module to process the calculation result output file returned by the client, marks the task as completed, and forms a final conclusion or report output to the man-machine interface.
Fig. 2 illustrates the flow of a hybrid parallel computing method according to an embodiment of the invention. Referring to Fig. 2, the method of the present embodiment comprises the following steps:
Before work begins, each client 200 first starts the communication middleware 202.
At step S201, the server 100 starts the task data module 101 to obtain calculation tasks. A calculation task can comprise a data file to be calculated, a task priority and a task type. Different task types can correspond to different power system analysis application types, such as power flow calculation, voltage stability calculation, electromagnetic transient stability calculation, and so on.
And at step S202, the computing resource management module 104 is started to obtain the available computation resources. For example, by communicating with the communication middleware of each client 200, the availability and performance of each client are obtained as a reference for the task scheduling module 105.
Here, the availability of a client, i.e. the criterion for the client being "idle" or "busy", can be: a client is "idle" when it is not running a program with high CPU occupancy, high memory occupancy or high disk I/O occupancy; otherwise it is "busy", and while a client is executing a calculation program it is flagged as "busy". Separate thresholds can be set for CPU occupancy, memory occupancy and disk I/O occupancy to determine what counts as high CPU occupancy, high memory occupancy or high disk I/O occupancy, as in the sketch below.
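A minimal sketch of such an idle/busy test in Python, assuming the psutil library is available on the client; the threshold values are invented for illustration, since the text above only says that thresholds can be set.

```python
import psutil

# Illustrative thresholds, not values from the application text.
CPU_BUSY_PCT = 50.0      # percent CPU utilisation
MEM_BUSY_PCT = 80.0      # percent memory utilisation
DISK_BUSY_BPS = 50e6     # bytes/second of combined disk read + write

def client_state(interval=1.0):
    """Return "idle" or "busy" from current CPU, memory and disk activity."""
    io_before = psutil.disk_io_counters()
    cpu = psutil.cpu_percent(interval=interval)   # blocks for `interval` seconds
    io_after = psutil.disk_io_counters()
    mem = psutil.virtual_memory().percent
    disk_bps = ((io_after.read_bytes - io_before.read_bytes) +
                (io_after.write_bytes - io_before.write_bytes)) / interval
    if cpu > CPU_BUSY_PCT or mem > MEM_BUSY_PCT or disk_bps > DISK_BUSY_BPS:
        return "busy"
    return "idle"
```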
At step S203, if there is a network data file, the electric power network partitioning module 102 processes it and obtains the network partition data file by partitioning the electric power network in the calculation task.
And at step S204, the calculation task enters the queue management module 103 according to its own attributes, and the queue of tasks to be calculated is formed; in the task queue each calculation task can carry four attributes: task number, data file, priority and task type.
Here, the queue management module 103 can be configured to use a segmented first-in-first-out (FIFO) mode, an exemplary operating mechanism of which is as follows: the operator assigns each calculation task a priority number between 1 and 99 inclusive, a smaller number meaning a higher priority, while the number 0 is reserved as the priority of timed-out tasks that must be recalculated immediately. Within each priority number a first-in-first-out queue is used, i.e. within the queue of the same priority a later task is placed at the end of that segment; the segmented queues corresponding to the different priority numbers are merged in the order 0-99 to produce the total queue. This method is called a segmented FIFO queue. When a new task is added to the total queue, it is inserted into the corresponding segmented queue according to its priority number, and the total queue is dynamically adjusted accordingly to produce a new execution order. A compact sketch of such a queue follows.
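A compact Python sketch of the segmented FIFO queue described above; the class and method names are invented for illustration.

```python
from collections import deque

class SegmentedFifoQueue:
    """Priorities 0-99; 0 is reserved for timed-out tasks. A lower number means a
    higher priority; within one priority the order is first-in, first-out."""

    def __init__(self):
        self._segments = {p: deque() for p in range(100)}

    def push(self, task, priority):
        if not 0 <= priority <= 99:
            raise ValueError("priority must be between 0 and 99")
        self._segments[priority].append(task)

    def pop(self):
        # Walk the segments in order 0..99 and take the oldest task found.
        for p in range(100):
            if self._segments[p]:
                return self._segments[p].popleft()
        return None

    def __len__(self):
        return sum(len(seg) for seg in self._segments.values())
```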
At step S205, the task scheduling module 105 distributes the calculation tasks, i.e. it sends calculation tasks and the computation mode to each client 200 through the communication middleware according to the queue of tasks to be calculated in the queue management module 103 and the available computation resources provided by the computing resource management module 104.
A calculation task can comprise a data file and various parameters, for example the task type.
The computation mode can be serial or parallel. In one embodiment, the "computation mode" parameter is determined, and acts, as follows:
The queue management module 103 can obtain the queue length N_queue in real time, and the computing resource management module 104 can obtain the number of available processor cores N_core in real time, where

$$N_{core} = \sum_{i=1}^{P} N_i,$$

P being the number of clients currently available in the computer network and N_i the number of processor cores of the i-th client.
When N_queue ≥ N_core, the currently available computation resources are strained, and all the tasks being dispatched are flagged with the "serial" computation mode, so that each client runs a number of serial programs equal to its number of processor cores, i.e. the i-th client simultaneously computes N_i tasks to be calculated in serial mode.
When N_queue < N_core, the currently available computation resources are abundant, and all the tasks being dispatched are flagged with the "parallel" computation mode, so that each client runs a parallel program based on the shared-memory parallel method in a manner adapted to its particular hardware. A sketch of this decision rule follows.
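A minimal Python sketch of this decision rule on the server side; `queue`, `clients` and the `send` callback are assumed helpers (the queue behaves like the SegmentedFifoQueue sketched earlier), not names from the application text.

```python
def choose_mode(n_queue, clients):
    """clients: list of (client_id, n_cores) for clients currently reported idle."""
    n_core = sum(cores for _, cores in clients)   # N_core = sum of N_i
    return "serial" if n_queue >= n_core else "parallel"

def dispatch(queue, clients, send):
    """send(client_id, task, mode) stands in for the communication middleware."""
    mode = choose_mode(len(queue), clients)
    for client_id, cores in clients:
        # Serial mode: one task per core. Parallel mode: one task using all cores.
        batch = cores if mode == "serial" else 1
        for _ in range(batch):
            task = queue.pop()
            if task is None:
                return
            send(client_id, task, mode)
```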
Preferably, each task to be calculated, when it is distributed and leaves the queue, is marked by the task scheduling module 105 as "executing" with an attached timeout deadline. If, because the client is too busy, has gone offline or for other such reasons, the task cannot be marked as "completed" by the result data processing module 106 before the timeout deadline, its priority is set to 0 and it is reinserted into the queue of tasks to be calculated of the queue management module 103, as sketched below.
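A sketch of this timeout handling, again with invented names; the in-flight table and the `now` clock are assumptions.

```python
import time

def requeue_timed_out(in_flight, queue, now=time.time):
    """in_flight: dict task_id -> (task, deadline) for tasks marked "executing"."""
    for task_id, (task, deadline) in list(in_flight.items()):
        if now() > deadline:
            del in_flight[task_id]
            # A timed-out task re-enters the queue with the reserved top priority 0.
            queue.push(task, priority=0)
```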
At step S206, after receiving a calculation task, each client 200 calls the calculation executable according to the parameters and the data file of the task, performs the calculation, and returns the calculation result to the result data processing module 106 of the server 100. After the calculation task is completed, each client 200 reports its "idle" state to the computing resource management module 104 running on the server.
At step S207, the result data processing module 106 processes the calculation result according to the corresponding computing function and exports the output to the man-machine interface. The result data processing module 106 also marks the corresponding calculation task as "completed".
The advantages of embodiments of the present invention are demonstrated below through actual tests.
Example 1
A single computer was used to solve a transient stability simulation in parallel with the shared-memory parallel programming method. The program is self-developed software based on the OpenMP shared-memory parallel library, the test object is a power system network case with 3872 bus nodes (hereinafter called the 3872-bus case), and the test environment is an AMD Phenom II 2.5 GHz quad-core computer. The test results are shown in Table 1 below:
Table 1 (the table data appears only as an image in the original and is not reproduced here)
To simulate an older computer, the processor was down-clocked to 800 MHz and the test repeated, with the following results:
Table 2 (the table data appears only as an image in the original and is not reproduced here)
It can be seen that a calculation program based on the shared-memory parallel programming method obtains an almost constant, near-ideal speed-up ratio on computers of different performance, and the performance improvement from parallelization is significant.
Example 2
The electric power network partitioning module executed on the server side can also be parallelized with the shared-memory parallel programming method. For the power system network case with 3872 bus nodes, the test results are as shown in Table 3 below:
Thread count      1       2       4
Running time      94 ms   46 ms   31 ms
Speed-up ratio    1.00    2.04    3.03
Core efficiency   100%    102%    76%
Table 3
This proves that the shared-memory parallel programming method provides effective acceleration for multiple steps of power system calculation; at the same time, because the network partitioning process takes little time, it can be performed on the server side without affecting the real-time performance of the system designed by the present invention.
Example 3
Five Dell Optiplex 360 computers (each with a Core 2 2.6 GHz dual-core processor, 2 GB RAM and the Windows XP operating system) were used to build an evaluation platform according to an embodiment of the invention, with one computer as the server and four computers as clients; the five computers were connected by a 100 Mbit/s local area network.
The following tests were designed: one computer computing one calculation task serially; one computer computing one calculation task in parallel; a single computer computing 11 and 16 calculation tasks in serial mode; a single computer computing 16 calculation tasks in parallel mode; four computers computing 11 and 16 calculation tasks in serial mode; four computers computing 11 and 16 tasks in parallel; and 11 and 16 tasks computed in the manner of the present invention. The test object is the 3872-bus case. The results are shown in Table 4 below:
Table 4 (the table data appears only as an image in the original and is not reproduced here)
It can be seen that, for each grid computing client, using the parallel computation mode effectively accelerates the computation of an individual task, so when the number of tasks to be calculated is small, the parallel mode gives the best performance. When the number of tasks is larger, the serial mode is faster than the parallel mode if the calculation tasks can be evenly distributed over the cores, and slower otherwise. With the design of the present invention, no manual intervention in the computation mode is needed, and the present invention showed the best performance under all the test cases, especially in the test with 11 tasks, whose count is not a multiple of the number of processors.
Compared with the prior art, the above embodiments of the present invention have the following beneficial effects:
1. Based on the concept of grid computing, the present invention effectively uses an enterprise's existing computer resources over the intranet to improve the performance of power system analysis and calculation software without purchasing special equipment, and can, at a low incremental cost, better satisfy the increasingly urgent demands of power systems for calculation real-time performance and accuracy; the resources used for calculation are highly elastic and can be expanded on a large scale. In particular, compared with the cluster solution of PSASP, the present invention significantly reduces hardware cost.
2. With the real-time computing resource assessment method adopted by the present invention, when the operator of a client computer runs an application with higher resource occupancy, that computer can be marked as "busy", so the operator's own work is not affected; after the application finishes, the computer rejoins the computation resources of the grid. No existing research on parallel computing in power systems considers this kind of practical application scenario.
3. Because intranet communication performance is relatively poor and the communication environment is relatively congested in practice, programs based on a distributed-memory parallel system have large values of T_t0 and T_t in formula 1 when their speed-up performance is analysed, and it is difficult for them to obtain an ideal parallel speed-up ratio. The shared-memory parallel system method adopted in the present invention effectively avoids this problem, because the memory-equivalent T_t0 and T_t are at the nanosecond level, i.e. almost negligible, and the speed-up ratio formula of the distributed-memory parallel program (formula 1) is accordingly modified into the following formula:
$$S = \frac{T_s}{T_p} = \frac{T_{ps} + \sum_{i=1}^{p} T_{fi} + T_{selse}}{T_{ps} + \max_{i=1 \ldots p}(T_{fi}) + T_{pelse}} \qquad \text{(formula 2)}$$
At this point communication time no longer affects the speed-up ratio. At the same time, the problem mentioned earlier, that "the speed-up ratio decreases as processor performance increases, so the parallelization benefit of the program is on a declining trend in the long term", is also resolved: since the revised formula no longer contains the terms that do not scale with processor performance, the speed-up results obtained on different computers should be essentially consistent.
In particular, we wrote two threaded versions of the same algorithm, one based on the distributed-memory parallel approach and one based on the shared-memory algorithm, and after testing each obtained the following two groups of test data, see Table 5:
It can be seen that: first, the shared-memory parallel speed-up ratio is significantly higher than the distributed-memory parallel speed-up ratio; second, as processor performance increases, the speed-up ratio of the distributed-memory parallel system decays significantly, while the speed-up ratio of the shared-memory parallel system remains essentially unchanged.
Table 5: performance comparison of the same algorithm under different parallel modes and on different test platforms (the table data appears only as an image in the original and is not reproduced here)
(Note: because the test platforms and simulation step sizes differ, the absolute computation times of the distributed-memory parallel and shared-memory parallel versions are not comparable.)
4. The present invention adopts the described timeout mechanism for each individual task to be calculated: after a task that has been marked as "executing" exceeds the timeout deadline for whatever reason, its priority is flagged as 0 and it is put back into the calculation queue, so the calculation result of that task can still be obtained quickly, while the loss of calculation tasks or data caused by catastrophic failures such as network transmission errors or the shutdown of a client computer is avoided; the system thus has good self-healing properties.
5. The present invention adopts the described strategy of determining the computation mode from the ratio of the queue length to the number of processing cores. When the number of tasks to be calculated is greater than the number of available cores, the computation resources participating in the grid computing achieve a utilization close to 100%; when the number of tasks to be calculated is less than the number of available cores, each task is solved at the fastest possible speed.
6. Compared with the traditional way of deploying serial software on many computers and computing simultaneously to realize multi-task parallel processing, the present invention can distribute the calculation executables directly from the server side, requires no manual intervention, has strong functional expandability, and needs little manual maintenance.
7. The present invention optimizes the algorithm, common in power system parallel computing, of partitioning by electric power network: when solving multiple calculation tasks on the same network data, the network partitioning only needs to be computed once, and the other calculations can reuse the partitioning result.
Although the present invention is disclosed above by way of preferred embodiments, they are not intended to limit the present invention; any person skilled in the art may make minor modifications and refinements without departing from the spirit and scope of the present invention, and therefore the protection scope of the present invention shall be defined by the appended claims.

Claims (12)

1. A parallel computing method for power system analysis based on grid computing, used to execute the parallel computation tasks of power system analysis in a networked computer environment comprising a server and a plurality of clients, the method comprising the following steps:
obtaining a calculation task, the calculation task comprising a data file to be calculated;
obtaining the available computation resources of each client;
adding the calculation task to a queue of tasks to be calculated;
sending calculation tasks and a computation mode to one or more available clients according to the queue of tasks to be calculated and the available computation resources, wherein the computation mode is set to parallel when the available computation resources are determined to be abundant, and to serial when the available computation resources are determined to be strained;
in the one or more available clients, selecting a calculation program according to the computation mode to execute the calculation task, and returning a calculation result output file, wherein when the computation mode is serial the client runs a number of serial programs equal to its number of processor cores, and when the computation mode is parallel the client runs a parallel program based on the shared-memory parallel method in a manner adapted to the particular hardware; and
processing the calculation result output file and marking the calculation task as "completed".
2. the method for claim 1 is characterized in that, described calculation task also comprises task priority and task type.
3. the method for claim 1 is characterized in that, also comprises before this calculation task being joined in the task queue to be calculated: be the network dividing data file of pressing the electric power networks division with a network data Divide File in this calculation task.
4. the method for claim 1 is characterized in that, in this task queue to be calculated according to the regularly arranged calculation task of first-in first-out.
5. the method for claim 1 is characterized in that, in this task queue to be calculated according to first-in first-out rule and priority arrangement calculation task.
6. the method for claim 1 is characterized in that, this calculation task comprises computational data file and task type.
7. the method for claim 1 is characterized in that, when the calculation task number in this task queue to be calculated during less than total processing core number of available client machine, determines that this available computational resources is well-to-do; Calculation task number in this task queue to be calculated determines that this available computational resources is for nervous during more than or equal to total processing core number of available client machine.
8. the method for claim 1, it is characterized in that, also comprise after sending this calculation task: this calculation task of mark is " in the execution " and a subsidiary overtime time limit, if this calculation task can't be marked as " finishing " before the overtime time limit, then this calculation task is put limit priority and again listed in this task queue to be calculated.
9. the method for claim 1 is characterized in that, after these one or more client computer are returned this result of calculation output file, comprises that also this client computer of report is upstate.
10. the method for claim 1 is characterized in that, the calculation procedure of carrying out in this client computer is to this client computer distribution by this server.
11. A parallel computing method for power system analysis based on grid computing, used to execute the parallel computation tasks of power system analysis in a networked computer environment comprising a server and a plurality of clients, the method comprising the following steps:
obtaining a calculation task, the calculation task comprising a data file to be calculated;
obtaining the available computation resources of each client;
adding the calculation task to a queue of tasks to be calculated;
sending calculation tasks and a computation mode to one or more available clients according to the queue of tasks to be calculated and the available computation resources, so as to cause the one or more available clients to execute the calculation task and return a calculation result output file, wherein the computation mode is set to parallel when the available computation resources are determined to be abundant, and to serial when the available computation resources are determined to be strained, and wherein when the computation mode is serial the client is caused to run a number of serial programs equal to its number of processor cores, and when the computation mode is parallel the client is caused to run a parallel program based on the shared-memory parallel method in a manner adapted to the particular hardware; and
processing the returned calculation result output file and marking the calculation task as "completed".
12. A parallel computing system for power system analysis based on grid computing, used to execute the parallel computation tasks of power system analysis in a networked computer environment comprising a server and a plurality of clients, the system comprising:
a task data acquisition module, used to obtain a calculation task, the calculation task comprising a data file to be calculated;
a computing resource management module, used to obtain the available computation resources of each client;
a queue management module, used to add the calculation task to a queue of tasks to be calculated;
a task scheduling module, which sends calculation tasks and a computation mode to one or more available clients according to the queue of tasks to be calculated and the available computation resources, so as to cause the one or more available clients to execute the calculation task and return a calculation result output file, wherein the computation mode is set to parallel when the available computation resources are determined to be abundant, and to serial when the available computation resources are determined to be strained, and wherein when the computation mode is serial the client is caused to run a number of serial programs equal to its number of processor cores, and when the computation mode is parallel the client is caused to run a parallel program based on the shared-memory parallel method in a manner adapted to the particular hardware; and
a result data processing module, which processes the returned calculation result output file and marks the calculation task as "completed".
CN2011103087164A 2011-09-29 2011-09-29 Electric power system analysis parallel computing method and system based on grid computation Pending CN103034534A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011103087164A CN103034534A (en) 2011-09-29 2011-09-29 Electric power system analysis parallel computing method and system based on grid computation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011103087164A CN103034534A (en) 2011-09-29 2011-09-29 Electric power system analysis parallel computing method and system based on grid computation

Publications (1)

Publication Number Publication Date
CN103034534A true CN103034534A (en) 2013-04-10

Family

ID=48021457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011103087164A Pending CN103034534A (en) 2011-09-29 2011-09-29 Electric power system analysis parallel computing method and system based on grid computation

Country Status (1)

Country Link
CN (1) CN103034534A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246497A (en) * 2013-04-19 2013-08-14 国家电网公司 Real-time parallel data processing method based on data partitioning
CN104216685A (en) * 2013-06-02 2014-12-17 洛克泰克科技有限公司 Efficient parallel computation on dependency problems
WO2016107426A1 (en) * 2014-12-30 2016-07-07 Huawei Technologies Co., Ltd. Systems and methods to adaptively select execution modes
CN106294037A (en) * 2015-05-25 2017-01-04 中兴通讯股份有限公司 Strike-machine method of testing and device
CN107256158A (en) * 2017-06-07 2017-10-17 广州供电局有限公司 The detection method and system of power system load reduction
CN110543361A (en) * 2019-07-29 2019-12-06 中国科学院国家天文台 Astronomical data parallel processing device and method
CN110866167A (en) * 2019-11-14 2020-03-06 北京知道创宇信息技术股份有限公司 Task allocation method, device, server and storage medium
CN111190706A (en) * 2018-11-14 2020-05-22 中国电力科学研究院有限公司 Multitask optimization engine driving method and system based on electric power transaction
CN111857061A (en) * 2019-04-28 2020-10-30 北京国电智深控制技术有限公司 Method, device and system for realizing calculation task and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060195846A1 (en) * 2005-02-25 2006-08-31 Fabio Benedetti Method and system for scheduling jobs based on predefined, re-usable profiles
CN101114937A (en) * 2007-08-02 2008-01-30 上海交通大学 Electric power computation gridding application system
CN101169743A (en) * 2007-11-27 2008-04-30 南京大学 Method for implementing parallel power flow calculation based on multi-core computer in electric grid
CN201298233Y (en) * 2008-10-22 2009-08-26 西北电网有限公司 An electrical power system electro-magnetism transient distributed simulation device
CN101587639A (en) * 2009-06-23 2009-11-25 华中科技大学 City bus information management and dispatch decision support system based on network
WO2010037177A1 (en) * 2008-10-03 2010-04-08 The University Of Sydney Scheduling an application for performance on a heterogeneous computing system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060195846A1 (en) * 2005-02-25 2006-08-31 Fabio Benedetti Method and system for scheduling jobs based on predefined, re-usable profiles
CN101114937A (en) * 2007-08-02 2008-01-30 上海交通大学 Electric power computation gridding application system
CN101169743A (en) * 2007-11-27 2008-04-30 南京大学 Method for implementing parallel power flow calculation based on multi-core computer in electric grid
WO2010037177A1 (en) * 2008-10-03 2010-04-08 The University Of Sydney Scheduling an application for performance on a heterogeneous computing system
CN201298233Y (en) * 2008-10-22 2009-08-26 西北电网有限公司 An electrical power system electro-magnetism transient distributed simulation device
CN101587639A (en) * 2009-06-23 2009-11-25 华中科技大学 City bus information management and dispatch decision support system based on network

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246497A (en) * 2013-04-19 2013-08-14 国家电网公司 Real-time parallel data processing method based on data partitioning
CN103246497B (en) * 2013-04-19 2016-01-20 国家电网公司 A kind of real time data method for parallel processing based on Data Placement
CN104216685A (en) * 2013-06-02 2014-12-17 洛克泰克科技有限公司 Efficient parallel computation on dependency problems
US9939792B2 (en) 2014-12-30 2018-04-10 Futurewei Technologies, Inc. Systems and methods to adaptively select execution modes
WO2016107426A1 (en) * 2014-12-30 2016-07-07 Huawei Technologies Co., Ltd. Systems and methods to adaptively select execution modes
CN106294037A (en) * 2015-05-25 2017-01-04 中兴通讯股份有限公司 Strike-machine method of testing and device
CN107256158A (en) * 2017-06-07 2017-10-17 广州供电局有限公司 The detection method and system of power system load reduction
CN111190706A (en) * 2018-11-14 2020-05-22 中国电力科学研究院有限公司 Multitask optimization engine driving method and system based on electric power transaction
CN111857061A (en) * 2019-04-28 2020-10-30 北京国电智深控制技术有限公司 Method, device and system for realizing calculation task and storage medium
CN110543361A (en) * 2019-07-29 2019-12-06 中国科学院国家天文台 Astronomical data parallel processing device and method
CN110543361B (en) * 2019-07-29 2023-06-13 中国科学院国家天文台 Astronomical data parallel processing device and astronomical data parallel processing method
CN110866167A (en) * 2019-11-14 2020-03-06 北京知道创宇信息技术股份有限公司 Task allocation method, device, server and storage medium
CN110866167B (en) * 2019-11-14 2022-09-20 北京知道创宇信息技术股份有限公司 Task allocation method, device, server and storage medium

Similar Documents

Publication Publication Date Title
CN103034534A (en) Electric power system analysis parallel computing method and system based on grid computation
US20100125847A1 (en) Job managing device, job managing method and job managing program
CN102195886A (en) Service scheduling method on cloud platform
Li Energy-efficient task scheduling on multiple heterogeneous computers: Algorithms, analysis, and performance evaluation
CN102981890A (en) Computing task and virtual machine deploying method within a virtual data center
CN103401939A (en) Load balancing method adopting mixing scheduling strategy
Deng et al. Dynamic virtual machine consolidation for improving energy efficiency in cloud data centers
CN104375882A (en) Multistage nested data drive calculation method matched with high-performance computer structure
CN104331331A (en) Resource distribution method for reconfigurable chip multiprocessor with task number and performance sensing functions
CN105867998A (en) Virtual machine cluster deployment algorithm
Wang et al. Task scheduling for MapReduce in heterogeneous networks
CN113806606A (en) Three-dimensional scene-based electric power big data rapid visual analysis method and system
Zhang et al. Performance-aware energy-efficient virtual machine placement in cloud data center
Mo et al. Heet: Accelerating Elastic Training in Heterogeneous Deep Learning Clusters
Maraghi et al. Batch arrival vacation queue with second optional service and random system breakdowns
Xu et al. Hybrid scheduling deadline-constrained multi-DAGs based on reverse HEFT
Liu et al. Dynamic fair division of multiple resources with satiable agents in cloud computing systems
CN115827237A (en) Storm task scheduling method based on cost performance
CN104166593A (en) Method for computing asynchronous and concurrent scheduling of multiple application functions
CN112148475B (en) Loongson big data integrated machine task scheduling method and system integrating load and power consumption
CN104090813A (en) Analysis modeling method for CPU (central processing unit) usage of virtual machines in cloud data center
CN103631659A (en) Schedule optimization method for communication energy consumption in on-chip network
CN108429704A (en) A kind of node resource distribution method and device
Duan et al. Accelerating DAG-Style Job Execution via Optimizing Resource Pipeline Scheduling
Lu et al. Synchronous Dislocation Scheduling Quantum Algorithm Optimization in Virtual Private Cloud Computing Environment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130410