CN107506932A - Parallel computation method and system for power grid risk scenarios - Google Patents

Parallel computation method and system for power grid risk scenarios

Info

Publication number
CN107506932A
CN107506932A (application CN201710756256.9A)
Authority
CN
China
Prior art keywords
power grid
grid risk
computing engine
parallel computation
scenario data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710756256.9A
Other languages
Chinese (zh)
Inventor
莫文雄
章磊
胡金星
张志亮
王莉
何兵
孙煜华
冯圣中
吴永欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Guangzhou Power Supply Bureau Co Ltd
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Guangzhou Power Supply Bureau Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS, Guangzhou Power Supply Bureau Co Ltd filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201710756256.9A priority Critical patent/CN107506932A/en
Publication of CN107506932A publication Critical patent/CN107506932A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0635 Risk analysis of enterprise or organisation activities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Electricity, gas or water supply


Abstract

The present invention relates to a parallel computation method and system for power grid risk scenarios. A parallel computing pool is built, the pool comprising multiple compute nodes, each compute node comprising multiple computing engines; multiple sets of power grid risk scenario data are obtained, and each computing engine is assigned its corresponding scenario data; each set of scenario data is transmitted to its assigned computing engine, and the engines compute the scenario data in parallel; the parallel computation results are then collected. In this scheme, the power grid risk scenario data include data for large-scale branch outages. By distributing the scenario data and transferring each set to its assigned computing engine for parallel computation, the scheme can efficiently complete power grid risk scenario computing tasks that are computation-intensive, iteration-heavy, and subject to strict real-time requirements.

Description

Parallel computation method and system for power grid risk scenarios
Technical field
The present invention relates to the field of electric power network technology, and in particular to a parallel computation method and system for power grid risk scenarios.
Background
With economic growth and population increase, power grids keep growing in scale, and society demands ever higher grid safety and reliability. To ensure normal grid operation, researchers have proposed power grid risk assessment indices and computational methods. Because the number of anticipated fault scenarios to be considered is very large, and each fault scenario must be evaluated, the computational load of power flow analysis, load shedding, and related calculations is enormous.
To improve the efficiency of power grid risk assessment, the usual approach is to purchase high-performance servers or workstations, but these are expensive to buy and maintain and therefore hard to obtain. Attempts have also been made to optimize the algorithms, but because the techniques involved are broad and deep, algorithmic optimization yields only marginal efficiency gains. The computational efficiency of power grid risk assessment therefore remains low.
Summary of the invention
On this basis, to address the low computational efficiency of traditional power grid risk assessment, a parallel computation method and system for power grid risk scenarios are provided.
A parallel computation method for power grid risk scenarios comprises the following steps:
building a parallel computing pool, the pool comprising multiple compute nodes, each compute node comprising multiple computing engines;
obtaining multiple sets of power grid risk scenario data, and assigning each computing engine its corresponding scenario data;
transmitting each set of scenario data to its assigned computing engine, and computing each set of scenario data in parallel on the engines;
obtaining the parallel computation results.
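As an illustrative sketch only, the four steps above might look as follows in MATLAB (the platform the description names later); `evaluateScenario`-style risk evaluation is replaced by a `norm` placeholder, since the patent does not define the per-scenario routine:

```matlab
% Hedged end-to-end sketch of steps 1-4; the risk evaluation is a stand-in.
pool = parpool('local');                        % 1. build the computing pool
sceneData = num2cell(randn(20, 100), 2);        % 2. placeholder scenario data
nScenes = numel(sceneData);
spmd                                            % 3. same program on every engine
    idx  = mod((1:nScenes) - 1, numlabs) + 1 == labindex;
    mine = sceneData(idx);                      % this engine's scenarios
    localRes = cellfun(@(s) norm(s), mine);     % stand-in risk evaluation
end
results = vertcat(localRes{:});                 % 4. collect per-engine results
delete(pool);
```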
A parallel computation system for power grid risk scenarios comprises the following modules:
a pool construction module, for building a parallel computing pool comprising multiple compute nodes, each with multiple computing engines;
a scenario data distribution module, for obtaining multiple sets of power grid risk scenario data, assigning each computing engine its corresponding scenario data, and transmitting each set to its assigned engine;
a parallel computation module, for computing each set of scenario data in parallel on the engines;
a result collection module, for obtaining the parallel computation results.
According to the above method and system of the present invention, a parallel computing pool is built from multiple compute nodes, the scenario data sets are assigned to the computing engines of each node, the engines compute the risk scenario data in parallel, and the per-engine results are collected. By distributing the scenario data and transferring each set to its assigned engine, the scheme efficiently completes risk scenario computing tasks that are computation-intensive, iteration-heavy, and subject to strict real-time requirements, thereby improving the computational efficiency of power grid risk scenarios.
A readable storage medium stores an executable program which, when executed by a processor, implements the steps of the above parallel computation method for power grid risk scenarios.
A computing device comprises a memory, a processor, and an executable program stored in the memory and runnable on the processor; when the processor executes the program, the steps of the above method are implemented.
In accordance with the above method, the present invention thus also provides a readable storage medium and a computing device for implementing the method by means of a program.
Brief description of the drawings
Fig. 1 is a flow diagram of the parallel computation method for power grid risk scenarios in one embodiment of the invention;
Fig. 2 is a flow diagram of the step of building the parallel computing pool in another embodiment of the invention;
Fig. 3 is a flow diagram of the step of distributing scenario data in another embodiment of the invention;
Fig. 4 is a structural diagram of the parallel computation system in another embodiment of the invention;
Fig. 5 is a flow diagram of the method in another embodiment of the invention;
Fig. 6 is a flow diagram of the method in another embodiment of the invention;
Fig. 7 is the parallel computing architecture diagram of the method in another embodiment of the invention.
Detailed description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the embodiments described here only explain the invention and do not limit its scope of protection.
Referring to Fig. 1, a flow diagram of the parallel computation method for power grid risk scenarios in one embodiment of the invention, the method in this embodiment comprises the following steps:
Step S110: build a parallel computing pool, the pool comprising multiple compute nodes, each with multiple computing engines.
In this step, a compute node can be a computer comprising a multi-core processor and storage; by connecting the processors and storage of multiple compute nodes, a parallel computing pool is constructed.
Step S120: obtain multiple sets of power grid risk scenario data and assign each computing engine its corresponding scenario data.
In this step, distributing the risk scenario data across the computing engines balances the computational load on each engine.
Step S130: transmit each set of scenario data to its assigned computing engine and compute the sets in parallel on the engines.
In this step, once the scenario data have been assigned, each set is transmitted according to the allocation to the computing engines of the compute nodes for parallel computation.
Step S140: obtain the parallel computation results.
In this step, because the results are still stored on the individual computing engines after the parallel computation completes, each engine must return its result.
In this embodiment, a parallel computing pool is built from multiple compute nodes, the scenario data are assigned to the engines of each node, the engines compute the risk scenario data in parallel, and the per-engine results are collected. By distributing the scenario data and transferring each set to its assigned engine, the scheme efficiently completes risk scenario computing tasks that are computation-intensive, iteration-heavy, and subject to strict real-time requirements.
Optionally, the power grid risk scenario data may include branch outage data for a large-scale grid; by enumerating the outage of each branch, multiple sets of risk scenario data are formed.
Referring to Fig. 2, a flow diagram of building the parallel computing pool in one embodiment, the step in this embodiment comprises:
Step S111: obtain multiple compute nodes, add each to a cluster, and create task management for the cluster;
Step S112: configure the number of computing engines for each node, add the cluster to the initial cluster configuration file to obtain the target cluster configuration file, and create the parallel computing pool from the target configuration file.
In this embodiment, the nodes participating in the parallel computation are obtained and added to the same cluster; after task management is created and the engine counts are configured, the cluster is added to the initial cluster configuration file, and the resulting target configuration file can be used to create the pool. In practice, a dedicated cluster must be created for each distinct computing task. Because cluster systems comprise many nodes, cluster configuration is intricate; in large systems especially, configuring nodes by hand is inefficient, and using a cluster configuration file improves the efficiency of creating and managing the pool.
Optionally, these operations can be performed on one of the compute nodes in the pool: the nodes available for parallel computation are located by host IP address or host name, the discovered nodes are added to a cluster, and tasks are managed uniformly by creating them against that cluster.
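The profile-driven pool creation described above can be sketched in MATLAB; 'GridRiskCluster' is an assumed profile name (one that a saved cluster configuration would provide), not a name given in the patent:

```matlab
% Sketch: create the parallel computing pool from a saved cluster profile.
c = parcluster('GridRiskCluster');   % load the target cluster configuration
disp(c.NumWorkers)                   % engines contributed by all nodes
pool = parpool(c);                   % create the pool on that cluster
% ... submit parallel work here ...
delete(pool);                        % tear the pool down when finished
```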
In one embodiment, configuring the number of computing engines for each node comprises the following step:
setting the engine count of the current compute node to the node's number of CPU cores.
In this embodiment, because risk scenario computation is data-intensive, it performs essentially no I/O during calculation and rarely blocks on I/O; on the other hand, a single CPU core serializes multiple processes, and each computing engine corresponds to one process. Setting each node's engine count equal to its CPU core count therefore reduces the performance loss caused by process scheduling and makes full use of each node's computing resources. In one embodiment, all compute nodes are on the same local area network and transmit data to one another over the LAN.
In this embodiment, the risk scenario data involved in practice are very large, and the scenario data must be transferred to the computing engines for parallel computation, so high data transmission rates are required. Networking the pool's compute nodes over dedicated transmission media gives higher transmission rates and lower latency, improving scenario data transfer within the pool and in turn the efficiency of the parallel computation.
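A minimal sketch of the one-engine-per-core rule; the `feature('numcores')` query used here is a long-standing but undocumented MATLAB call and is an assumption on our part:

```matlab
% Sketch: size the local pool to the node's physical core count so each
% computing engine (worker process) maps onto one CPU core.
nCores = feature('numcores');        % physical cores on this compute node
c = parcluster('local');
c.NumWorkers = nCores;               % one engine per core
pool = parpool(c, nCores);
```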
In one embodiment, after the pool is built, any compute node in the pool is chosen as the client, and all risk scenario data are transmitted to the client; the client then assigns each computing engine its corresponding scenario data.
The step of transmitting each data set to its engine comprises the following step:
transmitting each data set from the client to the corresponding computing engine.
In this embodiment, because risk scenario computation involves a large amount of arithmetic, using a client to distribute the computing tasks partitions them more effectively and improves parallel efficiency.
Referring to Fig. 3, a flow diagram of assigning each computing engine its scenario data in one embodiment, the step in this embodiment comprises:
Step S121: obtain the total number of computing engines in the pool and the engine index of each engine;
Step S122: number the risk scenario data sets sequentially to obtain scenario numbers;
Step S123: take the current scenario number plus one modulo the engine total to obtain the target engine index corresponding to that scenario number, and assign the scenario data set to the engine with that index.
In this embodiment, with this distribution method every scenario data set is assigned exactly one computing engine responsible for computing it, and the amount of scenario data assigned to each engine is essentially equal. The pool is thus load-balanced, which improves CPU utilization, minimizes engine idle time, and raises parallel efficiency.
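One plausible reading of steps S121-S123 in MATLAB (the exact offset in the modulo is ambiguous in the text; here scenario i goes to engine mod(i, total) + 1, which still yields per-engine loads differing by at most one):

```matlab
% Sketch: round-robin assignment of scenario numbers to engine indices.
nEngines = 8;                            % engine total in the pool (example)
sceneNo  = 1:20;                         % sequential scenario numbers
target   = mod(sceneNo, nEngines) + 1;   % target engine index per scenario
% With 20 scenarios over 8 engines, each engine receives 2 or 3 scenarios.
```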
In one embodiment, the parallel computing platform is MATLAB, and the risk scenario data are computed in parallel using the platform's single-program-multiple-data method.
In this embodiment, the risk scenario data are voluminous, but the computation for each scenario is similar; only the data differ. SPMD (Single Program Multiple Data) is a parallel computation method in which, using MATLAB's parallel framework, the same program runs on multiple computing engines, with code in the program addressed to different data. Every engine applies the same program to different risk scenario data; because the program contains the necessary logic, each engine executes only its own part of the statements rather than the whole program, and return values are stored in objects of the Composite type. This improves the efficiency of the parallel computation.
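A hedged SPMD sketch, under the assumptions that `sceneData` is a cell array of scenario data already visible on the client and `evaluateScenario` is a hypothetical per-scenario risk routine not defined in the patent:

```matlab
% Sketch: the same program runs on every engine; labindex/numlabs let each
% engine pick out only its own share of the scenario data.
spmd
    idx  = mod((1:numel(sceneData)) - 1, numlabs) + 1 == labindex;
    mine = sceneData(idx);                         % this engine's scenarios
    localResult = cellfun(@evaluateScenario, mine, ...
                          'UniformOutput', false); % stays on the engine
end
% localResult is a Composite: one cell of results per computing engine.
```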
In one embodiment, the parallel computation step comprises the following steps:
the current computing engine transfers its assigned risk scenario data to the GPU associated with that engine;
the GPU path depends on the type of the received scenario data: if the received data are a dense matrix, the dense matrix is transferred into GPU video memory with the gpuArray() function and computed in parallel there;
if the received data are a sparse matrix, MEX functions and CUDA computing libraries are used to compute the sparse matrix in parallel;
if the received data comprise multiple values that are scalars, a custom function is generated with the arrayfun function, and the generated custom function is used to compute the received scenario data in parallel.
In this embodiment, after receiving its assigned scenario data, the computing engine transfers the data to a GPU (Graphics Processing Unit) for parallel computation. The GPU is the core component of a computer's display hardware; each GPU contains many stream processors which, for image processing, are designed to work in parallel. CUDA (Compute Unified Device Architecture) is the GPU's general-purpose parallel computing architecture and applies the GPU to floating-point arithmetic. Exploiting the GPU's floating-point performance, which far exceeds the CPU's, together with its high memory bandwidth and cost-effectiveness, applying the CUDA architecture to the parallel computation of large-scale grid risk scenarios helps improve computational efficiency.
Matrix calculation is the basic building block of risk scenario computation. In a matrix, if the number of zero elements far exceeds the number of non-zero elements and the non-zero elements are distributed irregularly, the matrix is called sparse; conversely, if non-zero elements predominate, the matrix is called dense.
To improve the parallel efficiency of scenario computation, scenario data of dense matrix type are copied into GPU video memory with gpuArray(), producing matrices and vectors with the gpuArray attribute that are automatically computed on the GPU. For scenario data of sparse matrix type, the same treatment as for dense matrices would consume large amounts of GPU video memory and limit the achievable problem size, so MEX functions and CUDA computing libraries are used to compute the sparse matrix data in parallel; the MEX functions can call the various CUDA computing libraries to perform mixed GPU parallel computation. Specifically, the two sparse-matrix numerical libraries provided with CUDA, cuSPARSE and cuSOLVER, can be used for GPU parallel computation. The case where the received scenario data comprise multiple scalar values corresponds to having more than one input or output variable.
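A minimal gpuArray sketch of the dense-matrix path; the matrix here is random filler, not real grid data (on real data it might be, e.g., an admittance matrix):

```matlab
% Sketch: dense scenario matrices go to GPU video memory via gpuArray;
% subsequent operations on them then execute on the GPU automatically.
A  = rand(2048);                % placeholder dense matrix
gA = gpuArray(A);               % copy into GPU video memory
gB = gA * gA.';                 % computed on the GPU
B  = gather(gB);                % copy the result back to host RAM
```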
In one embodiment, obtaining the parallel computation results comprises the following step:
receiving the results returned by each compute node, where each node collects the results of the computing engines it contains using the gather function.
In this embodiment, because the risk scenario data were assigned to the engines for parallel computation, the results returned by each engine must be obtained. Each node first uses the gather function to retrieve the results from GPU video memory back into physical memory, and then returns them. Calling gather improves the speed of collecting the per-engine results.
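Assuming an earlier spmd block left each engine's results in a Composite named `localResult` (and any GPU-resident results were already gather()-ed to worker memory), collecting everything on the client can be sketched as:

```matlab
% Sketch: index the Composite to pull each engine's result to the client.
nEngines   = pool.NumWorkers;
allResults = cell(1, nEngines);
for w = 1:nEngines
    allResults{w} = localResult{w};   % fetch engine w's local result
end
merged = vertcat(allResults{:});      % flatten into one result list
```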
In accordance with the above parallel computation method for power grid risk scenarios, the present invention also provides a parallel computation system for power grid risk scenarios; embodiments of the system are described in detail below.
Referring to Fig. 4, a structural diagram of the system in one embodiment of the invention, the system in this embodiment comprises:
a pool construction module 210, for building a parallel computing pool comprising multiple compute nodes, each with multiple computing engines;
a scenario data distribution module 220, for obtaining multiple sets of power grid risk scenario data, assigning each computing engine its corresponding scenario data, and transmitting each set to its assigned engine;
a parallel computation module 230, for computing each set of scenario data in parallel on the engines;
a result collection module 240, for obtaining the parallel computation results.
In one embodiment, the pool construction module 210 obtains multiple compute nodes, adds each to a cluster, and creates task management for the cluster; it configures the engine count of each node, adds the cluster to the initial cluster configuration file to obtain the target configuration file, and creates the pool from the target configuration file.
In one embodiment, the pool construction module 210 sets the engine count of the current compute node to the node's CPU core count.
In one embodiment, the pool construction module 210 places all compute nodes on the same LAN, so that data are transmitted between the nodes over the LAN.
In one embodiment, the scenario data distribution module 220 chooses any compute node in the pool as the client and transmits all risk scenario data to the client; the client assigns each engine its scenario data, and each data set is transmitted from the client to the corresponding engine.
In one embodiment, the scenario data distribution module 220 obtains the engine total and the engine indices in the pool, numbers the risk scenario data sets sequentially to obtain scenario numbers, takes the current scenario number plus one modulo the engine total to obtain the target engine index, and assigns the scenario data set to the engine with that index.
In one embodiment, the pool construction module 210 builds the pool on the MATLAB platform, and the parallel computation module 230 computes the scenario data in parallel using the single-program-multiple-data method of the MATLAB platform.
In one embodiment, the scenario data distribution module 220 has the current engine transfer its scenario data to the GPU associated with that engine; when the received data are a dense matrix, the matrix is transferred into GPU video memory with gpuArray() and computed in parallel there;
when the received data are a sparse matrix, MEX functions and CUDA computing libraries compute the sparse matrix in parallel;
when the received data comprise multiple scalar values, a custom function generated with arrayfun computes the received data in parallel.
In one embodiment, the result collection module 240 receives the results returned by each compute node, where each node collects the results of the computing engines it contains using the gather function.
The parallel computation system of the present invention corresponds one-to-one with the parallel computation method of the present invention; the technical features and advantages described in the embodiments of the method apply equally to the embodiments of the system.
In accordance with the above method, embodiments of the present invention also provide a readable storage medium and a computing device. The storage medium stores an executable program which, when executed by a processor, implements the steps of the method; the computing device comprises a memory, a processor, and an executable program stored in the memory and runnable on the processor, and implements the steps of the method when the processor executes the program.
In a specific embodiment, the power grid risk scenario parallel computation method comprises the following steps:
A parallel computation pool is built on the MATLAB platform; the pool comprises multiple compute nodes, and each compute node comprises multiple computing engines. Building the pool comprises the following steps:
Multiple compute nodes are obtained; MATLAB is added to each node's system firewall, the MATLAB distributed computing server is installed on each node, and the service is started. All compute nodes are on the same local area network, and data between compute nodes is transferred over that network.
Each compute node is added to a cluster through the Admin Center interface provided by MATLAB, and a job scheduler is created for the cluster in MATLAB Job Scheduler.
The number of computing engines on each compute node is configured to equal that node's number of CPU cores; the cluster is added to the initial cluster configuration file to obtain the target cluster configuration file, and a monitoring job is created for the configuration to watch the cluster's task status. The parallel computation pool is then created from the target cluster configuration file.
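The pool-construction rule above — one computing engine per CPU core, with work spread across the resulting workers — can be sketched outside MATLAB as well. The following Python sketch uses a local thread pool purely as a stand-in for the MATLAB pool; `run_parallel` is a hypothetical helper name, not from the patent.

```python
# Illustrative analogy only: the patent builds its pool with the MATLAB
# distributed computing server and Admin Center; a local thread pool
# stands in for it here.
import os
from concurrent.futures import ThreadPoolExecutor

def run_parallel(fn, scenarios):
    # One computing engine per CPU core, mirroring the patent's rule that
    # each node's engine count equals its CPU core count.
    n_workers = os.cpu_count() or 1
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # pool.map preserves the order of the input scenarios.
        return list(pool.map(fn, scenarios))

print(run_parallel(lambda x: x * x, [1, 2, 3]))
```

A thread pool is used here only because it needs no serialization setup; the patent's actual pool spans multiple machines over a LAN.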
Multiple power grid risk scenario datasets are obtained, and a corresponding power grid risk scenario dataset is assigned to each computing engine.
After the parallel computation pool is built, any one compute node in the pool is chosen as the client; each power grid risk scenario dataset is transmitted to the client, and the client forwards each dataset to its corresponding computing engine.
Assigning a corresponding power grid risk scenario dataset to each computing engine comprises the following steps:
The total number of computing engines in the parallel computation pool is obtained, and the labindex() function returns the engine index of each computing engine;
The power grid risk scenario datasets are numbered sequentially, yielding scene numbers;
The current scene number is taken modulo the total number of computing engines and one is added, giving the target engine index corresponding to the current scene number; the power grid risk scenario dataset with the current scene number is assigned to the computing engine with the target engine index.
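The assignment rule above — target engine index = mod(scene number, total engines) + 1, with 1-based numbering as in MATLAB — can be sketched as follows. Python is used here as a language-neutral illustration; `assign_scenarios` is a hypothetical helper name.

```python
# Round-robin scene-to-engine assignment as described in the patent:
# engine index = (scene number mod number of engines) + 1, 1-based.
def assign_scenarios(num_scenarios, num_engines):
    assignment = {}
    for scene in range(1, num_scenarios + 1):
        engine = scene % num_engines + 1  # target engine index for this scene
        assignment.setdefault(engine, []).append(scene)
    return assignment

# Five scenes over three engines: each engine receives a near-equal share.
print(assign_scenarios(5, 3))  # {2: [1, 4], 3: [2, 5], 1: [3]}
```

The modulo mapping balances the scenario load across engines without any central scheduler state.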
Parallel computation is carried out with the parallel computing toolbox provided by MATLAB, specifically as follows:
Each power grid risk scenario dataset is transmitted to its corresponding computing engine, and each computing engine computes its scenario data in parallel; the parallel computation is realized with the single-program multiple-data (SPMD) method provided by the MATLAB platform. The parallel computation comprises the following steps:
The current computing engine transfers its corresponding power grid risk scenario dataset to the GPU associated with that engine. If the received power grid risk scenario data is a dense matrix, the gpuArray() function transfers the dense matrix into GPU memory for parallel computation; if the received data is a sparse matrix, MEX functions and the CUDA computing libraries are used to compute the sparse matrix in parallel; if the received data is multiple values, each a scalar, a custom function is generated through arrayfun, and the received power grid risk scenario data is computed in parallel with that custom function.
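The three-way dispatch above can be sketched as a classification step. This is a hedged illustration only: the sparsity threshold, the data representation (nested lists), and all names are assumptions for the sketch, not from the patent, which performs the dense/sparse/scalar routing inside MATLAB.

```python
# Hedged sketch of the dispatch rule: dense matrices take the gpuArray
# path, sparse matrices the MEX + CUDA-library path, and batches of
# scalars the custom-function (arrayfun) path. Threshold is illustrative.
def classify_scenario_data(data, sparsity_threshold=0.5):
    # A flat list of scalars goes to the element-wise custom-function path.
    if all(not isinstance(row, list) for row in data):
        return "arrayfun_custom_function"
    total = sum(len(row) for row in data)
    zeros = sum(v == 0 for row in data for v in row)
    # Mostly-zero matrices are routed to the sparse CUDA libraries.
    if total and zeros / total > sparsity_threshold:
        return "mex_cuda_sparse"      # CUSPARSE/CUSOLVER via MEX
    return "gpuarray_dense"           # transfer to GPU memory with gpuArray
```

Routing by data shape this way lets each scenario use the GPU path best suited to its structure, which is the point of the patent's three cases.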
The parallel computation results are obtained. Each compute node collects the computation results of its computing engines through the gather function, and the results are returned in a variable of Composite type; the client then receives the results returned by each compute node, and the computation result of each compute node is retrieved by indexing with the engine index of its computing engines.
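The collection step above — per-engine results keyed by engine index, merged on the client — can be sketched as follows. Python dictionaries stand in for MATLAB's Composite type here; `collect_results` is a hypothetical helper name.

```python
# Sketch of result collection: each engine returns its share keyed by its
# engine index (like a Composite indexed by labindex); the client merges
# them back into one scene-number -> result map.
def collect_results(per_engine):
    merged = {}
    for engine_index in sorted(per_engine):  # index value by engine index
        merged.update(per_engine[engine_index])
    return merged

# Results for five scenes computed on three engines (engine -> {scene: value}):
print(collect_results({1: {3: 9}, 2: {1: 1, 4: 16}, 3: {2: 4, 5: 25}}))
```

Because each scene number appears on exactly one engine under the modulo assignment, the merge cannot overwrite any result.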
Referring to Figure 5, a flow diagram of the power grid risk scenario parallel computation method of another embodiment of the present invention is shown. The method in this embodiment comprises the following steps:
Step S310: build the parallel computation pool, comprising the following steps:
Install the same version of the MATLAB software on every compute node participating in the parallel computation, and add MATLAB to the firewall. Installing the same version avoids compatibility problems caused by version differences; and because the compute nodes in the pool exchange data over the local area network in the subsequent steps, the firewall must be configured in advance so that this interaction is not blocked.
Install the MATLAB distributed computing server on each compute node with administrator privileges, and start the service;
Configure all compute nodes on the same local area network, then add each compute node to a cluster using Admin Center; create a job scheduler in MATLAB Job Scheduler; configure the number of computing engines for each compute node. Preferably, the number of computing engines of each compute node equals that node's number of CPU cores;
Add the cluster to the cluster configuration file, and create a monitoring job for the configuration file to watch the cluster's task status.
Create the parallel computation pool from the cluster configuration file.
Step S320: obtain multiple power grid risk scenario datasets and assign a corresponding power grid risk scenario dataset to each computing engine, comprising the following steps:
Obtain the power grid risk scenario datasets to be computed, and number the scenarios in order;
Take each scene number modulo the number of computing engines and add one to the remainder; the result is the engine index of the computing engine responsible for executing that scenario.
Step S330: transmit each power grid risk scenario dataset to its corresponding computing engine, and have each computing engine compute its power grid risk scenario data in parallel, comprising the following steps:
Carry out the parallel computation with the MATLAB parallel computing toolbox, opening a parallel computation block with spmd-end; spmd-end marks the statements to be computed in parallel using the single-program multiple-data (SPMD) method. The MATLAB parallel computing toolbox is a toolbox for solving computation-heavy and data-intensive problems using multi-core processors, GPUs, and computer clusters; by using high-level constructs such as parallel for loops, special array types, and parallelized numerical algorithms, MATLAB applications can be parallelized without CUDA or MPI programming.
Return the engine index of the current computing engine with the labindex() function;
According to the numbering computed above, transmit each scenario to the computing engine with the corresponding engine index for parallel computation.
Each computing engine corresponds to one GPU; the GPU corresponding to each computing engine is selected by the gpuDevice() function and used for the computation, which includes:
Dense matrix computation: MATLAB provides built-in functions that can be parallelized directly on the GPU to improve the efficiency of dense matrix computation. Scenario data is copied into GPU memory by the gpuArray() function and computed in parallel on the GPU automatically;
Sparse matrix computation: because MATLAB's built-in functions do not support direct GPU parallel computation on sparse matrices, the CUSPARSE and CUSOLVER sparse numerical libraries provided free with CUDA are used, combined with the GPU-capable MEX interface provided by MATLAB, for hybrid GPU parallel computation;
Custom function computation: the arrayfun function provided by MATLAB, which supports custom functions, computes in parallel over scenario data having more than one input or output where each input is a scalar;
In this step the GPU acts as a coprocessor, assisting the CPU with operations that are highly parallel, data-intensive, and logically simple. The parallel computing capability of the GPU is powerful: it has a fast internal storage system, and the GPU hardware can manage thousands of parallel threads, all created and managed by the GPU itself, without any programming or management by the developer.
Step S340: obtain the parallel computation results, comprising the following steps:
Each computing engine collects its computation results through the gather function and returns them to the CPU, and the GPU memory is then reset;
Because the return value of each computing engine under the SPMD method is stored as a Composite type, the results must be returned in an object of Composite type. Optionally, the Composite object can be created and initialized before the parallel computation begins.
Index by engine index to obtain the computation result of each compute node.
In this embodiment, the overall flow is as shown in Figure 6. A parallel computing environment as shown in Figure 7 is built with the MATLAB distributed computing server, and the MATLAB parallel computing toolbox is used to call the CUDA computing libraries, efficiently completing risk scenario computation tasks that are computation-heavy, iteration-heavy, and real-time-critical. Risk scenario computation tasks can be computed in parallel across multiple computers with good fault tolerance; the approach exploits both the ease of MATLAB's own parallel computation and the suitability of GPUs for complex computation, helping to improve the efficiency of power grid risk scenario computation and realizing the parallelization of large-scale computation.
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not every possible combination of the technical features in the above embodiments has been described; nevertheless, as long as a combination of these technical features contains no contradiction, it is considered to be within the scope recorded in this specification.
The embodiments described above express only several implementations of the present invention, and their description is specific and detailed, but they are not therefore to be construed as limiting the scope of the patent. It should be noted that, for a person of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the invention, and these fall within the scope of protection of the invention. The scope of protection of this patent shall therefore be determined by the appended claims.

Claims (10)

1. A power grid risk scenario parallel computation method, characterized by comprising the following steps:
building a parallel computation pool, the parallel computation pool comprising multiple compute nodes, each compute node comprising multiple computing engines;
obtaining multiple power grid risk scenario datasets, and assigning a corresponding power grid risk scenario dataset to each computing engine;
transmitting each power grid risk scenario dataset to its corresponding computing engine, and performing parallel computation on each power grid risk scenario dataset by each computing engine;
obtaining the parallel computation results.
2. The power grid risk scenario parallel computation method according to claim 1, characterized in that the step of building the parallel computation pool comprises the following steps:
obtaining multiple compute nodes, adding each compute node to a cluster, and creating a job scheduler for the cluster;
configuring the number of computing engines of each compute node, adding the cluster to an initial cluster configuration file to obtain a target cluster configuration file, and creating the parallel computation pool according to the target cluster configuration file.
3. The power grid risk scenario parallel computation method according to claim 2, characterized in that the step of configuring the number of computing engines of each compute node comprises the following step:
configuring the number of computing engines of the current compute node to be the number of CPU cores of the current compute node.
4. The power grid risk scenario parallel computation method according to claim 2, characterized in that the compute nodes are on the same local area network, and data between compute nodes is transferred over the local area network.
5. The power grid risk scenario parallel computation method according to claim 2, characterized by further comprising the following steps:
choosing any one compute node in the parallel computation pool as a client, and transmitting each power grid risk scenario dataset to the client;
wherein the client assigns the corresponding power grid risk scenario dataset to each computing engine;
and the step of transmitting each dataset to its corresponding computing engine comprises the following step:
transmitting each dataset to the corresponding computing engine through the client.
6. The power grid risk scenario parallel computation method according to claim 1, characterized in that the step of assigning a corresponding power grid risk scenario dataset to each computing engine comprises the following steps:
obtaining the total number of computing engines in the parallel computation pool and the engine index of each computing engine;
numbering the power grid risk scenario datasets sequentially to obtain scene numbers;
taking the current scene number modulo the total number of computing engines and adding one to obtain the target engine index corresponding to the current scene number, and assigning the power grid risk scenario dataset with the current scene number to the computing engine with the target engine index.
7. The power grid risk scenario parallel computation method according to claim 1, characterized in that the parallel computation platform is MATLAB, and the power grid risk scenario datasets are computed in parallel using the single-program multiple-data (SPMD) method of the platform.
8. The power grid risk scenario parallel computation method according to claim 7, characterized in that the step of parallel computation comprises the following steps:
transferring, by the current computing engine, the corresponding power grid risk scenario dataset to the GPU corresponding to the current computing engine;
wherein the GPU determines the type of the received power grid risk scenario data: if the received power grid risk scenario data is a dense matrix, the dense matrix is transferred into GPU memory by the gpuArray() function for parallel computation;
if the received power grid risk scenario data is a sparse matrix, MEX functions and the CUDA computing libraries are used to compute the sparse matrix in parallel;
if the received power grid risk scenario data is multiple values, each a scalar, a custom function is generated through the arrayfun function, and the received power grid risk scenario data is computed in parallel using the custom function.
9. The power grid risk scenario parallel computation method according to claim 8, characterized in that the step of obtaining the parallel computation results comprises the following step:
receiving the computation results returned by each compute node; wherein each compute node collects the computation results of its own computing engines through the gather function.
10. A power grid risk scenario parallel computation system, characterized by comprising the following modules:
a parallel computation pool building module, configured to build a parallel computation pool, the parallel computation pool comprising multiple compute nodes, each compute node comprising multiple computing engines;
a scenario data distribution module, configured to obtain multiple power grid risk scenario datasets, assign a corresponding power grid risk scenario dataset to each computing engine, and transmit each power grid risk scenario dataset to its corresponding computing engine;
a parallel computation module, configured to perform parallel computation on each power grid risk scenario dataset by each computing engine;
a result collection module, configured to obtain the parallel computation results.
CN201710756256.9A 2017-08-29 2017-08-29 Power grid risk scenario parallel computation method and system Pending CN107506932A (en)


Publications (1)

Publication Number Publication Date
CN107506932A 2017-12-22




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2017-12-22)