CN109815002A - Distributed parallel computing platform and method based on online simulation - Google Patents


Info

Publication number
CN109815002A
CN109815002A (application CN201711162213.4A)
Authority
CN
China
Prior art keywords
data
server
node
distcomp
platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711162213.4A
Other languages
Chinese (zh)
Inventor
周智强
刘娜娜
陈继林
何春江
郭中华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
Electric Power Research Institute of State Grid Ningxia Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
Electric Power Research Institute of State Grid Ningxia Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, China Electric Power Research Institute Co Ltd CEPRI, and Electric Power Research Institute of State Grid Ningxia Electric Power Co Ltd
Priority to CN201711162213.4A
Publication of CN109815002A
Legal status: Pending


Landscapes

  • Computer And Data Communications (AREA)

Abstract

The present invention relates to a distributed parallel computing platform based on online simulation and a method for the same. The distributed parallel computing platform is composed of multiple high-performance servers divided into nodes of different functions, each class of node performing a distinct role; the complete online simulation distributed parallel computation is carried out by a gateway server, a dispatch server, a data server and calculation servers. The platform uses TCP together with a UDP communication mode based on a reliability protocol to realize data and file transfer across the whole platform, and uses multicast to deliver commands and files between the node servers. The distributed parallel computing platform that the invention builds with multicast and multi-core technology provides strong service guarantees for periodic online-mode power system calculation, and the system runs stably.

Description

Distributed parallel computing platform and method based on online simulation
Technical field
The present invention relates to distributed computing methods for the real-time calculation and result collection of the various calculations in power system online simulation, and in particular to a distributed parallel computing platform based on online simulation and a method for the same.
Background technique
Power system online simulation calculation involves electric power online calculation, multi-core technology, IP multicast technology, distributed file systems, I/O multiplexing, and multi-process concurrency. Of these, electric power online calculation:
With the gradual development of the grid interconnection pattern and the formation of electricity markets, system operating modes have become increasingly complex, fast-changing and varied, which demands faster, more accurate and more effective stability-control means. Clearly, the currently used "decentralized setting, off-line calculation, on-line matching" mode of stability control has become difficult to sustain. Moreover, with the growth of power system load levels, the expansion of grid scale, and rising requirements on supply reliability, preventing transient stability breakdown remains one of the most important tasks in today's power systems. Power simulation calculation under the online mode uses the grid's current real-time operating state and data, so that the analysis results match actual grid conditions, avoiding the over-conservative results produced by the offline mode; it can also free operating-mode personnel from heavy calculation work, which is of great significance to daily grid operation and control.
The stability of the power system is the basic premise of normal grid operation, and the lessons of repeated large-scale blackouts at home and abroad amply show that ensuring the safe and stable operation of the power system is of great importance to national social stability and rapid development. Surveying the causes and development of past large-scale blackouts abroad, they come down to: an unreasonable grid structure and a lack of preventive security-control means; incorrect relay protection action during faults, a lack of anticipation of or effective countermeasures against the large power-flow transfers caused by unit tripping, and imperfect stability-control systems; and poor information flow during grid accidents. Therefore, taking targeted measures to avoid large-scale blackouts, and realizing online analysis and research of the power system, can improve grid security and markedly raise the level of stable operation, and is the future direction of safe grid operation.
To guarantee the security of the power system, transient stability calculation and analysis are required in system planning, design and operation. The purpose of power system transient stability calculation is to determine whether, after the system suffers a large disturbance (such as a short-circuit fault, a sudden large load change, or the loss of large-capacity generation, transmission or transformer equipment), each generating set of the system can maintain synchronous operation and whether voltages can be held at a relatively reasonable level; to analyze the various factors affecting the transient stability of the power system; and, on that basis, to study measures that improve transient stability. Real-time calculation determines when stability fails to meet the prescribed requirements, in which case corresponding emergency stabilizing measures must be taken, such as generator tripping, fast valving, electrical braking, load shedding, or system splitting, improving decision speed and quickly securing safe and stable system operation.
Multi-core technology:
A multi-core processor places multiple execution kernels behind a single processor socket; the operating system can use all the associated resources and treats each execution kernel as an independent logical processor. By dividing tasks among multiple execution kernels, a multi-core processor can perform more tasks within a given clock cycle.
Multi-core technology lets a server process tasks in parallel, and multi-core systems are easier to scale: more processing power can be packed into a smaller form factor that consumes less power and generates less heat per unit of computation. The multi-core architecture lets current software run better and creates a more complete foundation for future software programming. Although software vendors are still earnestly exploring new concurrent processing models, with the porting to multi-core processors, existing software can support multi-core platforms without modification.
Operating systems designed to make full use of multiple processors run without modification. To take full advantage of multi-core technology, application developers need to put more thought into their programming, but the design process is the same as for current symmetric multiprocessing systems, and existing single-threaded applications will continue to run. Applications that are already threaded will show excellent performance scalability on multi-core processors; such software includes multimedia applications, engineering and other technical computing applications, and middle-tier and back-tier server applications such as application servers and databases.
The continuous growth of application demands has greatly driven computer development. Today's server applications requiring high throughput, multithreaded applications on multiprocessors, and Internet, P2P and ubiquitous-computing applications have all pushed computer performance steadily upward, and multi-core technology has become an important technical fulcrum of server technology. Complex applications such as the ERP and CRM systems of large enterprises, scientific computing, large government databases, and high-performance computing in fields such as medicine, telecommunications and finance are all demands that multi-core technology satisfies.
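As a concrete illustration of dividing independent tasks among execution kernels, the minimal sketch below fans a set of stand-in calculation cases across a pool of worker processes. The patent names no implementation language; Python's `multiprocessing` is used here purely for illustration, and `run_case` is a hypothetical placeholder for the platform's real calculation program.

```python
from multiprocessing import Pool

def run_case(case_id):
    # Stand-in for one independent simulation case; the real platform
    # would invoke its calculation program here.
    return case_id, case_id * case_id

def run_cases_in_parallel(case_ids, workers=4):
    # Each worker process is scheduled onto an available execution kernel,
    # so independent cases proceed simultaneously on a multi-core server.
    with Pool(processes=workers) as pool:
        return dict(pool.map(run_case, case_ids))

if __name__ == "__main__":
    print(run_cases_in_parallel(range(4)))
```

Because the cases share no state, the speedup comes for free from the operating system scheduling the workers onto separate cores.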
IP multicast technology:
IP multicast, also called multi-address broadcast or multicasting, is a TCP/IP network technology that allows one or more hosts (multicast sources) to send a single data packet to multiple hosts simultaneously, in one transmission.
Multicast, as point-to-multipoint communication, is one of the effective ways to save network bandwidth and reduce transmission time. In network data transmission applications, when a signal from one node must be delivered to multiple nodes, both repeated point-to-point transmission and broadcast seriously waste network bandwidth; multicast is the best choice. Multicast lets one or more multicast sources send a data packet only to a specific multicast group, and only hosts that have joined the group receive the packet. At present, IP multicast technology is widely applied in fields such as network audio and video broadcasting, network teleconferencing, AOD/VOD, multimedia distance education and scientific computing.
All hosts that receive multicast packets on the same IP multicast address form a multicast group. The membership of a multicast group changes at any time: a host can join or leave a group at any moment, there is no limit on the number or geographic location of group members, and one host can belong to several multicast groups. In addition, a host that does not belong to a multicast group can still send data packets to that group.
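The join/send mechanics described above can be sketched with standard UDP sockets. The group address, port and loopback interface below are illustrative assumptions, not values from the patent; the real platform would multicast case data over its own network.

```python
import socket

GROUP = "224.1.1.1"   # illustrative group address, not one named by the patent
PORT = 45600          # illustrative port

def join_group(group=GROUP, port=PORT, iface="127.0.0.1"):
    """Receiver side: any host that joins the group receives its packets."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # IP_ADD_MEMBERSHIP takes the group address plus the local interface.
    mreq = socket.inet_aton(group) + socket.inet_aton(iface)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def make_sender(iface="127.0.0.1"):
    """Sender side: a source need not belong to the group to send to it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                    socket.inet_aton(iface))
    return sock
```

One `sendto` by the source reaches every member of the group, which is why multicast saves bandwidth relative to repeated point-to-point sends.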
Distributed file system:
A distributed file system (Distributed File System) is a network file system based on a client/server architecture. A typical network file system may include multiple server sides accessed by many clients, and its peer-to-peer nature allows some servers to play the dual role of both client and server. For example, a user can "publish" a file directory for other clients to access; to those clients the directory is no different from a local drive. Three basic distributed file systems currently exist: the Network File System (NFS), the Andrew File System (AFS), and the Distributed File System (DFS).
The Network File System (NFS) was developed by Sun Microsystems as the earliest TCP/IP network shared file system. Sun estimates that more than 3.1 million systems run NFS today, ranging from mainframes down to PCs, of which at least 80% are non-Sun platforms. NFS was originally designed for local-area networks of directly connected diskless workstations and servers. The appearance of lower-priced, higher-performance Linux computing clusters, multi-core processors and blade product lines created a sudden demand for more efficient file access, and the protocol, defined in 1984, clearly could not satisfy users' speed requirements. "NFS has felt pressure from cluster file systems such as Lustre and GPFS, and the custom file system technology of Web 2.0 service providers such as Google's GFS has also exerted influence on NFS." NFS has now advanced to the pNFS stage, i.e. NFS version 4.1. pNFS represents the most important functional upgrade of NFS in more than a decade: its two major achievements are standardizing parallel I/O and allowing clients to connect directly to storage devices in parallel.
The Andrew File System (AFS) is similar in structure to NFS. It was developed by the Information Technology Center (ITC) of Carnegie Mellon University, and Transarc, a company formed by former ITC staff, is now responsible for its development and sale. AFS is enhanced relative to NFS; at present, the Coda file system, based on AFS-2, has been successfully deployed.
The Distributed File System (DFS) is a version of AFS, forming the file system portion of the Distributed Computing Environment (DCE) of the Open Software Foundation (OSF).
I/O multiplexing and multi-process concurrency:
I/O multiplexing means that once the kernel finds that one or more I/O conditions specified by a process are ready for reading, it notifies the process; the application can thus monitor multiple I/O ports simultaneously to judge whether the operations on them can proceed, achieving the goal of time multiplexing. It suits the following situations: (1) when a client handles multiple descriptors, typically interactive input together with network sockets, I/O multiplexing must be used; (2) when one client handles multiple sockets at the same time; (3) when a TCP server must handle both the listening socket and the connected sockets, I/O multiplexing is generally also used; (4) when one server must handle both TCP and UDP; (5) when one server must handle multiple services or multiple protocols.
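The mechanism above is what `select` provides: one process watches several descriptors at once and is told which are ready. A minimal sketch (illustrative only; the patent does not specify its multiplexing API):

```python
import select
import socket

def wait_readable(socks, timeout=1.0):
    """Watch several descriptors at once and return the ones ready to read,
    so one process can serve many sockets without blocking on any single one."""
    readable, _, _ = select.select(socks, [], [], timeout)
    return readable
```

A server would pass its listening socket and all connected sockets to `wait_readable` in one call, covering situations (2)-(5) above with a single loop.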
Multi-process concurrency is based on multi-core technology; the simplest way to construct concurrency is with multiple processes, for example via the fork function. A typical concurrent server accepts client connection requests in the parent process, then creates a new child process to serve each new client.
The characteristics of multiple processes are that the processes are mutually independent: a child process crashing does not affect the stability of the main process; performance can easily be scaled by adding CPUs; the impact of thread locking and unlocking is reduced as far as possible, greatly improving system performance; and each child process has its own 2 GB address space and related resources, so the attainable aggregate performance ceiling is very high.
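The fork-per-client pattern just described can be sketched as follows. This is a generic illustration of the technique, not the platform's actual server code; the upper-casing echo in `serve_client` is a hypothetical stand-in service.

```python
import os
import socket

def serve_client(conn):
    # Stand-in service: echo the request back upper-cased.
    conn.sendall(conn.recv(1024).upper())
    conn.close()

def fork_server(listener, max_clients):
    """Accept in the parent; fork one child per client, so a crashing child
    never disturbs the parent or its siblings."""
    for _ in range(max_clients):
        conn, _ = listener.accept()
        if os.fork() == 0:        # child: serve this one client, then exit
            listener.close()
            serve_client(conn)
            os._exit(0)
        conn.close()              # parent: hand the socket off and loop
    while True:                   # reap finished children
        try:
            os.wait()
        except ChildProcessError:
            break
```

The parent never touches client I/O, so a fault in one child's service logic is contained entirely within that child's address space.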
However, with the rapid development of science and technology, the real-time computing performance of the various existing online simulation calculations falls far short of the needs of development.
Summary of the invention
To make the real-time performance ceiling of the various online simulation calculations more powerful, the object of the present invention is to provide a distributed parallel computing platform based on online simulation and a method for the same. The distributed parallel computing platform that the invention builds with multicast and multi-core technology provides strong service guarantees for periodic online-mode power system calculation, and the system runs stably.
The purpose of the present invention is realized by adopting the following technical solutions:
The present invention provides a distributed parallel computing platform based on online simulation, improved in that the platform consists of a gateway server, a dispatch server, a data server, calculation node servers and communication middleware; the gateway server, data server and calculation node servers are connected to the dispatch server;
The dispatch server serves as the scheduling node, responsible for scheduling and controlling user tasks and for result collection;
The data server serves as the data node, storing historical data and loading results into the database;
The calculation node servers serve as calculation nodes; multiple calculation node servers form a distributed cluster responsible for parallel data computation;
The gateway server is responsible for the platform's unified external interface and for periodically sending computation requests, including docking with other systems and data synchronization;
Communication middleware is deployed on each of the gateway server, dispatch server, data server and calculation node servers as a guardian, running as a daemon process; each calculation node in the distributed cluster starts one communication daemon for data transfer requests and service requests.
Further, the gateway server deploys a timed-task startup application (named yjq; a non-resident process that sends a computation request message to the middleware through this application, exits after running, and does not run again until the next scheduled start), a data integration application, and the communication middleware. The data integration application is responsible for periodically updating the calculation source data and processing the corresponding new data; the timestamp of the online simulation real-time data is marked by the file gd.QS; the yjq application issues a computation request to the dispatch server through the communication middleware every 15 minutes.
Further, the dispatch server is also responsible for receiving and responding to computation requests; managing and controlling calculation node resources; scheduling and distributing tasks; multicasting and unicasting calculation data to the corresponding calculation nodes and data node; collecting calculation results and invoking the corresponding application programs; summarizing and analyzing the results; and finally notifying the corresponding client of the calculation-completion status information;
The data server is also responsible for saving historical result data, and deploys the platform application and the communication middleware program; the platform application stores the result data forwarded by the dispatch server into the corresponding directory and writes the calculation result summary information into the database.
Further, the calculation node server is also responsible for responding to the dispatch server's calculation command requests, invoking the calculation program to perform the corresponding operation, and returning the calculation result to the scheduling node server.
Further, calculation nodes on the platform can dynamically join or leave the distributed computing management platform, providing automatic extension of the communication nodes;
The distributed parallel computing platform program DistComp connects to each upper-layer application and to the communication middleware using TCP long and short connections; data is transferred between the scheduling node, the calculation nodes and the data node using a UDP communication pattern.
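The patent does not publish the reliability protocol it layers over UDP, so the sketch below only illustrates the general idea such a layer typically uses: stop-and-wait, resending a datagram until the peer acknowledges it. The `b"ACK"` token and retry counts are assumptions for illustration.

```python
import socket

def send_with_ack(sock, payload, addr, retries=3, timeout=0.5):
    """Stop-and-wait sketch: resend the datagram until the peer answers ACK,
    or give up after `retries` attempts."""
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(payload, addr)
        try:
            ack, _ = sock.recvfrom(16)
            if ack == b"ACK":
                return True
        except socket.timeout:
            continue
    return False
```

Layering acknowledgements over UDP keeps multicast's efficiency for the common case while still detecting lost datagrams.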
The present invention provides an implementation method using the distributed parallel computing platform based on online simulation according to any one of claims 1 to 5, improved in that the method includes the following steps:
(1) the yjq application on the gateway node server transmits the integrated user data to the dispatch server once every 15-minute cycle;
(2) after receiving the computation request, the dispatch server, as the scheduling node, distributes the calculation tasks to each available calculation node server according to the calculation node resource situation;
(3) after receiving the computation request, each calculation node server invokes the calculation program; when the calculation is complete, the calculation result is passed back to the dispatch server;
(4) after the dispatch server receives the calculation result, it is forwarded to the data server for historical-result storage and summary-information storage for future reference;
(5) the dispatch server invokes the result-summarizing program and at the same time notifies the corresponding client by sending a calculation-completion notification message; the distributed parallel computing platform client applications on the gateway, scheduling, calculation and data servers interact with the communication middleware through the communication system client, asynchronously receiving online data calculation tasks, forwarding task data and administration command information to the nodes, and issuing control commands.
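The steps above can be sketched as a fan-out/fan-in loop: the scheduling node distributes cases to calculation nodes, collects every result, and then hands the summary on. This is a minimal stand-in using `multiprocessing` queues, with `case * 2` as a hypothetical placeholder for the real solver.

```python
from multiprocessing import Process, Queue

def calculation_node(task_q, result_q):
    # One calculation node: take cases until the stop marker, return each result.
    for case in iter(task_q.get, None):
        result_q.put((case, case * 2))   # stand-in for the real calculation program

def dispatch(cases, n_nodes=2):
    """Scheduling node: fan tasks out to the nodes, collect every result,
    then hand the summary on (to the data server, in the platform)."""
    task_q, result_q = Queue(), Queue()
    nodes = [Process(target=calculation_node, args=(task_q, result_q))
             for _ in range(n_nodes)]
    for p in nodes:
        p.start()
    for case in cases:
        task_q.put(case)
    for _ in nodes:
        task_q.put(None)                 # one stop marker per node
    results = dict(result_q.get() for _ in cases)
    for p in nodes:
        p.join()
    return results
```

The scheduler blocks only on result collection, so nodes of unequal speed drain the shared task queue at their own pace, which is the same load-balancing effect the platform attributes to its scheduling node.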
Further, step (1) includes the following steps:
① the human-machine interface DSA or D5000 submits a two-stage calculation task request to the scheduling node; the request information is received by the scheduling node's distcomp_master shell control process, which performs protocol analysis and generates the file content;
② the scheduling node's distcomp_master shell control process reads the /home/ndsa/conf/TaskControl.conf file, parses out the producer main-control program process names and communicates with them using a signaling mechanism; the distcomp_master shell control process communicates with the distributed parallel computing platform DistComp using a message mechanism, realizing resource requests and the transmission of node exclusive-control messages;
③ after all data preparation is complete, the distcomp_master shell control process multicasts to the calculation nodes selected according to the HostTask.conf file content so that they participate in the calculation; after all results have been returned, it sends a message to the distributed parallel computing platform DistComp to release control of the calculation nodes;
④ after a calculation node receives the two-stage calculation task request, the distributed parallel computing platform DistComp first prepares the data and then notifies the calculation node's calculation shell control program distcomp_interface by message to start the two-stage calculation program; after the two-stage calculation program has completed all calculations, the calculation result is returned to the scheduling node;
⑤ the communication between the calculation node's calculation shell control program distcomp_interface and the calculation program uses the same mode as stage one; after the calculation program completes, it automatically informs distcomp_interface by message, and distcomp_interface then handles the subsequent flow.
Further, step (2) includes the following steps:
1) the scheduling node's distcomp_master shell control process receives the two-stage task computation request, first performs protocol analysis and data generation, then analyzes the TaskControl.conf file and records the producer management programs in the process list queue;
2) the distcomp_master shell control process analyzes the data directory and works out which calculation nodes this subtask needs, then sends a resource request message to the distributed parallel computing platform DistComp; the message feedback indicates whether the platform's currently available node resources meet the demand; if satisfied, proceed to step 3); otherwise, continue sending resource request messages until the resources are sufficient;
3) the distcomp_master shell control process generates HostTask.conf and the configuration files according to the resource information in the feedback, repacks the configuration files into a zip file, and sends a node-exclusive request message to the distributed parallel computing platform DistComp so that DistComp surrenders control of the calculation nodes, which are then fully controlled by the distcomp_master shell control process;
4) after successfully taking control of the calculation nodes, the distcomp_master shell control process analyzes the TaskControl.conf file, starts all producer management control programs listed in it, and records their process numbers in the process queue;
5) the distcomp_master shell control process takes one process from the process queue and sends it a USR1 signal so that it completes data and directory preparation; if the USR1 progress-feedback signal is not received within the specified time, the data preparation is considered unready, and if the process queue is then empty this task computation fails; otherwise another producer management program is taken from the process queue and step 5) is repeated;
6) after successfully receiving the data-ready USR1 signal, the distcomp_master shell control process multicasts the zip file content together with the protocol content to the selected calculation nodes so that they participate in the calculation work, and sets a timer;
7) during calculation node processing, each time a result-completion message is received, a task-completion USR2 signal is sent to the producer management process currently being handled; if, when the timer fires and the number of sends is still below the specified limit (for example 3), the producer process's USR2 signal has not yet been received, the calculation nodes that failed to complete the calculation are analyzed, the HostTask.conf file is regenerated, and the data is reselected and multicast to the respective nodes to participate in the calculation; then proceed to step 8);
8) when the producer's task-completion USR2 signal is received, or the number of timeout resends has reached the upper limit, the producer's calculation task is set to completed status and a message is sent notifying the data node to carry out operations such as database storage; it is then judged whether all processes in the producer process queue have been handled; if not, the flow jumps back to step 5); otherwise this interface-submitted task is fully complete, a calculation-completion message is sent to the distributed parallel computing platform DistComp, the exclusively held calculation node resources are recycled, and control is returned to DistComp.
Further, step 3) includes the following steps:
1> the calculation node's distributed computing platform application, named with the version number DistCompV3.2.1 (distcomp_interface denotes a subprocess of DistCompV3.2.1; their functions differ: DistCompV3.2.1 mainly manages and controls tasks and prepares the calculation data, while distcomp_interface mainly invokes the calculation program and analyzes the calculation files), receives the two-stage computation request, takes the message from the message queue, parses it, and generates the data in the corresponding directory;
2> after data preparation is complete, a two-stage computation request message is sent to the calculation node's calculation shell control program distcomp_interface;
3> the calculation node's calculation shell control program distcomp_interface first analyzes the TaskControl.conf file, then, according to its content, analyzes the corresponding LocalTask_PSASP_DISTATCLF.exe_(hostname).conf file and records all two-stage calculation programs that need to participate in the calculation in the process queue;
4> a producer calculation program is taken from the process queue and started, with monitoring of the process exit signal and a timer set; after the calculation program finishes calculating and exits, the result file is read and the next calculation program (if any) is started; after all calculation programs have finished calculating, a calculation-success message containing the relevant result file content is sent to the scheduling node;
5> if the timeout has expired and the calculation program has produced no result, whether to restart the calculation program and recalculate is decided according to the rerun count; if recalculation is needed, the flow returns to step 4>; otherwise a calculation-failure message is sent to the scheduling node.
Compared with the closest prior art, the technical solution provided by the invention has the following beneficial effects:
The distributed parallel computing platform built by the invention with multicast and multi-core technology provides strong service guarantees for periodic online-mode power system calculation, and the system runs stably.
Whether the distributed parallel computing platform is triggered by periodic calculation based on the on-line operating state, by events, or manually, it operates efficiently and stably even when the calculation tasks saturate the computing resources.
The distributed parallel computing platform realizes pre-distribution of data and programs, reducing network traffic and greatly improving communication efficiency; the large-scale layered distributed parallel computing platform with multi-level scheduling under a distributed integrated scheduling scheme eliminates the single point of failure, realizes network load balancing, avoids under-utilization of resources, and improves resource utilization.
The present invention builds a distributed parallel platform with IP multicast and multi-core technology; it uses I/O multiplexing and multi-process principles to realize the efficient concurrency of distributed computing; it extends the functions of the communication platform; multicast reduces network traffic and improves system operating efficiency; it effectively exploits the computing resources of all servers, achieving efficient and reliable real-time computation; and it reduces the operating cost of power grid simulation calculation while improving automated calculation capability.
Detailed description of the invention
Fig. 1 is a schematic diagram of the connections of the distributed parallel computing management platform provided by the invention;
Fig. 2 is a schematic diagram of the composition of the communication middleware provided by the invention;
Fig. 3 is a schematic diagram of the composition of the distributed computing platform communication system provided by the invention;
Fig. 4 is a basic flowchart of the distributed parallel computing platform provided by the invention, from startup to exit;
Fig. 5 is a schematic diagram of the overall structural design of the distributed parallel computing platform provided by the invention;
Fig. 6 is a design diagram of the scheduling node processing flow provided by the invention;
Fig. 7 is a design diagram of the calculation node processing flow provided by the invention;
Fig. 8 is a network diagram of the distributed parallel computing platform provided by the invention;
Fig. 9 is a data flow diagram of the distributed computing platform provided by the invention;
Detailed Description of the Embodiments
Specific embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
The following description and drawings fully illustrate specific embodiments of the invention so as to enable those skilled in the art to practice them. Other embodiments may include structural, logical, electrical, process, and other changes; the embodiments merely represent possible variations. Unless explicitly required, individual components and functions are optional, and the order of operations may vary. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. The scope of the embodiments of the invention includes the entire scope of the claims and all available equivalents of the claims. Herein, these embodiments of the invention may be referred to individually or collectively by the term "invention" merely for convenience; if more than one invention is in fact disclosed, this is not meant to automatically limit the scope of the application to any single invention or inventive concept.
The technical terms used in the present invention are first explained as follows:
On-line simulation computation: periodic computation that performs on-line power-system security and stability analysis together with electromechanical/electromagnetic hybrid simulation.
Distributed parallel computing: a software system that distributes a large number of computing tasks to different computers for multi-core parallel computation and then summarizes the results.
On-line data integration: on-line operating data and off-line mode data are combined to form an integrated data set with a detailed grid model and complete parameters that reflects real-time operating conditions. This improves data quality, reduces the difficulty and workload of data maintenance, and provides a data basis for on-line security and stability assessment, early warning, and preventive control, thereby making off-line computation and analysis on-line and efficient.
I. Distributed Parallel Computing Platform
The distributed parallel computing platform consists of a group of associated servers responsible for different business processing, as shown in Figs. 1 and 8. It is composed of multiple high-performance servers divided into nodes of different functions, each class of node performing a different role. The entire on-line simulation distributed parallel computing system consists of a gateway server, a dispatch server, a data server, and compute servers. Together they form a distributed cluster: the platform contains several compute nodes responsible for parallel data processing; the scheduling node is the core of the platform, responsible for scheduling and controlling user tasks and recycling results; the data node stores historical data and writes results into the database; and the gateway server provides the platform's unified external interface and periodically sends computation requests, including docking with other systems and data synchronization.
The distributed parallel computing platform uses TCP together with a UDP communication mode based on a reliable protocol to transmit data and files within the whole system, and uses multicast technology to distribute commands and files among the node servers.
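The patent does not spell out how reliability is layered on UDP; a common approach is sequence numbers with acknowledgement and retransmission. The sketch below illustrates that idea over an in-memory lossy channel rather than real sockets, so all names and the retry limit are illustrative assumptions, not the platform's actual protocol:

```python
def send_reliable(messages, channel_send, max_retries=3):
    """Send each message with a sequence number; retransmit until the
    receiver acknowledges that sequence number or retries are exhausted."""
    delivered = []
    for seq, payload in enumerate(messages):
        for _attempt in range(max_retries + 1):
            ack = channel_send(seq, payload)   # returns acked seq, or None if lost
            if ack == seq:
                delivered.append(seq)
                break
        else:
            raise RuntimeError(f"message {seq} not acknowledged")
    return delivered

def lossy_channel(loss_pattern):
    """Build a channel that drops datagrams according to a fixed pattern,
    standing in for an unreliable UDP link."""
    calls = iter(loss_pattern)
    received = []
    def send(seq, payload):
        if next(calls, False):        # True means this datagram is lost
            return None
        if seq not in received:       # receiver keeps each message once
            received.append(seq)
        return seq                    # receiver acks the sequence number
    send.received = received
    return send
```

With a channel that drops the first datagram, `send_reliable(["a", "b"], lossy_channel([True, False, False]))` still delivers both messages, because the lost one is retransmitted.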
The entire on-line simulation distributed computing process starts at the gateway: every 15 minutes, the yjq application on the gateway node server transmits the integrated user data to the dispatch server for one cycle. After receiving the computation request, the dispatch server distributes the computing tasks to each available compute node server according to the compute-node resource situation. Upon receiving a computation request, a compute node invokes the computation program; when the computation is complete, the result is passed back to the dispatch server. After receiving the computed results, the dispatch server forwards them to the data server for historical-result storage and summary-information storage for future reference, and at the same time calls the result-summarizing program on the relevant results, notifies the corresponding client, and sends a computation-completion notification message.
The gateway server deploys the yjq timed-task start-up application, the data integration application, and the communication middleware. The data integration application is responsible for regularly updating the computation source data and processing the corresponding new data; the timestamp of the on-line simulation real-time data is marked by the file gd.QS. Every 15 minutes the yjq application issues a computation request to the dispatch server through the communication middleware.
The dispatch server, as the core management server of the distributed computing system, plays a linking role. It is responsible for receiving and responding to computation requests; managing and controlling compute-node resources; scheduling and distributing tasks; multicasting and unicasting computation data to the corresponding compute nodes and data nodes; recycling computed results and invoking the corresponding application programs; summarizing and analyzing the results; and finally notifying the corresponding client of the computation completion information.
The compute nodes form a distributed cluster of multiple mutually independent servers. Each is responsible for responding to the calculation command requests of the dispatch server, invoking the computation program to perform the corresponding job, and returning the computed result to the scheduling node server.
The data server is responsible for saving historical result data and deploys the platform application and the communication middleware program. The platform application stores the result data forwarded by the dispatch server into the corresponding directory and writes the computed-result summary information into the database.
II. Communication Middleware
The communication middleware is deployed on each server in the distributed cluster and runs as a daemon; every node in the cluster starts one communication daemon to handle data transfer requests and service requests.
The daemon is started in the background via a script, which removes the communication middleware's dependence on configuration files and allows dynamic extension, greatly improving the maintainability of the system.
The entire distributed parallel computing platform is divided by function into three groups: ServerGroup, ResultGroup, and TaskGroup. ServerGroup contains the scheduling, data, and gateway nodes; ResultGroup contains the scheduling node; and TaskGroup contains all compute nodes of the platform.
Each node can dynamically join or leave the distributed computing platform, so the set of communication nodes extends automatically.
The distributed parallel computing platform program DistComp and each upper-layer application connect to the communication middleware over TCP, using either long or short connections, while data between the nodes is transmitted using the reliable UDP communication mode. The composition of the communication middleware is shown in Fig. 2.
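The three functional groups can be derived mechanically from each node's role. The sketch below shows that mapping; the role names and the grouping function are illustrative stand-ins, since the patent does not disclose the actual configuration format:

```python
def build_groups(nodes):
    """Assign each (name, role) node to the three functional groups
    described above: ServerGroup, ResultGroup, TaskGroup."""
    groups = {"ServerGroup": [], "ResultGroup": [], "TaskGroup": []}
    for name, role in nodes:
        if role in ("scheduling", "data", "gateway"):
            groups["ServerGroup"].append(name)   # scheduling, data, gateway nodes
        if role == "scheduling":
            groups["ResultGroup"].append(name)   # only the scheduling node
        if role == "compute":
            groups["TaskGroup"].append(name)     # all compute nodes
    return groups
```

A node appearing in more than one group (here, the scheduling node in both ServerGroup and ResultGroup) simply joins both multicast groups.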
The start-up modes of the communication middleware of the distributed parallel computing platform are shown in Table 1 below:
Table 1: Start-up modes of the communication middleware of the distributed parallel computing platform
III. Communication System of the Distributed Parallel Computing Platform
The client applications of the distributed parallel computing platform and the gateway, scheduling, compute, and data servers interact with the communication daemon through the communication-system client. The daemon asynchronously receives on-line data computing tasks, forwards task data and management command information to the nodes, and issues control commands.
Applications interact with the scheduling and gateway nodes through the local communication middleware, so multiple clients exist on the same communication daemon. Since one communication daemon serves several local clients concurrently, very high demands are placed on its concurrent task-processing performance, which requires the communication middleware to improve its concurrent-task processing capability. Fig. 3 illustrates the composition of the communication system of the distributed computing platform.
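The summary of the invention names I/O multiplexing as the mechanism behind this concurrency. As a minimal illustration of one daemon loop serving several local clients, the sketch below uses Python's `selectors` module with local socket pairs standing in for the middleware's real sockets; it is a toy model of the pattern, not the platform's implementation:

```python
import selectors
import socket

def multiplex_echo(n_clients):
    """One single-threaded event loop serving n local clients, the way a
    single communication daemon serves several upper-layer applications."""
    sel = selectors.DefaultSelector()
    pairs = []
    for i in range(n_clients):
        server_side, client_side = socket.socketpair()
        server_side.setblocking(False)
        sel.register(server_side, selectors.EVENT_READ)
        client_side.sendall(f"req-{i}".encode())   # each client sends one request
        pairs.append((server_side, client_side))
    served = 0
    while served < n_clients:                      # the multiplexing loop
        for key, _ in sel.select(timeout=5):
            data = key.fileobj.recv(1024)
            key.fileobj.sendall(b"ack:" + data)    # reply on the same socket
            served += 1
    replies = []
    for server_side, client_side in pairs:
        replies.append(client_side.recv(1024).decode())
        sel.unregister(server_side)
        server_side.close()
        client_side.close()
    sel.close()
    return replies
```

One thread handles all clients: the selector reports which sockets are readable, and the loop services exactly those, so adding clients does not add threads or processes.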
IV. Basic Processing Flow of the Distributed Parallel Computing Platform
The basic flow of the distributed parallel computing platform, from start-up to exit, is as follows: allocate memory; read the platform configuration; connect to the communication-middleware daemon and monitor the connection socket state, automatically reconnecting to the daemon when the connection is lost; read the application configuration; start the producer application-management process; and open the main loop of message-driven events. When a message arrives, it is handled according to the messaging-protocol flow; after an exit signal is caught, the connection to the communication daemon is closed, memory is released, the site is cleaned up, and the process exits. The flow chart is shown in Fig. 4.
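The message-driven main loop just described can be sketched as a dispatch table keyed by message type. The handler names and message shapes below are illustrative assumptions, since the patent does not disclose the actual messaging protocol:

```python
def run_main_loop(inbox, handlers):
    """Message-driven event loop: take messages in arrival order, dispatch
    each to its protocol handler, and stop when the exit signal arrives."""
    log = []
    for msg_type, payload in inbox:
        if msg_type == "EXIT":                     # exit signal: clean up and leave
            log.append("cleanup")
            break
        handler = handlers.get(msg_type)
        if handler is None:
            log.append(f"ignored:{msg_type}")      # unknown message type, skip it
            continue
        log.append(handler(payload))               # protocol-flow handling
    return log
```

A real daemon would block on the middleware socket instead of iterating a list, but the control structure, one loop, per-type handlers, a distinguished exit path, is the same.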
The present invention also provides an implementation method using the distributed parallel computing platform based on on-line simulation, including the following steps:
(1) Every 15 minutes, the yjq application on the gateway node server transmits the integrated user data to the dispatch server for one cycle;
(2) After receiving the computation request, the dispatch server, acting as the scheduling node, distributes the computing tasks to each available compute node server according to the compute-node resource situation;
(3) Upon receiving a computation request, a compute node server invokes the computation program; when the computation is complete, the computed result is passed back to the dispatch server;
(4) After receiving the computed results, the dispatch server forwards them to the data server for historical-result storage and summary-information storage for future reference;
(5) The dispatch server calls the result-summarizing program, notifies the corresponding client, and sends a computation-completion notification message.
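Steps (1) to (5) form one pipeline per 15-minute cycle. The sketch below simulates a single cycle with plain functions standing in for the four server roles; the splitting rule, the squaring "computation program", and the summary fields are all illustrative assumptions:

```python
def run_cycle(user_data, compute_nodes):
    """Simulate one scheduling cycle: the gateway submits integrated data,
    the scheduler splits it across compute nodes, results come back, the
    data server archives them, and a summary is produced for the client."""
    # (1)-(2): gateway submits; scheduler splits the task over available nodes
    chunks = {node: user_data[i::len(compute_nodes)]
              for i, node in enumerate(compute_nodes)}
    # (3): each compute node runs its computation program (here: squaring)
    results = {node: [x * x for x in chunk] for node, chunk in chunks.items()}
    # (4): data server stores the historical results
    history = sorted(v for vs in results.values() for v in vs)
    # (5): scheduler summarizes and notifies the client
    summary = {"count": len(history), "total": sum(history)}
    return history, summary
```

The point of the sketch is the data flow: work fans out from the scheduler to the compute nodes and the results fan back in through the scheduler to the data server and client.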
The overall design of the distributed parallel computing platform is shown in Fig. 5, and the data flow of the distributed computing platform is shown in Fig. 9. Step (1) includes the following steps:
1. A human-machine interface such as DSA or D5000 submits a two-stage computing-task request to the scheduling node; the message is received by the distcomp_master scheduling shell program, which performs protocol analysis and generates the file content.
2. distcomp_master reads the /home/ndsa/conf/TaskControl.conf file, parses the main control program process name of the producer, and communicates with it using the signal mechanism; distcomp_master and DistCompV3.2.1 communicate using the message mechanism, realizing the transmission of resource requests and node exclusive-control messages.
3. After all data preparation is complete, the distcomp_master process selects, according to the contents of the HostTask.conf file, the corresponding compute nodes and multicasts the data to them to participate in the computation. After all results have been returned, it sends a message to DistCompV3.2.1 to release node control.
4. After a compute node receives the two-stage computing-task request, DistCompV3.2.1 first prepares the data and then notifies distcomp_interface by message to start the two-stage computation program; after all two-stage computation programs have finished, the computed result is returned to the scheduling node.
5. The communication mode between distcomp_interface on the compute node and the computation program is consistent with that of stage one: after the computation program completes, the message is automatically reported to distcomp_interface, which then performs the follow-up processing.
The flow diagram of step (2) is shown in Fig. 6 and includes the following steps:
1) The distcomp_master shell control process of the scheduling node receives the two-stage task-computation request, first performs protocol analysis and data generation, then analyzes the TaskControl.conf file and records the producer management programs in the process list queue.
2) distcomp_master analyzes the data directory and calculates how many compute nodes this subtask needs, then sends a resource-request message to DistComp. After the message feedback, it analyzes whether the currently available node resources of the platform meet the demand. If satisfied, the flow jumps to step 3); otherwise it keeps sending resource-request messages until the resources are sufficient.
3) According to the resource-information feedback, distcomp_master generates HostTask.conf and some other configuration files, repacks the configuration files into a zip file, and sends a node-exclusive request to DistComp, making DistComp surrender control of the compute nodes so that they are fully controlled by distcomp_master.
4) After successfully taking control of the compute nodes, distcomp_master analyzes the TaskControl.conf file, simultaneously starts all producer supervisory control programs listed in this file, and records the process numbers in a queue.
5) distcomp_master takes one process out of the process queue and sends it a USR1 signal to complete preparations such as data generation and directory setup. If the USR1 feedback signal from that process is not received within the specified time, the data preparation is considered to have failed; if the process queue is now empty, this task computation fails; otherwise another producer management program is taken from the process queue and the flow continues with step 5).
6) After successfully receiving the data-ready USR1 signal, distcomp_master selects the zip file content together with the protocol content, multicasts them to the compute nodes to participate in the computation, and sets a timer.
7) During the compute-node computation process, each time a result-completion message is received, a USR2 signal is sent to the producer managing process currently being handled. If the timer expires, the number of resends is less than 3 (configurable), and the USR2 signal (the task-completion signal) of the producer process has not been received, the nodes that have not completed their computation are identified, the HostTask.conf file is regenerated, the data is reselected and multicast to the corresponding nodes to participate in the computation, and the flow jumps to step 8).
8) When the producer's USR2 signal is received or the number of timeout resends has reached the upper limit, the producer's computing task is set to the completed state, and a message is sent to notify the data node to perform operations such as database storage. It is then judged whether all processes in the producer process queue have been handled; if not, the flow skips to step 5). Otherwise the task submitted at this interface is fully completed: a computation-completion message is sent to DistCompV3.2.1, the exclusively held compute-node resources are recycled, and control is returned to DistCompV3.2.1.
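The timer-and-resend logic of steps 6) to 8) amounts to: multicast the work, wait for completion, and re-multicast only to the unfinished nodes, at most a fixed number of times. The sketch below captures that control flow; node completions are injected through a callable so the flow stays testable, the 3-resend limit follows the text, and everything else is an illustrative assumption:

```python
def dispatch_with_retries(nodes, poll_completed, max_resends=3):
    """Multicast work to nodes, then re-multicast only to the nodes that
    have not reported completion, up to max_resends additional rounds."""
    pending = set(nodes)
    resends = 0
    pending -= poll_completed(pending)          # initial multicast round
    while pending and resends < max_resends:
        resends += 1                            # regenerate HostTask.conf, re-multicast
        pending -= poll_completed(pending)
    status = "completed" if not pending else "timed_out"
    return status, sorted(pending), resends
```

A node that misses one round is retried without disturbing the nodes that already finished; only a node that misses every round causes the timed-out outcome.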
The flow chart of step (3) is shown in Fig. 7 and includes the following steps:
1> After the compute node DistCompV3.2.1 receives a two-stage computation request, it takes the message out of the message queue, parses it, and generates the data in the corresponding directories.
2> After the data preparation is complete, a two-stage computation-request message is sent to the computation shell control program distcomp_interface.
3> distcomp_interface first analyzes the TaskControl.conf file, then according to its contents analyzes the corresponding LocalTask_PSASP_DISTATCLF.exe_(hostname).conf file and records all the two-stage computation programs that need to participate in the computation in the queue.
4> A producer computation program is taken out of the queue and started, its exit signal is monitored, and a timer is set. After a computation program finishes and exits, the result file is read and the next computation program (if any) is started. After all computation programs have finished, a computation-success message carrying the contents of the relevant result files is sent to the scheduling node.
5> If the timeout has expired and the computation program has produced no result, whether to restart the computation program and recompute is decided according to the rerun count. If a recomputation is needed, the flow jumps to step 4>; otherwise a computation-failure message is sent to the scheduling node.
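The compute-node side of steps 4> and 5> is a sequential runner with a per-program rerun budget. The sketch below assumes each computation program is a callable that returns its result-file content or raises on timeout; the names and the budget of one rerun are illustrative:

```python
def run_queue(programs, max_reruns=1):
    """Run the queued computation programs one after another; on a timed-out
    run, rerun that program up to max_reruns times, otherwise report the
    failure back to the scheduling node."""
    results = []
    for name, program in programs:
        for _attempt in range(max_reruns + 1):
            try:
                results.append((name, program()))  # read the result file on success
                break
            except TimeoutError:
                continue                           # rerun, budget permitting
        else:
            return ("calc_failed", name, results)  # tell the scheduler which program failed
    return ("calc_success", None, results)
```

The failure message carries the name of the program that exhausted its reruns, matching the flow where the scheduling node is told the computation failed.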
The above embodiments are merely intended to illustrate, not to limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art may still modify the specific embodiments of the invention or make equivalent replacements; any modification or equivalent replacement that does not depart from the spirit and scope of the invention shall fall within the scope of the pending claims of the present invention.

Claims (9)

1. A distributed parallel computing platform based on on-line simulation, characterized in that the platform is composed of a gateway server, a dispatch server, a data server, compute node servers, and communication middleware; the gateway server, the data server, and the compute node servers are each connected to the dispatch server;
the dispatch server serves as the scheduling node and is responsible for scheduling and controlling user tasks and recycling results;
the data server serves as the data node and is used for storing historical data and writing results into the database;
the compute node servers serve as compute nodes, multiple compute node servers forming a distributed cluster responsible for parallel data processing;
the gateway server is responsible for the platform's unified external interface and periodically sends computation requests, including docking with other systems and data synchronization;
communication middleware is deployed on each of the gateway server, dispatch server, data server, and compute node servers and runs as a daemon; each node in the distributed cluster starts one communication daemon to handle data transfer requests and service requests.
2. The distributed parallel computing platform according to claim 1, characterized in that the gateway server deploys an application program together with a timed-task start-up application, a data integration application, and the communication middleware; the data integration application is responsible for regularly updating the computation source data and processing the corresponding new data, the timestamp of the on-line simulation real-time data being marked by the file gd.QS; and the application program issues a computation request to the dispatch server through the communication middleware every 15 minutes.
3. The distributed parallel computing platform according to claim 1, characterized in that the dispatch server is further responsible for receiving and responding to computation requests; managing and controlling compute-node resources; scheduling and distributing tasks; multicasting and unicasting computation data to the corresponding compute nodes and data nodes; recycling computed results and invoking the corresponding application programs; summarizing and analyzing the results; and finally notifying the corresponding client of the computation completion information;
the data server is further responsible for saving historical result data and deploys the platform application and the communication middleware program, the platform application storing the result data forwarded by the dispatch server into the corresponding directory and writing the computed-result summary information into the database.
4. The distributed parallel computing platform according to claim 1, characterized in that the compute node servers are further responsible for responding to the calculation command requests of the dispatch server, invoking the computation program to perform the corresponding job, and returning the computed result to the scheduling node server.
5. The distributed parallel computing platform according to claim 1, characterized in that the compute nodes on the platform dynamically join or leave the distributed computing management platform, forming an automatically extending set of communication nodes;
the distributed parallel computing platform program DistComp and each upper-layer application connect to the communication middleware using TCP long or short connections; data between the scheduling node, the compute nodes, and the data node is transmitted using the UDP communication mode.
6. An implementation method using the distributed parallel computing platform based on on-line simulation according to any one of claims 1 to 5, characterized in that the method includes the following steps:
(1) every 15 minutes, the yjq application on the gateway node server transmits the integrated user data to the dispatch server for one cycle;
(2) after receiving the computation request, the dispatch server, acting as the scheduling node, distributes the computing tasks to each available compute node server according to the compute-node resource situation;
(3) upon receiving a computation request, a compute node server invokes the computation program; after the computation is complete, the computed result is passed back to the dispatch server;
(4) after receiving the computed results, the dispatch server forwards them to the data server for historical-result storage and summary-information storage for future reference;
(5) the dispatch server calls the result-summarizing program, notifies the corresponding client, and sends a computation-completion notification message; the client applications of the distributed parallel computing platform and the gateway, scheduling, compute, and data servers interact with the communication middleware through the communication-system client, which asynchronously receives on-line data computing tasks, forwards task data and management command information to the nodes, and issues control commands.
7. The implementation method according to claim 6, characterized in that step (1) includes the following steps:
1. a human-machine interface such as DSA or D5000 submits a two-stage computing-task request to the scheduling node; the request message is received by the distcomp_master shell control process of the scheduling node, which performs protocol analysis and generates the file content;
2. the distcomp_master shell control process of the scheduling node reads the /home/ndsa/conf/TaskControl.conf file, parses the main control program process name of the producer, and communicates with it using the signal mechanism; the distcomp_master shell control process of the scheduling node communicates with the distributed parallel computing platform DistComp using the message mechanism, realizing the transmission of resource requests and node exclusive-control messages;
3. after all data preparation is complete, the distcomp_master shell control process of the scheduling node selects, according to the contents of the HostTask.conf file, the corresponding compute nodes and multicasts the data to them to participate in the computation; after the results have been returned, it sends a message to the distributed parallel computing platform DistComp to release control of the compute nodes;
4. after a compute node receives the two-stage computing-task request, the distributed parallel computing platform DistComp first prepares the data, and then notifies the computation shell control program distcomp_interface of the compute node by message to start the two-stage computation program; after the two-stage computation program finishes, the computed result is returned to the scheduling node;
5. the communication mode between the computation shell control program distcomp_interface of the compute node and the computation program is consistent with that of stage one: after the computation program completes, the message is automatically reported to the computation shell control program distcomp_interface of the compute node, which then performs the follow-up processing.
8. The implementation method according to claim 6, characterized in that step (2) includes the following steps:
1) after the distcomp_master shell control process of the scheduling node receives the two-stage task-computation request, protocol analysis and data generation are performed first, then the TaskControl.conf file is analyzed and the producer management programs are recorded in the process list queue;
2) the distcomp_master shell control process of the scheduling node analyzes the data directory and calculates the compute nodes needed for this subtask, then sends a resource-request message to the distributed parallel computing platform DistComp; after the message feedback, it analyzes whether the currently available node resources of the platform meet the demand; if satisfied, the flow proceeds to step 3); otherwise resource-request messages are sent continuously until the resources are sufficient;
3) according to the resource-information feedback, the distcomp_master shell control process of the scheduling node generates HostTask.conf and other configuration files, repacks the configuration files into a zip file, and sends a node-exclusive request to the distributed parallel computing platform DistComp, making DistComp surrender control of the compute nodes so that they are fully controlled by the distcomp_master shell control process of the scheduling node;
4) after taking control of the compute nodes, the distcomp_master shell control process of the scheduling node analyzes the TaskControl.conf file, simultaneously starts all producer supervisory control programs listed in the TaskControl.conf file, and records the process numbers in the process queue;
5) the distcomp_master shell control process of the scheduling node takes one process out of the process queue and sends it a USR1 signal to complete the data and directory generation preparations; if the USR1 feedback signal from the process is not received within the specified time, the data preparation is considered to have failed; if the process queue is now empty, this task computation fails; otherwise another producer management program is taken out of the process queue and step 5) is continued;
6) after receiving the data-ready USR1 signal, the distcomp_master shell control process of the scheduling node selects the zip file content together with the protocol content, multicasts them to the compute nodes to participate in the computation, and sets a timer;
7) during the compute-node computation process, each time a result-completion message is received, a task-completion USR2 signal is sent to the producer managing process currently being handled; if the timer expires, the number of resends is less than the specified 3 times, and the USR2 signal of the producer process has not yet been received, the compute nodes that have not completed their computation are identified, the HostTask.conf file is regenerated, the data is reselected and multicast to the corresponding nodes to participate in the computation, and the flow proceeds to step 8);
8) when the producer's task-completion USR2 signal is received or the number of timeout resends has reached the upper limit, the producer's computing task is set to the completed state, and a message is sent to notify the data node to perform operations such as database storage; it is then judged whether all processes in the producer process queue have been handled; if not, the flow skips to step 5); otherwise the task submitted at this interface is fully completed: a computation-completion message is sent to the distributed parallel computing platform DistComp, the exclusively held compute-node resources are recycled, and control is returned to the distributed parallel computing platform DistComp.
9. The implementation method according to claim 6, characterized in that step (3) includes the following steps:
1> after the compute node's distributed computing platform application, denoted with its version number as DistCompV3.2.1, receives a two-stage computation request, the message is taken out of the message queue and parsed, and the data is generated in the corresponding directories;
2> after the data preparation is complete, a two-stage computation-request message is sent to the computation shell control program distcomp_interface of the compute node;
3> the computation shell control program distcomp_interface of the compute node first analyzes the TaskControl.conf file, then according to its contents analyzes the corresponding LocalTask_PSASP_DISTATCLF.exe_(hostname).conf file and records all the two-stage computation programs that need to participate in the computation in the process queue;
4> a producer computation program is taken out of the process queue and started, its exit signal is monitored, and a timer is set; after a computation program finishes and exits, the result file is read and the next computation program is started; after all computation programs have finished, a computation-success message with the result-file contents attached is sent to the scheduling node;
5> if, on timeout, the computation program has still produced no result, whether to restart the computation program and recompute is decided according to the rerun count; if a recomputation is needed, the flow returns to step 4>; otherwise a computation-failure message is sent to the scheduling node.
CN201711162213.4A 2017-11-21 2017-11-21 A kind of distributed paralleling calculation platform and its method based on in-circuit emulation Pending CN109815002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711162213.4A CN109815002A (en) 2017-11-21 2017-11-21 A kind of distributed paralleling calculation platform and its method based on in-circuit emulation

Publications (1)

Publication Number Publication Date
CN109815002A true CN109815002A (en) 2019-05-28

Family

ID=66598728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711162213.4A Pending CN109815002A (en) 2017-11-21 2017-11-21 A kind of distributed paralleling calculation platform and its method based on in-circuit emulation

Country Status (1)

Country Link
CN (1) CN109815002A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101441580A (en) * 2008-12-09 2009-05-27 华北电网有限公司 Distributed paralleling calculation platform system and calculation task allocating method thereof
CN103870338A (en) * 2014-03-05 2014-06-18 国家电网公司 Distributive parallel computing platform and method based on CPU (central processing unit) core management
CN103873321A (en) * 2014-03-05 2014-06-18 国家电网公司 Distributed file system-based simulation distributed parallel computing platform and method
CN108256263A (en) * 2018-02-07 2018-07-06 中国电力科学研究院有限公司 A kind of electric system hybrid simulation concurrent computational system and its method for scheduling task
US20180247001A1 (en) * 2015-09-06 2018-08-30 China Electric Power Research Institute Company Limited Digital simulation system of power distribution network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG, Kai; HE, Ying: "Research on a Power Simulation System Based on Cloud Computing" *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110262288A (en) * 2019-07-15 2019-09-20 北京七展国际数字科技有限公司 A kind of electric power isomery hybrid real-time simulation system
CN111090794A (en) * 2019-11-07 2020-05-01 远景智能国际私人投资有限公司 Meteorological data query method, device and storage medium
CN111090794B (en) * 2019-11-07 2023-12-05 远景智能国际私人投资有限公司 Meteorological data query method, device and storage medium
CN111767338A (en) * 2020-02-10 2020-10-13 中国科学院计算技术研究所 Distributed data storage method and system for online super real-time simulation of power system
CN111679859B (en) * 2020-06-11 2023-08-18 山东省计算中心(国家超级计算济南中心) Automatic parallel MPI-I/O acceleration method for I/O intensive high-performance application
CN111679859A (en) * 2020-06-11 2020-09-18 山东省计算中心(国家超级计算济南中心) I/O intensive high-performance application-oriented automatic parallel MPI-I/O acceleration method
CN111753997A (en) * 2020-06-28 2020-10-09 北京百度网讯科技有限公司 Distributed training method, system, device and storage medium
CN112637067A (en) * 2020-12-28 2021-04-09 北京明略软件系统有限公司 Graph parallel computing system and method based on analog network broadcast
CN113239522A (en) * 2021-04-20 2021-08-10 四川大学 Atmospheric pollutant diffusion simulation method based on computer cluster
CN113239522B (en) * 2021-04-20 2022-06-28 四川大学 Atmospheric pollutant diffusion simulation method based on computer cluster
CN113283803B (en) * 2021-06-17 2024-04-23 金蝶软件(中国)有限公司 Method for making material demand plan, related device and storage medium
CN113886092A (en) * 2021-12-07 2022-01-04 苏州浪潮智能科技有限公司 Computation graph execution method and device and related equipment
CN116074392A (en) * 2023-03-31 2023-05-05 成都四方伟业软件股份有限公司 Intelligent matching method and device for data stream transmission modes

Similar Documents

Publication Publication Date Title
CN109815002A (en) A kind of distributed paralleling calculation platform and its method based on in-circuit emulation
CN106844198B (en) Distributed dispatching automation test platform and method
Sfiligoi glideinWMS—a generic pilot-based workload management system
Fujimoto Distributed simulation systems
CN103197952B (en) The management system and method disposed for application system maintenance based on cloud infrastructure
Sotiriadis et al. SimIC: Designing a new inter-cloud simulation platform for integrating large-scale resource management
CN104112049B (en) Based on the MapReduce task of P2P framework across data center scheduling system and method
CN103066701A (en) Power grid dispatching real-time operation command method and system
Maeno et al. Evolution of the ATLAS PanDA production and distributed analysis system
Lin et al. Key technologies and solutions of remote distributed virtual laboratory for E-learning and E-education
CN104484228B (en) Distributed parallel task processing system based on Intelli DSC
CN110247981A (en) A kind of electric power scheduling automatization system application micro services remodeling method
CN107071067B (en) Cgo-based high-performance stock market access system and method
Casajus et al. Status of the DIRAC Project
Zato et al. Platform for building large-scale agent-based systems
CN112948088A (en) Cloud workflow intelligent management and scheduling system in cloud computing platform
Popović et al. A novel cloud-based advanced distribution management system solution
CN104486447A (en) Large platform cluster system based on Big-Cluster
CN113220480B (en) Distributed data task cross-cloud scheduling system and method
CN113627963B (en) Electric power refined operation rule base creation method
Paterson et al. Performance of combined production and analysis WMS in DIRAC
Theiss et al. A Java software agent framework for hard real-time manufacturing control
CN104462581A (en) Micro-channel memory mapping and Smart-Slice based ultrafast file fingerprint extraction system and method
CN109961376A (en) A kind of distributed energy storage apparatus management/control system and method
Yin et al. Research on Man-Machine Service Reliability of New Generation Power System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination