CN117707792A - Different-place parallel acceleration device, method and system based on FPGA (field programmable Gate array) accelerator - Google Patents


Info

Publication number
CN117707792A
CN117707792A
Authority
CN
China
Prior art keywords
server
power grid
remote
data
fpga accelerator
Prior art date
Legal status
Pending
Application number
CN202410159842.5A
Other languages
Chinese (zh)
Inventor
张小雪
魏心泉
孙雯雯
陆一鸣
吕广宪
刘鹏
王立岩
刘军
杜建
王国庆
刘玉芳
Current Assignee
China Online Shanghai Energy Internet Research Institute Co ltd
Original Assignee
China Online Shanghai Energy Internet Research Institute Co ltd
Priority date
Filing date
Publication date
Application filed by China Online Shanghai Energy Internet Research Institute Co ltd filed Critical China Online Shanghai Energy Internet Research Institute Co ltd
Priority to CN202410159842.5A
Publication of CN117707792A

Landscapes

  • Power Sources (AREA)

Abstract

The invention relates to a remote parallel acceleration device, method and system based on an FPGA accelerator. The device comprises a server and an FPGA accelerator, wherein the server further comprises a resource allocation scheduling module and a remote collaborative scheduling module. The resource allocation scheduling module allocates computing tasks according to the power grid data file and the resource condition of the local server: when the local server resources can accommodate the data volume of the power grid data file, the computing task is allocated to the construction module; when they cannot, the computing task is allocated to the remote collaborative scheduling module, which sends a collaborative acceleration request to a remote server. The invention realizes automatic allocation of computing tasks and automatic scheduling of computing resources between regions and between each regional server and its FPGA accelerator cluster.

Description

Different-place parallel acceleration device, method and system based on FPGA (Field Programmable Gate Array) accelerator
Technical Field
The invention relates to the technical field of power system resource calculation, in particular to a remote parallel acceleration device, method and system based on an FPGA accelerator.
Background
In the digital economy era, the digital transformation of power grid management is imperative in order to adapt to new external conditions and meet new internal management requirements. This transformation takes as its main aim the improvement of safety, quality, and efficiency across the entire management system, and big data, artificial intelligence, and FPGA chips have accordingly been widely studied in the field of new smart power grids.
Power flow calculation is an important means for power grid planning, operation, optimization, and reliability analysis, and is the basis for ensuring safe, stable, and reliable operation of the power system. However, because the power network in China is large in scale and complex in structure, traditional general-purpose processors are low in calculation efficiency and poorly suited to this environment. The new type of power system differs from the traditional power system in data volume, data structure, and algorithm requirements; when a conventional CPU/GPU architecture carries many algorithms to serve the application requirements of different scenarios, power system resource calculation suffers from high system resource consumption and low effectiveness.
Prior patent document CN105912767A discloses a multi-level power grid remote collaborative joint calculation method based on a B/S framework, which constructs a joint calculation platform to carry out multi-site, multi-level power grid joint calculation and thereby improves working efficiency. However, that method only optimizes the CPU platform framework and does not consider the execution efficiency of the power system resource analysis algorithms themselves.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a remote parallel acceleration device, method and system based on an FPGA accelerator, which realize automatic allocation of computing tasks and automatic scheduling of computing resources between regions and between each regional server and its FPGA accelerator cluster.
The technical scheme adopted for solving the technical problems is as follows: the utility model provides a different place parallel accelerating device based on FPGA accelerator, including server and FPGA accelerator, the server includes: the receiving module is used for receiving a calculation task, wherein the calculation task comprises a power grid data file; the construction module is used for establishing an equation and a matrix according to the power grid data file; the FPGA accelerator accelerates the calculation of the equation and the matrix by utilizing the parallel operation characteristic, the server also comprises a resource allocation scheduling module and a remote collaborative scheduling module, the resource allocation scheduling module is used for allocating the calculation tasks according to the power grid data file and the local server resource condition, and when the local server resource condition can meet the data volume of the power grid data file, the calculation tasks are allocated to the construction module; when the condition of the local server resources can not meet the data volume of the power grid data file, distributing the calculation task to the remote collaborative scheduling module; and the remote collaborative scheduling module is used for sending a collaborative acceleration request to the remote server.
The resource allocation scheduling module comprises:
the hardware equipment occupation amount estimation unit is used for estimating the hardware equipment occupation amount required by completing the calculation task according to the data amount of the power grid data file;
the resource acquisition unit is used for acquiring the current idle resource quantity of the local server and the calculated quantity which can be processed by the FPGA accelerator in parallel;
the judging unit is used for judging whether the occupation amount of the hardware equipment required by completing the calculation task is larger than the current idle resource amount of the local server and the calculation amount which can be processed by the FPGA accelerator in parallel;
the first execution unit is used for forwarding the calculation task to the construction module when the occupation amount of hardware equipment required for completing the calculation task is smaller than or equal to the current idle resource amount of the local server and the calculation amount which can be processed by the FPGA accelerator in parallel;
and the second execution unit is used for forwarding the calculation task to the off-site collaborative scheduling module when the occupation amount of the hardware equipment required for completing the calculation task is larger than the current idle resource amount of the local server and the calculation amount which can be processed by the FPGA accelerator in parallel.
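The dispatch rule implemented by the judging unit and the two execution units can be sketched as follows. This is a minimal illustration in Python, not from the patent itself: the function and parameter names are hypothetical, and the patent's phrase "idle resource amount ... and the calculation amount the FPGA can process in parallel" is interpreted here as the combined capacity of the two.

```python
def dispatch(task_occupation: float, idle_server: float, fpga_parallel: float) -> str:
    """Decide where a computing task goes, per the judging unit's rule.

    task_occupation -- estimated hardware occupation needed by the task
    idle_server     -- current idle resource amount of the local server
    fpga_parallel   -- calculation amount the FPGA accelerator can process in parallel
    """
    # First execution unit: required occupation is within local capacity,
    # so the task is forwarded to the construction module.
    if task_occupation <= idle_server + fpga_parallel:
        return "construction_module"
    # Second execution unit: local capacity is exceeded, so the task is
    # forwarded to the remote collaborative scheduling module.
    return "remote_co_scheduling_module"
```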
The remote collaborative scheduling module comprises:
the remote hardware resource receiving unit is used for receiving the resource condition of the remote server;
the remote collaborative acceleration screening unit is used for determining a target remote server according to the resource condition of the remote server;
the local hardware resource sending unit is used for sending the resource condition of the local server to the remote server;
and the collaborative acceleration task transmission unit is used for sending the calculation task to the target remote server and receiving the calculation task sent by the remote server.
The off-site collaborative acceleration screening unit comprises:
an evaluation subunit, configured to evaluate the bearing capacity of each remote server according to its resource condition;
a sequencing subunit, configured to sort the bearing capacities of the remote servers in descending order;
a comparing subunit, configured to compare the maximum bearing capacity with a hardware device occupation amount required for completing the computing task;
and the determining subunit is used for taking the remote server corresponding to the maximum bearing capacity as the target remote server of the collaborative acceleration request when the maximum bearing capacity is larger than the occupation amount of the hardware equipment required for completing the calculation task.
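The sequencing, comparing, and determining subunits together amount to picking the strongest remote server and accepting it only if it can actually bear the task. A minimal sketch (function name and data shape are assumptions, not from the patent):

```python
def select_target_server(remote_capacities: dict, required: float):
    """Pick the target remote server for a collaborative acceleration request.

    remote_capacities -- mapping of remote server name to its evaluated bearing capacity
    required          -- hardware occupation needed to complete the computing task
    Returns the chosen server name, or None when even the strongest remote
    server cannot bear the task (collaborative acceleration is refused).
    """
    # Sequencing subunit: sort bearing capacities in descending order.
    ranked = sorted(remote_capacities.items(), key=lambda kv: kv[1], reverse=True)
    if not ranked:
        return None
    best_name, best_capacity = ranked[0]
    # Comparing + determining subunits: accept only if the maximum
    # bearing capacity exceeds the required occupation.
    return best_name if best_capacity > required else None
```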
The server resource condition comprises available idle resources of the server, an energy consumption sign of the server and a fault sign of the FPGA accelerator.
The server further includes: a data verification module for verifying and converting the data in the power grid data file.
The data verification module comprises:
the data verification unit is used for verifying the data in the power grid data file and determining whether the file contains data errors, missing data, or abnormal data;
the feedback error reporting unit is used for reporting errors when the power grid data file has data errors;
the data supplementing unit is used for supplementing data to the missing data in the power grid data file when the power grid data file has data missing;
the data conversion unit is used for carrying out format conversion on abnormal data in the power grid data file when the power grid data file has data abnormality;
and the data integration unit is used for integrating the correct data, the supplemented data and the format-converted data in the power grid data file into a new power grid data file.
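The five units above form a verify / report / supplement / convert / integrate pipeline. The following Python sketch illustrates that flow under stated assumptions: the patent does not define concrete data encodings, so here `None` stands for missing data, a string stands for abnormally formatted data, a float is correct data, and the fill value for missing entries is a hypothetical placeholder.

```python
def verify_grid_file(records):
    """Run the data verification pipeline over a list of (name, value) records.

    Returns (integrated_records, error_names): the new power grid data file
    produced by the data integration unit, plus the names flagged by the
    feedback error reporting unit.
    """
    integrated, errors = [], []
    for name, value in records:
        if value is None:
            # Data supplementing unit: fill missing data (placeholder fill).
            integrated.append((name, 0.0))
        elif isinstance(value, str):
            try:
                # Data conversion unit: format-convert abnormal data.
                integrated.append((name, float(value)))
            except ValueError:
                # Feedback error reporting unit: unrecoverable data error.
                errors.append(name)
        elif isinstance(value, float):
            # Already-correct data passes straight to integration.
            integrated.append((name, value))
        else:
            errors.append(name)
    return integrated, errors
```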
The technical scheme adopted for solving the technical problems is as follows: the off-site parallel acceleration method based on the FPGA accelerator is applied to the off-site parallel acceleration device based on the FPGA accelerator and comprises the following steps of:
receiving a calculation task, wherein the calculation task comprises a power grid data file;
acquiring a local server resource condition;
when the local server resources can accommodate the data volume of the power grid data file, allocating the computing task to the construction module, the local server and the FPGA accelerator jointly completing the computing task;
when the local server resources cannot accommodate the data volume of the power grid data file, allocating the computing task to the remote collaborative scheduling module and sending a collaborative acceleration request to a remote server.
Before sending the collaborative acceleration request to the remote server, the method further comprises the following steps:
evaluating the bearing capacity of each remote server according to its resource condition;
sorting the bearing capacities of the remote servers in descending order;
comparing the maximum carrying capacity with the occupation amount of hardware equipment required for completing the calculation task;
and when the maximum bearing capacity is larger than the occupation amount of the hardware equipment required by completing the calculation task, using the remote server corresponding to the maximum bearing capacity as the target remote server of the collaborative acceleration request.
The technical scheme adopted for solving the technical problems is as follows: the remote parallel acceleration system based on the FPGA accelerator comprises a plurality of remote parallel acceleration devices based on the FPGA accelerator, and a plurality of servers of the remote parallel acceleration devices based on the FPGA accelerator are in real-time communication connection in a wireless communication mode.
Advantageous effects
Due to the adoption of the above technical scheme, compared with the prior art the invention has the following advantages and positive effects. The invention establishes an FPGA collaborative computing platform for power system calculation algorithms by exploiting the parallel pipeline computing characteristics of the FPGA; on this architecture it further deploys a remote collaborative acceleration mobilization module, networks multiple server-side FPGA collaborative computing platforms across regions, and realizes automatic distribution of computing tasks and scheduling of computing resources between the server and the FPGA accelerator. As a chip with high security, reconfigurability, and low power consumption, the FPGA can better meet the application requirements of various power system algorithms, can flexibly update algorithm functions according to the characteristics of each power grid, and provides a low-cost, low-power, high-efficiency collaborative computing platform for the power system.
Drawings
FIG. 1 is an exemplary diagram of a Field Programmable Gate Array (FPGA) -based off-site parallel acceleration device performing a computational task in an embodiment of the present invention;
FIG. 2 is a diagram of an operational framework of a server in an embodiment of the present invention;
FIG. 3 is a block diagram of a data verification module in an embodiment of the invention;
FIG. 4 is a block diagram of a remote co-scheduling module in an embodiment of the present invention;
FIG. 5 is a general flow chart of an embodiment of the present invention in performing a computing task.
Description of the embodiments
The invention will be further illustrated with reference to specific examples. It is to be understood that these examples are illustrative of the present invention and are not intended to limit the scope of the present invention. Further, it is understood that various changes and modifications may be made by those skilled in the art after reading the teachings of the present invention, and such equivalents are intended to fall within the scope of the claims appended hereto.
The embodiment of the invention relates to a remote parallel acceleration system based on an FPGA accelerator, which builds a "server + FPGA" remote parallel acceleration device in each of a plurality of regions, thereby constructing a collaborative computing architecture between servers and FPGA accelerators.
The off-site parallel acceleration device based on the FPGA accelerator in the embodiment comprises a server and the FPGA accelerator. Wherein, the server includes: the receiving module is used for receiving a calculation task, wherein the calculation task comprises a power grid data file; and the construction module is used for establishing an equation and a matrix according to the power grid data file.
Because of the reprogrammable nature of the FPGA accelerator, multiple power system algorithms can be built on it at the same time; the accelerator judges according to the computing task carried in the input data and loads the corresponding algorithm module. Taking power flow calculation as an example, fig. 1 describes the specific flow of executing a computing task on the software/hardware collaborative acceleration architecture in the local server system. In this framework, the server is responsible for processing the input power grid data file, extracting the data matrices required by power flow calculation (such as grid structural parameters, active power, reactive power, and load data), establishing the nonlinear power flow equation set, and building the sparse matrices required for iterative calculation using sparse matrix compression. Scheduling and issuing of computing tasks is then realized through the two-level "server + FPGA" architecture, with OpenCL and the FPGA accelerator's chip support tools controlling chip resources, including reading and writing input/output data and allocating on-chip resources. Tasks involving a large number of sparse matrix operations, such as node injection power calculation and Jacobian matrix calculation, are distributed to the FPGA accelerator, while logic judgments and simple calculations, such as node unbalanced active/reactive power calculation, run directly on the server system.
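The patent names "sparse matrix compression technology" but does not specify a format. As an illustration only, the following sketch uses CSR (compressed sparse row), a common choice for Jacobian-style matrices, together with the sparse matrix-vector product that a kernel of this kind would repeatedly perform; none of this code is from the patent.

```python
def to_csr(dense):
    """Compress a dense matrix into CSR form: (values, column indices, row pointers)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))  # each entry marks where the next row starts
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """Multiply a CSR matrix by vector x -- the kind of kernel offloaded to the FPGA."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y
```

Storing only nonzeros keeps both the PCIe transfer and the on-chip memory footprint proportional to the number of grid branches rather than the square of the node count.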
As shown in fig. 2, the server of this embodiment also implements data acquisition, data verification, resource allocation scheduling, and remote collaborative acceleration mobilization. These four functions are deployed on the servers of each region and provide power grid data acquisition, data format conversion, data verification, resource allocation for the "server + FPGA" software/hardware collaborative architecture, remote collaborative parallel acceleration, and so on.
The application of this embodiment has no hard requirement on the network. Data acquisition can obtain the power grid data file through a cloud platform or through manual input, and can be integrated into the receiving module: after a computing task is received, the power grid data file is automatically obtained through the cloud platform or an input interface. Once the file is obtained, the data in input files of txt, rdf, or excel format can be automatically extracted and a corresponding series of data matrices generated.
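The patent does not define the file layout, so as a hypothetical illustration the sketch below extracts data matrices from a txt-style file with one record per line (`bus_id, active_power, reactive_power`); the column names and layout are assumptions.

```python
def extract_grid_data(text: str):
    """Extract named data columns from a txt-style grid data file.

    Assumes one record per line: bus_id, active_power, reactive_power
    (comma- or whitespace-separated). Returns the column lists that the
    receiving module would hand to the construction module.
    """
    bus, p, q = [], [], []
    for line in text.strip().splitlines():
        fields = line.replace(",", " ").split()  # tolerate either separator
        bus.append(int(fields[0]))
        p.append(float(fields[1]))
        q.append(float(fields[2]))
    return {"bus": bus, "active": p, "reactive": q}
```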
Because the power grid environment is complex, problems such as missing and abnormal data often occur while power data are collected, transmitted, and stored, and such abnormal data potentially affect the operation, scheduling, and analysis of the distribution network. In this embodiment a data verification module is therefore provided in the server, which can verify and convert the data in the power grid data file. As shown in fig. 3, the data verification module in this embodiment includes: a data verification unit for verifying the data in the power grid data file and determining whether it contains data errors, missing data, or abnormal data; a feedback error reporting unit for reporting an error when the file contains data errors; a data supplementing unit for filling in missing data; a data conversion unit for format-converting abnormal data; and a data integration unit for integrating the correct, supplemented, and format-converted data into a new power grid data file. These units can fill, convert, or report errors according to the necessity of the missing data and the target computing task, and finally integrate the result into a calculation data file with correct data types.
The resource allocation scheduling module in this embodiment allocates the computing task according to the power grid data file and the local server resource condition: when the local server resources can accommodate the data volume of the power grid data file, the computing task is allocated to the construction module; when they cannot, it is allocated to the remote collaborative scheduling module. The resource allocation scheduling module first judges whether the local server resources are excessively occupied and whether executing the calculation would affect overall operating efficiency, and thereby plans whether to apply to a remote server for collaborative acceleration. If the local server resources are occupied, many computing tasks are already running, or the local FPGA accelerator has failed, the computing task is allocated to the remote collaborative scheduling module, which judges whether a collaborative acceleration request can be sent to a remote server.
In this implementation, "server + FPGA" remote parallel acceleration devices based on FPGA accelerators are built in multiple regions, and the remote parallel acceleration scheme further networks the server-side FPGA collaborative computing platforms together. On this basis, the remote collaborative acceleration mobilization module carried on the server side of each region can flexibly distribute computing tasks to the local or a remote server system according to factors such as the local real-time hardware resource margin, the amount of idle remote resources, and the remote collaborative acceleration priority, realizing remote collaborative parallel calculation.
As shown in fig. 4, in this embodiment, on the basis of the local "server + FPGA" heterogeneous parallel acceleration device, a calling policy for the networked server-side FPGA collaborative computing platforms is constructed, realizing automatic allocation of computing tasks and scheduling of computing resources between the server and the FPGA acceleration card. The remote collaborative acceleration mobilization module in this embodiment includes: a remote hardware resource receiving unit for receiving the resource conditions of the remote servers; a remote collaborative acceleration screening unit for determining a target remote server according to those resource conditions; a local hardware resource sending unit for sending the resource condition of the local server to the remote servers; and a collaborative acceleration task transmission unit for sending the computing task to the target remote server and receiving computing tasks sent by remote servers. The remote collaborative acceleration screening unit includes: an evaluation subunit for evaluating the bearing capacity of each remote server according to its resource condition; a sequencing subunit for sorting the bearing capacities in descending order; a comparing subunit for comparing the maximum bearing capacity with the hardware occupation required to complete the computing task; and a determining subunit for taking the remote server with the maximum bearing capacity as the target of the collaborative acceleration request when that capacity is larger than the required occupation.
Thus, in this embodiment the regions monitor one another through a high-speed enterprise network and publish the idle hardware resources of their servers. When a region determines that collaborative acceleration from other regions is required, the bearing capacity of each server is estimated from data such as the total amount of resources, the current resource margin, and server performance in each region, and collaborative parallel acceleration is applied for preferentially at the server with the strongest bearing capacity. By arranging the remote collaborative acceleration mobilization module, the consumption of CPU system resources during platform operation can be reduced in environments with a complex grid and a large calculation load, the collaboration efficiency of the provincial/municipal two-level power grid system is improved, the overall computing efficiency of the platform rises, and a more real-time, green, and low-carbon solution is provided for the power grid analysis system. Meanwhile, when a municipal management and control system fails, a highly robust resource calculation strategy is available, and the very high computing efficiency of the FPGA accelerator helps release remote resources in time and reduce the computing pressure on the server side.
The overall flow of hardware resource allocation, the logic followed by scheduling, and entry into remote collaborative acceleration scheduling when the local server executes multiple computing tasks is shown in fig. 5.
Computing task A represents operation mode 1 of the local server, in which the amount of hardware resources is sufficient. In this mode, the server obtains the currently available idle resources of the local server and preliminarily estimates the hardware occupation n needed to process task A from the data volume of the power grid data in the task. If the current free resources of the local server are larger than the estimated occupation n, the calculation modules that the FPGA accelerator must drive are confirmed according to the computing task elements in the data, and the data are arranged and input in sequence to the register end on the hardware side according to those modules. Before task A is issued to the accelerator through the PCIe port, an event lock must be created to prevent other computing tasks from calling the occupied hardware resources until task A releases the lock.
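The event-lock behavior described above can be sketched with a standard mutual-exclusion primitive. This is an illustrative Python model, not the patent's implementation; the class name and the non-blocking-refusal policy are assumptions.

```python
import threading

class AcceleratorLock:
    """Event lock guarding the FPGA hardware resources during task issue.

    Once computing task A acquires the lock before issuing over PCIe,
    other tasks cannot call the occupied hardware until A releases it.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self.owner = None

    def acquire(self, task_id: str) -> bool:
        # Non-blocking: a competing task is refused rather than queued here.
        if self._lock.acquire(blocking=False):
            self.owner = task_id
            return True
        return False

    def release(self, task_id: str) -> None:
        # Only the owning task may release the lock.
        if self.owner == task_id:
            self.owner = None
            self._lock.release()
```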
Computing task B is an example of operation mode 2. When task B is executed, the local server must first confirm whether its current load exceeds preset overload thresholds, including system energy consumption flag parameters such as CPU and memory occupancy, to ensure that the task can currently be executed effectively and to prevent the system from stalling because occupancy is too high. If server capacity is sufficient and the remaining FPGA accelerators report no faults, execution of task B is confirmed directly, and the remaining available FPGA accelerators on the server side are allocated to execute it according to the required hardware calculation modules. If the server judges that its load is high, so that the idle resources of the local server are smaller than the occupation n required to execute task B, it judges whether collaborative acceleration from a remote server can be applied for. The remote collaborative acceleration mobilization module sorts the remote resource information it has currently received, evaluates the bearing capacity of each remote server according to its amount of idle resources, whether it can execute the calculation efficiently (idle resources larger than the required amount n), server load, performance indices, and so on, and sorts the servers by bearing capacity.
If the remote server with the largest bearing capacity can satisfy the occupation n required by task B, a remote collaborative acceleration request is sent to it; if it cannot, the conditions for collaborative acceleration are not met and execution returns to the local server. If the local server has no free equipment available at that moment, task B enters a queuing/waiting stage; otherwise the remaining available resources are allocated and task B is executed. The calculation results can be saved as local data files or uploaded to a cloud platform.
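The operation mode 2 decision chain described above (overload check, remote fallback, then queuing) can be summarized in a short Python sketch; thresholds, parameter names, and return labels are illustrative assumptions, not values from the patent.

```python
def run_mode_2(required: float, local_idle: float,
               cpu_load: float, overload_threshold: float,
               remote_capacities: list) -> str:
    """Decide the fate of computing task B in operation mode 2.

    Returns "run_local", "offload_remote", or "queue".
    """
    # Overload check: energy-consumption flag (CPU load) or too little idle resource.
    overloaded = cpu_load > overload_threshold or local_idle < required
    if not overloaded:
        return "run_local"
    # Apply for remote collaborative acceleration: the strongest remote
    # server must be able to bear the required occupation n.
    if remote_capacities and max(remote_capacities) > required:
        return "offload_remote"
    # Collaborative acceleration refused: fall back to the local server,
    # queuing only when no free equipment remains at all.
    return "run_local" if local_idle > 0 else "queue"
```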
It is easy to see that the invention establishes an FPGA collaborative computing platform for power system calculation algorithms by exploiting the parallel pipeline computing characteristics of the FPGA, further deploys a remote collaborative acceleration mobilization module on this architecture, networks multiple server-side FPGA collaborative computing platforms across regions, and realizes automatic distribution of computing tasks and scheduling of computing resources between the server and the FPGA accelerator. The invention provides a low-cost, low-power, high-efficiency collaborative computing platform for the power system, promotes the digital transformation of the power grid, and helps ensure its safe and stable operation.

Claims (10)

1. A remote parallel acceleration device based on an FPGA accelerator, comprising a server and an FPGA accelerator, the server comprising: a receiving module for receiving a computing task, wherein the computing task comprises a power grid data file; and a construction module for establishing equations and matrices according to the power grid data file; the FPGA accelerator accelerating the calculation of the equations and matrices by exploiting its parallel operation characteristics; characterized in that the server further comprises a resource allocation scheduling module and a remote collaborative scheduling module, wherein the resource allocation scheduling module is configured to allocate the computing task according to the power grid data file and the local server resource condition, allocating the computing task to the construction module when the local server resources can accommodate the data volume of the power grid data file, and allocating the computing task to the remote collaborative scheduling module when they cannot; and the remote collaborative scheduling module is configured to send a collaborative acceleration request to a remote server.
2. The remote parallel acceleration apparatus based on an FPGA accelerator according to claim 1, wherein the resource allocation scheduling module comprises:
a hardware occupancy estimation unit for estimating, from the data volume of the power grid data file, the hardware occupancy required to complete the computing task;
a resource acquisition unit for acquiring the current idle resources of the local server and the computation volume that the FPGA accelerator can process in parallel;
a judging unit for judging whether the hardware occupancy required to complete the computing task exceeds the current idle resources of the local server plus the computation volume that the FPGA accelerator can process in parallel;
a first execution unit for forwarding the computing task to the construction module when the required hardware occupancy is less than or equal to the current idle resources of the local server plus the computation volume that the FPGA accelerator can process in parallel;
and a second execution unit for forwarding the computing task to the remote collaborative scheduling module when the required hardware occupancy exceeds the current idle resources of the local server plus the computation volume that the FPGA accelerator can process in parallel.
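The dispatch rule of claim 2 can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the class and function names, and the linear size-based occupancy estimate (`units_per_mb`), are assumptions introduced for demonstration.

```python
from dataclasses import dataclass

@dataclass
class LocalResources:
    free_server_units: float       # current idle resource amount of the local server
    fpga_parallel_capacity: float  # computation volume the FPGA accelerator can process in parallel

def estimate_occupancy(grid_file_bytes: int, units_per_mb: float = 0.5) -> float:
    """Estimate the hardware occupancy needed for the task from the data volume
    of the power grid data file (hypothetical linear model)."""
    return (grid_file_bytes / 1_000_000) * units_per_mb

def dispatch(grid_file_bytes: int, res: LocalResources) -> str:
    """Judging unit: compare required occupancy with local server + FPGA capacity.
    First execution unit -> 'local' (construction module);
    second execution unit -> 'remote' (remote collaborative scheduling module)."""
    needed = estimate_occupancy(grid_file_bytes)
    if needed <= res.free_server_units + res.fpga_parallel_capacity:
        return "local"
    return "remote"

print(dispatch(2_000_000, LocalResources(0.6, 0.5)))  # -> local  (1.0 <= 1.1)
print(dispatch(4_000_000, LocalResources(0.6, 0.5)))  # -> remote (2.0 > 1.1)
```

Note that the claim treats the server's idle resources and the FPGA's parallel capacity as a combined budget; the sketch mirrors that by summing the two before the comparison.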
3. The remote parallel acceleration apparatus based on an FPGA accelerator according to claim 1, wherein the remote collaborative scheduling module comprises:
a remote hardware resource receiving unit for receiving the resource conditions of remote servers;
a remote collaborative acceleration screening unit for determining a target remote server according to the resource conditions of the remote servers;
a local hardware resource sending unit for sending the resource condition of the local server to the remote servers;
and a collaborative acceleration task transmission unit for sending the computing task to the target remote server and receiving computing tasks sent by remote servers.
4. The remote parallel acceleration apparatus based on an FPGA accelerator according to claim 3, wherein the remote collaborative acceleration screening unit comprises:
an evaluation subunit for evaluating the bearing capacity of each remote server according to its resource condition;
a sorting subunit for sorting the bearing capacities of the remote servers in descending order;
a comparison subunit for comparing the maximum bearing capacity with the hardware occupancy required to complete the computing task;
and a determination subunit for taking the remote server with the maximum bearing capacity as the target remote server of the collaborative acceleration request when the maximum bearing capacity exceeds the hardware occupancy required to complete the computing task.
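The screening subunits of claim 4 amount to a sort-and-compare step, sketched below. The server records and the scalar "bearing capacity" metric are illustrative assumptions; the patent does not specify how capacity is quantified.

```python
def pick_target_server(remote_servers: dict, needed_occupancy: float):
    """Evaluate/sort/compare/determine, per claim 4:
    sort remote servers by bearing capacity in descending order and return
    the one with the maximum capacity if it exceeds the required hardware
    occupancy; otherwise return None (no server can take the task)."""
    if not remote_servers:
        return None
    ranked = sorted(remote_servers.items(), key=lambda kv: kv[1], reverse=True)
    best_name, best_capacity = ranked[0]
    return best_name if best_capacity > needed_occupancy else None

servers = {"region-A": 3.0, "region-B": 5.5, "region-C": 1.2}
print(pick_target_server(servers, 4.0))  # -> region-B
print(pick_target_server(servers, 6.0))  # -> None
```

Since only the maximum matters for the final comparison, a full descending sort (as claimed) could be replaced by a single `max()` pass; the sketch keeps the sort to match the sorting subunit.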
5. The remote parallel acceleration apparatus based on an FPGA accelerator according to claim 1 or 3, wherein the server resource condition comprises the available idle resources of the server, the energy consumption flag of the server, and the fault flag of the FPGA accelerator.
6. The remote parallel acceleration apparatus based on an FPGA accelerator according to claim 1, wherein the server further comprises: a data verification module for verifying and converting the data in the power grid data file.
7. The remote parallel acceleration apparatus based on an FPGA accelerator according to claim 6, wherein the data verification module comprises:
a data verification unit for verifying the data in the power grid data file and determining whether the file contains data errors, missing data, or data anomalies;
a feedback error-reporting unit for reporting an error when the power grid data file contains data errors;
a data supplement unit for supplementing the missing data when the power grid data file contains missing data;
a data conversion unit for format-converting the anomalous data when the power grid data file contains data anomalies;
and a data integration unit for integrating the correct data, the supplemented data, and the format-converted data into a new power grid data file.
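The verify/report/supplement/convert/integrate flow of claim 7 can be sketched in one pass over the records. Everything concrete here is an assumption for illustration: the string-record layout, the "fill missing with the last good value" supplement policy, and treating a decimal-comma value as the recoverable "anomaly" case.

```python
def verify_and_integrate(records):
    """Walk the grid data records, supplement missing values, format-convert
    recoverable anomalies, raise on unrecoverable data errors, and integrate
    the results into a new clean list (the 'new power grid data file')."""
    cleaned = []
    last_good = 0.0
    for raw in records:
        if raw is None or raw == "":          # missing data -> data supplement unit
            cleaned.append(last_good)
            continue
        try:
            value = float(raw)
        except ValueError:
            try:                              # data anomaly -> data conversion unit
                value = float(raw.replace(",", "."))
            except ValueError:                # data error -> feedback error-reporting unit
                raise ValueError(f"grid data error in record: {raw!r}")
        cleaned.append(value)
        last_good = value
    return cleaned

print(verify_and_integrate(["220.1", "", "219,8"]))  # -> [220.1, 220.1, 219.8]
```

A real implementation would distinguish the three cases with domain-specific rules (plausibility ranges, schema checks); the sketch only shows how the five claimed units compose into a single pipeline.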
8. A remote parallel acceleration method based on an FPGA accelerator, characterized by applying the remote parallel acceleration apparatus based on an FPGA accelerator according to any one of claims 1-7 and comprising the following steps:
receiving a computing task, the computing task comprising a power grid data file;
acquiring the local server resource condition;
when the local server resources can accommodate the data volume of the power grid data file, allocating the computing task to the construction module, the local server and the FPGA accelerator jointly completing the computing task;
and when the local server resources cannot accommodate the data volume of the power grid data file, allocating the computing task to the remote collaborative scheduling module and sending a collaborative acceleration request to a remote server.
9. The remote parallel acceleration method based on an FPGA accelerator according to claim 8, characterized in that, before sending the collaborative acceleration request to the remote server, the method further comprises:
evaluating the bearing capacity of each remote server according to its resource condition;
sorting the bearing capacities of the remote servers in descending order;
comparing the maximum bearing capacity with the hardware occupancy required to complete the computing task;
and when the maximum bearing capacity exceeds the hardware occupancy required to complete the computing task, taking the remote server with the maximum bearing capacity as the target remote server of the collaborative acceleration request.
10. A remote parallel acceleration system based on an FPGA accelerator, characterized by comprising a plurality of remote parallel acceleration apparatuses based on an FPGA accelerator according to any one of claims 1-7, wherein the servers of the apparatuses maintain real-time communication connections with one another via wireless communication.
CN202410159842.5A 2024-02-04 2024-02-04 Different-place parallel acceleration device, method and system based on FPGA (field programmable Gate array) accelerator Pending CN117707792A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410159842.5A CN117707792A (en) 2024-02-04 2024-02-04 Different-place parallel acceleration device, method and system based on FPGA (field programmable Gate array) accelerator


Publications (1)

Publication Number Publication Date
CN117707792A true CN117707792A (en) 2024-03-15

Family

ID=90146496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410159842.5A Pending CN117707792A (en) 2024-02-04 2024-02-04 Different-place parallel acceleration device, method and system based on FPGA (field programmable Gate array) accelerator

Country Status (1)

Country Link
CN (1) CN117707792A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105912767A (en) * 2016-04-07 2016-08-31 国家电网公司 BS architecture based multi-level grid distributed collaborative joint calculation method
US20200326992A1 (en) * 2019-04-12 2020-10-15 Huazhong University Of Science And Technology Acceleration method for fpga-based distributed stream processing system
CN113240381A (en) * 2021-04-14 2021-08-10 广东电网有限责任公司 Micro-grid power auditing system
CN113821318A (en) * 2021-08-19 2021-12-21 北京邮电大学 Internet of things cross-domain subtask combined collaborative computing method and system
CN115421926A (en) * 2022-09-30 2022-12-02 阿里巴巴(中国)有限公司 Task scheduling method, distributed system, electronic device and storage medium
CN115576675A (en) * 2022-11-04 2023-01-06 中国电子科技集团公司第十研究所 FPGA (field programmable Gate array) acceleration method for monitoring and controlling network edge cloud computing
CN115658323A (en) * 2022-11-15 2023-01-31 国网上海能源互联网研究院有限公司 FPGA load flow calculation acceleration architecture and method based on software and hardware cooperation
CN116932086A (en) * 2023-07-27 2023-10-24 深圳信息职业技术学院 Mobile edge computing and unloading method and system based on Harris eagle algorithm
CN117354260A (en) * 2023-09-27 2024-01-05 国家电网公司华中分部 Electromagnetic transient cross-domain distributed parallel computing scheduling method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张松树 et al.: "Functional design and implementation of a collaborative computing system for power system operation modes", Power System Technology (电网技术), no. 10, 5 October 2012 (2012-10-05) *

Similar Documents

Publication Publication Date Title
CN111861793B (en) Distribution and utilization electric service distribution method and device based on cloud edge cooperative computing architecture
CN103491190A (en) Processing method for large-scale real-time concurrent charger monitoring data
CN102110021B (en) Automatic optimization method for cloud computing
CN109547240B (en) Intelligent device based on edge calculation and access and device analysis method
CN104657150A (en) Automatic operation and maintenance method under cluster environment
CN103957280A (en) Connection allocation and scheduling method of sensor network in Internet of things
CN112134754A (en) Pressure testing method and device, network equipment and storage medium
CN111324460B (en) Power monitoring control system and method based on cloud computing platform
CN105516317B (en) Efficient acquisition method for power consumption information with multi-level load sharing
CN101715252A (en) Cluster short message center and method for shunting disaster recovery therefor
CN114899885A (en) Power scheduling method, system and storage medium
CN109767002A (en) A kind of neural network accelerated method based on muti-piece FPGA collaboration processing
CN117707792A (en) Different-place parallel acceleration device, method and system based on FPGA (field programmable Gate array) accelerator
CN111130116B (en) Scheduling operation power flow checking method based on key topology change item identification
CN106127396A (en) A kind of method of intelligent grid medium cloud scheduler task
CN108170490A (en) A kind of IMA system datas loading framework and loading method
CN116346823A (en) Big data heterogeneous task scheduling method and system based on message queue
CN104283943A (en) Communication optimizing method for cluster server
CN109213105A (en) A kind of reconfigurable device realizes restructural method and dcs
CN102231126A (en) Method and system for implementing inter-core backup in multi-core processor
CN104507150A (en) Method for clustering virtual resources in baseband pooling
CN1960364A (en) Apparatus, system, and method for managing response latency
CN112311891A (en) Online intelligent patrol cloud-edge coordination system and method for transformer substation
CN112217286A (en) Electronic network command issuing system and method based on artificial intelligence
CN113205241A (en) Monitoring data real-time processing method, non-transient readable recording medium and data processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination