CN111476461A - Rapid calculation method for setting parameters of large power grid - Google Patents
- Publication number: CN111476461A
- Application number: CN202010186620.4A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
Abstract
The invention provides a rapid calculation method for the setting parameters of a large power grid, comprising the following steps. S1: when the power grid model undergoes a preset change, store the changed part of the model in an unprocessed branch data set. S2: if all branches in the branch data set have been processed, go to S7; otherwise go to S3. S3: determine the type of each branch. S4: insert each unprocessed branch and its type into an append queue. S5: the CPU calls the CUDA kernel function corresponding to the branch type and corrects the whole-network matrix model. S6: read the calculation results of the GPU thread blocks back to host memory and return to S2. S7: calculate the setting parameters for the corresponding power grid model from the corrected whole-network matrix model. Under a CPU-GPU cooperative architecture, the method accelerates the generation and correction of the matrices of a large power grid and improves the capacity for batch calculation of setting parameters, thereby comprehensively improving the efficiency of setting calculation for large power grids.
Description
Technical Field
The invention relates to the technical field of power system relay protection, in particular to a method for quickly calculating setting parameters of a large power grid.
Background
With the continuous construction and improvement of relay-protection cloud platforms, a setting-calculation data model suited to multi-level dispatching has emerged. Power grid models keep growing, and the calculation efficiency for large grids has gradually become both the focus and the difficulty of building a setting-calculation cloud platform. The core of setting calculation lies in computing the setting parameters (branch coefficients, maximum currents, and branch equivalent impedances), and this computation depends on generating and correcting the node impedance matrix. If branch maintenance is considered in the setting-parameter calculation, the node impedance matrix must be corrected; combined with the inversion of the node admittance matrix and the computation of the node voltage and branch current matrices, the computational burden is enormous.
At present, when the matrix is formed by the branch-addition method, a 1x1 matrix is first formed from a grounding branch, and the node impedance matrix of the existing network is then modified correspondingly each time a branch is added. After all branches of the network have been added, the node impedance matrix of the whole network is obtained. Combined with the pre-fault node voltages (in approximate engineering calculation, the pre-fault bus voltages are taken equal to the rated values, i.e. the per-unit voltage of every node is 1), the sequence voltages and currents at the fault point, the branch voltages, the branch currents, and so on can be obtained, and finally the setting-parameter results. Whenever branches are added, taken out for maintenance, or modified, the corresponding matrices must be corrected before the final setting parameters are recalculated, and the dimension of each matrix depends on the number of grid nodes.
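The branch-addition process above can be illustrated with a minimal NumPy sketch (our own, not from the patent): the impedance matrix starts as a 1x1 matrix formed from a grounding branch, and each appended tree branch enlarges it by one order. All function and variable names, and the tiny example network, are assumptions made for illustration.

```python
import numpy as np

def start_from_grounding_branch(z01):
    """Form the initial 1x1 node impedance matrix from a grounding branch."""
    return np.array([[z01]])

def append_tree_branch(Z, i, z_ij):
    """Append an ungrounded tree branch from existing node i to a new node:
    the new row/column copies row/column i of Z, and the new diagonal
    element is Z[i, i] + z_ij, so the matrix grows by one order."""
    n = Z.shape[0]
    Z_new = np.zeros((n + 1, n + 1))
    Z_new[:n, :n] = Z
    Z_new[n, :n] = Z[i, :]
    Z_new[:n, n] = Z[:, i]
    Z_new[n, n] = Z[i, i] + z_ij
    return Z_new

# Tiny example: node 1 grounded through 0.1, node 2 connected to node 1 through 0.2.
Z = start_from_grounding_branch(0.1)
Z = append_tree_branch(Z, 0, 0.2)
print(Z)  # the resulting 2x2 matrix is [[0.1, 0.1], [0.1, 0.3]]
```

Each further tree branch repeats the same enlargement, which is why the final matrix dimension equals the number of grid nodes.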
Disclosure of Invention
The present invention is directed to solving at least one of the above problems.
Therefore, the invention aims to provide a method for rapidly calculating the setting parameters of the large power grid, which is based on a cooperative framework of a CPU and a GPU, accelerates the generation and the correction of a matrix under the large power grid, and improves the capability of batch calculation of the setting parameters, thereby comprehensively improving the calculation efficiency of the setting calculation under the large power grid.
To this end, an embodiment of the present invention provides a rapid calculation method for the setting parameters of a large power grid, comprising the following steps. S1: in a CPU thread, judge whether the power grid model has changed; when the model undergoes a preset change, store the changed part in an unprocessed branch data set, where the preset change comprises one or more of: generation, addition, modification, deletion, and a change of the grid operation mode. S2: judge whether all branches in the branch data set have been processed; if so, jump to step S7, otherwise jump to step S3. S3: take each unprocessed branch in the branch data set in turn and judge its mutual-inductance attribute, branch attribute, and grounding attribute to obtain its type. S4: insert the unprocessed branch and its type into an append queue. S5: the CPU monitors the append queue and, according to the branch type, launches the corresponding CUDA kernel function to correct the whole-network matrix model. S6: read the calculation results of the GPU thread blocks back to host memory and jump to step S2. S7: calculate the setting parameters for the corresponding power grid model from the corrected whole-network matrix model.
In addition, the method for rapidly calculating the setting parameter of the large power grid according to the embodiment of the invention may further have the following additional technical features:
in some examples, the types of the branches include: the tree branches are not grounded through mutual inductance, the tree branches are grounded through mutual inductance, the grounding chain branches are not grounded through mutual inductance, the tree branches are grounded through mutual inductance, and the chain branches are mutual inductance.
In some examples, before step S6, the method further comprises: the GPU thread blocks perform branch-append calculation on the original whole-network matrix.
In some examples, after the GPU thread blocks perform branch-append calculation on the original whole-network matrix, the method further includes: implementing the numerical-computation part executed by the GPU thread blocks as CUDA kernel functions (parallel functions that run on the GPU), and providing six kernels (grounded tree-branch append, grounded chain-branch append, ungrounded tree-branch append, ungrounded chain-branch append, mutual-inductance tree-branch append, and mutual-inductance chain-branch append) for the CPU dispatch thread to call.
In some examples, the branch-append calculation performed by the GPU thread blocks on the original whole-network matrix includes the following cases. Non-mutual-inductance ungrounded tree branch: add a new node j connected to node i through impedance z_ij; the matrix gains one dimension. Non-mutual-inductance ungrounded chain branch: add an ungrounded chain branch of impedance z_ij between nodes i and j; the matrix dimension is unchanged and the whole-network matrix model is corrected. Non-mutual-inductance grounded tree branch: add a grounded tree branch j of impedance z_0j at node i; all off-diagonal elements of the j-th row and j-th column of the matrix are 0 and the diagonal element is z_0j. Non-mutual-inductance grounded chain branch: add a grounded chain branch z_0i at node i of the original network; the number of nodes is unchanged and the matrix elements are corrected. Mutual-inductance tree branch: add a tree branch i-j with mutual inductance at node i of the original network; the matrix block corresponding to the original nodes is unchanged and the column elements for the new node j are solved. Mutual-inductance chain branch: when chain branch i-j is added to the original network and has a mutual-inductance relation with the p-q branch group of the original network, the order of the matrix is unchanged and the original matrix is corrected.
According to the rapid calculation method of the embodiments of the invention, aimed at the excessive matrix dimension and low calculation efficiency of setting-parameter calculation for large power grids, the advantages of the CPU and GPU are fully combined: based on a CPU-GPU cooperative architecture, the CPU performs complex logic control while the GPU performs simple but voluminous arithmetic operations, so that the CPU and GPU work in parallel, the generation and correction of large-grid matrices are accelerated, the capacity for batch calculation of setting parameters is improved, and the efficiency of setting calculation for large power grids is comprehensively improved.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a rapid calculation method for large power grid setting parameters according to one embodiment of the invention;
FIG. 2 is a detailed flowchart of a rapid calculation method for large power grid setting parameters according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a branch according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a tree branch containing mutual inductance according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a chain branch including mutual inductance according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a CPU and GPU architecture, according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The method for rapidly calculating the setting parameters of the large power grid according to the embodiment of the invention is described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for rapidly calculating a large power grid setting parameter according to an embodiment of the invention.
For convenience of understanding, before describing the method for rapidly calculating the tuning parameter of the large power grid according to the embodiment of the present invention, first, the related components and related terms are summarized.
A Central Processing Unit (CPU) is the final execution unit for information processing and program execution. It has large caches and complex logic-control units, so it is very good at logic control and serial computation, as shown in fig. 6.
A GPU (graphics processing unit) is designed for high throughput: it has many arithmetic units and only small caches. It supports a large number of threads running simultaneously, and when those threads access the same data the small caches inevitably introduce latency, as shown in fig. 6. Despite this latency, the sheer number of arithmetic units yields very high throughput.
Since the GPU has a large number of arithmetic operation units, it can simultaneously perform a large number of calculation works, and is excellent in large-scale concurrent calculations. The running speed of the program can be greatly improved by using the CPU to carry out complex logic control and using the GPU to carry out simple but large-amount arithmetic operation. Data interaction is performed between the CPU and the GPU through a shared space, as shown in fig. 6.
Based on this, the embodiment of the invention accelerates the generation and correction of large-grid matrices under a CPU-GPU cooperative architecture in order to improve the capacity for batch calculation of setting parameters, thereby comprehensively improving the efficiency of setting calculation for large power grids. As shown in fig. 1, and in conjunction with the detailed flow diagram of fig. 2, the method includes the following steps:
step S1: judging the power grid model change condition in the CPU thread, and storing the change model of the power grid in an unprocessed branch data set when the power grid model is subjected to preset change, wherein the preset change at least comprises the following steps: and generating, adding, modifying, deleting and changing the operation mode of the power grid.
Step S2: and judging whether the processing of all the branches in all the branch data sets is finished, if so, jumping to the step S7, and otherwise, jumping to the step S3.
Step S3: and sequentially selecting each unprocessed branch in the branch data set, and judging the mutual inductance attribute, the branch attribute and the grounding attribute of each unprocessed branch to obtain the type of each branch.
In a specific embodiment, as shown in figs. 3 to 5, the types of the branches include: non-mutual-inductance ungrounded tree branch, non-mutual-inductance grounded tree branch, non-mutual-inductance ungrounded chain branch, non-mutual-inductance grounded chain branch, mutual-inductance tree branch, and mutual-inductance chain branch. Fig. 3 shows a branch schematic; fig. 4 shows a tree branch with mutual inductance, and fig. 5 a chain branch with mutual inductance.
It can be understood that a tree branch is a branch added at an ungrounded node of the original network that introduces exactly one new node (an ungrounded point) and no new loop; appending a tree branch increases the order of the impedance matrix by one.
A grounded tree branch is a branch added at a grounding node of the network that likewise introduces exactly one new node and no new loop; appending it also increases the order of the impedance matrix by one.
A chain branch is a branch added between ungrounded nodes of the original network that introduces one independent loop; the number of nodes is unchanged, but appending it changes the values of the matrix elements.
A grounded chain branch is a branch added between an existing node and ground; again the number of nodes is unchanged, but appending it changes the values of the matrix elements.
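The three attributes judged in step S3 can be sketched as a small classification routine. This is a hypothetical reading of the step, with field and type names of our own invention, mapping (mutual-inductance, tree/chain, grounding) onto the six branch types listed above.

```python
from dataclasses import dataclass

@dataclass
class Branch:
    has_mutual: bool   # mutual-inductance attribute
    is_tree: bool      # branch attribute: tree branch vs chain branch
    grounded: bool     # grounding attribute

def classify(b: Branch) -> str:
    """Map the three judged attributes onto one of the six branch types."""
    if b.has_mutual:
        # mutual-inductance branches are only split into tree vs chain
        return "mutual_tree" if b.is_tree else "mutual_chain"
    kind = "tree" if b.is_tree else "chain"
    ground = "grounded" if b.grounded else "ungrounded"
    return f"{ground}_{kind}"

print(classify(Branch(False, True, False)))  # prints "ungrounded_tree"
print(classify(Branch(True, False, True)))   # prints "mutual_chain"
```

Each returned type name corresponds to one of the six CUDA kernels the CPU later dispatches.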
Step S4: the unprocessed leg and its type are inserted into the append queue.
Step S5: and monitoring the additional queue by the CPU, and calling a corresponding CUDA kernel function in the CPU according to the type of the branch so as to modify the matrix model of the whole network.
Specifically, the CUDA (Compute Unified Device Architecture) programming model uses the CPU as the host and the GPU as a coprocessor or device; one system may contain one host and several devices. Matching the characteristics of the two architectures, the CPU is responsible for logic-heavy transaction processing and serial computation (the logic-control part), while the GPU focuses on highly threaded parallel processing tasks (the numerical-computation part); a parallel computing function that runs on the GPU under CUDA is called a kernel function.
Host-side memory can be divided into pageable memory and page-locked (pinned) memory. Pinned memory is never swapped out to slow virtual memory; it is guaranteed to reside in physical memory and can communicate with the device through DMA. Using pinned memory increases the data-transfer bandwidth between the host side and the device side, as shown in fig. 6.
A complete CUDA program consists of a series of device-side kernel functions plus host-side serial processing. CUDA maps the GPU's computational tasks onto a large number of threads that can execute in parallel and are dynamically scheduled by hardware; in the CUDA programming model, a program manages concurrency through streams. A stream is a series of operations executed in order, while different streams execute in parallel. Through asynchronous execution and streams, the execution units and memory controllers in the GPU can work simultaneously, raising the utilization of GPU resources. Moreover, once control returns to the host thread, the host can continue working without waiting for the GPU to finish, so the CPU and GPU work in parallel.
In other words, the CPU thread performs the logic-heavy transactions and serial computation, i.e. the logic-control part. When a power grid model is generated, extended, or modified, the CPU thread numbers the nodes of the changed model, stores the changed part in the unprocessed branch set, and judges in turn the mutual-inductance attribute, the branch attribute (tree or chain), and the grounding attribute of each branch in the set to obtain its specific type, which is inserted into the append queue. The CPU continuously monitors the append queue; whenever the queue is non-empty, it calls the relevant CUDA kernel function according to the branch type to correct the matrix, until all branches have been processed.
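The monitoring loop described above might look roughly like the following sketch, in which ordinary Python functions stand in for CUDA kernel launches. The queue handling mirrors steps S2 to S6; every name here is illustrative rather than taken from the patent.

```python
from collections import deque
import numpy as np

def k_grounded_tree(Z, branch):
    """Stand-in for the grounded tree-branch append kernel: new node from
    ground, new row/column all zero except the diagonal impedance."""
    n = Z.shape[0]
    Z_new = np.zeros((n + 1, n + 1))
    Z_new[:n, :n] = Z
    Z_new[n, n] = branch["z"]
    return Z_new

def k_ungrounded_tree(Z, branch):
    """Stand-in for the ungrounded tree-branch append kernel."""
    i, z = branch["i"], branch["z"]
    n = Z.shape[0]
    Z_new = np.zeros((n + 1, n + 1))
    Z_new[:n, :n] = Z
    Z_new[n, :n], Z_new[:n, n] = Z[i, :], Z[:, i]
    Z_new[n, n] = Z[i, i] + z
    return Z_new

KERNELS = {"grounded_tree": k_grounded_tree, "ungrounded_tree": k_ungrounded_tree}

def process(Z, unprocessed):
    queue = deque()
    for branch in unprocessed:          # S3/S4: type each branch, enqueue it
        queue.append((branch["type"], branch))
    while queue:                        # S5/S6: dispatch until the queue is empty
        btype, branch = queue.popleft()
        Z = KERNELS[btype](Z, branch)   # stands in for a CUDA kernel launch
    return Z

Z = process(np.empty((0, 0)), [
    {"type": "grounded_tree", "i": None, "z": 0.1},
    {"type": "ungrounded_tree", "i": 0, "z": 0.2},
])
print(Z.shape)  # prints "(2, 2)"
```

In the real system the handlers would be the six CUDA kernels, and step S6's read-back would copy the device-side matrix into host memory between iterations.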
Step S6: and reading back the calculation result of the GPU thread block to the memory, and jumping to the step S2.
In an embodiment of the present invention, before step S6, the method further includes: the GPU thread blocks perform branch-append calculation on the original whole-network matrix.
Further, after the GPU thread blocks perform branch-append calculation on the original whole-network matrix, the method further includes: implementing the numerical-computation part executed by the GPU thread blocks as CUDA kernel functions, and providing six kernels (grounded tree-branch append, grounded chain-branch append, ungrounded tree-branch append, ungrounded chain-branch append, mutual-inductance tree-branch append, and mutual-inductance chain-branch append) for the CPU dispatch thread to call.
In other words, the GPU thread blocks are responsible for the branch-append calculation on the original matrix. Exploiting CUDA's strengths in vector and matrix operations, and since the GPU focuses on highly threaded parallel processing tasks, the numerical-computation part executed by the GPU is implemented as CUDA kernel functions; six kernels (grounded tree-branch append, grounded chain-branch append, ungrounded tree-branch append, ungrounded chain-branch append, mutual-inductance tree-branch append, and mutual-inductance chain-branch append) are provided for the CPU's dispatch thread to call.
Specifically, the branch-append calculation performed by the GPU thread blocks on the original whole-network matrix includes:
(1) Non-mutual-inductance ungrounded tree branch append: add a new node j connected to node i through impedance z_ij; the matrix gains one dimension.
(2) Non-mutual-inductance ungrounded chain branch append: add an ungrounded chain branch of impedance z_ij between nodes i and j; the matrix dimension is unchanged and the whole-network matrix model is corrected.
(3) Non-mutual-inductance grounded tree branch append: add a grounded tree branch j of impedance z_0j at node i; all off-diagonal elements of the j-th row and j-th column of the matrix are 0 and the diagonal element is z_0j.
(4) Non-mutual-inductance grounded chain branch append: add a grounded chain branch z_0i at node i of the original network; the number of nodes is unchanged and the matrix elements are corrected.
(5) Mutual-inductance tree branch append: add a tree branch i-j with mutual inductance at node i of the original network; the matrix block corresponding to the original nodes is unchanged and the column elements for the new node j are solved.
(6) Mutual-inductance chain branch append: when chain branch i-j is added to the original network and has a mutual-inductance relation with the p-q branch group of the original network, the order of the matrix is unchanged and the original matrix is corrected.
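The chain-branch appends in (2) and (4) leave the matrix order unchanged and amount to a rank-one update of the impedance matrix. The patent does not give explicit formulas, so the NumPy sketch below uses the standard Z-bus building rules as a hedged interpretation; all names and numbers are our own illustration.

```python
import numpy as np

def append_chain_branch(Z, i, j, z_ij):
    """Ungrounded chain branch of impedance z_ij between existing nodes i, j:
    Z stays the same size and receives a rank-one correction."""
    d = Z[:, i] - Z[:, j]
    denom = z_ij + Z[i, i] + Z[j, j] - 2.0 * Z[i, j]
    return Z - np.outer(d, d) / denom

def append_grounded_chain_branch(Z, i, z0i):
    """Grounded chain branch of impedance z0i from existing node i to ground."""
    return Z - np.outer(Z[:, i], Z[i, :]) / (z0i + Z[i, i])

# Two-node example: node 1 grounded through 0.1, tree branch 1-2 of 0.2.
Z = np.array([[0.1, 0.1],
              [0.1, 0.3]])

# Parallel a 0.3 chain branch between nodes 1 and 2:
# Z22 becomes 0.1 + (0.2 in parallel with 0.3) = 0.22.
Z_link = append_chain_branch(Z, 0, 1, 0.3)

# Ground node 2 through 0.4: Z22 becomes 0.4 in parallel with (0.1 + 0.2).
Z_gnd = append_grounded_chain_branch(Z, 1, 0.4)
print(Z_link[1, 1], Z_gnd[1, 1])  # approximately 0.22 and 0.1714285714
```

Both corrections preserve symmetry and matrix order, which is exactly why these cases need no re-dimensioning on the GPU, only an element-wise update.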
Step S7: and calculating setting parameters under the corresponding power grid model according to the corrected whole network matrix model.
In summary, the rapid calculation method for large power grid setting parameters of the embodiments of the invention addresses the excessive matrix dimension and low calculation efficiency of setting-parameter calculation for large power grids by fully combining the advantages of the CPU and GPU: based on a CPU-GPU cooperative architecture, the CPU performs complex logic control while the GPU performs simple but voluminous arithmetic operations, so that the two work in parallel, the generation and correction of large-grid matrices are accelerated, the capacity for batch calculation of setting parameters is improved, and the efficiency of setting calculation for large power grids is comprehensively improved.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Claims (5)
1. A method for rapidly calculating setting parameters of a large power grid is characterized by comprising the following steps:
s1: judging, in the CPU thread, whether the power grid model has changed, and storing the changed part of the grid model in an unprocessed branch data set when the model undergoes a preset change, wherein the preset change comprises one or more of: generation, addition, modification, deletion, and a change of the power grid operation mode;
s2: judging whether the processing of all branches in the branch data sets is finished, if so, jumping to the step S7, otherwise, jumping to the step S3;
s3: sequentially selecting each unprocessed branch in the branch data set, and judging the mutual inductance attribute, the branch attribute and the grounding attribute of each unprocessed branch to obtain the type of each branch;
s4: inserting the unprocessed branch and its type into an append queue;
s5: the CPU monitors the append queue and, according to the type of the branch, launches the corresponding CUDA kernel function to correct the whole-network matrix model;
s6: reading back the calculation result of the GPU thread block to the memory, and jumping to step S2;
s7: and calculating setting parameters under the corresponding power grid model according to the corrected whole network matrix model.
2. The method for rapidly calculating the large power grid setting parameters according to claim 1, wherein the types of the branches comprise:
non-mutual-inductance ungrounded tree branches, non-mutual-inductance grounded tree branches, non-mutual-inductance ungrounded chain branches, non-mutual-inductance grounded chain branches, mutual-inductance tree branches and mutual-inductance chain branches.
3. The method for rapidly calculating the large power grid setting parameters according to claim 2, further comprising, before step S6: performing, by the GPU thread block, branch addition calculation on the original whole-network matrix.
4. The method for rapidly calculating the setting parameters of the large power grid according to claim 3, wherein the numerical calculation part of the branch addition calculation performed by the GPU thread block is implemented as CUDA parallel kernel functions running on the GPU, six CUDA kernel functions being realized, namely grounded tree branch addition, grounded chain branch addition, ungrounded tree branch addition, ungrounded chain branch addition, mutual-inductance tree branch addition and mutual-inductance chain branch addition, for the CPU dispatch threads to call.
5. The method for rapidly calculating the setting parameters of the large power grid according to claim 4, wherein the branch addition calculation performed by the GPU thread block on the original whole-network matrix comprises the following steps:
appending a non-mutual-inductance ungrounded tree branch, the specific process being: a new node j is added at node i through a branch of impedance zij, and the dimension of the matrix increases by 1;
appending a non-mutual-inductance ungrounded chain branch, the specific process being: an ungrounded chain branch of impedance zij is added between nodes i and j, the matrix dimension remains unchanged, and the whole-network matrix model is corrected;
appending a non-mutual-inductance grounded tree branch, the specific process being: a grounded tree branch j of impedance z0j is added, all off-diagonal elements of row j and column j of the matrix being 0 and the diagonal element being z0j;
appending a non-mutual-inductance grounded chain branch, the specific process being: a grounded chain branch of impedance z0i is added at original network node i, and the matrix elements with subscript j are set to 0;
appending a mutual-inductance tree branch, the specific process being: a tree branch ij with mutual inductance is added at original network node i, the matrix corresponding to the original network remains unchanged, and the column elements corresponding to the newly added node j are solved;
appending a mutual-inductance chain branch, the specific process being: when a chain branch i-j is added to the original network and has a mutual inductance relation with the p-q branch group of the original network, the order of the matrix is unchanged and the original network matrix is corrected.
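For the four non-mutual-inductance cases of claim 5, the appending rules coincide with the classic Zbus building algorithm: a tree branch extends the impedance matrix by one row and column, while a chain (link) branch is a rank-1 Kron correction at constant order. The sketch below is a minimal sequential Python reading of those four rules; list-of-lists storage and function names are illustrative, and the mutual-inductance cases are omitted.

```python
def add_grounded_tree_branch(Z, z):
    """Append a new node tied to ground through impedance z: the new
    row/column are zero off-diagonal, z on the diagonal."""
    n = len(Z)
    for row in Z:
        row.append(0.0)
    Z.append([0.0] * n + [z])

def add_ungrounded_tree_branch(Z, i, z):
    """Append a new node reached from existing node i through z: the
    new row/column copy row/column i; the diagonal is Z[i][i] + z."""
    n = len(Z)
    for k in range(n):
        Z[k].append(Z[k][i])
    Z.append([Z[i][k] for k in range(n)] + [Z[i][i] + z])

def add_grounded_chain_branch(Z, i, z):
    """Close a loop from node i to ground through z: rank-1 (Kron)
    correction, matrix dimension unchanged."""
    n = len(Z)
    c = [Z[k][i] for k in range(n)]
    zll = z + Z[i][i]
    for a in range(n):
        for b in range(n):
            Z[a][b] -= c[a] * c[b] / zll

def add_ungrounded_chain_branch(Z, i, j, z):
    """Close a loop between nodes i and j through z: the same rank-1
    correction with c taken as column i minus column j."""
    n = len(Z)
    c = [Z[k][i] - Z[k][j] for k in range(n)]
    zll = z + Z[i][i] + Z[j][j] - 2.0 * Z[i][j]
    for a in range(n):
        for b in range(n):
            Z[a][b] -= c[a] * c[b] / zll
```

As a check on the chain-branch rule: for a two-node chain (1 ohm to ground, then 2 ohms in series), appending a 3 ohm grounded chain branch at the second node gives diagonal driving-point impedances 5/6 and 1.5 ohms, the expected parallel combinations.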
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010186620.4A CN111476461A (en) | 2020-03-17 | 2020-03-17 | Rapid calculation method for setting parameters of large power grid |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111476461A true CN111476461A (en) | 2020-07-31 |
Family
ID=71748235
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010186620.4A Pending CN111476461A (en) | 2020-03-17 | 2020-03-17 | Rapid calculation method for setting parameters of large power grid |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111476461A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101867179A (en) * | 2010-06-02 | 2010-10-20 | 北京中恒博瑞数字电力科技有限公司 | Relay protection entire network optimal constant value automatic adjusting and coordinating method |
CN102074939A (en) * | 2010-11-17 | 2011-05-25 | 华北电网有限公司 | Online examination method of relay protection setting value based on dynamic short-circuit current |
Non-Patent Citations (1)
Title |
---|
QIU Zhiyong et al.: "Node impedance matrix generation and node numbering optimization method under a CPU+GPU architecture" * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112366665A (en) * | 2020-11-05 | 2021-02-12 | 北京中恒博瑞数字电力科技有限公司 | Method and device for quickly breaking N-1 turns of line in relay protection setting calculation |
CN112366665B (en) * | 2020-11-05 | 2022-07-12 | 北京中恒博瑞数字电力科技有限公司 | Method and device for quickly breaking N-1 turns of line in relay protection setting calculation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018133348A1 (en) | Static security analysis computation method, apparatus, and computer storage medium | |
CN105576648B (en) | Static security analysis double-layer parallel method based on GPU-CPU heterogeneous computing platform | |
CN103617150A (en) | GPU (graphic processing unit) based parallel power flow calculation system and method for large-scale power system | |
KR20090033139A (en) | System, method and computer program product for performing a scan operation | |
WO2022001141A1 (en) | Gpu communication method and device, and medium | |
CN115390788A (en) | Sparse matrix multiplication distribution system of graph convolution neural network based on FPGA | |
WO2023093623A1 (en) | Computation graph optimization method, data processing method and related product | |
CN108984483B (en) | Electric power system sparse matrix solving method and system based on DAG and matrix rearrangement | |
CN115237580B (en) | Intelligent calculation-oriented flow parallel training self-adaptive adjustment system and method | |
CN112100450A (en) | Graph calculation data segmentation method, terminal device and storage medium | |
Vartziotis et al. | Improved GETMe by adaptive mesh smoothing | |
CN110704023B (en) | Matrix block division method and device based on topological sorting | |
CN111476461A (en) | Rapid calculation method for setting parameters of large power grid | |
Limaye et al. | A general parallel solution to the integral transformation and second‐order Mo/ller–Plesset energy evaluation on distributed memory parallel machines | |
CN113553288B (en) | Two-layer blocking multicolor parallel optimization method for HPCG benchmark test | |
CN111181164A (en) | Improved master-slave split transmission and distribution cooperative power flow calculation method and system | |
CN113254391B (en) | Neural network accelerator convolution calculation and data loading parallel method and device | |
CN107563095A (en) | A kind of non-linear layout method of large scale integrated circuit | |
CN108879691B (en) | Large-scale continuous power flow calculation method and device | |
CN109522630B (en) | Power system transient stability simulation parallel computing method based on diagonal edge adding form | |
CN115718986B (en) | Multi-core parallel time domain simulation method based on distributed memory architecture | |
CN108985622B (en) | Power system sparse matrix parallel solving method and system based on DAG | |
CN113778518A (en) | Data processing method, data processing device, computer equipment and storage medium | |
CN112115596A (en) | Big data parallel optimization system and method based on block coordinate descent method | |
CN111478333B (en) | Parallel static security analysis method for improving power distribution network recovery after disaster |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||