CN107368368A - GPU-accelerated back substitution method for large numbers of isomorphic sparse upper triangular equation systems - Google Patents
GPU-accelerated back substitution method for large numbers of isomorphic sparse upper triangular equation systems
- Publication number: CN107368368A (application CN201710478830.9A)
- Authority: CN (China)
- Prior art keywords: matrix, gpu, back substitution, threads, thread
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/11—Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
- G06F17/12—Simultaneous equations, e.g. systems of linear equations
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/545—Interprogram communication where tasks reside in different layers, e.g. user- and kernel-space
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a GPU-accelerated back substitution method for large numbers of isomorphic sparse upper triangular equation systems, i.e. systems whose matrices share a single sparsity structure. The method comprises the following steps: according to the LU symbolic factorization result of the linear-system coefficient matrix, i.e. the sparsity structure of the upper triangular factor U_1, the rows of U_1 are partitioned into parallelizable levels, U_1~U_N having the same sparsity structure and therefore the same level partition; the CPU transfers the data required by the LU back substitution to the GPU; task allocation and device-memory optimization: the back substitution of the matrices U_1~U_N is distributed over a large number of GPU threads, and memory use is optimized according to coalesced-access principles; the GPU launches the level-wise LU back substitution kernel Batch_LUbackward in order of increasing level. The invention improves the efficiency of LU back substitution for power-flow linear systems and addresses the high time cost of load flow calculation in power system operation analysis.
Description
Technical field
The invention belongs to the field of high-performance computing applications in power systems, and in particular relates to a GPU-accelerated back substitution method for large numbers of isomorphic sparse upper triangular equation systems.
Background art
Load flow (power flow) calculation is the most widely used, most basic and most important type of electrical computation in power systems. In studies of power system operating modes and in system planning, load flow calculations are required to assess the feasibility, reliability and economy of an operating mode or a planned power supply scheme. Large numbers of fast load flow calculations are also needed for real-time monitoring of the operating state of the power system. Accordingly, offline load flow calculation is used in operation planning and system design, while online load flow calculation is used in real-time monitoring of the power system operating state.
In actual production, both offline and online load flow calculation place high demands on calculation speed. Offline load flow studies for planning, design, and the arrangement of operating modes and outage schemes must simulate many operating conditions because of the complexity of the equipment situations involved; the computational load is large, and the time of a single load flow calculation affects the overall simulation duration. Online load flow calculation during power system operation is highly time-sensitive and must deliver load flow results in real time: in contingency analysis, for example, where the influence of equipment outages on static security is assessed, the system must compute the load flow distribution under a large number of anticipated contingencies and produce operating-mode adjustment plans in real time.
The GPU is a many-core parallel processor whose number of processing units far exceeds that of a CPU. Traditionally the GPU was responsible only for graphics rendering, with most other processing left to the CPU. The modern GPU has evolved into a multi-core, multi-threaded programmable processor with powerful computing capability and high memory bandwidth. In the general-purpose computing model, the GPU works as a coprocessor of the CPU, and high-performance computation is achieved through reasonable task decomposition and allocation.
Solving sparse linear systems is a critical part of power system load flow calculation, and the back substitution step within it is executed many times. After LU symbolic factorization of the coefficient matrix, the sparsity structure of the upper triangular factor U is obtained. According to this structure, the rows of U are partitioned into parallelizable levels; the computations of the rows within one level are mutually independent, with no dependency relations between them, so they can naturally be processed in parallel and are well suited to GPU acceleration.
It is therefore highly desirable to solve the above problems.
Summary of the invention
Object of the invention: the object of the invention is to provide a GPU-accelerated back substitution method for large numbers of isomorphic sparse upper triangular equation systems that is suitable for the back substitution of batch load flow correction equations in static security analysis, reduces the load flow calculation time, and increases load flow calculation speed.
Load flow calculation: a power engineering term referring to the calculation of the distribution of active power, reactive power and voltage in a power network, given the network topology, component parameters, and generation and load parameters.
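The quantities referred to in this definition are governed, in the standard polar formulation of the AC power-flow problem (a textbook statement, not taken from the patent itself), by the nodal power balance equations:

```latex
P_i = V_i \sum_{j=1}^{n} V_j \left( G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij} \right), \qquad
Q_i = V_i \sum_{j=1}^{n} V_j \left( G_{ij}\sin\theta_{ij} - B_{ij}\cos\theta_{ij} \right)
```

where V_i is the voltage magnitude at bus i, theta_ij = theta_i - theta_j is the angle difference, and G_ij + jB_ij is the (i, j) entry of the bus admittance matrix. Newton's method applied to these equations yields, at each iteration, the sparse linear correction systems whose triangular factors the back substitution of this invention operates on.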
GPU: Graphics Processing Unit.
Technical scheme: to achieve the above object, the invention discloses a GPU-accelerated back substitution method for large numbers of isomorphic sparse upper triangular equation systems, comprising the following steps:
(1) According to the LU symbolic factorization result of the linear-system coefficient matrix, i.e. the sparsity structure of the upper triangular factor U_1, the rows of U_1 are partitioned into parallelizable levels; U_1~U_N share the same sparsity structure and therefore the same level partition.
(2) The CPU transfers the data required by the LU back substitution to the GPU.
(3) Task allocation and device-memory optimization: the back substitution of the matrices U_1~U_N is distributed over a large number of GPU threads, and device memory is used according to coalesced-access principles.
(4) The GPU launches the level-wise LU back substitution kernel Batch_LUbackward in order of increasing level.
In step (1), the parallelization partitions the n rows of U_1 into M levels; rows in the same level undergo back substitution in parallel. The number of rows in level k is L(k), where k is the level number, and the row numbers of level k are stored in the mapping table Map_k.
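As a hedged sketch of how the tables L(k) and Map_k of step (1) could be built from a per-row level array (the function name and layout are my own, not from the patent):

```python
from collections import defaultdict

def build_level_tables(level):
    """Group row indices by level: L[k] is the number of rows in level k,
    and Map[k][b] is the row handled by thread block b at level k."""
    groups = defaultdict(list)
    for row, k in enumerate(level):
        groups[k].append(row)
    M = max(level) + 1                       # number of levels
    Map = [sorted(groups[k]) for k in range(M)]
    L = [len(Map[k]) for k in range(M)]
    return L, Map

# Using the level array of a small 5-row example: rows 3, 4 in level 0, etc.
L, Map = build_level_tables([2, 1, 1, 0, 0])
print(L, Map)   # -> [2, 2, 1] [[3, 4], [1, 2], [0]]
```

Kernel launches then proceed for k = 0, 1, ..., M-1, each launch covering the L(k) independent rows listed in Map_k.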
Preferably, in step (2), the required data comprise the matrices U_1~U_N, the matrix dimension n, the sparsity structure of U_1, the level partition of U_1, and the forward substitution results y_1~y_N.
Furthermore, in step (3), the LU back substitution tasks of the N isomorphic sparse matrices U_1~U_N are assigned to different threads of the same thread block; to ensure coalesced memory access, the matrices U_1~U_N are stored contiguously in memory to form a logical matrix of N rows, which is then transposed.
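A small NumPy sketch of the storage transformation in step (3), under the assumption that each matrix's nonzero values are first laid out as one row of an N-row logical matrix; after transposition, the N copies of the same structural nonzero sit side by side, which is what makes a warp's loads coalesced on the GPU (toy sizes, my own data):

```python
import numpy as np

N, nnz = 4, 6                              # N systems, nnz stored values each (toy sizes)
# Natural layout: one row per matrix, so each matrix's values are contiguous.
vals = np.arange(N * nnz, dtype=np.float64).reshape(N, nnz)
# Transposed layout: the N copies of structural nonzero e are now contiguous,
# so threads t = 0..N-1 of a warp read consecutive addresses.
batched = np.ascontiguousarray(vals.T)     # shape (nnz, N)
# Element e of matrix t is addressed as batched[e, t].
assert batched[2, 1] == vals[1, 2]
print(batched.shape)                       # -> (6, 4)
```

The same idea applies to the result and right-hand-side vectors: storing x_1~x_N and y_1~y_N interleaved by system index keeps the accesses of adjacent threads adjacent in memory.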
Further, in step (4), the LU back substitution kernel is defined as Batch_LUbackward<N_blocks, N_threads>, with the thread block size N_threads fixed at 128. When level k is computed, the number of thread blocks is N_blocks = L(k) and the total number of threads is N_blocks × N_threads. The kernel Batch_LUbackward<L(k), N_threads> is launched to solve all rows belonging to level k. The calculation flow of Batch_LUbackward<L(k), N_threads> is:
(4.1) CUDA automatically assigns each thread a thread block index blockID and a within-block thread index threadID.
(4.2) blockID and threadID are assigned to the variables bid and t, so that bid and t index thread t of thread block bid. The 128 threads of thread block bid are responsible for the back substitution of row j = Map_k(bid) of the matrices U_1~U_N, where thread t computes the back substitution of row j of matrix U_t, with t = threadID + m × 128 (m = 0, 1, ..., N/128).
(4.3) Thread t of thread block bid checks whether t < N; if so, it continues; otherwise the thread stops.
(4.4) x_t(j) is initialized to y_t(j); the variable i then runs from j+1 to n, and whenever U_t(j, i) ≠ 0 the update x_t(j) = x_t(j) - x_t(i) × U_t(j, i) is applied, so that the j-th element of the back substitution result x_t accumulates x_t(j) = y_t(j) - Σ_{i>j} U_t(j, i) x_t(i).
(4.5) x_t(j) is updated by x_t(j) = x_t(j) / U_t(j, j).
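The flow (4.1)-(4.5) can be emulated on the CPU to check its logic. This Python sketch replaces CUDA's blockID/threadID with explicit loops (a dense toy case with my own data, not the patent's example) and verifies the result against U_t x_t = y_t:

```python
import numpy as np

def batch_lu_backward_emulated(Us, ys, L, Map):
    """CPU emulation of the level-wise kernel launches: at level k, thread
    block bid handles row j = Map[k][bid], and 'thread' t handles system t.
    (On the GPU, t = threadID + m*128 covers N > 128 systems; here we simply
    loop t over all N systems.)"""
    N = len(Us)
    n = Us[0].shape[0]
    xs = [np.zeros(n) for _ in range(N)]
    for k in range(len(L)):                   # launches in increasing level order
        for bid in range(L[k]):               # N_blocks = L(k) blocks per launch
            j = Map[k][bid]
            for t in range(N):                # steps (4.2)-(4.3): one thread per system
                s = ys[t][j]                  # step (4.4): x_t(j) starts at y_t(j)
                for i in range(j + 1, n):
                    if Us[t][j, i] != 0.0:    # only structural nonzeros contribute
                        s -= Us[t][j, i] * xs[t][i]
                xs[t][j] = s / Us[t][j, j]    # step (4.5): divide by the diagonal
    return xs

rng = np.random.default_rng(0)
U = np.triu(rng.uniform(1.0, 2.0, size=(4, 4)))   # dense upper triangular toy factor
Us = [U, 2.0 * U]                                 # two "isomorphic" systems
ys = [rng.uniform(size=4) for _ in Us]
L, Map = [1, 1, 1, 1], [[3], [2], [1], [0]]       # dense U: one row per level
xs = batch_lu_backward_emulated(Us, ys, L, Map)
for Ut, yt, xt in zip(Us, ys, xs):
    assert np.allclose(Ut @ xt, yt)               # back substitution solves U_t x_t = y_t
```

The emulation serializes what the GPU does in parallel: within one launch, the iterations over bid and t are independent and correspond to concurrent thread blocks and threads.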
Beneficial effects: compared with the prior art, the invention has the following notable advantages. First, from the LU symbolic factorization of the shared sparsity structure of the large set of isomorphic sparse Jacobian matrices, i.e. the sparse structure of U_1, the CPU computes the level partition of U_1 and transmits the result to the GPU, reducing the logical operations performed on the GPU. Second, the batch back substitution is distributed over a large number of threads, and device-memory use is optimized according to the GPU memory-access pattern, so that the GPU achieves coalesced access and higher memory throughput. Finally, the GPU launches the level-wise LU back substitution kernel Batch_LUbackward in order of increasing level, improving the speed of the back substitution. The invention adopts a mode in which CPU and GPU cooperate: the CPU controls the overall flow and handles the basic data, while the GPU performs the level-wise back substitution of the upper triangular factors of the sparse linear systems. This improves the efficiency of LU back substitution for power-flow linear systems and addresses the high time cost of load flow calculation in power system operation analysis.
Brief description of the drawings
Fig. 1 is a flow diagram of the invention;
Fig. 2 is the example used by the invention;
Fig. 3 is a schematic diagram of the kernel task allocation and of the memory optimization of the invention.
Embodiment
The technical scheme of the invention is further described below with reference to the drawings. As shown in Fig. 1, the GPU-accelerated back substitution method for large numbers of isomorphic sparse upper triangular equation systems of the invention is implemented in the following steps:
Step 1: parallel levelization of the sparse matrices U on the CPU.
From the LU symbolic factorization of the linear-system coefficient matrix, the CPU obtains the upper triangular factors U_1~U_N, which share the same sparsity structure. According to the nonzero structure of U_1, the rows of U_1 are partitioned into parallelizable levels: the n rows of U_1 are assigned to M levels, and rows belonging to the same level undergo back substitution in parallel. The number of rows in level k is L(k), where k is the level number; the row numbers of level k are stored in the mapping table Map_k.
The principle of the parallel levelization is described in Timothy A. Davis, "Direct Methods for Sparse Linear Systems", SIAM, Philadelphia, 2006, and in Chen Xiaoming, "Parallel algorithm design and architecture optimization for irregular problems".
Step 2: the CPU transfers the data required by the LU back substitution to the GPU.
The CPU reads the basic grid data and transfers the level partition of U_1 together with the basic grid data to the GPU in a single transfer before the kernel is launched, reducing the data interaction between CPU and GPU. The required data comprise: the matrices U_1~U_N, the matrix dimension n, the sparsity structure of U_1, the level partition of U_1, and the forward substitution results y_1~y_N.
Step 3: task allocation and device-memory optimization.
The task allocation model is illustrated with the forward substitution of the lower triangular matrix of dimension 8 shown in Fig. 2. The operations on the same row of the N isomorphic sparse matrices U_1~U_N are assigned to different threads of the same thread block. The allocation is shown in Fig. 3: thread block 5 is responsible for row 5 of the sparse matrices U_1~U_N. To ensure coalesced memory access, the matrices U_1~U_N are stored contiguously in memory to form a logical matrix of N rows, which is then transposed; as shown in Fig. 3, the data read by the 32 threads of one warp in thread block 5 are then stored contiguously in memory, improving memory access speed.
Step 4: the GPU launches the level-wise LU back substitution kernel Batch_LUbackward in order of increasing level.
The LU back substitution kernel is defined as Batch_LUbackward<N_blocks, N_threads>, with the thread block size N_threads fixed at 128. When level k is computed, the number of thread blocks is N_blocks = L(k) and the total number of threads is N_blocks × N_threads. The kernel Batch_LUbackward<L(k), N_threads> is launched to solve all rows belonging to level k. The calculation flow of Batch_LUbackward<L(k), N_threads> is:
(4.1) CUDA automatically assigns each thread a thread block index blockID and a within-block thread index threadID.
(4.2) blockID and threadID are assigned to the variables bid and t, so that bid and t index thread t of thread block bid. The 128 threads of thread block bid are responsible for the back substitution of row j = Map_k(bid) of the matrices U_1~U_N, where thread t computes the back substitution of row j of matrix U_t, with t = threadID + m × 128 (m = 0, 1, ..., N/128).
(4.3) Thread t of thread block bid checks whether t < N; if so, it continues; otherwise the thread stops.
(4.4) x_t(j) is initialized to y_t(j); the variable i then runs from j+1 to n, and whenever U_t(j, i) ≠ 0 the update x_t(j) = x_t(j) - x_t(i) × U_t(j, i) is applied, so that the j-th element of the back substitution result x_t accumulates x_t(j) = y_t(j) - Σ_{i>j} U_t(j, i) x_t(i).
(4.5) x_t(j) is updated by x_t(j) = x_t(j) / U_t(j, j).
Claims (5)
1. A GPU-accelerated back substitution method for large numbers of isomorphic sparse upper triangular equation systems, characterized in that the method comprises the following steps:
(1) according to the LU symbolic factorization result of the linear-system coefficient matrix, i.e. the sparsity structure of the upper triangular factor U_1, the rows of U_1 are partitioned into parallelizable levels, U_1~U_N having the same sparsity structure and therefore the same level partition;
(2) the CPU transfers the data required by the LU back substitution to the GPU;
(3) task allocation and device-memory optimization: the back substitution of the matrices U_1~U_N is distributed over a large number of GPU threads, and memory use is optimized according to coalesced-access principles;
(4) the GPU launches the level-wise LU back substitution kernel Batch_LUbackward in order of increasing level.
2. The GPU-accelerated back substitution method for large numbers of isomorphic sparse upper triangular equation systems according to claim 1, characterized in that in step (1) the parallelization partitions the n rows of U_1 into M levels, rows in the same level undergoing back substitution in parallel; the number of rows in level k is L(k), where k is the level number, and the row numbers of level k are stored in the mapping table Map_k.
3. The GPU-accelerated back substitution method for large numbers of isomorphic sparse upper triangular equation systems according to claim 1, characterized in that in step (2) the required data comprise the matrices U_1~U_N, the matrix dimension n, the sparsity structure of U_1, the level partition of U_1, and the forward substitution results y_1~y_N.
4. The GPU-accelerated back substitution method for large numbers of isomorphic sparse upper triangular equation systems according to claim 1, characterized in that in step (3) the LU back substitution tasks of the N isomorphic sparse matrices U_1~U_N are assigned to different threads of the same thread block; to ensure coalesced memory access, the matrices U_1~U_N are stored contiguously in memory to form a logical matrix of N rows, which is then transposed.
5. The GPU-accelerated back substitution method for large numbers of isomorphic sparse upper triangular equation systems according to claim 1, characterized in that in step (4) the LU back substitution kernel is defined as Batch_LUbackward<N_blocks, N_threads>, with the thread block size N_threads fixed at 128; when level k is computed, the number of thread blocks is N_blocks = L(k) and the total number of threads is N_blocks × N_threads; the kernel Batch_LUbackward<L(k), N_threads> is launched to solve all rows belonging to level k; the calculation flow of Batch_LUbackward<L(k), N_threads> is:
(4.1) CUDA automatically assigns each thread a thread block index blockID and a within-block thread index threadID;
(4.2) blockID and threadID are assigned to the variables bid and t, so that bid and t index thread t of thread block bid; the 128 threads of thread block bid are responsible for the back substitution of row j = Map_k(bid) of the matrices U_1~U_N, where thread t computes the back substitution of row j of matrix U_t, with t = threadID + m × 128 (m = 0, 1, ..., N/128);
(4.3) thread t of thread block bid checks whether t < N; if so, it continues; otherwise the thread stops;
(4.4) x_t(j) is initialized to y_t(j); the variable i then runs from j+1 to n, and whenever U_t(j, i) ≠ 0 the update x_t(j) = x_t(j) - x_t(i) × U_t(j, i) is applied, so that the j-th element of the back substitution result x_t accumulates x_t(j) = y_t(j) - Σ_{i>j} U_t(j, i) x_t(i);
(4.5) x_t(j) is updated by x_t(j) = x_t(j) / U_t(j, j).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710478830.9A CN107368368A (en) | 2017-06-22 | 2017-06-22 | A kind of GPU of the sparse upper trigonometric equation group of a large amount of isomorphisms accelerates back substitution method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107368368A true CN107368368A (en) | 2017-11-21 |
Family
ID=60305597
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710478830.9A Pending CN107368368A (en) | 2017-06-22 | 2017-06-22 | A kind of GPU of the sparse upper trigonometric equation group of a large amount of isomorphisms accelerates back substitution method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107368368A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106026107A (en) * | 2016-07-26 | 2016-10-12 | 东南大学 | QR decomposition method of power flow Jacobian matrix for GPU acceleration |
CN106157176A (en) * | 2016-07-26 | 2016-11-23 | 东南大学 | The LU decomposition method of the direction of energy Jacobian matrix that a kind of GPU accelerates |
CN106354479A (en) * | 2016-08-12 | 2017-01-25 | 东南大学 | GPU acceleration QR decomposition method for a large number of isomorphic sparse matrixes |
CN106407158A (en) * | 2016-09-12 | 2017-02-15 | 东南大学 | GPU accelerated method for performing batch processing of isomorphic sparse matrixes multiplied by full vectors |
- 2017-06-22: application CN201710478830.9A filed; publication CN107368368A (en); status: Pending
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108279977A (en) * | 2017-12-29 | 2018-07-13 | 深圳市德兰明海科技有限公司 | A kind of data processing method, device and controller based on RTOS |
CN118069969A (en) * | 2024-04-25 | 2024-05-24 | 北京理工大学 | GPU-based hierarchical media Green's function rapid calculation method and device |
CN118069969B (en) * | 2024-04-25 | 2024-07-09 | 北京理工大学 | GPU-based hierarchical media Green's function rapid calculation method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106157176B (en) | A kind of LU decomposition method for the direction of energy Jacobian matrix that GPU accelerates | |
CN106407158B (en) | A kind of batch processing isomorphism sparse matrix that GPU accelerates multiplies the processing method of full vector | |
CN107368454A (en) | A kind of GPU of the sparse lower trigonometric equation group of a large amount of isomorphisms pushes away method before accelerating | |
CN107977737A (en) | Distribution transformer load Forecasting Methodology based on mxnet frame depth neutral nets | |
CN106026107B (en) | A kind of QR decomposition method for the direction of energy Jacobian matrix that GPU accelerates | |
CN106874113A (en) | A kind of many GPU heterogeneous schemas static security analysis computational methods of CPU+ | |
CN101694940A (en) | Optimal power flow implementation method considering transient security constraints | |
CN105391057B (en) | A kind of GPU threaded design methods that direction of energy Jacobi battle array calculates | |
CN104484234B (en) | A kind of more wavefront tidal current computing methods and system based on GPU | |
CN105375461A (en) | Active power distribution network power supply capacity real-time assessment method based on prediction technology | |
CN106354479B (en) | A kind of GPU acceleration QR decomposition method of a large amount of isomorphism sparse matrixes | |
CN106505575A (en) | A kind of Line Flow economic load dispatching method based on Granule Computing | |
CN103066595A (en) | Optimization method of extra-high voltage transient stability control | |
CN107368368A (en) | A kind of GPU of the sparse upper trigonometric equation group of a large amount of isomorphisms accelerates back substitution method | |
CN103279824A (en) | Modeling method for relay protection setting calculation system | |
CN108520105B (en) | Active power distribution network multi-rate real-time simulation method based on FPGA | |
Chen et al. | Multi-source and heterogeneous data integration model for big data analytics in power DCS | |
CN107423259A (en) | A kind of GPU of domino optimization accelerates trigonometric equation group back substitution method on electric power | |
CN108879691A (en) | A kind of method and device that extensive continuous tide calculates | |
CN107368455A (en) | Trigonometric equation group back substitution method on the direction of energy that a kind of GPU accelerates | |
CN107392429A (en) | Under the direction of energy that a kind of GPU accelerates method is pushed away before trigonometric equation group | |
CN105046583A (en) | Power grid model partitioning method suitable for distributed real-time data processing | |
Sumpavakup et al. | A hybrid cultural-based bee colony algorithm for solving the optimal power flow | |
CN106058856B (en) | A kind of method of quick analysis power grid static security | |
CN115051360A (en) | Online computing method and device for operation risk of electric power system based on integrated knowledge migration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20171121 |