CN116345469A - Power grid power flow adjustment method based on graph neural network - Google Patents


Info

Publication number
CN116345469A
Authority
CN
China
Prior art keywords
nodes
node
average value
normal engine
power grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310386777.5A
Other languages
Chinese (zh)
Inventor
王宏志
郑胜文
刘怀远
丁小欧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN202310386777.5A priority Critical patent/CN116345469A/en
Publication of CN116345469A publication Critical patent/CN116345469A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H02: GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J: CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J3/00: Circuit arrangements for ac mains or ac distribution networks
    • H02J3/04: Circuit arrangements for ac mains or ac distribution networks for connecting networks of the same frequency but supplied from different sources
    • H02J3/06: Controlling transfer of power between connected networks; Controlling sharing of load between connected networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • H: ELECTRICITY
    • H02: GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J: CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J2203/00: Indexing scheme relating to details of circuit arrangements for AC mains or AC distribution networks
    • H02J2203/10: Power transmission or distribution systems management focussing at grid-level, e.g. load flow analysis, node profile computation, meshed network optimisation, active network management or spinning reserve management
    • H: ELECTRICITY
    • H02: GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J: CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J2203/00: Indexing scheme relating to details of circuit arrangements for AC mains or AC distribution networks
    • H02J2203/20: Simulating, e.g. planning, reliability check, modelling or computer assisted design [CAD]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04: INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S: SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00: Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50: Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

A power grid power flow adjustment method based on a graph neural network, belonging to the technical field of power grid control, intended to solve the heavy human-resource consumption and very low speed of the current practice of manually adjusting the power flow of the power grid. The invention uses a graph neural network model to perform graph embedding on the power grid data of the nodes of the power grid heterogeneous graph; then, for the non-converged power grid data after graph embedding, the power grid parameters are adjusted in a reinforcement-learning manner. During the graph embedding performed by the graph neural network model, the nodes of the power grid are sampled by random walk with restart, the different parameters of the same node are converted into features of the same dimension, the features corresponding to the converted parameters are aggregated in an order-independent manner, the same-type neighbor nodes of the target node are then selected and aggregated, and finally the different-type neighbor nodes and the target node parameters are aggregated with weights. The method is mainly used for adjusting the power flow of the power grid.

Description

Power grid power flow adjustment method based on graph neural network
Technical Field
The invention relates to a power grid power flow adjustment method and belongs to the technical field of power grid control.
Background
The power grid is a very complex and important system; research on how to operate it effectively and stably is essential, and designing, planning and optimizing the power system on that basis is of great significance.
Power flow calculation of a power system amounts to solving a set of high-order nonlinear equations, and iteration is an efficient way of solving such equations; hence, for a power flow calculation to succeed, it must be ensured to converge so that the computation can finish and return an answer. However, rapid technological and economic growth has driven ever higher urbanization, so power grids have grown larger and their power flows keep increasing rapidly, and convergence of the power flow calculation is therefore often difficult to achieve. Non-convergence generally takes two forms: in the first, various problems with the algorithm mean that a correct solution cannot be obtained even though a reasonable solution exists; the second, which is less well understood, is due to problems with the various parameter conditions of the grid. When faced with the second form, the given parameter conditions (such as the active and reactive power generation of the generators) must be modified to make the power flow calculation converge.
At present, non-convergence of the power flow is mainly handled by adjusting the flow manually. The adjustment process depends heavily on expert experience, requires a large number of complex trial-and-error steps, consumes a great deal of human resources and is very slow. China's large power grids are enormous in scale; when they fail to converge, the number of parameters involved is often hard to imagine, and the parameters that need changing are hardly countable, so manually adjusting a large grid until its power flow converges is a severe test: extremely slow and very labor-intensive. It follows that the conventional way of calculating and adjusting the power flow suffers from many intolerable problems, and a method that frees up manpower is urgently needed: using the powerful performance of computers to adjust the grid parameters automatically so that the power flow converges would save a great deal of human resources and improve precision, so automatic adjustment for power flow convergence is of great significance.
Artificial intelligence exploits the strong computing power of computers, so it is natural to consider applying it to power flow convergence adjustment to free up manpower. Applying artificial intelligence to power flow convergence adjustment of power systems is, however, still at an early stage: although there are some achievements, many papers remain exploratory and most cannot be used in actual power flow convergence adjustment, while convergence is in many cases still achieved through mathematical improvements; nevertheless, AI-based power flow convergence adjustment is entirely feasible. Moreover, current research on convergence adjustment rarely considers the topology of the power system. The current methods therefore have the following drawbacks:
(1) Artificial intelligence applied to power flow convergence adjustment is often at an exploratory stage and is difficult to apply to an actual power system.
(2) As for feature selection, current work offers no sound analysis of how to select features; most of it relies on empirical selection.
(3) The algorithms often do not take the topology of the power grid into account.
Disclosure of Invention
The invention aims to solve the problems that the current manual method of adjusting the power flow of the power grid consumes human resources and is slow, and that current artificial-intelligence adjustment methods are difficult to apply to an actual power system.
The invention relates to a power grid power flow adjustment method based on a graph neural network: first, a graph neural network model is used to perform graph embedding on the power grid data of the nodes of the power grid heterogeneous graph; then, for the non-converged power grid data after graph embedding, the power grid parameters are adjusted in a reinforcement-learning manner;
the process of performing graph embedding on the power grid data of the power grid heterogeneous graph nodes with the graph neural network model comprises the following steps:
node sampling of the power grid is carried out by random walk with restart;
for each of the target node and its neighbor nodes, the different parameters of the same node are converted into features of the same dimension; the features corresponding to the converted parameters are then aggregated in an order-independent manner, and an average-pooling operation is applied to the resulting node matrix to obtain the node feature f_1(v);
then, for the target node, the same-type neighbor nodes are selected for aggregation, and an average-pooling operation is applied to the resulting node matrix to obtain the feature aggregation f_2^t(v) of the same-type neighbor nodes of the target node;
the node feature f_1(v) corresponding to each of the target node and its neighbor nodes and the same-type neighbor aggregations f_2^t(v) of the target node are aggregated with weights to obtain the final embedding result.
Further, the process of sampling the nodes of the power grid by random walk with restart comprises the following steps:
performing a random walk from each target node to obtain neighbor nodes, counting the neighbor nodes by type, and stopping the walk once every type has reached its minimum required count; during the walk, which starts from the target node, the walk moves to a neighbor node with a set probability and otherwise returns to the original target node; the walk length is limited, and the walk also returns to the target node when the maximum length is reached;
then, for each neighbor type whose count exceeds the minimum, the minimum number of nodes is selected according to how often each node was sampled; this number is denoted k_t, where t denotes the node type.
Further, the order-independent aggregation of the features corresponding to the converted parameters is realized with a BiLSTM network.
Further, the node feature f_1(v) is:

f_1(v) = \frac{\sum_{i \in C_v} \left[ \overrightarrow{\mathrm{LSTM}}\{\mathcal{FC}_\theta(x_i)\} \oplus \overleftarrow{\mathrm{LSTM}}\{\mathcal{FC}_\theta(x_i)\} \right]}{|C_v|}

where f_1(v) ∈ R^{d×1} and d denotes the dimension of the content embedding; x_i denotes the i-th attribute of node v; C_v is the attribute set of node v and |C_v| its size; \mathcal{FC}_\theta(x_i) denotes the fully connected transformation of feature x_i; ⊕ denotes the concatenation of the two partial matrices; \overrightarrow{\mathrm{LSTM}} and \overleftarrow{\mathrm{LSTM}} denote the forward and backward LSTM of the BiLSTM.
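As an illustration only, the per-node aggregation above can be sketched in plain NumPy; the toy recurrent cell merely stands in for a real BiLSTM, and all weight shapes, scales and attribute values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # content-embedding dimension (d/2 hidden units per direction)

def fc(x, W, b):
    """Per-attribute fully connected map FC_theta: raw attribute -> R^d."""
    return np.tanh(W @ np.asarray(x, float) + b)

def bi_pass(seq, Wh_f, Wx_f, Wh_b, Wx_b):
    """Toy bidirectional recurrent pass standing in for the BiLSTM."""
    def one_dir(s, Wh, Wx):
        h, hs = np.zeros(d // 2), []
        for x in s:
            h = np.tanh(Wh @ h + Wx @ x)
            hs.append(h)
        return hs
    fwd = one_dir(seq, Wh_f, Wx_f)
    bwd = one_dir(seq[::-1], Wh_b, Wx_b)[::-1]
    return [np.concatenate(p) for p in zip(fwd, bwd)]

def f1(raw_attrs, fc_params, rnn_params):
    """f_1(v): map every attribute of node v into R^d, run the (toy) BiLSTM
    over the |C_v| features, then average-pool the hidden states."""
    feats = [fc(x, W, b) for x, (W, b) in zip(raw_attrs, fc_params)]
    return np.mean(bi_pass(feats, *rnn_params), axis=0)  # division by |C_v|

# a node with three attributes of different raw dimensions (hypothetical values)
raw = [[0.9], [1.0, 0.2], [0.1, 0.5, 0.3]]
fc_params = [(rng.normal(scale=0.3, size=(d, len(x))), np.zeros(d)) for x in raw]
rnn_params = tuple(rng.normal(scale=0.3, size=(d // 2, d // 2 if i % 2 == 0 else d))
                   for i in range(4))
emb = f1(raw, fc_params, rnn_params)
```

Because every per-attribute feature passes through both directions before pooling, the resulting embedding depends only weakly on the (arbitrary) attribute order, which is the property the BiLSTM is chosen for.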
Further, selecting the same-type neighbor nodes of the target node for aggregation and applying an average-pooling operation to the resulting node matrix to obtain the feature aggregation of the same-type neighbor nodes of the target node comprises the following steps:
let VN_t(v) denote the set of sampled t-type neighbor nodes of the target node v, and let f_2^t(v) denote the aggregation over the same-type neighbor nodes v' ∈ VN_t(v); the node feature corresponding to v' is its embedded value f_1(v'); a BiLSTM is selected to process the feature expressions of the nodes of the same type, and an average-pooling operation is finally added to obtain the feature aggregation f_2^t(v).
Further,

f_2^t(v) = \frac{\sum_{v' \in VN_t(v)} \left[ \overrightarrow{\mathrm{LSTM}}\{f_1(v')\} \oplus \overleftarrow{\mathrm{LSTM}}\{f_1(v')\} \right]}{|VN_t(v)|}
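A minimal sketch of this same-type aggregation, again with a toy bidirectional recurrent pass in place of a real BiLSTM (weights shared across directions here for brevity; all dimensions and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4  # embedding dimension of f_1

def f2_t(neighbor_f1, Wh, Wx):
    """f_2^t(v): a toy bidirectional recurrent pass over the f_1 embeddings of
    the sampled t-type neighbors VN_t(v), followed by average pooling."""
    def one_dir(seq):
        h, hs = np.zeros(d // 2), []
        for x in seq:
            h = np.tanh(Wh @ h + Wx @ x)
            hs.append(h)
        return hs
    fwd = one_dir(neighbor_f1)
    bwd = one_dir(neighbor_f1[::-1])[::-1]
    return np.mean([np.concatenate(p) for p in zip(fwd, bwd)], axis=0)

# three sampled t-type neighbors, each already embedded by f_1 (hypothetical)
neighbor_f1 = [rng.normal(size=d) for _ in range(3)]
Wh = rng.normal(scale=0.3, size=(d // 2, d // 2))
Wx = rng.normal(scale=0.3, size=(d // 2, d))
agg = f2_t(neighbor_f1, Wh, Wx)
```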
Further, in the process of aggregating with weights the node feature f_1(v) of the target node and the same-type neighbor aggregations f_2^t(v) of the target node, an attention mechanism applies different weights to the different-type neighbor node groups: under the attention mechanism, the feature expressions of the different-type neighbor node groups are first combined with the feature expression of the target node, and the different results are then aggregated together, the final output embedding being:

\varepsilon_v = a_{v,v} f_1(v) + \sum_{t} a_{v,t} f_2^t(v)

where ε_v ∈ R^{d×1} denotes the final embedded value of the target node and a_{v,*} denotes the importance of the different types of node groups to the target node, i.e., the aggregation weight.
Further, the determination of the aggregation weight a_{v,*} comprises the following steps:
let F(v) = {f_1(v)} ∪ {f_2^t(v), t ∈ V_t} collect the feature expressions of the different types of node groups, where f_i ∈ F(v) denotes one such feature expression; a_{v,*} is then expressed as:

a_{v,i} = \frac{\exp\{\mathrm{LeakyReLU}(u^{\mathrm{T}} [f_i \oplus f_1(v)])\}}{\sum_{f_j \in F(v)} \exp\{\mathrm{LeakyReLU}(u^{\mathrm{T}} [f_j \oplus f_1(v)])\}}

where u ∈ R^{2d×1} is the attention parameter.
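The weighted aggregation above can be sketched as follows; the LeakyReLU slope and all tensor values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def attention_aggregate(f1_v, f2_groups, u, slope=0.2):
    """epsilon_v = sum_i a_{v,i} f_i over F(v) = {f_1(v)} U {f_2^t(v) per type t},
    with a_{v,i} a softmax over scores u^T [f_i ; f_1(v)]."""
    F = [f1_v] + list(f2_groups)
    scores = np.array([u @ np.concatenate([fi, f1_v]) for fi in F])
    scores = np.where(scores > 0, scores, slope * scores)  # LeakyReLU (assumed slope)
    a = np.exp(scores - scores.max())
    a = a / a.sum()                                        # softmax weights
    eps_v = sum(w * fi for w, fi in zip(a, F))
    return eps_v, a

f1_v = rng.normal(size=d)                            # target node's own feature
f2_groups = [rng.normal(size=d) for _ in range(3)]   # e.g. bus/load/AC-line groups
u = rng.normal(size=2 * d)                           # attention parameter in R^{2d}
eps_v, a = attention_aggregate(f1_v, f2_groups, u)
```

Since the weights form a softmax, closely related neighbor groups (e.g. the bus group for a generator) can dominate the embedding while weakly related groups are downweighted, which is exactly the asymmetry the text motivates.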
Further, during training the graph neural network model needs to maximize the distance between nodes of different types and minimize the distance between nodes of the same type, the distances being computed from the finally obtained embeddings of the different nodes; the training objective is:

\max_{\Theta} \sum_{v} \sum_{t} \sum_{v_c \in VN_t(v)} \log p(v_c \mid v; \Theta)

where the conditional probability p(v_c | v; Θ) is defined as a softmax function:

p(v_c \mid v; \Theta) = \frac{\exp(\varepsilon_{v_c} \cdot \varepsilon_v)}{\sum_{v_k \in V_t} \exp(\varepsilon_{v_k} \cdot \varepsilon_v)}

where V_t is the set of nodes of type t in the graph, and Θ denotes the node embeddings ε_v output by the formula above; v_c is a node in the set VN_t(v), and ε_{v_c} denotes the embedding result corresponding to node v_c;
the model parameters are updated with an Adam optimizer, and the training iterations are repeated until the change between two consecutive training rounds is smaller than a change threshold, yielding the graph neural network for processing the power grid heterogeneous graph.
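The training objective can be sketched as follows; the toy embedding table, node types and context pairs are hypothetical, and in practice the parameters would be updated iteratively by Adam rather than evaluated once:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(emb, v, v_c, candidates):
    """log p(v_c | v; Theta): log-softmax over the candidate nodes of v_c's type."""
    scores = np.array([emb[k] @ emb[v] for k in candidates])
    m = scores.max()                       # stabilized log-sum-exp
    return emb[v_c] @ emb[v] - (m + np.log(np.exp(scores - m).sum()))

def objective(emb, pairs, members_by_type):
    """The quantity to maximize: sum of log p(v_c | v) over sampled context pairs."""
    return sum(log_p(emb, v, v_c, members_by_type[t]) for v, v_c, t in pairs)

# toy embedding table: nodes 0-2 are buses, nodes 3-4 generators (hypothetical)
emb = {i: rng.normal(size=4) for i in range(5)}
members = {"bus": [0, 1, 2], "gen": [3, 4]}
pairs = [(3, 0, "bus"), (0, 4, "gen")]     # (target v, context v_c, type of v_c)
obj = objective(emb, pairs, members)
```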
Further, in the process of adjusting the power grid parameters by reinforcement learning, the active power generation and the reactive power generation of the generators are adjusted according to the Q-Learning algorithm;
when the Q table is built, each row represents a state and the columns correspond to the Q values of the different actions in that state, a Q value being the estimated return of the corresponding behavior in that state;
the graph embedding of each node obtained by the graph neural network is reduced to three dimensions, the reduced embedding of a generator being recorded as [x, y, z]; the Q table has 8 states, according to whether x, y and z are each greater or smaller than the averages of x, y and z over the normal generator nodes: (1) x greater, y greater, z greater; (2) x greater, y greater, z smaller; (3) x greater, y smaller, z greater; (4) x smaller, y greater, z greater; (5) x greater, y smaller, z smaller; (6) x smaller, y greater, z smaller; (7) x smaller, y smaller, z greater; (8) x smaller, y smaller, z smaller;
a probability ε is set; when the learning agent needs to select an action in a certain state, it performs the action with the maximum Q value with the corresponding probability and otherwise selects randomly among all actions; after the action is selected, the Q value of the action selected in the current state is updated;
there are four actions in the Q table: (1) increase active power generation, reactive power generation unchanged; (2) decrease active power generation, reactive power generation unchanged; (3) active power generation unchanged, increase reactive power generation; (4) active power generation unchanged, decrease reactive power generation. As for the return: at each iteration the distance between the graph embedding of the abnormal generator and the center vector of the graph embeddings of the normal generators is retained; after the iterative update, the distance from the abnormal generator to the center vector of the normal generators is computed and compared with the previously retained distance, and the return is set to 1 if the distance has decreased and to -1 otherwise.
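The Q-Learning scheme above can be sketched as follows; the numbering of the 8 octant states, the learning rate and the discount factor are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def octant_state(g_emb, normal_mean):
    """Map a generator's 3-d reduced embedding [x, y, z] to one of 8 states by
    comparing each coordinate with the normal-generator average (index order
    is arbitrary here)."""
    b = np.asarray(g_emb) > np.asarray(normal_mean)
    return int(b[0]) * 4 + int(b[1]) * 2 + int(b[2])  # 0..7

N_STATES, N_ACTIONS = 8, 4   # actions: +P, -P, +Q, -Q generation
Q = np.zeros((N_STATES, N_ACTIONS))

def choose_action(s, eps=0.1):
    """epsilon-greedy: exploit the max-Q action, explore with probability eps."""
    return int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(Q[s].argmax())

def reward(prev_dist, new_dist):
    """+1 if the abnormal generator moved closer to the normal-generator centre."""
    return 1.0 if new_dist < prev_dist else -1.0

def update(s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Standard Q-Learning update of the chosen action's value."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

s = octant_state([1.2, -0.3, 0.7], [0.5, 0.5, 0.5])   # x>, y<, z> -> state 5 here
a = choose_action(s)
update(s, a, reward(2.0, 1.4), octant_state([0.8, 0.1, 0.6], [0.5, 0.5, 0.5]))
```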
The beneficial effects are that:
according to the invention, the power grid is regarded as heterogeneous graph data, the characteristic expression is carried out through the graph neural network, and the reinforcement learning is utilized to carry out parameter adjustment, so that the method is different from other modes of adjusting the power grid data trend through artificial intelligence, and thus, all key information of the power grid data is fully utilized instead of only using node parameters, so that the expression is more accurate, the reinforcement learning environment feedback step is more accurate, the reinforcement learning accuracy is improved, the reinforcement learning search space is reduced, and the efficiency is improved.
Meanwhile, the invention captures the relations between the node graph embeddings of the power grid and exploits the powerful node-classification capability of the graph neural network, which effectively screens abnormal generator data; the key parts responsible for the non-convergence of the grid can thus be found and adjusted in a targeted way, which is faster and more effective than the traditional method.
Drawings
FIG. 1 is a schematic flow chart of the present invention;
fig. 2 is a schematic diagram of a network structure according to the present invention.
Detailed Description
The first embodiment is as follows:
the power grid power flow adjustment method based on the graph neural network in the embodiment is a method for adjusting power grid power flow by utilizing an alien graph neural network and reinforcement learning, and compared with an isomorphic graph, the alien graph has greatly improved complexity, and the alien graph has the following challenges:
(1) Different types of nodes in the heterogeneous graph have parameters of different kinds and numbers. For example, a bus has 7 parameters and a generator has 17, and the parameters include character, integer and floating-point types. This poses a great challenge to the traditional graph neural network, which requires the feature dimensions of different node types to be kept consistent so that they become reasonable input data. The 1st challenge is therefore: how to keep the nodes in the heterogeneous graph dimensionally consistent and aggregate them.
(2) Different node types in the heterogeneous graph have different feature dimensions that cannot be unified directly, so the 2nd challenge is: how to effectively aggregate parameters of different types and dimensions within nodes.
(3) Nodes of one type in a heterogeneous graph often have neighbors of fixed types, which makes it difficult to aggregate node information over all types. In the power grid graph data constructed above, generators, transformers and loads connect only to buses, and only AC lines exist between buses; if only first-order neighbors were aggregated, the influence of distant neighbors would be weakened and the node expression might be inaccurate. The 3rd challenge is: how to design the sampling so that nodes in the heterogeneous graph can fully receive information from different types of neighbor nodes.
(4) The different types of neighbors of a node in the heterogeneous graph may each exert their own weighted influence on the target node during aggregation and cannot be assumed to have the same influence. For example, a bus is closely related to the nodes whose power it concentrates (generators, transformers, etc.) but less so to AC lines, which involve different buses and therefore have a more general embedding. Most current graph neural networks do not consider applying weights to neighbors to reflect their different influences on the target node. The 4th challenge is therefore: how to weight the neighbor nodes during aggregation so that different types of neighbors have different degrees of influence.
(I) Constructing and training a network model for the power grid heterogeneous graph data:
for the four challenges mentioned above, from the operational process aspect of the graph neural network, the network model that processes grid heterogeneous graph data should contain five parts: (1) sampling different types of neighbor nodes; (2) Aggregating different dimension parameters of the same node (target node and neighbor node); (3) aggregating parameters of neighbor nodes of the same type; (4) The parameters of different types of neighbor nodes and target nodes are weighted and aggregated; (5) designing a loss function and training targets.
In response to these challenges, the network architecture of the present invention is shown in fig. 2.
Most aggregation methods adopted by graph neural networks take parameter information from the first-order neighbors of a node; for the heterogeneous graph, however, this is unreasonable in many respects: (1) The node types a target node connects to are relatively fixed, and some types of nodes may never be sampled; for example, a generator connects only to a bus and not directly to a transformer, an AC line, etc., so the sampled information is incomplete and the expressive quality of the graph embedding suffers. (2) Different sampling orders over different nodes may mean that a node sampled early does not obtain complete sampling information, while a node sampled late may be oversampled and affected by weakly related nodes.
Node type here refers to the type of each node in the power grid graph: load, generator, transformer, bus and AC line. Saying that the neighbor types of a target node are relatively fixed means that, in the power grid graph, generators, loads, transformers and AC lines connect only to buses, while buses connect to generators, loads, transformers and AC lines.
To address these problems, the invention adopts a node sampling strategy realized by random walk with restart: first, a rule is made for the number of neighbors the target node must sample; the counts are grouped by node type (the number of neighbor nodes of each type), and the walk stops when every type of neighbor node has reached its minimum count. Walk(v) denotes the neighbor nodes obtained after node v performs a random walk with restart. A probability p is set: starting from the target node, the walk moves to a neighbor node with probability p and otherwise returns to the original target node; the walk length has a maximum limit, and the walk also returns to the target node when this limit is reached. After the walk stops upon reaching the minima, the neighbors of each type are counted; for the types with more nodes than the minimum, the minimum number of nodes is selected according to how often each was sampled, this number being denoted k_t, where t denotes the node type.
Stopping the walk when the number of all types of neighbors reaches the minimum limit: each target node obtains some neighbor nodes during its random walk, and the neighbor nodes are counted by type; the random walk stops once every category has reached its minimum. For example, if the target node is a generator and 3 bus-type neighbor nodes, 1 transformer-type neighbor node, 1 load-type neighbor node and 1 AC-line neighbor node have been found, the random walk can stop; otherwise it keeps searching.
Selecting, from the types with more nodes than the minimum, the minimum number of nodes according to how often they were sampled: since the random walk is uncertain, the minimum number of bus-type neighbor nodes might be 3, but before the minima of all types are reached, 4, 5 or more bus neighbor nodes may already have been found; the minimum number of nodes, i.e. 3, must then be selected from these bus neighbor nodes by ranking them by the number of times they were sampled and taking the top 3.
With the node types and minima explained above, the implementation of the random walk is simply the sampling of each node's neighbor nodes. First, the number (i.e. the minimum) of neighbor nodes of each type required for a sampled target node is specified; for example, a generator node may be specified to require 3 bus-type neighbor nodes, 1 transformer-type neighbor node, 1 load-type neighbor node and 1 AC-line neighbor node (this specification is not fixed and needs to be tuned to the data set). These numbers are then used as the termination condition of the random walk: neighbor sampling for the target node stops as soon as the numbers of the different types of sampled neighbor nodes all reach the per-type minimum.
The sampling manner is now described, with Walk(v) denoting the neighbor nodes obtained after node v performs a random walk with restart. A probability p is set: starting from the target node, the walk moves to a neighbor node with probability p and otherwise returns to the original target node; the random walk has a maximum length, and the walk also returns to the target node when that length is reached. After the termination condition above is met, the numbers of the different types of neighbor nodes are counted: if a count equals the minimum, nothing is done; if the count of some type exceeds its minimum, the neighbor nodes of that type are ranked from high to low by the number of times they were sampled, and the first nodes, equal in number to the minimum, are selected.
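The sampling procedure above can be sketched as follows; the toy grid graph, restart probability and length cap are illustrative assumptions:

```python
import random
from collections import Counter, defaultdict

def rwr_sample(graph, node_type, v, k_t, p=0.5, max_len=5, max_iters=100000):
    """Random walk with restart from target node v.
    graph: node -> list of neighbors; node_type: node -> type name;
    k_t: type -> required minimum (and kept number) of neighbors.
    Walks until every type has at least k_t distinct sampled neighbors (or
    max_iters), then keeps, per type, the k_t most frequently sampled nodes."""
    visits, by_type = Counter(), defaultdict(set)
    cur, steps = v, 0
    for _ in range(max_iters):
        if graph[cur] and random.random() < p:
            cur, steps = random.choice(graph[cur]), steps + 1
        else:
            cur, steps = v, 0                  # restart at the target node
        if steps > max_len:
            cur, steps = v, 0                  # length cap also forces a restart
        if cur != v:
            visits[cur] += 1
            by_type[node_type[cur]].add(cur)
        if all(len(by_type[t]) >= k for t, k in k_t.items()):
            break                              # termination condition reached
    return {t: sorted(by_type[t], key=lambda n: -visits[n])[:k]
            for t, k in k_t.items()}

# toy grid: a generator and a load each attach to a bus; buses join via an AC line
random.seed(0)
graph = {"g": ["b1"], "l": ["b2"], "ac": ["b1", "b2"],
         "b1": ["g", "ac"], "b2": ["l", "ac"]}
node_type = {"g": "gen", "l": "load", "ac": "acline", "b1": "bus", "b2": "bus"}
sample = rwr_sample(graph, node_type, "g", {"bus": 2, "load": 1, "acline": 1})
```

Because restarts keep the walk anchored at the target node, nearby neighbors are sampled more often, so the most-sampled-first selection naturally favors the closest k_t nodes of each type.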
This sampling approach effectively resolves the challenge above: first, all types of nodes can be sampled in this way; second, the k_t neighbors of each type t closest to the target node can be selected under the given values; finally, same-type neighbors are counted, which facilitates the subsequent parameter aggregation over nodes of the same type and ultimately the computation of the target node's graph embedding.
Most graph neural networks process graph data whose nodes have features of the same dimension and cannot handle nodes with feature parameters of different dimensions and types, so the heterogeneous graph neural network needs a method for aggregating parameters of different types within the same node.
To solve this problem, feature engineering is first performed on the power grid data to screen the key features of each element, and these key features are processed with fully connected networks (or other per-type transformations) to guarantee a consistent output dimension. For example, a transformer has categorical character-valued features; these can be converted to one-hot variables and then processed with a parameterized fully connected network. Whatever the processing mode, the output dimensions of the different feature types are kept consistent, which provides the basis for aggregating features of different types.
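A minimal sketch of this per-type projection to a common dimension follows. The dimensions, feature choices, and random initialization are illustrative assumptions; in the real model the projection weights would be learned.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # common output dimension (a free choice)

def one_hot(index, n):
    """Encode a categorical value as a one-hot vector of length n."""
    v = np.zeros(n)
    v[index] = 1.0
    return v

# e.g. a categorical transformer feature (3 classes, one-hot encoded)
# and a numeric generator feature vector (active/reactive set-points)
x_cat = one_hot(1, 3)
x_num = np.array([0.95, 0.12])

# per-type linear projections standing in for the fully connected networks
W_cat = rng.normal(size=(d, 3)) * 0.1
W_num = rng.normal(size=(d, 2)) * 0.1

h_cat = W_cat @ x_cat   # both features now live in the same R^d space
h_num = W_num @ x_num
```

After this step the categorical and numeric features are interchangeable inputs to the aggregation networks that follow.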
All features, now of the same dimension, are then aggregated into the node's value; this process is denoted f_1(v). No matter what model is used, node features must be aggregated in some order, yet no meaningful input order for the features can be determined. Because a BiLSTM (with pooling) is insensitive to the feature input order, it can be used to aggregate the features obtained in the previous step, capturing deep feature interactions and yielding a better node representation. In summary, f_1(v) can be expressed by formula (1):
$$f_1(v)=\frac{\sum_{i\in C_v}\left[\overrightarrow{\mathrm{LSTM}}\{\mathcal{FC}_{\theta_x}(x_i)\}\oplus\overleftarrow{\mathrm{LSTM}}\{\mathcal{FC}_{\theta_x}(x_i)\}\right]}{|C_v|}\qquad(1)$$

wherein f_1(v) ∈ R^{d×1}, d being the dimension of the content embedding; x_i is the i-th attribute of node v, i.e. the attribute value before conversion to the common dimension; each attribute of node v is processed by the bidirectional LSTM, and the results are summed and divided by |C_v|; FC_{θ_x} is the conversion function for feature x_i, which may be a fully connected network, a convolutional network, a direct parameter transformation, etc.; ⊕ denotes the concatenation of the two partial matrices; C_v is the attribute set of node v, and |C_v| is its cardinality; the arrows denote the forward and backward LSTM of the BiLSTM.
The LSTM is defined by formula (2), its input at step i being the converted feature FC_{θ_x}(x_i), here written x_i for brevity:

$$\begin{aligned} z_i &= \sigma\!\left(U_z x_i + W_z h_{i-1} + b_z\right)\\ f_i &= \sigma\!\left(U_f x_i + W_f h_{i-1} + b_f\right)\\ o_i &= \sigma\!\left(U_o x_i + W_o h_{i-1} + b_o\right)\\ \hat{c}_i &= \tanh\!\left(U_c x_i + W_c h_{i-1} + b_c\right)\\ c_i &= f_i \odot c_{i-1} + z_i \odot \hat{c}_i\\ h_i &= \tanh(c_i) \odot o_i \end{aligned}\qquad(2)$$

wherein h_i ∈ R^{(d/2)×1} is the output hidden state of the i-th attribute; ⊙ denotes the Hadamard product; U_j ∈ R^{(d/2)×d_f} and W_j ∈ R^{(d/2)×(d/2)} are the weights of the network model and b_j ∈ R^{(d/2)×1} its biases, with j ∈ {z, f, o, c}; U_j, W_j, b_j are the parameters to be learned; z_i, f_i and o_i are the input-gate, forget-gate and output-gate vectors of the i-th attribute feature; c_i is the cell state of the i-th attribute.
In summary: first, a conversion (direct transformation or a neural network) maps the different feature types to the same dimension; then a BiLSTM aggregates the converted features in an order-insensitive manner; finally, an average pooling operation over the resulting node matrix yields the node feature. Depending on the node type, a different BiLSTM is used to aggregate its features. The advantages are: first, additional parameters can be added flexibly, so the model is easy to extend; second, the model is simple and of low complexity, which benefits subsequent computation; finally, it can handle unordered heterogeneous graph feature information to obtain a better feature expression. This part corresponds to the NN-1 network of fig. 2.
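The NN-1 content aggregation of formulas (1)-(2) can be sketched in plain numpy as follows. This is a reading of the formulas above, not the patent's code: parameter shapes (d = 4, so d/2 = 2 hidden units; converted feature dimension d_f = 3) and the `params` packing are assumptions of the example.

```python
import numpy as np

def lstm_states(xs, U, W, b):
    """One plain LSTM pass (formula (2)) over a list of input vectors,
    returning the hidden state h_i at every step (each of size d/2)."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    h = np.zeros_like(b['z'])
    c = np.zeros_like(h)
    outs = []
    for x in xs:
        z = sig(U['z'] @ x + W['z'] @ h + b['z'])        # input gate
        f = sig(U['f'] @ x + W['f'] @ h + b['f'])        # forget gate
        o = sig(U['o'] @ x + W['o'] @ h + b['o'])        # output gate
        c_hat = np.tanh(U['c'] @ x + W['c'] @ h + b['c'])
        c = f * c + z * c_hat                            # cell state update
        h = np.tanh(c) * o                               # hidden state
        outs.append(h)
    return outs

def f1(feats, params):
    """Formula (1): run the BiLSTM over the converted attribute features,
    concatenate forward/backward states, and average-pool over attributes."""
    fwd = lstm_states(feats, *params)
    bwd = lstm_states(feats[::-1], *params)[::-1]
    return np.mean([np.concatenate(hb) for hb in zip(fwd, bwd)], axis=0)
```

The mean over attributes is what makes the result largely order-insensitive, as the text argues.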
Aggregating parameters of neighbor nodes of the same type:
After this step, a node expression of the same dimension has been obtained for each node. Since different types of neighbor nodes will later be assigned different weights (they influence the target node differently), this section first aggregates the neighbor nodes of the same type.
Let VN_t(v) denote the sampled set of type-t neighbor nodes of the target node v, and let f_2^t(v) denote the aggregation over same-type neighbors v' ∈ VN_t(v). The previous step produced the embedded value f_1(v') for each v' ∈ VN_t(v), and the feature expressions of different nodes of the same type can be handled uniformly: to reduce the influence of node ordering on the feature representation (the nodes within a type are unordered), a BiLSTM is again chosen to process the feature expressions within each type, followed by an average pooling operation, giving the feature aggregation f_2^t(v) of the target node's same-type neighbors, as shown in formula (3):

$$f_2^t(v)=\frac{\sum_{v'\in VN_t(v)}\left[\overrightarrow{\mathrm{LSTM}}\{f_1(v')\}\oplus\overleftarrow{\mathrm{LSTM}}\{f_1(v')\}\right]}{|VN_t(v)|}\qquad(3)$$
wherein the LSTM module follows the same formula as (2), differing only in its inputs and parameter settings. Different BiLSTMs are used for the different types of neighbor node sets; the BiLSTM handles unordered node sets well. This part corresponds to the NN-2 network of fig. 2.
The different types of neighbor nodes and target node parameters are weighted and aggregated:
The previous step produced the feature expression of each same-type neighbor group of the target node. As noted in the earlier discussion of heterogeneous-graph challenges, different types of neighbors influence the target node's graph embedding differently, so an attention mechanism is now used to apply different weights to the different types of neighbor node groups. The feature expression of each neighbor group is first combined with the feature expression of the target node, and the attention-weighted results are then aggregated together; the final output embedding is shown in formula (4):
$$\varepsilon_v = a_{v,v}\, f_1(v) + \sum_{t} a_{v,t}\, f_2^t(v)\qquad(4)$$

wherein ε_v ∈ R^{d×1} is the final embedded value of the target node, and a_{v,*} is the importance, i.e. the aggregation weight, of each type of node group to the target node.

Let the feature expressions of the different node groups be F(v) = { f_1(v) } ∪ { f_2^t(v) }, where F_i ∈ F(v) denotes the feature expression of the i-th node group. Then a_{v,i} is given by formula (5):

$$a_{v,i}=\frac{\exp\{\mathrm{LeakyReLU}(u^{\mathrm{T}}[F_i \oplus f_1(v)])\}}{\sum_{F_j\in F(v)}\exp\{\mathrm{LeakyReLU}(u^{\mathrm{T}}[F_j \oplus f_1(v)])\}}\qquad(5)$$

wherein u ∈ R^{2d×1} is the attention parameter.
Finally, the embedded value of the target node is obtained; its dimension is d, consistent with the dimension d of the previous steps. This part corresponds to the NN-3 network of fig. 2.
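The NN-3 attention aggregation of formulas (4)-(5) can be sketched as follows. The function and variable names are assumptions of the example; the LeakyReLU slope 0.01 is also an assumption, since the patent does not state one.

```python
import numpy as np

def attention_aggregate(f1_v, f2_list, u):
    """Formulas (4)-(5): attention-weight the target's own feature f1(v)
    and each same-type neighbor aggregate f2^t(v), then sum.
    All features are d-dimensional; u is the 2d attention parameter."""
    feats = [f1_v] + list(f2_list)              # F(v) = {f1(v)} U {f2^t(v)}
    leaky = lambda a: a if a > 0 else 0.01 * a  # LeakyReLU
    scores = np.array([leaky(float(u @ np.concatenate([F, f1_v])))
                       for F in feats])
    w = np.exp(scores - scores.max())
    w /= w.sum()                                # softmax attention weights a_{v,*}
    eps_v = sum(a * F for a, F in zip(w, feats))
    return eps_v, w
```

The returned ε_v keeps the common dimension d, as the text requires, and the weights sum to 1 by construction.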
Loss function and training objective:
From the above and the purpose of the generated graph embeddings, it follows that the distance between nodes of different types should be maximized and the distance between nodes of the same type minimized, distances being computed from the final embeddings of the nodes. The training objective is therefore defined as shown in formula (6):

$$o=\arg\max_{\Theta}\sum_{v}\sum_{v_c\in VN_t(v)}\log p(v_c\mid v;\Theta)\qquad(6)$$

wherein the conditional probability p(v_c | v; Θ) is defined as a softmax function, as shown in formula (7):

$$p(v_c\mid v;\Theta)=\frac{\exp(\varepsilon_{v_c}\cdot\varepsilon_v)}{\sum_{v_k\in V_t}\exp(\varepsilon_{v_k}\cdot\varepsilon_v)}\qquad(7)$$

wherein V_t is the set of nodes of that type in the graph, and Θ denotes the node embeddings ε_v output by the formulas above.
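The softmax of formula (7) reduces to dot products between final embeddings; a minimal sketch (function name and the embedding dictionary are assumptions of the example):

```python
import numpy as np

def log_p(emb, v, v_c, nodes_of_type):
    """Formula (7): log-probability of context node v_c given node v,
    a softmax over dot products of final embeddings, normalised over
    all nodes of v_c's type."""
    logits = np.array([float(emb[u] @ emb[v]) for u in nodes_of_type])
    logits -= logits.max()                  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(np.log(probs[nodes_of_type.index(v_c)]))
```

Maximizing the sum of these log-probabilities over sampled (v, v_c) pairs pulls same-type embeddings together, as formula (6) intends.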
The model parameters are updated with the Adam optimizer, and training iterations are repeated until the change between two consecutive training rounds is sufficiently small, yielding a graph neural network for processing the power grid heterogeneous graph.
(II) power grid parameter adjustment based on reinforcement learning:
After the above steps, a graph neural network model is obtained that performs graph embedding on the nodes of the power grid heterogeneous graph and handles both converged and non-converged grid data. According to experience, the data that mainly need adjustment are the active and reactive power generation of the generators; experiments show that in non-convergent data sets the graph-embedded representation of an abnormal generator lies far from that of the normal data set and cannot be accurately classified as a generator. Therefore, reinforcement learning is used to adjust the active and reactive power generation of the generator. The active and reactive power generation of the generator serve as the learning subject (agent), and the graph embedding produced by the trained graph neural network serves as the environment.
Given these factors, a value-based reinforcement learning strategy is needed; on balance, the Q-Learning algorithm is selected to adjust the non-convergent grid data.
The Q-Learning algorithm flow is as follows: initialize the Q table; select an action a; execute the action and obtain a reward; update the Q table; return to the action-selection step and iterate until the state reaches a terminal value. The details are described below.
When the Q table is built, each row represents a state and each column corresponds to the Q value of a different action in that state; the Q value is the estimated return of the corresponding behavior in that state.
Each row represents a state related to the graph embedding of the grid graph after processing by the heterogeneous graph neural network. For a grid graph, after this processing each node is represented as a vector of the same dimension; for example, a generator might be represented as [1, 1].
Typically, the graph embedding vectors of nodes of the same type lie very close together, while those of different types lie far apart. For non-convergent grid data (generally the generator data need adjustment, so only the generator embeddings matter), after the graph neural network processing, the embedding of a generator that needs adjustment deviates from the embeddings of the normal generators, so the adjustment must be performed.
To conveniently adjust the non-converged power grid data with Q-Learning, the graph embedding of each node obtained from the graph neural network is further reduced to three dimensions, so that the Q table has 8 states. Writing the engine to be adjusted as [x, y, z] and letting avg(·) denote the average over the normal engine nodes, the eight states are:
(1) x > avg(x), y > avg(y), z > avg(z);
(2) x > avg(x), y > avg(y), z < avg(z);
(3) x > avg(x), y < avg(y), z > avg(z);
(4) x < avg(x), y > avg(y), z > avg(z);
(5) x > avg(x), y < avg(y), z < avg(z);
(6) x < avg(x), y > avg(y), z < avg(z);
(7) x < avg(x), y < avg(y), z > avg(z);
(8) x < avg(x), y < avg(y), z < avg(z).
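The discretization into the eight states can be sketched as a simple per-coordinate comparison. The binary encoding below is an illustrative assumption: it covers the same eight cases but orders them differently from the enumeration above.

```python
def state_index(xyz, normal_mean):
    """Map a 3-d engine embedding to one of the 8 Q-table states by
    comparing each coordinate with the normal-generator average
    (bit = 1 when the coordinate is below the average)."""
    bits = [1 if a < m else 0 for a, m in zip(xyz, normal_mean)]
    return bits[0] * 4 + bits[1] * 2 + bits[2]   # 0 .. 7
```

An embedding above the average in all three coordinates maps to state 0, and one below in all three maps to state 7.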
A probability ε is set: when the learning subject must select an action in a given state, it takes the action with the maximum Q value with that probability, and otherwise selects uniformly at random among all actions, which effectively prevents getting stuck in a local optimum. After the action is selected, the Q value of the selected action in the current state is updated according to formula (8):
$$Q(s,a)\leftarrow Q(s,a)+\alpha\left[r+\gamma\max_{a_1}Q(s_1,a_1)-Q(s,a)\right]\qquad(8)$$
That is, the new Q value of the action in this state = the value before the update + the learning step size × (estimate − current value), where the current value is the Q value of the action before the update, and the estimate is the return obtained by taking the current action in the current state plus the maximum Q value over actions in the next state. In the formula, α is the learning step size, i.e. the learning rate, and γ is the decay coefficient used to discount future returns.
Based on the Q-Learning algorithm, the following method is designed to adjust the active and reactive power generation of the generator. Since each of the active and reactive power generation can be increased or decreased, the Q table actually contains four actions: (1) increase active generation, reactive unchanged; (2) decrease active generation, reactive unchanged; (3) active unchanged, increase reactive generation; (4) active unchanged, decrease reactive generation. For the reward setting: at each iteration, the distance between the abnormal generator's graph embedding and the center vector of the normal generators' graph embeddings is recorded; after the update, the new distance from the abnormal engine to the normal-engine center vector is computed and compared with the recorded distance; if it is smaller, the reward is set to 1, otherwise to −1.
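The pieces above (formula (8), the ε-greedy selection, and the distance-based reward) can be sketched as follows. The function names, the 8×4 table shape, and the default α and γ are assumptions of the example.

```python
import numpy as np

ACTIONS = ["P+", "P-", "Q+", "Q-"]   # the four generation adjustments

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Formula (8): Q(s,a) <- Q(s,a) + alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def choose_action(Q, s, eps, rng):
    """Per the text: take the max-Q action with probability eps,
    otherwise explore uniformly at random."""
    if rng.random() < eps:
        return int(Q[s].argmax())
    return int(rng.integers(len(ACTIONS)))

def reward(dist_before, dist_after):
    """+1 if the abnormal generator's embedding moved closer to the
    normal-generator centroid after the adjustment, else -1."""
    return 1.0 if dist_after < dist_before else -1.0
```

Starting from Q = np.zeros((8, 4)), a single update with r = 1 in state 0, action 1 leaves Q[0, 1] = α · 1 = 0.1 under the defaults.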
In terms of data representation, the power grid can be constructed as a heterogeneous graph with features on its vertices, whereas existing machine learning algorithms fall short when handling complex graph data in matrix form. The invention therefore processes the power grid with a graph neural network. Many graph neural network tasks require a graph embedding operation, whose goal is to represent network nodes in a low-dimensional vector space while preserving the network topology and node information, facilitating subsequent graph analysis tasks; the graph neural network achieves this goal well during training. The invention provides a method for adjusting the power grid power flow using a heterogeneous graph neural network and reinforcement learning, taking into account both the topological structure and the node characteristics of the power grid.
The key point of the invention is to treat the power grid as heterogeneous graph data, perform feature expression through the graph neural network, and adjust parameters with reinforcement learning. This differs from other AI-based approaches to power flow adjustment in that all key information of the grid data is exploited, not only the node parameters, making the expression more accurate; the environment-feedback step of the reinforcement learning is therefore also more accurate, which improves its accuracy, reduces its search space, and improves efficiency.
A further key point is that the method sharply captures the relations among the node graph embeddings of the power grid and exploits the powerful node-classification capability of the graph neural network, providing an effective screen for abnormal engine data; the non-convergent key parts of the power grid can thus be found and adjusted in a targeted way, faster and more effectively than with traditional methods.
It will be apparent to those skilled in the art that the present invention is capable of other and further embodiments, and that its details may be modified and varied, without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A power grid power flow adjustment method based on a graph neural network, characterized in that a graph neural network model is used to perform graph embedding on the power grid data of the power grid heterogeneous-graph nodes; then, for the non-converged power grid data after the graph embedding, the power grid parameters are adjusted by means of reinforcement learning;
the process of performing graph embedding on the power grid data of the power grid heterogeneous-graph nodes with the graph neural network model comprises the following steps:
node sampling of the power grid is performed by random walk with restart;
for each of the target node and its neighbor nodes, converting the different parameters of the node into features of the same dimension; then aggregating the features corresponding to the converted parameters in an order-insensitive manner, and applying an average pooling operation to the resulting node matrix to finally obtain the node feature f_1(v);
then, for the target node, selecting neighbor nodes of the same type for aggregation, and applying an average pooling operation to the resulting node matrix to obtain the feature aggregation f_2^t(v) of the target node's same-type neighbor nodes;
aggregating, in a weighted manner, the node feature f_1(v) of each of the target node and the neighbor nodes with the feature aggregation f_2^t(v) of the target node's same-type neighbor nodes, to obtain the final embedding result.
2. The power grid power flow adjustment method based on a graph neural network according to claim 1, wherein the process of sampling the grid nodes by random walk with restart comprises the following steps:
performing a random walk from each target node to obtain neighbor nodes, counting the sampled neighbors by type, and stopping the walk once the count of every type reaches its minimum; during the walk, starting from the target node, moving to a neighbor node with a set probability and otherwise returning to the original target node; the walk length is limited, and reaching the maximum limit also returns the walk to the target node;
then, for each neighbor type whose count exceeds the minimum, selecting from the nodes of that type the minimum number of nodes according to how often they were sampled, the minimum number being denoted k_t, where t denotes the node type.
3. The grid power flow adjustment method based on the graph neural network according to claim 2, wherein the order-insensitive aggregation of the features corresponding to the converted parameters is realized with a BiLSTM network.
4. The grid power flow adjustment method based on the graph neural network according to claim 3, wherein the node feature f_1(v) is as follows:

$$f_1(v)=\frac{\sum_{i\in C_v}\left[\overrightarrow{\mathrm{LSTM}}\{\mathcal{FC}_{\theta_x}(x_i)\}\oplus\overleftarrow{\mathrm{LSTM}}\{\mathcal{FC}_{\theta_x}(x_i)\}\right]}{|C_v|}$$

wherein f_1(v) ∈ R^{d×1}, d being the dimension of the content embedding; x_i is the i-th attribute of node v; C_v is the attribute set of node v and |C_v| its cardinality; FC_{θ_x} is the conversion function for feature x_i; ⊕ denotes the concatenation of the two partial matrices; the arrows denote the forward and backward LSTM in the BiLSTM.
5. The grid power flow adjustment method based on the graph neural network according to claim 4, wherein the process of selecting same-type neighbor nodes of the target node for aggregation and obtaining their feature aggregation by average pooling of the resulting node matrix comprises the following steps:

letting VN_t(v) denote the sampled set of type-t neighbor nodes of the target node v, and f_2^t the aggregation over same-type neighbors v' ∈ VN_t(v), the node feature of v' being the embedded value f_1(v') obtained in the previous step; a BiLSTM is selected to process the feature expressions of the nodes within the same type, and finally an average pooling operation is added to obtain the feature aggregation f_2^t(v) of the target node's same-type neighbor nodes.
6. The power grid power flow adjustment method based on a graph neural network according to claim 5, characterized in that

$$f_2^t(v)=\frac{\sum_{v'\in VN_t(v)}\left[\overrightarrow{\mathrm{LSTM}}\{f_1(v')\}\oplus\overleftarrow{\mathrm{LSTM}}\{f_1(v')\}\right]}{|VN_t(v)|}$$
7. The grid power flow adjustment method based on the graph neural network according to claim 6, wherein, in the process of aggregating in a weighted manner the node feature f_1(v) of each of the target node and its neighbor nodes with the feature aggregation f_2^t(v) of the target node's same-type neighbor nodes, an attention mechanism is used to apply different weights to the different types of neighbor node groups; according to the attention mechanism, the feature expression of each type of neighbor node group is first combined with the feature expression of the target node, and the different results are then aggregated together; the final output embedding result is:

$$\varepsilon_v = a_{v,v}\, f_1(v) + \sum_{t} a_{v,t}\, f_2^t(v)$$

wherein ε_v ∈ R^{d×1} is the final embedded value of the target node, and a_{v,*} is the importance, i.e. the aggregation weight, of each type of node group to the target node.
8. The grid power flow adjustment method based on the graph neural network according to claim 7, wherein the determination of the aggregation weights a_{v,*} comprises the following steps:

letting the feature expressions of the different node groups be F(v) = { f_1(v) } ∪ { f_2^t(v) }, where F_i ∈ F(v) denotes the feature expression of the i-th node group, a_{v,i} is expressed as follows:

$$a_{v,i}=\frac{\exp\{\mathrm{LeakyReLU}(u^{\mathrm{T}}[F_i \oplus f_1(v)])\}}{\sum_{F_j\in F(v)}\exp\{\mathrm{LeakyReLU}(u^{\mathrm{T}}[F_j \oplus f_1(v)])\}}$$

wherein u ∈ R^{2d×1} is the attention parameter.
9. The grid power flow adjustment method based on the graph neural network according to claim 8, wherein the graph neural network model needs, during training, to maximize the distances between nodes of different types and minimize the distances between nodes of the same type, the distances being computed from the final embeddings of the different nodes; the training objective is:

$$o=\arg\max_{\Theta}\sum_{v}\sum_{v_c\in VN_t(v)}\log p(v_c\mid v;\Theta)$$

wherein the conditional probability p(v_c | v; Θ) is defined as a softmax function:

$$p(v_c\mid v;\Theta)=\frac{\exp(\varepsilon_{v_c}\cdot\varepsilon_v)}{\sum_{v_k\in V_t}\exp(\varepsilon_{v_k}\cdot\varepsilon_v)}$$

wherein V_t is the set of nodes of that type in the graph; Θ denotes the node embeddings ε_v output by the formulas above; v_c is a node of the set VN_t(v), and ε_{v_c} denotes the embedding result corresponding to node v_c;
and the model parameters are updated with the Adam optimizer, training iterations being repeated until the change between two consecutive training rounds is smaller than a change threshold, thereby obtaining the graph neural network for processing the power grid heterogeneous graph.
10. The grid power flow adjustment method based on the graph neural network according to any one of claims 1 to 9, characterized in that, in the process of adjusting the grid parameters by means of reinforcement learning, the active power generation and the reactive power generation of the generator are adjusted according to the Q-Learning algorithm;
when the Q table is established, the row represents a state, and the column corresponds to the Q value of different actions in the state, and the Q value is the return of the corresponding action estimation in the state;
performing dimension reduction processing on graph embedding of each node obtained by the graph neural network, reducing the graph embedding to be three-dimensional, wherein the graph embedding of the dimension reduction of the engine is recorded as [ x, y, z ], and 8 states exist in a Q table: (1) x is greater than the average value of x of the normal engine nodes, y is greater than the average value of y of the normal engine nodes, and z is greater than the average value of z of the normal engine nodes; (2) x is greater than the average value of x of the normal engine nodes, y is greater than the average value of y of the normal engine nodes, and z is less than the average value of z of the normal engine nodes; (3) x is greater than the average value of x of the normal engine nodes, y is less than the average value of y of the normal engine nodes, and z is greater than the average value of z of the normal engine nodes; (4) x is smaller than the average value of x of the normal engine nodes, y is larger than the average value of y of the normal engine nodes, and z is larger than the average value of z of the normal engine nodes; (5) x is greater than the average value of x of the normal engine nodes, y is less than the average value of y of the normal engine nodes, and z is less than the average value of z of the normal engine nodes; (6) x is smaller than the average value of x of the normal engine nodes, y is larger than the average value of y of the normal engine nodes, and z is smaller than the average value of z of the normal engine nodes; (7) x is smaller than the average value of x of the normal engine nodes, y is smaller than the average value of y of the normal engine nodes, and z is larger than the average value of z of the normal engine nodes; (8) x is smaller than the average value of x of the normal engine nodes, y is smaller than the average value of y of the normal engine nodes, and z is smaller than the average value of z of the normal engine nodes;
setting a probability epsilon, when the learning main body needs to perform action selection in a certain state, performing action with the maximum Q value according to the probability, otherwise, randomly selecting in all actions; after the action is selected, updating the Q value of the action selected in the current state;
there are four actions in the Q table: (1) increase active generation, reactive unchanged; (2) decrease active generation, reactive unchanged; (3) active unchanged, increase reactive generation; (4) active unchanged, decrease reactive generation; for the reward setting, the distance between the abnormal generator's graph embedding and the center vector of the normal generators' graph embeddings is recorded at each iteration, the distance from the abnormal engine to the normal-engine center vector after the iterative update is computed and compared with the previously recorded distance, and if it is smaller, the reward is set to 1, otherwise to −1.
CN202310386777.5A 2023-04-12 2023-04-12 Power grid power flow adjustment method based on graph neural network Pending CN116345469A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310386777.5A CN116345469A (en) 2023-04-12 2023-04-12 Power grid power flow adjustment method based on graph neural network


Publications (1)

Publication Number Publication Date
CN116345469A true CN116345469A (en) 2023-06-27

Family

ID=86882320


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117034179A (en) * 2023-10-10 2023-11-10 国网山东省电力公司营销服务中心(计量中心) Abnormal electric quantity identification and tracing method and system based on graph neural network
CN117198406A (en) * 2023-09-21 2023-12-08 亦康(北京)医药科技有限公司 Feature screening method, system, electronic equipment and medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination