CN116960989B - Power load prediction method, device and equipment for power station and storage medium - Google Patents


Info

Publication number
CN116960989B
CN116960989B (application CN202311217166.4A)
Authority
CN
China
Prior art keywords
training
preset
power load
data
random
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311217166.4A
Other languages
Chinese (zh)
Other versions
CN116960989A (en)
Inventor
吴智泉
张喜平
陈克锐
赵咏年
王振刚
王松
吴春
朱琳
吴文韬
王潇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan Power Investment Green Energy Technology Co ltd
Original Assignee
Yunnan Power Investment Green Energy Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan Power Investment Green Energy Technology Co ltd filed Critical Yunnan Power Investment Green Energy Technology Co ltd
Priority to CN202311217166.4A priority Critical patent/CN116960989B/en
Publication of CN116960989A publication Critical patent/CN116960989A/en
Application granted granted Critical
Publication of CN116960989B publication Critical patent/CN116960989B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02JCIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J3/00Circuit arrangements for ac mains or ac distribution networks
    • H02J3/003Load forecast, e.g. methods or systems for forecasting future load demand
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/06Electricity, gas or water supply
    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02JCIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J2203/00Indexing scheme relating to details of circuit arrangements for AC mains or AC distribution networks
    • H02J2203/20Simulating, e.g. planning, reliability check, modelling or computer assisted design [CAD]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Abstract

The application discloses a power load prediction method, device, equipment and storage medium for a power station, relating to the technical field of new-energy power generation. The error rate of the prediction model of the present application is smaller than that of a conventional neural network model.

Description

Power load prediction method, device and equipment for power station and storage medium
Technical Field
The application relates to the technical field of new energy, in particular to a power load prediction method, device and equipment for a power station and a storage medium.
Background
With the broad marketization of the new-energy power industry, increasing attention has been paid to the accuracy of power load prediction. This accuracy directly affects the rationality of network layout and operation, and improving it is of great significance to the economical and safe operation of a power system. With rapid economic and social development, the demand for electric power is growing, so the supply side must be adequately prepared to meet that demand. Power load prediction has therefore become one of the important elements of power system planning, scheduling and operation.
Power scheduling based on power load prediction means that the power load is predicted and corresponding scheduling is performed according to the prediction result so as to meet the requirements of the power system. In power load prediction, a model is built to predict future power load demand by collecting and analyzing historical power load data.
At present, mainstream power load prediction models are artificial-intelligence models based on big data, for example neural network models built to predict the power load. However, a conventional neural network model converges slowly during learning and its network topology is difficult to determine, so the error rate of neural network predictions of the power load is generally high.
Disclosure of Invention
The main purpose of the application is to provide a power load prediction method, device, equipment and storage medium for a power station, so as to solve the prior-art problems that a conventional neural network model converges slowly during learning and its network topology is difficult to determine, which makes the error rate of neural network predictions of the power load generally high.
In order to achieve the above object, the present application provides the following technical solutions:
a power load prediction method of a power plant, the power load prediction method comprising:
acquiring a plurality of historical load data of a preset area in a preset time period through a preset strategy, and carrying out normalization processing on all the historical load data to obtain a normalized data set;
dividing the normalized data set into a training set, a verification set and a test set according to a preset proportion;
defining a topological relation of a neural network model, wherein the topological relation comprises an input layer, an implicit layer and an output layer which are sequentially connected through signals;
outputting the training set to the input layer, training a first preset number of times through the neural network model, and, for each training, acquiring the root mean square error between the validation set and the current training result;
Acquiring minimum values in all root mean square errors, and acquiring training results corresponding to the minimum values as an optimal model;
iteratively updating the number of hidden nodes of the hidden layer through a particle swarm algorithm so that the root mean square error of the optimal model and the test set is smaller than or equal to a preset threshold value;
and obtaining the number of hidden nodes after iteration updating is completed, substituting the number into the optimal model, and obtaining the power load prediction model based on the preset time period.
As a further improvement of the present application, a plurality of historical load data of the preset area in a preset time period is obtained through a preset strategy, and normalization processing is performed on all the historical load data to obtain a normalized data set, including:
all historical load data were normalized according to equation (1):
(1);
wherein,for the power load value of the normalized dataset, < >>Electric load values for all history load data, +.>For the average of all historical load data, +.>Standard deviation for all historical load data.
As a further improvement of the present application, the neural network model is characterized by the formula (2):
$y = f\left(\sum_{i=1}^{n} w_{ij}^{(1,2)} x_i^{(1)} - \theta_j^{(2)}\right)$  (2)

where $y$ is the output of the neural network model; $x_i^{(1)}$ is the $i$-th input node of the input layer, each input node corresponding to one group of training data of the training set; $w_{ij}^{(1,2)}$ is the preset weight from the $i$-th input node of the input layer to the $j$-th node of the hidden layer; $\theta_j^{(2)}$ is the threshold of the $j$-th hidden node connected to the input nodes; $f$ is the transfer function, with $f(x) = \dfrac{1}{1 + e^{-x}}$.
As a further improvement of the present application, the root mean square error is characterized by formula (3):
$E = \sqrt{\dfrac{1}{m}\sum_{k=1}^{m}\left(y_k - \hat{y}_k\right)^{2}}$  (3)

where $E$ is the root mean square error, $m$ is the number of groups of training data, $y_k$ is the true value of the $k$-th group of training data, and $\hat{y}_k$ is the training result after training on the $k$-th group of training data is completed.
As a further improvement of the present application, iteratively updating the number of hidden nodes of the hidden layer by a particle swarm algorithm so that a root mean square error between the optimal model and the test set is less than or equal to a preset threshold, including:
according to formula (4), a plurality of random solutions are respectively assigned to the number of hidden nodes, and the target of all random solutions is defined as the root mean square error between the optimal model and the test set being less than or equal to the preset threshold;
$X = \{x_1, x_2, \ldots, x_N\},\quad V = \{v_1, v_2, \ldots, v_N\}$  (4)

where $X$ is the set of all random solutions, $x_i$ is each individual random solution, $i$ is the label of a random solution, and $N$ is the number of all random solutions; $V$ is the set of velocities of all random solutions, and $v_i$ is the velocity of each individual random solution;
initializing the positions and the speeds of all random solutions;
updating the position and the speed of each random solution based on the same random solution according to formula (5):
$v_i^{k+1} = \omega v_i^{k} + c_1 r_1 \left(p_i - x_i^{k}\right) + c_2 r_2 \left(g - x_i^{k}\right),\qquad x_i^{k+1} = x_i^{k} + v_i^{k+1}$  (5)

where $v_i^{k+1}$ is the velocity of the $i$-th random solution at step $k+1$; $\omega v_i^{k}$ is its velocity inertia at step $k$; $\omega$ is the inertia coefficient; $c_1 r_1 (p_i - x_i^{k})$ is the self-cognition term of the current random solution and $c_2 r_2 (g - x_i^{k})$ is its social-cognition term; $c_1$ and $c_2$ are learning factors; $r_1$ and $r_2$ are random numbers within a preset value range; $p_i$ is the best solution already obtained by the current random solution; $g$ is the best solution already obtained by all random solutions;
iterating a second preset number of times according to formula (5) to update each $x_i$ and each $v_i$;
respectively judging whether, for each $v_i$, the first difference value compared with the previous iteration is less than or equal to a first preset adaptation threshold;
if yes, respectively judging whether, for each $x_i$, the second difference value compared with the previous iteration is less than or equal to a second preset adaptation threshold;
if yes, judging that the root mean square error between the optimal model and the test set is less than or equal to the preset threshold;
respectively acquiring each $x_i$, and obtaining the node count with the highest proportion among them, wherein that node count is the number of hidden nodes.
As a further improvement of the present application, iterating a second preset number of times according to formula (5) to update each $x_i$ and each $v_i$ includes:
optimizing the inertia coefficient $\omega$ according to formula (6) on the basis of each iteration:

$\omega(t) = \omega_{\text{init}} - \left(\omega_{\text{init}} - \omega_{\text{end}}\right)\dfrac{t}{T_{\max}}$  (6)

where $\omega(t)$ is the optimized inertia coefficient at iteration $t$, $\omega_{\text{init}}$ is the initial inertia coefficient, $\omega_{\text{end}}$ is the inertia coefficient at the maximum iteration number, and $T_{\max}$ is the maximum number of iterations.
As a further improvement of the present application, after the number of hidden nodes obtained upon completion of the iterative update is substituted into the optimal model to obtain the power load prediction model based on the preset time period, the method further includes:
generating a coordinate layout by taking natural time as a horizontal axis and taking a power load value as a vertical axis;
outputting a power load prediction model based on the preset time period to the coordinate layout to generate a visual prediction curve;
and sending the visual prediction curve to an external visual terminal.
In order to achieve the above purpose, the present application further provides the following technical solutions:
an electric power load prediction device for a power plant, the electric power load prediction device for a power plant being applied to the electric power load prediction method for a power plant described above, the electric power load prediction device for a power plant comprising:
The normalized data set acquisition module is used for acquiring a plurality of historical load data of the preset area in a preset time period through a preset strategy, and carrying out normalization processing on all the historical load data to obtain a normalized data set;
the normalization data set dividing module is used for dividing the normalization data set into a training set, a verification set and a test set according to a preset proportion;
the topological relation definition module is used for defining the topological relation of the neural network model, and the topological relation comprises an input layer, an implicit layer and an output layer which are connected in sequence in a signal manner;
the neural network model training module is used for outputting the training set to the input layer, training the neural network model for a first preset number of times, and respectively acquiring root mean square errors of the verification set and the current training result based on each training;
the optimal model acquisition module is used for acquiring minimum values in all root mean square errors and acquiring training results corresponding to the minimum values as an optimal model;
the hidden node quantity optimizing module is used for iteratively updating the hidden node quantity of the hidden layer through a particle swarm algorithm so that the root mean square error of the optimal model and the test set is smaller than or equal to a preset threshold value;
And the power load prediction model acquisition module is used for acquiring the number of hidden nodes after the iteration update is completed and substituting the number into the optimal model to obtain the power load prediction model based on the preset time period.
In order to achieve the above purpose, the present application further provides the following technical solutions:
an electronic device comprising a processor, a memory coupled to the processor, the memory storing program instructions executable by the processor; the processor, when executing the program instructions stored in the memory, implements the power load prediction method for a power plant as described above.
In order to achieve the above purpose, the present application further provides the following technical solutions:
a storage medium having program instructions stored therein which, when executed by a processor, implement the power load prediction method for a power plant described above.
According to the application, a normalized data set is divided into a training set, a validation set and a test set according to a preset proportion. The training set is output to the input layer and trained a first preset number of times through the neural network model; for each training, the root mean square error between the validation set and the current training result is acquired. The minimum of all root mean square errors is obtained, and the training result corresponding to that minimum is taken as the optimal model. The number of hidden nodes of the hidden layer is then iteratively updated through a particle swarm algorithm so that the root mean square error between the optimal model and the test set is less than or equal to a preset threshold. Finally, the number of hidden nodes after the iterative update is completed is substituted into the optimal model to obtain the power load prediction model based on the preset time period. The application iteratively updates the hidden layer of the neural network model with the particle swarm algorithm, so that the algorithm finds the relatively best number of hidden nodes within the preset number of iterations; by means of an adaptively decreasing weight and a contraction factor, the particle swarm algorithm can find the optimal number of hidden nodes, so that the error rate of the final power load prediction model is smaller than that of a conventional neural network model.
Drawings
FIG. 1 is a schematic flow chart of one embodiment of a power load prediction method for a power plant of the present application;
FIG. 2 is a functional block diagram of one embodiment of a power load prediction apparatus of a power plant of the present application;
FIG. 3 is a schematic diagram of an embodiment of an electronic device of the present application;
FIG. 4 is a schematic diagram illustrating the structure of an embodiment of a storage medium according to the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first," "second," "third," and the like in this disclosure are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "first," "second," and "third" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise. All directional indications (such as up, down, left, right, front, back … …) in embodiments of the present application are merely used to explain the relative positional relationship, movement, etc. between the components in a particular gesture (as shown in the drawings), and if the particular gesture changes, the directional indication changes accordingly. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
As shown in fig. 1, the present embodiment provides an embodiment of a power load prediction method of a power plant, in which the power load prediction method includes the steps of:
step S1, acquiring a plurality of historical load data of a preset area in a preset time period through a preset strategy, and carrying out normalization processing on all the historical load data to obtain a normalized data set.
Preferably, the preset strategy includes directly acquiring State Grid load-meter data through public channels, exporting load data through a user load-side management platform, acquiring grid gateway-meter data with a non-invasive infrared probe, referencing typical industry load data, and the like.
And S2, dividing the normalized data set into a training set, a verification set and a test set according to a preset proportion.
Preferably, in actual use the normalized data set is generally divided in a 70% : 15% : 15% proportion, i.e., 70% of the data forms the training set, 15% the validation set, and 15% the test set.
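The 70% : 15% : 15% division can be sketched in Python as follows (a minimal illustration; the function name and the ratios-as-defaults are assumptions, not part of the patent):

```python
def split_dataset(data, train_ratio=0.70, val_ratio=0.15):
    """Split a normalized data set into training, validation and test sets
    in the 70% : 15% : 15% proportion described above; whatever remains
    after the training and validation slices becomes the test set."""
    n = len(data)
    n_train = int(n * train_ratio)
    n_val = int(n * val_ratio)
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]  # remaining ~15%
    return train, val, test

train, val, test = split_dataset(list(range(100)))
```

For time-series load data, slicing in chronological order (as above) rather than shuffling keeps the test period strictly after the training period.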
And step S3, defining a topological relation of the neural network model, wherein the topological relation comprises an input layer, an implicit layer and an output layer which are sequentially connected through signals.
And S4, outputting the training set to an input layer, training for a first preset number of times through a neural network model, and respectively acquiring root mean square errors of a verification set and a current training result based on each training.
And S5, acquiring minimum values in all root mean square errors, and acquiring training results corresponding to the minimum values as an optimal model.
And S6, iteratively updating the number of hidden nodes of the hidden layer through a particle swarm algorithm so that the root mean square error of the optimal model and the test set is smaller than or equal to a preset threshold value.
And S7, acquiring the number of hidden nodes after iteration updating is completed, substituting the number into an optimal model, and obtaining a power load prediction model based on a preset time period.
Further, the step S1 specifically includes the following steps:
step S11, carrying out standard normalization on all the historical load data according to the formula (1):
$x^{*} = \dfrac{x - \mu}{\sigma}$  (1)

where $x^{*}$ is the power load value of the normalized data set, $x$ is the power load value of the historical load data, $\mu$ is the mean of all historical load data, and $\sigma$ is the standard deviation of all historical load data.
Preferably, zero-mean normalization (Z-score normalization) is used in this embodiment. This method normalizes the data using the mean and standard deviation of the raw data, and the processed data follow the standard normal distribution, i.e., mean 0 and standard deviation 1. Batch normalization can also be used. In conventional neural network training with simple normalization, only the input-layer data are normalized and the intermediate layers are not; although the data set at the input nodes is normalized, the distribution of the data after matrix multiplication is likely to change greatly, and this change grows as the number of network layers deepens. Applying batch normalization to the intermediate layers of the neural network therefore yields a better training effect.
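The zero-mean normalization of equation (1) can be sketched as (a minimal Python illustration; the patent does not prescribe an implementation language):

```python
import math

def z_score_normalize(values):
    """Zero-mean (Z-score) normalization per equation (1):
    x* = (x - mu) / sigma, so the result has mean 0 and
    (population) standard deviation 1."""
    mu = sum(values) / len(values)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))
    return [(v - mu) / sigma for v in values]
```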
Further, the neural network model is characterized by formula (2):
$y = f\left(\sum_{i=1}^{n} w_{ij}^{(1,2)} x_i^{(1)} - \theta_j^{(2)}\right)$  (2)

where $y$ is the output of the neural network model; $x_i^{(1)}$ is the $i$-th input node of the input layer, each input node corresponding to one group of training data of the training set; $w_{ij}^{(1,2)}$ is the preset weight from the $i$-th input node of the input layer to the $j$-th node of the hidden layer; $\theta_j^{(2)}$ is the threshold of the $j$-th hidden node connected to the input nodes; $f$ is the transfer function, with $f(x) = \dfrac{1}{1 + e^{-x}}$.
Preferably, in formula (2) of this embodiment, the number in parentheses in a symbol's superscript denotes the layer: superscript (1) denotes the first layer, i.e., the input layer, and superscript (1,2) denotes from the first layer to the second layer, i.e., from the input layer to the hidden layer.
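Formula (2) for a single hidden node can be sketched in Python (illustrative values only; the variable names follow the patent's symbols, and Python is used for illustration even though the embodiment mentions MATLAB later):

```python
import math

def sigmoid(x):
    # transfer function f(x) = 1 / (1 + e^{-x}) from formula (2)
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, theta):
    """Output of one hidden node per formula (2):
    y = f(sum_i w_i * x_i - theta)."""
    s = sum(w * x for w, x in zip(weights, inputs)) - theta
    return sigmoid(s)
```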
Preferably, the training of this embodiment aims, for the preset area and with natural years and natural months as time nodes, to learn the electricity-usage habits of the preset area from historical power load data, so as to summarize and predict the future power load of the preset area.
Preferably, training a model to train a neural network typically requires providing a large amount of data, i.e., a data set; the data sets are generally divided into three categories, namely training set (training set), validation set (validation set) and test set (test set) as described above.
One epoch equals one training pass using all samples in the training set, where one training pass means one forward propagation (forward pass) and one backward propagation (backward pass). When the number of samples in one epoch (i.e., the whole training set) is too large, a single pass may consume excessive time, and it is not necessary to use all training data for each update; the whole training set is therefore divided into several small blocks, i.e., batches, for training. One epoch is made up of one or more batches; each training step uses only part of the data, i.e., one batch, and training on one batch counts as one iteration.
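The epoch/batch/iteration bookkeeping described above can be sketched as (an illustrative helper, not part of the patent; batch size and counts are assumptions):

```python
def count_iterations(num_samples, batch_size, num_epochs):
    """One epoch = one pass over all training samples; processing one
    batch (one forward pass + one backward pass) is one iteration, so
    total iterations = batches per epoch * number of epochs."""
    batches_per_epoch = -(-num_samples // batch_size)  # ceiling division
    return batches_per_epoch * num_epochs
```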
Preferably, the neural network training specifically involves the perceptron (Perceptron). A perceptron is composed of two layers of neurons: the input layer receives external input signals and transmits them to the output layer, which consists of M-P neurons. Taking $f$ as a step function:

$y = f\left(\sum_{i=1}^{n} w_i x_i - \theta\right)$  ①

Preferably, given a training data set, the weights $w_i$ ($i = 1, 2, \ldots, n$) and the threshold $\theta$ can be obtained by learning; $\theta$ can be understood as a weight $w_{n+1}$ corresponding to a "dummy node" whose input is fixed at $-1.0$.
It should be noted that formula ① here does not share symbol meanings with the other formulas of the embodiments; formula ① is merely illustrative and does not participate in the calculation of the other formulas.
Preferably, the number of times of training the neural network in this embodiment may be set to 1000 times.
Preferably, the learning rate of 1 st to 500 th epochs may be set to 0.01, the learning rate of 501 st to 750 th epochs may be set to 0.001, and the learning rate of 751 th to 1000 th epochs may be set to 0.0001.
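The learning-rate schedule above can be sketched as follows (a minimal Python illustration; the epoch boundaries and rates are the values stated in this embodiment):

```python
def learning_rate(epoch):
    """Piecewise-constant learning-rate schedule from the embodiment:
    0.01 for epochs 1-500, 0.001 for epochs 501-750,
    0.0001 for epochs 751-1000."""
    if epoch <= 500:
        return 0.01
    if epoch <= 750:
        return 0.001
    return 0.0001
```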
It can be understood that the neural network training of this embodiment mainly includes the following ideas:
(1) the weights and bias terms in the network are initialized.
Parameter values are initialized (the weights and bias terms of the output units and of the hidden units are all parameters of the model); forward propagation is then activated to obtain the output value of each layer of units, and further the value of the loss function.
(2) And activating forward propagation to obtain the output value of each layer and the expected value of the loss function of each layer.
(3) And calculating an error term of the output unit and an error term of the hidden unit according to the loss function.
For each error, the gradient of the parameters with respect to the loss function is computed, i.e., the partial derivatives are calculated according to the chain rule of calculus. When taking partial derivatives with respect to a vector or matrix inside a composite function, the derivative of the inner function is always left-multiplied; when taking the partial derivative with respect to a scalar inside a composite function, the derivative of the inner function may be multiplied on the left or on the right.
(4) The weights and bias terms in the neural network are updated.
(5) Repeating the steps (2) - (4) until the loss function is smaller than a preset threshold or the iteration times are used up, and outputting the parameters at the moment to obtain the current optimal parameters.
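The training loop of steps (1)-(5) can be sketched in Python as follows. This is a minimal illustration with an assumed one-hidden-layer network, illustrative layer sizes and synthetic data; it is not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_step(x, y_true, W1, b1, W2, b2, lr):
    """One iteration of steps (2)-(4): forward propagation, error terms
    via the chain rule, then in-place weight/bias updates."""
    # (2) forward propagation through sigmoid hidden layer, linear output
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    y = h @ W2 + b2
    loss = 0.5 * np.mean((y - y_true) ** 2)
    # (3) error terms of output and hidden units
    delta_out = (y - y_true) / len(x)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
    # (4) update weights and bias terms by gradient descent
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * x.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)
    return loss

# (1) initialize weights and bias terms, then (5) repeat steps (2)-(4)
x = rng.normal(size=(16, 3))
y_true = x.sum(axis=1, keepdims=True)          # synthetic target
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
losses = [train_step(x, y_true, W1, b1, W2, b2, lr=0.1) for _ in range(200)]
```

In practice the loop would stop once the loss falls below the preset threshold or the iteration budget is used up, as step (5) states.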
Further, the root mean square error is characterized by formula (3):
$E = \sqrt{\dfrac{1}{m}\sum_{k=1}^{m}\left(y_k - \hat{y}_k\right)^{2}}$  (3)

where $E$ is the root mean square error, $m$ is the number of groups of training data, $y_k$ is the true value of the $k$-th group of training data, and $\hat{y}_k$ is the training result after training on the $k$-th group of training data is completed.
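Formula (3) can be computed directly; a minimal Python sketch:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error per formula (3):
    E = sqrt((1/m) * sum_k (y_k - yhat_k)^2)."""
    m = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / m)
```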
Further, the step S6 specifically includes the following steps:
step S61, a plurality of random solutions are respectively given to the number of the hidden nodes according to the formula (4), and the result of all the random solutions is defined as that the root mean square error of the optimal model and the test set is smaller than or equal to a preset threshold value.
$X = \{x_1, x_2, \ldots, x_N\},\quad V = \{v_1, v_2, \ldots, v_N\}$  (4)

where $X$ is the set of all random solutions, $x_i$ is each individual random solution, $i$ is the label of a random solution, and $N$ is the number of all random solutions; $V$ is the set of velocities of all random solutions, and $v_i$ is the velocity of each individual random solution.
Step S62, initializing the positions and velocities of all random solutions.
Step S63, updating the position and the speed of each random solution according to equation (5) based on the same random solution:
$v_i^{k+1} = \omega v_i^{k} + c_1 r_1 \left(p_i - x_i^{k}\right) + c_2 r_2 \left(g - x_i^{k}\right),\qquad x_i^{k+1} = x_i^{k} + v_i^{k+1}$  (5)

where $v_i^{k+1}$ is the velocity of the $i$-th random solution at step $k+1$; $\omega v_i^{k}$ is its velocity inertia at step $k$; $\omega$ is the inertia coefficient; $c_1 r_1 (p_i - x_i^{k})$ is the self-cognition term of the current random solution and $c_2 r_2 (g - x_i^{k})$ is its social-cognition term; $c_1$ and $c_2$ are learning factors; $r_1$ and $r_2$ are random numbers within a preset value range; $p_i$ is the best solution already obtained by the current random solution; $g$ is the best solution already obtained by all random solutions.
Preferably, the preset value range of $r_1$ and $r_2$ is $[0, 1]$; the value range of $\omega_{\text{end}}$ is $[0, 0.5]$, preferably 0.4; the value range of $\omega_{\text{init}}$ is $[0.5, 1]$, preferably 0.8.
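One update of formula (5) can be sketched in Python as follows, with $r_1, r_2$ drawn from $[0, 1]$; the learning-factor values $c_1 = c_2 = 2.0$ are common PSO defaults assumed here for illustration, not values fixed by this embodiment:

```python
import random

def pso_update(x, v, p_best, g_best, omega=0.8, c1=2.0, c2=2.0):
    """One velocity/position update of a single particle per formula (5):
    v <- omega*v + c1*r1*(p_i - x) + c2*r2*(g - x);  x <- x + v."""
    r1, r2 = random.random(), random.random()  # r1, r2 in [0, 1]
    v_new = omega * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new
```

Note that when a particle sits at both its personal best and the global best with zero velocity, formula (5) leaves it unchanged.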
Step S64, iterating a second preset number of times according to formula (5) to update each $x_i$ and each $v_i$.
Step S65, judging whether the first difference of each $x_i$ compared with the previous iteration is smaller than or equal to a first preset adaptation threshold; if the first difference of each $x_i$ compared with the previous iteration is smaller than or equal to the first preset adaptation threshold, step S66 is performed.

Step S66, judging whether the second difference of each $v_i$ compared with the previous iteration is smaller than or equal to a second preset adaptation threshold; if the second difference of each $v_i$ compared with the previous iteration is smaller than or equal to the second preset adaptation threshold, step S67 is performed.
And S67, judging that the root mean square error of the optimal model and the test set is smaller than or equal to a preset threshold value.
Step S68, respectively acquiring each $x_i$, and acquiring the node count with the highest proportion among them; the node count with the highest proportion is the number of hidden nodes.
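Steps S61 - S68 can be sketched in Python as follows. This is an illustrative simplification: the fitness function (how a candidate hidden-node count is scored against the test set), the particle count, the bounds and the adaptation thresholds are all assumptions of the sketch, and the inertia coefficient is held constant here rather than optimized per iteration as in step S641.

```python
import random
from collections import Counter

def pso_hidden_nodes(fitness, n_particles=20, bounds=(2, 30),
                     c1=0.4, c2=0.8, w=0.8, max_steps=50,
                     eps_x=0.5, eps_v=0.5, seed=0):
    """Sketch of steps S61-S68: each particle is a candidate hidden-node
    count; positions and velocities follow the update of formula (5);
    the loop stops early when every particle's position and velocity
    change falls within the preset adaptation thresholds, and the most
    frequent (highest-proportion) rounded node count is returned."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]  # S62: positions
    vs = [rng.uniform(-1, 1) for _ in range(n_particles)]   # S62: velocities
    pbest = xs[:]                                           # per-particle best p_i
    gbest = min(xs, key=lambda x: fitness(round(x)))        # global best g
    for _ in range(max_steps):
        prev_x, prev_v = xs[:], vs[:]
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])         # self-cognition term
                     + c2 * r2 * (gbest - xs[i]))           # social-cognition term
            xs[i] = min(max(xs[i] + vs[i], lo), hi)
            if fitness(round(xs[i])) < fitness(round(pbest[i])):
                pbest[i] = xs[i]
        gbest = min(pbest, key=lambda x: fitness(round(x)))
        # S65/S66: are all position and velocity changes within thresholds?
        if all(abs(a - b) <= eps_x for a, b in zip(xs, prev_x)) and \
           all(abs(a - b) <= eps_v for a, b in zip(vs, prev_v)):
            break
    # S68: the node count with the highest proportion among the particles
    return Counter(round(x) for x in xs).most_common(1)[0][0]
```
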
Further, the step S64 specifically includes the following steps:
step S641, optimizing the inertia coefficient $\omega$ once per iteration according to formula (6):

$\omega_{k} = \omega_{\mathrm{init}} - \left(\omega_{\mathrm{init}} - \omega_{\mathrm{end}}\right)\dfrac{k}{k_{\max}}$ (6).

Wherein, $\omega_{k}$ is the optimized inertia coefficient at the $k$-th iteration, $\omega_{\mathrm{init}}$ is the initial inertia coefficient, $\omega_{\mathrm{end}}$ is the inertia coefficient when iterating to the maximum number, and $k_{\max}$ is the maximum number of iterations.
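A minimal transcription of the linearly decreasing inertia coefficient of formula (6); the initial and final values 0.9 and 0.4 are the conventional defaults chosen for illustration, not values fixed by the embodiment:

```python
def inertia(k, k_max, w_init=0.9, w_end=0.4):
    """Formula (6): inertia coefficient decreasing linearly from its
    initial value w_init to its value w_end at the maximum iteration."""
    return w_init - (w_init - w_end) * k / k_max
```
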
Further, after step S7, the method further includes the following steps:
in step S10, a coordinate layout is generated with the natural time on the horizontal axis and the power load value on the vertical axis.
Step S20, outputting the power load prediction model based on the preset time period to the coordinate layout to generate a visual prediction curve.
And step S30, transmitting the visual prediction curve to an external visual terminal.
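Steps S10 - S30 amount to pairing each predicted power load value with its natural time to form the (horizontal, vertical) coordinate points of the visual prediction curve. A minimal Python sketch follows; the hourly step is an assumption, and the actual rendering and transmission to the external visualization terminal are omitted:

```python
from datetime import datetime, timedelta

def prediction_curve(start, predictions, step_hours=1):
    """Pair each predicted load value (vertical axis) with its natural
    time (horizontal axis) to form the points of the prediction curve."""
    return [(start + timedelta(hours=i * step_hours), p)
            for i, p in enumerate(predictions)]
```
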
Preferably, the model establishment, the model iteration and the algorithm iteration can be realized through MATLAB.
According to the embodiment, the normalized data set is divided into a training set, a verification set and a test set according to a preset proportion. The training set is output to the input layer and trained a first preset number of times through the neural network model, and the root mean square error between the verification set and the current training result is acquired for each training. The minimum value among all root mean square errors is acquired, and the training result corresponding to that minimum value is taken as the optimal model. The number of hidden nodes of the hidden layer is then iteratively updated through a particle swarm algorithm, so that the root mean square error of the optimal model on the test set is smaller than or equal to a preset threshold value; the updated number of hidden nodes is substituted into the optimal model to obtain the power load prediction model based on the preset time period. In this embodiment, the hidden layer of the neural network model is iteratively updated by the particle swarm algorithm, so that the algorithm finds the relatively best number of hidden nodes within the preset number of iterations; through the adaptive decreasing weight and contraction factor method, the particle swarm algorithm can find the optimal number of hidden nodes, so that the error rate of the final power load prediction model is smaller than that of a traditional neural network model. In actual operation, the power load prediction model of a certain area was dynamically simulated in MATLAB; after 1000 dynamic simulations, the error rate of the traditional neural network model was 0.41%, while the error rate of this embodiment was 0.16%.
As shown in fig. 2, the present embodiment provides an embodiment of a power load prediction apparatus of a power plant, in which the power load prediction apparatus of a power plant is applied to a power load prediction method of a power plant as described above, and the power load prediction apparatus of a power plant includes a normalized data set acquisition module 1, a normalized data set division module 2, a topology relationship definition module 3, a neural network model training module 4, an optimal model acquisition module 5, an implicit node number optimizing module 6, and a power load prediction model acquisition module 7 that are electrically connected in order.
The normalized data set acquisition module 1 is used for acquiring a plurality of historical load data of a preset area in a preset time period through a preset strategy, and carrying out normalization processing on all the historical load data to obtain a normalized data set; the normalized data set dividing module 2 is used for dividing the normalized data set into a training set, a verification set and a test set according to a preset proportion; the topological relation definition module 3 is used for defining the topological relation of the neural network model, and the topological relation comprises an input layer, an implicit layer and an output layer which are connected in sequence in a signal manner; the neural network model training module 4 is used for outputting a training set to an input layer, training for a first preset number of times through the neural network model, and respectively acquiring root mean square errors of a verification set and a current training result based on each training; the optimal model obtaining module 5 is used for obtaining minimum values in all root mean square errors and obtaining training results corresponding to the minimum values as an optimal model; the hidden node quantity optimizing module 6 is used for iteratively updating the hidden node quantity of the hidden layer through a particle swarm algorithm so that the root mean square error of the optimal model and the test set is smaller than or equal to a preset threshold value; the power load prediction model acquisition module 7 is used for acquiring the number of hidden nodes after iteration update is completed and substituting the number into an optimal model to obtain a power load prediction model based on a preset time period.
Further, the normalized data set acquisition module 1 is specifically configured to perform standard normalization on all the historical load data according to the formula (1):
$x' = \dfrac{x - \mu}{\sigma}$ (1).

wherein $x'$ is the power load value of the normalized data set, $x$ is the power load value of the historical load data, $\mu$ is the average of all historical load data, and $\sigma$ is the standard deviation of all historical load data.
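Formula (1) is the standard z-score; an illustrative Python transcription (the embodiment itself mentions MATLAB) is:

```python
import statistics

def normalize(loads):
    """Formula (1): z-score standardization of the historical load data
    using the mean and the population standard deviation."""
    mu = statistics.fmean(loads)
    sigma = statistics.pstdev(loads)
    return [(x - mu) / sigma for x in loads]
```
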
Further, the neural network model training module 4 is loaded with the neural network model of formula (2) and the root mean square error of formula (3):

$y = f\left(\sum_{i=1}^{m} w_{ij} x_i - \theta_j\right)$ (2).

wherein $y$ is the neural network model; $x_i$ is the $i$-th input node of the input layer, each input node corresponding to one set of training data of the training set; $w_{ij}$ is the preset weight from the $i$-th input node to the $j$-th node of the hidden layer; $\theta_j$ is the threshold connected to the $j$-th node of the hidden layer; and $f$ is the transfer function, with $f(x) = \dfrac{1}{1 + e^{-x}}$.
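The single-node computation of formula (2) can be transcribed as follows; the weights and threshold passed in are illustrative placeholders, not values from the embodiment:

```python
import math

def neuron_output(inputs, weights, threshold):
    """Formula (2): weighted sum of the input nodes minus the node
    threshold, passed through the sigmoid transfer function
    f(x) = 1 / (1 + e^(-x))."""
    s = sum(w * x for w, x in zip(weights, inputs)) - threshold
    return 1.0 / (1.0 + math.exp(-s))
```
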
further, the root mean square error is characterized by formula (3):
$E_{\mathrm{RMSE}} = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}}$ (3).

wherein $E_{\mathrm{RMSE}}$ is the root mean square error, $n$ is the number of sets of training data, $y_i$ is the true value of the $i$-th set of training data, and $\hat{y}_i$ is the training result after training on the $i$-th set of training data is completed.
Further, the hidden node quantity optimizing module comprises a first hidden node quantity optimizing sub-module, a second hidden node quantity optimizing sub-module, a third hidden node quantity optimizing sub-module, a fourth hidden node quantity optimizing sub-module, a fifth hidden node quantity optimizing sub-module, a sixth hidden node quantity optimizing sub-module, a seventh hidden node quantity optimizing sub-module and an eighth hidden node quantity optimizing sub-module which are electrically connected in sequence; the first hidden node quantity optimizing sub-module is electrically connected with the optimal model acquisition module, and the eighth hidden node quantity optimizing sub-module is electrically connected with the power load prediction model acquisition module.
The first implicit node quantity optimizing submodule is used for respectively endowing a plurality of random solutions to the implicit node quantity according to a formula (4), and defining the result of all the random solutions as that the root mean square error of the optimal model and the test set is smaller than or equal to a preset threshold value.
$X = \{x_1, x_2, \ldots, x_N\}, \quad V = \{v_1, v_2, \ldots, v_N\}$ (4).

Wherein, $X$ is the set of all random solutions, $x_i$ is each individual random solution, $i$ is the label of a random solution, and $N$ is the number of all random solutions; $V$ is the set of velocities of all random solutions, and $v_i$ is the velocity of each individual random solution.
The second implicit node quantity optimizing submodule is used for initializing the positions and the speeds of all random solutions.
The third hidden node quantity optimizing submodule is used for updating the position and the speed of each random solution based on the same random solution according to the formula (5):
$v_i^{k+1} = \omega v_i^{k} + c_1 r_1 \left(p_i - x_i^{k}\right) + c_2 r_2 \left(g - x_i^{k}\right), \quad x_i^{k+1} = x_i^{k} + v_i^{k+1}$ (5).

wherein $v_i^{k+1}$ is the velocity at step $k+1$; $\omega v_i^{k}$ is the velocity inertia at step $k$; $\omega$ is the inertia coefficient; $c_1 r_1 (p_i - x_i^{k})$ is the self-cognition characterization of the current random solution; $c_2 r_2 (g - x_i^{k})$ is the social-cognition characterization of the current random solution; $c_1$ and $c_2$ are both learning factors; $r_1$ and $r_2$ are random numbers within a preset value range; $p_i$ is the optimal solution already obtained by the current random solution; and $g$ is the optimal solution already obtained by all random solutions.
A fourth hidden node quantity optimizing sub-module, for iterating a second preset number of times according to formula (5) to update each $x_i$ and each $v_i$.
A fifth hidden node quantity optimizing sub-module, for judging whether the first difference of each $x_i$ compared with the previous iteration is smaller than or equal to a first preset adaptation threshold.

A sixth hidden node quantity optimizing sub-module, for judging, when the first difference of each $x_i$ compared with the previous iteration is smaller than or equal to the first preset adaptation threshold, whether the second difference of each $v_i$ compared with the previous iteration is smaller than or equal to a second preset adaptation threshold.

A seventh hidden node quantity optimizing sub-module, for judging, when the second difference of each $v_i$ compared with the previous iteration is smaller than or equal to the second preset adaptation threshold, that the root mean square error of the optimal model and the test set is smaller than or equal to the preset threshold value.

An eighth hidden node quantity optimizing sub-module, for respectively acquiring each $x_i$ and acquiring the node count with the highest proportion among them; the node count with the highest proportion is the number of hidden nodes.
Further, the fourth hidden node quantity optimizing sub-module is specifically configured to optimize the inertia coefficient $\omega$ once per iteration according to formula (6):

$\omega_{k} = \omega_{\mathrm{init}} - \left(\omega_{\mathrm{init}} - \omega_{\mathrm{end}}\right)\dfrac{k}{k_{\max}}$ (6).

Wherein, $\omega_{k}$ is the optimized inertia coefficient at the $k$-th iteration, $\omega_{\mathrm{init}}$ is the initial inertia coefficient, $\omega_{\mathrm{end}}$ is the inertia coefficient when iterating to the maximum number, and $k_{\max}$ is the maximum number of iterations.
Further, the power load prediction device further comprises a coordinate layout generation module, a visual prediction curve generation module and a visual prediction curve transmission module which are electrically connected in sequence; the coordinate layout generating module is electrically connected with the power load prediction model obtaining module.
The coordinate layout generation module is used for generating a coordinate layout by taking the natural time as a horizontal axis and taking the power load value as a vertical axis; the visualized prediction curve generation module is used for outputting the electric load prediction model based on the preset time period to the coordinate layout to generate a visualized prediction curve; the visual prediction curve sending module is used for sending the visual prediction curve to an external visual terminal.
It should be noted that, the present embodiment is a functional module embodiment based on the foregoing method embodiment, and the preferred, expanded, limited, and exemplified portions of the present embodiment may be referred to the foregoing embodiments, which is not repeated herein.
According to the embodiment, the normalized data set is divided into a training set, a verification set and a test set according to a preset proportion. The training set is output to the input layer and trained a first preset number of times through the neural network model, and the root mean square error between the verification set and the current training result is acquired for each training. The minimum value among all root mean square errors is acquired, and the training result corresponding to that minimum value is taken as the optimal model. The number of hidden nodes of the hidden layer is then iteratively updated through a particle swarm algorithm, so that the root mean square error of the optimal model on the test set is smaller than or equal to a preset threshold value; the updated number of hidden nodes is substituted into the optimal model to obtain the power load prediction model based on the preset time period. In this embodiment, the hidden layer of the neural network model is iteratively updated by the particle swarm algorithm, so that the algorithm finds the relatively best number of hidden nodes within the preset number of iterations; through the adaptive decreasing weight and contraction factor method, the particle swarm algorithm can find the optimal number of hidden nodes, so that the error rate of the final power load prediction model is smaller than that of a traditional neural network model. In actual operation, the power load prediction model of a certain area was dynamically simulated in MATLAB; after 1000 dynamic simulations, the error rate of the traditional neural network model was 0.41%, while the error rate of this embodiment was 0.24%.
Fig. 3 illustrates an embodiment of the electronic device of the application, see fig. 3, the electronic device 8 comprising a processor 81 and a memory 82 coupled to the processor 81.
The memory 82 stores program instructions for implementing the power load prediction method of the power plant of any of the embodiments described above.
The processor 81 is arranged to execute program instructions stored in the memory 82 for power load prediction of the power plant.
The processor 81 may also be referred to as a CPU (Central Processing Unit). The processor 81 may be an integrated circuit chip with signal processing capabilities. The processor 81 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Further, fig. 4 is a schematic structural diagram of a storage medium according to an embodiment of the present application. Referring to fig. 4, the storage medium 9 of this embodiment stores program instructions 91 capable of implementing all the methods described above. The program instructions 91 may be stored in the storage medium in the form of a software product, and include several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code, or a terminal device such as a computer, a server, a mobile phone, or a tablet.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other forms.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units. The foregoing is only the embodiments of the present application, and the patent scope of the application is not limited thereto, but is also covered by the patent protection scope of the application, as long as the equivalent structure or equivalent flow changes made by the description and the drawings of the application or the direct or indirect application in other related technical fields are adopted.
The embodiments of the present application have been described in detail above, but they are merely examples, and the present application is not limited to the above-described embodiments. It will be apparent to those skilled in the art that any equivalent modifications or substitutions to this application are within the scope of the application, and therefore, all equivalent changes and modifications, improvements, etc. that do not depart from the spirit and scope of the principles of the application are intended to be covered by this application.

Claims (9)

1. A power load prediction method for a power plant, the power load prediction method comprising:
acquiring a plurality of historical load data of a preset area in a preset time period through a preset strategy, and carrying out normalization processing on all the historical load data to obtain a normalized data set;
dividing the normalized data set into a training set, a verification set and a test set according to a preset proportion;
defining a topological relation of a neural network model, wherein the topological relation comprises an input layer, an implicit layer and an output layer which are sequentially connected through signals;
outputting the training set to the input layer, training for a first preset number of times through the neural network model, and respectively acquiring root mean square errors of the verification set and the current training result based on each training;
Acquiring minimum values in all root mean square errors, and acquiring training results corresponding to the minimum values as an optimal model;
iteratively updating the number of hidden nodes of the hidden layer through a particle swarm algorithm so that the root mean square error of the optimal model and the test set is smaller than or equal to a preset threshold value;
acquiring the number of hidden nodes after iteration updating is completed, and substituting the number into the optimal model to obtain a power load prediction model based on the preset time period;
iteratively updating the number of hidden nodes of the hidden layer through a particle swarm algorithm to ensure that the root mean square error of the optimal model and the test set is smaller than or equal to a preset threshold value, wherein the method comprises the following steps:
according to the formula (4), a plurality of random solutions are respectively given to the number of the hidden nodes, and the result of all random solutions is defined as that the root mean square error of the optimal model and the test set is smaller than or equal to a preset threshold value;
$X = \{x_1, x_2, \ldots, x_N\}, \quad V = \{v_1, v_2, \ldots, v_N\}$ (4);

wherein $X$ is the set of all random solutions, $x_i$ is each individual random solution, $i$ is the label of a random solution, and $N$ is the number of all random solutions; $V$ is the set of velocities of all random solutions, and $v_i$ is the velocity of each individual random solution;
initializing the positions and the speeds of all random solutions;
Updating the position and the speed of each random solution based on the same random solution according to formula (5):
$v_i^{k+1} = \omega v_i^{k} + c_1 r_1 \left(p_i - x_i^{k}\right) + c_2 r_2 \left(g - x_i^{k}\right), \quad x_i^{k+1} = x_i^{k} + v_i^{k+1}$ (5);

wherein $v_i^{k+1}$ is the velocity at step $k+1$; $\omega v_i^{k}$ is the velocity inertia at step $k$; $\omega$ is the inertia coefficient; $c_1 r_1 (p_i - x_i^{k})$ is the self-cognition characterization of the current random solution; $c_2 r_2 (g - x_i^{k})$ is the social-cognition characterization of the current random solution; $c_1$ and $c_2$ are both learning factors; $r_1$ and $r_2$ are random numbers within a preset value range; $p_i$ is the optimal solution already obtained by the current random solution; and $g$ is the optimal solution already obtained by all random solutions;
iterating a second preset number of times according to formula (5) to update each $x_i$ and each $v_i$;
respectively judging whether the first difference of each $x_i$ compared with the previous iteration is smaller than or equal to a first preset adaptation threshold;

if yes, respectively judging whether the second difference of each $v_i$ compared with the previous iteration is smaller than or equal to a second preset adaptation threshold;

if yes, judging that the root mean square error of the optimal model and the test set is smaller than or equal to the preset threshold value;

respectively acquiring each $x_i$, and acquiring the node count with the highest proportion among them, wherein the node count with the highest proportion is the number of hidden nodes.
2. The method according to claim 1, wherein obtaining a plurality of historical load data of the preset area within a preset time period through a preset strategy, and normalizing all the historical load data to obtain a normalized data set, comprises:
All historical load data were normalized according to equation (1):
$x' = \dfrac{x - \mu}{\sigma}$ (1);

wherein $x'$ is the power load value of the normalized data set, $x$ is the power load value of the historical load data, $\mu$ is the average of all historical load data, and $\sigma$ is the standard deviation of all historical load data.
3. The method of claim 1, wherein the neural network model is characterized by the formula (2):
$y = f\left(\sum_{i=1}^{m} w_{ij} x_i - \theta_j\right)$ (2);

wherein $y$ is the neural network model; $x_i$ is the $i$-th input node of the input layer, each input node corresponding to one set of training data of the training set; $w_{ij}$ is the preset weight from the $i$-th input node to the $j$-th node of the hidden layer; $\theta_j$ is the threshold connected to the $j$-th node of the hidden layer; and $f$ is the transfer function, with $f(x) = \dfrac{1}{1 + e^{-x}}$.
4. A method of predicting electrical load as claimed in claim 3, wherein the root mean square error is characterized by the formula (3):
$E_{\mathrm{RMSE}} = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}}$ (3);

wherein $E_{\mathrm{RMSE}}$ is the root mean square error, $n$ is the number of sets of training data, $y_i$ is the true value of the $i$-th set of training data, and $\hat{y}_i$ is the training result after training on the $i$-th set of training data is completed.
5. The power load prediction method according to claim 1, wherein iterating a second preset number of times according to formula (5) to update each $x_i$ and each $v_i$ comprises:

optimizing the inertia coefficient $\omega$ once per iteration according to formula (6):

$\omega_{k} = \omega_{\mathrm{init}} - \left(\omega_{\mathrm{init}} - \omega_{\mathrm{end}}\right)\dfrac{k}{k_{\max}}$ (6);

wherein $\omega_{k}$ is the optimized inertia coefficient at the $k$-th iteration, $\omega_{\mathrm{init}}$ is the initial inertia coefficient, $\omega_{\mathrm{end}}$ is the inertia coefficient when iterating to the maximum number, and $k_{\max}$ is the maximum number of iterations.
6. The power load prediction method according to claim 1, wherein obtaining the number of hidden nodes after the iteration update is completed and substituting the number into the optimal model to obtain a power load prediction model based on the preset time period, and then comprising:
generating a coordinate layout by taking natural time as a horizontal axis and taking a power load value as a vertical axis;
outputting a power load prediction model based on the preset time period to the coordinate layout to generate a visual prediction curve;
and sending the visual prediction curve to an external visual terminal.
7. A power load predicting device of a power plant, the power load predicting device of a power plant being applied to the power load predicting method of a power plant as claimed in any one of claims 1 to 6, characterized in that the power load predicting device of a power plant comprises:
the normalized data set acquisition module is used for acquiring a plurality of historical load data of the preset area in a preset time period through a preset strategy, and carrying out normalization processing on all the historical load data to obtain a normalized data set;
The normalization data set dividing module is used for dividing the normalization data set into a training set, a verification set and a test set according to a preset proportion;
the topological relation definition module is used for defining the topological relation of the neural network model, and the topological relation comprises an input layer, an implicit layer and an output layer which are connected in sequence in a signal manner;
the neural network model training module is used for outputting the training set to the input layer, training the neural network model for a first preset number of times, and respectively acquiring root mean square errors of the verification set and the current training result based on each training;
the optimal model acquisition module is used for acquiring minimum values in all root mean square errors and acquiring training results corresponding to the minimum values as an optimal model;
the hidden node quantity optimizing module is used for iteratively updating the hidden node quantity of the hidden layer through a particle swarm algorithm so that the root mean square error of the optimal model and the test set is smaller than or equal to a preset threshold value;
and the power load prediction model acquisition module is used for acquiring the number of hidden nodes after the iteration update is completed and substituting the number into the optimal model to obtain the power load prediction model based on the preset time period.
8. An electronic device comprising a processor, and a memory coupled to the processor, the memory storing program instructions executable by the processor; the processor, when executing the program instructions stored in the memory, implements a power load prediction method of a power plant as claimed in any one of claims 1 to 6.
9. A storage medium having stored therein program instructions which, when executed by a processor, implement a method of power load prediction enabling a power plant as claimed in any one of claims 1 to 6.
CN202311217166.4A 2023-09-20 2023-09-20 Power load prediction method, device and equipment for power station and storage medium Active CN116960989B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311217166.4A CN116960989B (en) 2023-09-20 2023-09-20 Power load prediction method, device and equipment for power station and storage medium

Publications (2)

Publication Number Publication Date
CN116960989A CN116960989A (en) 2023-10-27
CN116960989B true CN116960989B (en) 2023-12-01

Family

ID=88460510



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110569963A (en) * 2019-08-13 2019-12-13 哈尔滨工程大学 DGRU neural network for preventing data information loss and prediction method thereof
CN112734128A (en) * 2021-01-19 2021-04-30 重庆大学 7-day power load peak value prediction method based on optimized RBF
CN113326968A (en) * 2021-04-20 2021-08-31 国网浙江省电力有限公司台州供电公司 Bus short-term load prediction method and device based on PSO inertia weight adjustment
CN115186803A (en) * 2022-07-29 2022-10-14 武汉理工大学 Data center computing power load demand combination prediction method and system considering PUE
CN115935810A (en) * 2022-11-25 2023-04-07 太原理工大学 Power medium-term load prediction method and system based on attention mechanism fusion characteristics
CN116702937A (en) * 2022-12-14 2023-09-05 国网湖北省电力有限公司荆门供电公司 Photovoltaic output day-ahead prediction method based on K-means mean value clustering and BP neural network optimization

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party

Title

Li Xiang, Cui Jifeng, Xiong Jun, Yang Shuxia, Yang Shangdong. Power demand forecasting by clonal-selection particle swarm optimization of a BP neural network. Journal of Hunan University (Natural Sciences), No. 6. *

Lv Chan. Short-term load forecasting based on a BP neural network. China Master's Theses Full-text Database, Engineering Science and Technology II, 2009, C042-277. *

Zheng Junguan et al. Particle swarm algorithm based on individual position mutation. Journal of Shijiazhuang Tiedao University (Natural Science Edition), Vol. 32, No. 1, 63-68. *

Dong Yanfang et al. Predictive analysis of summer low-load operation performance of ground-source heat pumps based on a genetic algorithm neural network. Science Technology and Engineering, Vol. 22, No. 12, 4984-4991. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant