CN113723593A - Load shedding prediction method and system based on neural network - Google Patents


Info

Publication number
CN113723593A
Authority
CN
China
Prior art keywords
neural network
prediction model
training
load shedding
load
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110990090.3A
Other languages
Chinese (zh)
Other versions
CN113723593B (en)
Inventor
瞿寒冰
王博
林祺蓉
贾玉健
赵普
张钰莹
李莉
杨福
施雨
李广
乔荣飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Jinan Power Supply Co of State Grid Shandong Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Jinan Power Supply Co of State Grid Shandong Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Jinan Power Supply Co of State Grid Shandong Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202110990090.3A priority Critical patent/CN113723593B/en
Publication of CN113723593A publication Critical patent/CN113723593A/en
Application granted granted Critical
Publication of CN113723593B publication Critical patent/CN113723593B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • H: ELECTRICITY
    • H02: GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J: CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J3/00: Circuit arrangements for ac mains or ac distribution networks
    • H02J3/003: Load forecast, e.g. methods or systems for forecasting future load demand
    • H: ELECTRICITY
    • H02: GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J: CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J2203/00: Indexing scheme relating to details of circuit arrangements for AC mains or AC distribution networks
    • H02J2203/20: Simulating, e.g. planning, reliability check, modelling or computer assisted design [CAD]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04: INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S: SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00: Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50: Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Abstract

The invention belongs to the technical field of power grid risk analysis and evaluation, and provides a load shedding prediction method and system based on a neural network. A preprocessed historical power grid system state scene parameter data set is divided into a test set, a training set and a verification set; back propagation training is carried out on the load shedding prediction model based on the training set; whenever the set number of iterations is reached, the training effect of the model is verified with the verification set, and the network parameters of the model are adjusted according to its performance on the verification set until the adjusted model meets the set requirement, yielding a trained load shedding prediction model; after the accuracy of the trained model is verified to meet the standard with the test set, a load shedding result is obtained on the basis of the state scene operation data of the power grid system.

Description

Load shedding prediction method and system based on neural network
Technical Field
The invention belongs to the technical field of power grid risk analysis and evaluation, and particularly relates to a load shedding prediction method and system based on a neural network.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Through many years of development, the neural network, from the simple artificial neural network to the various deep neural networks of deep learning, has stood out in the algorithm world with its unique characteristics. Its range of application is wide: from simple linear and nonlinear fitting to the face recognition, speech recognition and semantic recognition that have flourished in recent years, neural networks can be found everywhere. Owing to their aptitude for solving complex problems and their wide applicability, neural networks also have important applications in the field of power systems.
In power system reliability evaluation, a large number of load shedding calculations must be carried out, and to enable rapid model delivery a simplified model structure is considered so as to achieve fast training. The significant advantage of the CNN over traditional neural networks is that it can extract features while reducing dimensionality, which is accomplished by the convolutional and pooling layers. The pooling layer serves, on the one hand, as a down-sampling method that reduces the dimensions of the data and, on the other hand, introduces translation invariance, which is very important in classification problems. However, when the power system is not very large, the need for dimensionality reduction during learning is low, and when regression output is the main concern the task is more sensitive to position. Therefore, the existing CNN still cannot meet the accuracy and efficiency required for load shedding prediction.
Disclosure of Invention
In order to solve the technical problems in the background art, the invention provides a load shedding prediction method and system based on a neural network, which can improve the accuracy and efficiency of load shedding calculation.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a load shedding prediction method based on a neural network.
A load shedding prediction method based on a neural network comprises the following steps:
dividing the preprocessed historical power grid system state scene parameter data set into a test set, a training set and a verification set;
carrying out back propagation training on the load shedding prediction model based on the training set; when the set number of iterations is reached, verifying the training effect of the model with the verification set, and adjusting the network parameters of the model according to its performance on the verification set until the adjusted model meets the set requirement, so as to obtain a trained load shedding prediction model;
and after the accuracy of the trained load shedding prediction model is verified to meet the standard with the test set, obtaining a load shedding result on the basis of the state scene operation data of the power grid system.
Further, the load shedding prediction model comprises: an improved fast regression convolutional neural network or an improved joint convolutional neural network.
Further, the improved fast regression convolutional neural network comprises: three convolutional layers, a fully connected layer and an output layer connected in sequence, and a back propagation algorithm is used to adjust the network parameters of the improved fast regression convolutional neural network.
Further, the performance of the improved fast regression convolutional neural network model is optimized using the root mean square error function as the loss function.
Further, the improved joint convolutional neural network comprises: a classifier and a regressor, wherein the result obtained by the output layer of the classifier through a Softmax function and the result obtained by the output layer of the regressor through a linear function are jointly judged to obtain a load shedding result; wherein the standard of the joint judgment is: the judgment given by the classifier is taken as the main basis for deciding whether load is shed, and it is combined with the result given by the regressor to obtain whether load is shed and the amount of load shed.
Further, the classifier includes: a 3 × 3 × 8 convolutional layer, a pooling layer, a 3 × 3 × 16 convolutional layer, a pooling layer, a 3 × 3 × 32 convolutional layer, a fully-connected layer, and an output layer, which are connected in this order, the regressor including: a 5 × 5 × 8 convolutional layer, a 5 × 5 × 16 convolutional layer, a 5 × 5 × 32 convolutional layer, a fully-connected layer, and an output layer, which are connected in this order.
Furthermore, the performance of the classifier is optimized by adopting a cross entropy loss function, and the performance of the regressor is optimized by adopting a mean square error loss function.
A second aspect of the invention provides a load shedding prediction system based on a neural network.
A neural network-based load shedding prediction system, comprising:
a dataset partitioning module configured to: divide the preprocessed historical power grid system state scene parameter data set into a test set, a training set and a verification set;
a model training module configured to: carry out back propagation training on the load shedding prediction model based on the training set; when the set number of iterations is reached, verify the training effect of the model with the verification set, and adjust the network parameters of the model according to its performance on the verification set until the adjusted model meets the set requirement, so as to obtain a trained load shedding prediction model;
an output module configured to: after the accuracy of the trained load shedding prediction model is verified to meet the standard with the test set, obtain a load shedding result on the basis of the state scene operation data of the power grid system.
A third aspect of the invention provides a computer-readable storage medium.
A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the steps of the neural network-based load shedding prediction method according to the first aspect.
A fourth aspect of the invention provides a computer apparatus.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the neural network based load shedding prediction method according to the first aspect when executing the program.
Compared with the prior art, the invention has the beneficial effects that:
In order to improve the accuracy and efficiency of load shedding prediction, the invention provides two improved models: an improved fast regression convolutional neural network (FRCNN) and an improved joint convolutional neural network (UCNN). The FRCNN is optimized for speed and is mainly suitable for scenarios with high speed requirements; it reduces performance loss as much as possible while keeping the feature extraction structure, and the experimental results show an obvious speed advantage with accuracy guaranteed. The UCNN is optimized for precision and is mainly suitable for scenarios that are not sensitive to speed; it double-checks the result by combining the classification and regression outputs, guaranteeing the accuracy of the model output to the maximum extent.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
FIG. 1 is a convolutional neural network model in an embodiment of the present invention;
FIG. 2 is a flow chart of load shedding prediction based on CNN in the embodiment of the present invention;
FIG. 3 is a ReLU function image in an embodiment of the present invention;
FIG. 4 is a diagram of an RMSE iteration process for convolution kernels of different sizes in an embodiment of the present invention;
FIG. 5 is a training representation of different batch sizes in an embodiment of the present invention;
FIG. 6 is a block diagram of an improved fast regressive convolutional neural network in an embodiment of the present invention;
FIG. 7 is a block diagram of an improved joint convolutional neural network in an embodiment of the present invention;
FIG. 8 is a diagram of an IEEE-RTS79 test system in accordance with an embodiment of the present invention;
fig. 9 is a flowchart of a load shedding prediction method based on a neural network in an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well unless the context clearly indicates otherwise, and it should be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
It is noted that the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods and systems according to various embodiments of the present disclosure. It should be noted that each block in the flowchart or block diagrams may represent a module, a segment, or a portion of code, which may comprise one or more executable instructions for implementing the logical function specified in the respective embodiment. It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Example one
As shown in fig. 9, the present embodiment provides a load shedding prediction method based on a neural network. The embodiment is illustrated by applying the method to a server; it is to be understood that the method may also be applied to a terminal, or to a system comprising a terminal and a server and implemented through their interaction. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be connected directly or indirectly through wired or wireless communication, which is not limited in this application. In this embodiment, the method includes the steps of:
S101: dividing the preprocessed historical power grid system state scene parameter data set into a test set, a training set and a verification set;
S102: carrying out back propagation training on the load shedding prediction model based on the training set; when the set number of iterations is reached, verifying the training effect of the model with the verification set, and adjusting the network parameters of the model according to its performance on the verification set until the adjusted model meets the set requirement, so as to obtain a trained load shedding prediction model;
S103: after the accuracy of the trained load shedding prediction model is verified to meet the standard with the test set, obtaining a load shedding result on the basis of the state scene operation data of the power grid system.
In order to achieve the object of the present invention, the following scheme can be adopted.
1. Load shedding prediction method based on convolutional neural network
A convolutional neural network (CNN) model is constructed for load shedding prediction. The construction of the initial feature data set is considered first: similar to the load shedding prediction process of the traditional BP neural network, parameters related to generator output, line capacity, loads and element failure rates, together with element working states and the like, are selected to form the original feature data set. The construction of the data set has a certain influence on the result; the invention achieves good results by training with data generated by a random process.
The construction of the convolutional neural network model for load shedding prediction is shown in fig. 1. For the CNN model, two types of parameters need to be considered: one relates to the network structure, centering on the convolution kernel size and the number of convolutional layers; the other relates to training, including the weights, the learning rate, the regularization method, etc. The size and number of the convolution kernels influence the receptive field during feature extraction; in the initial model the convolution kernel size is 3 × 3, and the numbers of convolution kernels in the two convolutional layers are 16 and 32 respectively. The number of convolutional layers affects the overall efficiency of feature extraction; considering the reasonable use of computing resources, the initial model of the invention uses two convolutional layers and two pooling layers.
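As a concrete illustration, the sketch below assembles the initial model just described: two 3 × 3 convolutional layers with 16 and 32 kernels, each followed by a pooling layer, then a fully connected layer and a regression output. PyTorch, the single-channel 24 × 24 input shape and the fully connected width are illustrative assumptions; the patent fixes neither a framework nor an input size.

```python
import torch
import torch.nn as nn

class InitialCNN(nn.Module):
    """Initial model: two 3x3 convolutional layers (16 and 32 kernels), two
    pooling layers, a fully connected layer and a linear regression output."""
    def __init__(self, in_h: int = 24, in_w: int = 24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 3x3 convolution, 16 kernels
            nn.ReLU(),
            nn.MaxPool2d(2),                              # first pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 3x3 convolution, 32 kernels
            nn.ReLU(),
            nn.MaxPool2d(2),                              # second pooling layer
        )
        flat = 32 * (in_h // 4) * (in_w // 4)
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 64),                          # fully connected layer (width assumed)
            nn.ReLU(),
            nn.Linear(64, 1),                             # regression output: load shedding amount
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))
```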
The load shedding judgment and prediction model flow based on the convolutional neural network is shown in fig. 2.
The specific process is as follows:
(1) First, a large number of system load shedding data sets are generated by a CPLEX-based direct current load shedding algorithm, and the relevant information is extracted. Assume the sample input data matrix X = [x_ij]_{n×N}, where n is the number of samples, N is the number of features, and x_ij represents the jth feature of the ith sample. The input data set needs to be normalized; the formula of the maximum-minimum normalization method is:
x'_ij = (x_ij - min(x_j)) / (max(x_j) - min(x_j))  (1)

where min(x_j) and max(x_j) are the minimum and maximum values of the jth feature over all samples.
(2) Each sample is labeled. For a model containing both classification and regression, a classification label and a regression label are set respectively: in the classification label, 1 indicates that load shedding is needed and 0 indicates that it is not; the regression label is the amount of load shed. The applicable labels are selected from the data set according to the structure of the model to be trained.
(3) The required data sets are generated. Each data set is generally divided into three parts: a training set, a verification set and a test set. The training set is used for the back propagation training of each batch in each iteration; the verification set is used to verify the training effect after a fixed number of iterations, with the model's performance on the verification set used to correct errors and further adjust network parameters, which helps prevent overfitting; the test set is generally used to test the final performance of the model after training. The three data sets are divided into mutually exclusive subsets in a set proportion by the hold-out method, and the corresponding data sets are generated once the proportions are set (this preprocessing and splitting is sketched in code after step (5)).
(4) The network is constructed according to the architecture of fig. 2 and supervised-trained using the back propagation algorithm. Iterative training is performed with the training set and verification set, and the load shedding prediction model is output once the accuracy requirement is met or a certain number of iterations is reached (see the training-loop sketch after step (5)).
(5) For the load shedding model obtained in the previous step, after the test set verifies that its accuracy reaches the standard, the model is deployed on the system. The system state scene parameters are input into the offline-trained CNN load shedding prediction model, and the model gives the load shedding result.
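As a concrete illustration of steps (1) and (3), the sketch below normalizes the feature matrix per equation (1) and performs the hold-out split. NumPy is an illustrative choice of tooling, and the 60/10/30 proportions mirror the experiment section later in the text rather than anything fixed by the method itself.

```python
import numpy as np

def min_max_normalize(X: np.ndarray) -> np.ndarray:
    """Column-wise maximum-minimum normalization of X = [x_ij] (n samples x N features), equation (1)."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    span = np.where(x_max > x_min, x_max - x_min, 1.0)  # guard against constant features
    return (X - x_min) / span

def holdout_split(X, y, train=0.6, val=0.1, seed=0):
    """Shuffle, then split into mutually exclusive training/verification/test subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_tr, n_va = int(train * len(X)), int(val * len(X))
    tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
    return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])
```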
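Steps (4) and (5) amount to a supervised training loop with periodic verification followed by deployment-time inference; a minimal sketch, assuming PyTorch, follows. The batch size of 128 and the 30 rounds follow the experimental settings below, while the MSE objective and learning rate are illustrative defaults.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, X_tr, y_tr, X_va, y_va, epochs=30, batch_size=128, lr=1e-3):
    """Back propagation training with verification after each fixed block of iterations."""
    loader = DataLoader(TensorDataset(X_tr, y_tr), batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # adaptive learning rate, see section 2(3)
    mse = torch.nn.MSELoss()
    for epoch in range(epochs):
        model.train()
        for xb, yb in loader:          # one back propagation step per batch
            opt.zero_grad()
            loss = mse(model(xb), yb)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():          # verify the training effect on the verification set
            val_rmse = torch.sqrt(mse(model(X_va), y_va)).item()
        print(f"round {epoch + 1}: verification RMSE = {val_rmse:.4f}")
        # network parameters / hyper-parameters would be adjusted here if val_rmse stagnates
    return model
```

After training, deployment reduces to a single forward pass of the model on the system state scene parameters.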
2. Model optimization
(1) Activation function
Placing an activation function after the computation of the convolutional and pooling layers gives the model a stronger ability to solve nonlinear problems. The ReLU function is usually used as the activation function; it is a piecewise linear function that effectively avoids gradient vanishing. Its formula is shown in equation (2), and its graph is shown in fig. 3.
f(x)=max(0,x) (2)
Above 0, the gradient of the ReLU function is constant, which effectively avoids gradient vanishing. Compared with other activation functions, the ReLU function has a smaller computational cost, and it markedly speeds up the convergence of stochastic gradient descent during error back propagation.
(2) Loss function
Back propagation in CNN training is based on the loss function calculation, which is the basis of the network parameter update. Commonly used loss functions are the cross entropy loss function and the root mean square error (RMSE) function. The cross entropy loss function is expressed as follows:
L = -(1/n) Σ_i [k_i log(ŷ_i) + (1 - k_i) log(1 - ŷ_i)]  (3)
where k_i is the category of the ith sample and ŷ_i is the output probability of the model for that sample.
The root mean square error function is the square root of the expected value of the squared difference between the output estimate and the true label value, expressed as follows:
RMSE = √((1/n) Σ_i (ŷ_i - y_i)²)  (4)
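Both loss functions can be written directly from equations (3) and (4); the PyTorch implementation and the small epsilon guard below are illustrative assumptions.

```python
import torch

def cross_entropy(y_hat: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """Binary cross entropy for classification labels k_i in {0, 1}, equation (3)."""
    eps = 1e-12  # numerical guard, an implementation detail
    return -(k * torch.log(y_hat + eps) + (1 - k) * torch.log(1 - y_hat + eps)).mean()

def rmse(y_hat: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Root mean square error between the output estimate and the true label, equation (4)."""
    return torch.sqrt(torch.mean((y_hat - y) ** 2))
```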
(3) adam optimization algorithm
The learning rate is very important in the supervised training of a CNN, which continuously updates its parameters through back propagation. Training is commonly performed with a constant-learning-rate algorithm, i.e. stochastic gradient descent, but a fixed learning rate strongly affects the model: if the learning rate is too high, the model cannot converge; if it is too low, convergence becomes extremely slow and training drags. The Adam optimization algorithm adaptively adjusts the learning rate during learning, accelerating model training in the early stage and improving training precision in the later stage. More importantly, the Adam algorithm is simple to tune, and its default parameters can handle the vast majority of problems.
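For comparison, a constant-learning-rate SGD optimizer next to Adam with its default hyper-parameters; a sketch with a hypothetical stand-in model, assuming PyTorch.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # hypothetical stand-in model for illustration

# Constant learning rate: plain stochastic gradient descent
sgd = torch.optim.SGD(model.parameters(), lr=0.01)

# Adaptive learning rate: Adam with its default hyper-parameters
adam = torch.optim.Adam(model.parameters())  # defaults: lr=1e-3, betas=(0.9, 0.999)
```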
(4) Regularization
During the training of the network, many factors may cause overfitting, which in turn weakens the generalization ability of the model. To solve this problem, a parameter index R(ω) reflecting the model complexity is added to the loss function; this added index is called the regularization term. Two regularization methods are commonly used. One is L1 regularization, calculated as follows:
R(ω) = Σ_i |ω_i|  (5)
another is L2Regularization, and the calculation formula is as follows:
R(ω) = Σ_i ω_i²  (6)
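A sketch of how the two terms could be attached in practice, assuming PyTorch; the coefficient lam and the stand-in model are illustrative assumptions.

```python
import torch
import torch.nn as nn

def l1_term(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    """L1 regularization term lam * R(w) with R(w) = sum(|w_i|), added to the loss (equation (5))."""
    return lam * sum(p.abs().sum() for p in model.parameters())

# L2 regularization (equation (6)) is usually applied through the optimizer's
# weight_decay argument rather than an explicit term in the loss:
model = nn.Linear(10, 1)  # hypothetical stand-in model
opt = torch.optim.Adam(model.parameters(), weight_decay=1e-4)
```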
3. Model evaluation indexes
In order to quantitatively evaluate each model, the following performance indexes are defined. Considering that "whether load shedding is required" and "the amount of load shedding" form a progressive relationship in an actual power system, the hit rate P_o is proposed for the CNN-based load shedding model as the classification performance index, representing the percentage of correctly judged samples in the total samples and defined in equation (7); for a regression-only model, a positive output load shedding value is taken to mean that load shedding is required. The misalignment rate P_N is used as the regression performance index, representing the deviation between the output value and the actual value and defined in equation (8), and the average misalignment rate P_NA represents the misalignment rate of the model as a whole. The reliability P_AC is used as the overall performance index, representing the percentage of correctly judged samples with misalignment rate P_N < 20% in the total samples; for regression-only models it is calculated only as the percentage of samples with P_N < 20%.
P_o = T_R / (T_R + T_F) × 100%  (7)
where T_R is the number of samples whose predicted state (whether load is shed) agrees with the label state, and T_F is the number of samples whose predicted state disagrees with the label state.
P_N = |EENS_O - EENS_l| / EENS_l × 100%  (8)
where EENS_O is the load shedding value output by the model, and EENS_l is the load shedding value in the sample label.
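One possible reading of these indexes in code, assuming NumPy arrays of model outputs EENS_O and label values EENS_l; in particular, how samples without label load shedding enter P_AC is an interpretive assumption.

```python
import numpy as np

def evaluate(eens_o: np.ndarray, eens_l: np.ndarray, tol: float = 0.20):
    """Hit rate P_o, average misalignment rate P_NA and reliability P_AC per equations (7)-(8)."""
    hit = (eens_o > 0) == (eens_l > 0)      # predicted shedding state agrees with label state
    P_o = hit.mean()                        # hit rate, equation (7)
    shed = eens_l > 0                       # samples whose label contains load shedding
    P_N = np.abs(eens_o[shed] - eens_l[shed]) / eens_l[shed]  # equation (8), per sample
    P_NA = P_N.mean()                       # average misalignment rate
    within = ~shed                          # correct "no shedding" counts as within tolerance
    within[shed] = P_N < tol
    P_AC = (hit & within).mean()            # correctly judged and misalignment rate < 20%
    return P_o, P_NA, P_AC
```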
4. Improvement of the CNN model and performance analysis
The CNN-based load shedding model comprises two convolutional layers and two pooling layers, a common CNN structure. Changing parameters such as the number of convolutional layers, the convolution kernel size and depth, and the pooling layer structure changes the network structure and thereby affects network performance.
(1) Different network architectures
The network structure of a CNN varies mainly in the hidden layers, i.e. the layers between the input layer and the output layer. In the fields where CNNs are widely applied, convolutional neural networks mostly adopt small, deep convolution kernels and network structures. Although a larger convolution kernel has a larger receptive field, small convolution kernels can accomplish the same task by increasing their number while effectively reducing the number of parameters, and a deep network gains more nonlinear fitting ability from the nesting of multiple layers of activation functions, giving it better generalization ability. However, deepening the network sharply increases the training overhead, so it is necessary to choose a reasonable model structure for the specific problem.
In order to optimize the basic CNN load shedding model described above, a small-sample quantitative analysis of the main parameters is performed here around the variables of the basic model. In the IEEE-RTS79 system, 8000 sets of fault samples requiring load shedding are randomly generated, of which 5000 sets are used as the training data set, 1000 sets as the verification data set, and 2000 sets as the backup test data set; all models in this section are trained on the same data set.
First, the behavior of the CNN model under different numbers of convolutional layers is examined. Training is carried out with different numbers of convolutional layers; parameters such as the minimum training batch use default values: 5000 training samples, a batch size of 128, 39 iterations per round, and a maximum of 30 training rounds. Unless otherwise stated, this setting is used for training. Models with different numbers of convolutional layers are trained, and the final RMSE on the verification set and the training time T_train are recorded; the relevant parameters and results are shown in Table 1.
TABLE 1 CNN model performance with different numbers of convolutional layers
As the table shows, with the same convolution kernel size, the number of convolutional layers increased one by one, and 16 convolution kernels per layer, all five cases ran to the final iteration stop of 1170 iterations (39 iterations per round × 30 rounds). With the other parameters unchanged, the RMSE gradually decreases and the time consumed gradually increases as the number of convolutional layers grows. Beyond three convolutional layers the time increase is nearly linear: feature extraction is completed in the earlier convolutional layers, and the later layers essentially add nonlinear fitting ability, but considering the growing time consumption their marginal benefit is declining.
Second, the effect of the convolution kernel size and the number of convolution kernels on CNN model performance is examined. First the network hierarchy is fixed and training is carried out with different numbers of convolution kernels of the same size, other parameters taking default values. The final RMSE on the verification set and the training time T_train are recorded; the relevant parameters and results are shown in Table 2.
TABLE 2 CNN model performance with same-size convolution kernels in different numbers
As can be seen from the table, the training time increases significantly as the number of convolution kernels grows, but when there are fewer than three convolutional layers, increasing the number of kernels does not significantly improve the training effect. The likely reason is that with too few convolutional layers, the bottleneck limiting model performance is not the number of convolution kernels: fewer convolutional layers mean fewer hidden layers, which greatly limits the nonlinear fitting ability of the model, so the number of layers and the number of kernels must grow together. Comparing the three-convolutional-layer data in Tables 1 and 2 shows that a reasonable setting of the number of convolution kernels achieves a better training effect in a shorter training time.
The size of the convolution kernel directly affects the receptive field. For ease of engineering implementation, odd-sized convolution kernels are commonly used because the center point is easier to determine. For a CNN with three convolutional layers whose numbers of convolution kernels are 8, 16 and 32 respectively, the convolution kernel size is varied, and the final RMSE on the verification set and the training time T_train are recorded; the relevant parameters and results are shown in Table 3. The single-kernel parameter counts are 1, 9 and 25 respectively; when the network depth is not particularly large, the increase in the number of parameters does not significantly affect performance.
TABLE 3 CNN model performance with convolution kernels of different sizes
As can be seen from the table, as the convolution kernel size increases, the training time of the model grows to a certain extent, but the training effect also improves significantly. Fig. 4 shows the training process in the three cases, where the RMSE is the value calculated on the training set at each iteration; the advantage of the model with 5 × 5 convolution kernels during the training iterations is plainly visible.
Third, the influence of the number of pooling layers on CNN model performance is examined. The translation invariance introduced by pooling gives the model better generalization ability and better performance on classification problems. In the regression problem, however, pooling layers may blur the training parameters, causing a slight decrease in training performance on small samples. The results of testing several sets of models are shown in Table 4.
TABLE 4 Influence of the number of pooling layers on CNN model performance
The main function of the pooling layer is down-sampling; it is generally placed after the convolutional layer to reduce dimensionality, which improves the training speed to a certain extent. As can be seen from Table 4, with the other parameters identical, the training speeds with 0 and 2 pooling layers differ greatly, the models with pooling layers training faster. In the two models with the same convolution kernel size and convolutional layer configuration, the training effect with 0 pooling layers is slightly better than with 2, indicating that in a regression problem pursuing output accuracy, the number of pooling layers can be reduced, sacrificing some training time to guarantee the output accuracy of the model.
(2) Principal parameters
The most numerous parameters in a CNN are the per-layer parameters continuously updated during training iterations; when tuning the network, the learning rate, the batch size and the number of iterations are the main considerations. These parameters affect the training time, the convergence behavior and speed, and the final performance of the model. The data above are still used for the related tests.
First, the impact of the batch size on the model. In back propagation, the gradient is calculated after averaging the loss functions obtained over each batch, so the batch size determines how smooth the gradient is between adjacent iterations. Training is carried out with different batch sizes, and several representative batch-size training curves are plotted in fig. 5, where the horizontal axis is the number of iterations and the vertical axis is the RMSE on the training set. For a more intuitive comparison the same first 1000 iterations are shown, although the actual number of iterations differs in each case.
As the figure shows, too small a batch size produces relatively large differences between adjacent batches, causing severe gradient oscillation between consecutive iterations, which is unfavorable to convergence. The larger the batch size, the smaller the difference between adjacent batches and the milder the gradient oscillation, which favors convergence of the model; but with an extremely large batch size the difference between adjacent batches becomes too small, the gradients of consecutive iterations barely differ, and the model easily falls into a local minimum, so an overly large batch size degrades the training effect.
Second, the impact of the learning rate on the model. A learning rate that is too high or too low has serious consequences for model training. The training process of the invention therefore uses the adaptive Adam algorithm, which automatically adjusts the learning rate according to the current iteration, providing the model with a learning rate as close to optimal as possible throughout training and obtaining the best training effect.
Third, the influence of the maximum number of iterations on the model. Model training stops in two cases: the number of iterations reaches the set maximum, or the loss falls within the set threshold. Increasing the number of iterations can improve the probability of convergence, but if the model has fallen into a local optimum or started oscillating too early and cannot converge, further iterations only waste resources.
5. Fast regression convolutional neural network
In power system reliability evaluation, a large number of load shedding calculations must be carried out, and to enable rapid model delivery a simplified model structure is considered so as to achieve fast training. The significant advantage of the CNN over traditional neural networks is that it can extract features while reducing dimensionality, which is accomplished by the convolutional and pooling layers. The pooling layer serves, on the one hand, as a down-sampling method that reduces the dimensions of the data and, on the other hand, introduces translation invariance, which is very important in classification problems. When the power system is not very large, the need for dimensionality reduction during learning is low, and when regression output is the main concern the task is more sensitive to position.
For the rapid calculation required by reliability evaluation, an improved fast regression convolutional neural network (FRCNN) is designed: the pooling layers in the model of fig. 1 are removed, the convolutional layers and the fully connected layer are retained, one convolutional layer is added, the parameters are still adjusted by the back propagation algorithm, and the root mean square error (RMSE) function is chosen as the loss function. Retaining the convolutional layers preserves the feature extraction ability. Removing the pooling layers affects the training time to a certain extent, but sampling the feature data in a pooling layer is no longer needed, which reduces the complexity of the procedure; the output layer performs regression output with a linear function, which effectively guarantees the accuracy of the output result while giving the model fast output capability in operation. The network structure is shown in fig. 6.
The FRCNN selects a multi-convolution, zero-pooling network structure with reference to the parameters considered in the CNN network design above. To improve accuracy, the pooling layers, which mainly serve to reduce overfitting, are removed, structurally reducing model complexity. The FRCNN thus retains the core feature extraction of the convolutional neural network while simplifying the structures that affect output accuracy, and it can output results quickly while guaranteeing precision.
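A sketch of the FRCNN under the same illustrative assumptions as before (PyTorch, a 1 × 24 × 24 input). The text and claims fix three convolutional layers, a fully connected layer and an output layer with no pooling; the kernel sizes and layer widths below are assumptions.

```python
import torch
import torch.nn as nn

class FRCNN(nn.Module):
    """Three convolutional layers, no pooling, a fully connected layer and a
    linear regression output, trained against the RMSE loss."""
    def __init__(self, in_h: int = 24, in_w: int = 24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            # no pooling layers: position information is preserved for regression
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * in_h * in_w, 64), nn.ReLU(),  # fully connected layer
            nn.Linear(64, 1),                            # linear output: load shedding amount
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Training would minimize the RMSE: torch.sqrt(torch.nn.functional.mse_loss(y_hat, y))
```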
6. Joint convolutional neural network
Due to the uncertainty of the training samples, in reliability evaluation, if the logic used to construct the data set is incomplete, the generalization ability of the model may be insufficient and the accuracy of the output result affected. For example, if the data set is too small, or high-order faults are not considered when constructing it, the model may produce outputs that deviate too much in some critical states.
In order to improve the reliability of the output result, an improved joint convolutional neural network (UCNN) is designed. The network structure is shown in fig. 7.
The UCNN is divided into a classifier and a regressor. The classifier uses cross entropy as its loss function and performs two layers of convolution and two of pooling, guaranteeing the accuracy of the classification result to the maximum extent; the regressor uses the MSE as its loss function to guarantee the accuracy of the regression data to the maximum extent. Before output, the classifier and regressor are jointly evaluated, with the following comprehensive standard: the judgment given by the classifier is taken as the main basis for deciding whether load is shed, and it is combined with the result given by the regressor to output whether load is shed and how much.
In the UCNN, a network combining classification and regression is included, so the output results are doubly verified, ensuring higher accuracy. In parameter selection, the regressor and the classifier each adopt the model structure that achieves the best accuracy, and different loss functions are used as the error formulas for the reverse training of their parameters, striking a balance between the usability and ease of use of the network. In terms of model performance, as the number of layers increases, the training time grows compared with the FRCNN; but once trained and deployed, the network's speed disadvantage is not obvious.
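A sketch of the UCNN with the layer configuration given in the claims (classifier: 3 × 3 × 8 convolution, pooling, 3 × 3 × 16 convolution, pooling, 3 × 3 × 32 convolution, fully connected layer, Softmax output; regressor: 5 × 5 × 8, 5 × 5 × 16 and 5 × 5 × 32 convolutions, fully connected layer, linear output). The input shape, hidden widths and the exact coding of the joint judgment are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UCNN(nn.Module):
    """Classifier (3x3 kernels, two pooling layers, Softmax output) plus
    regressor (5x5 kernels, no pooling, linear output) with joint judgment."""
    def __init__(self, in_h: int = 24, in_w: int = 24):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 3x3x8 conv + pooling
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 3x3x16 conv + pooling
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),                  # 3x3x32 conv
            nn.Flatten(),
            nn.Linear(32 * (in_h // 4) * (in_w // 4), 64), nn.ReLU(),    # fully connected layer
            nn.Linear(64, 2),                                            # Softmax: {no shedding, shedding}
        )
        self.regressor = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(),    # 5x5x8 conv
            nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(),   # 5x5x16 conv
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(),  # 5x5x32 conv
            nn.Flatten(),
            nn.Linear(32 * in_h * in_w, 64), nn.ReLU(),  # fully connected layer
            nn.Linear(64, 1),                            # linear output: load shedding amount
        )

    def forward(self, x: torch.Tensor):
        cls = torch.softmax(self.classifier(x), dim=1)
        amount = self.regressor(x)
        # Joint judgment: the classifier decides whether load is shed; for samples
        # judged as shedding, the regressor's value gives the amount, otherwise 0.
        shed = cls.argmax(dim=1, keepdim=True).float()
        return shed * amount, cls
```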
In order to verify the technical solution of this embodiment, the IEEE-RTS79 test system shown in fig. 8 is used to evaluate the load shedding calculation performance of the networks. The test system contains two voltage classes, 230 kV and 138 kV, and consists of 32 generator sets, 24 buses and 38 transmission lines.
A data set is constructed from non-repeated data obtained by random sampling. To maintain consistency, classification labels and regression labels are set uniformly for the data set; only the regression-only network does not train on the classification labels. The total number of samples is 43800. After random shuffling, 60% of the samples (26280) are taken as the training data set, 10% (4380) as the verification data set, and the remaining 30% (13140) as the test data set. The training and verification data sets participate in model training and parameter updating during training, while the test data set, obtained by the hold-out method, is used to test the final performance of the model.
The BP neural network, the convolutional neural network, the fast regression convolutional neural network and the joint convolutional neural network are each trained, and their performance indexes are obtained statistically. In addition, the optimal load shedding algorithm is implemented with CPLEX, load shedding is calculated for the samples in the test set, the total time consumed is recorded, and the average time T_av is calculated; the running time of the test set on the four deployed networks is recorded at the same time. The above indexes are listed in Table 5.
TABLE 5 Load shedding performance indexes of the networks
As can be seen from Table 5, the average time taken by the trained neural networks to calculate load shedding is shorter than that of the CPLEX-based direct current load shedding algorithm, and the reliability is generally high. The BP neural network has a simple structure, can only perform fitting, has weak generalization ability and frequently misjudges in certain conditions; the CNN clearly improves reliability thanks to the feature extraction of its convolutional layers; the FRCNN performs in a balanced way, achieving relatively good speed while maintaining accuracy; and the UCNN, with its separately optimized regressor and classifier, achieves a higher hit rate and reliability.
The above results show that, in running time after model deployment, load shedding calculation based on deep learning has a clear advantage over the traditional optimal load shedding algorithm while guaranteeing high reliability; the time advantage is further amplified in reliability evaluation, where the influence of the misalignment rate on the evaluation is almost negligible.
In summary, the invention provides a load shedding prediction method and studies the influence of the convolution kernel size and number, the number of convolutional layers, the training parameters and the like on model performance. For the different requirements on load shedding calculation in power system reliability evaluation, two improved networks, the FRCNN and the UCNN, are provided. The FRCNN is optimized for speed and is mainly suitable for scenarios with high speed requirements; it reduces performance loss as much as possible while keeping the feature extraction structure, and the experimental results show an obvious speed advantage with accuracy guaranteed. The UCNN is optimized for precision and is mainly suitable for scenarios that are not sensitive to speed; it double-checks the result by combining the classification and regression outputs, guaranteeing the accuracy of the model output to the maximum extent. For model training optimization, a suitable algorithm and matching parameters are selected, and a detailed comparative analysis of the network structure and hyper-parameters is carried out, improving the performance of load shedding calculation. For the evaluation indexes, in order to observe the performance of different models intuitively and clearly, the hit rate, the misalignment rate and the reliability are proposed as a three-dimensional measure. The example analysis shows that the FRCNN and UCNN proposed in this patent achieve the expected optimization effect on load shedding calculation.
Example two
The embodiment provides a load shedding prediction system based on a neural network.
A neural network-based load shedding prediction system, comprising:
a dataset partitioning module configured to: divide the preprocessed historical power grid system state scene parameter data set into a test set, a training set and a verification set;
a model training module configured to: carry out back propagation training on the load shedding prediction model based on the training set; when the set number of iterations is reached, verify the training effect of the model with the verification set, and adjust the network parameters of the model according to its performance on the verification set until the adjusted model meets the set requirement, so as to obtain a trained load shedding prediction model;
an output module configured to: after the accuracy of the trained load shedding prediction model is verified to meet the standard with the test set, obtain a load shedding result on the basis of the state scene operation data of the power grid system.
It should be noted here that the data set partitioning module, the model training module and the output module correspond to steps S101 to S103 in the first embodiment; the modules realize the same examples and application scenarios as the corresponding steps, but are not limited to what is disclosed in the first embodiment. It should also be noted that the modules described above, as part of a system, may be implemented in a computer system such as a set of computer-executable instructions.
EXAMPLE III
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the neural network-based load shedding prediction method described in the first embodiment.
Example four
The present embodiment provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, it implements the steps in the neural network-based load shedding prediction method according to the first embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A load shedding prediction method based on a neural network is characterized by comprising the following steps:
dividing the preprocessed historical power grid system state scene parameter data set into a test set, a training set and a verification set;
carrying out back propagation training on the load shedding prediction model based on the training set; when the set number of iterations is reached, verifying the training effect of the model with the verification set, and adjusting the network parameters of the model according to its performance on the verification set until the adjusted model meets the set requirement, so as to obtain a trained load shedding prediction model;
and after the accuracy of the trained load shedding prediction model is verified to meet the standard with the test set, obtaining a load shedding result on the basis of the state scene operation data of the power grid system.
2. The neural network-based load shedding prediction method according to claim 1, wherein the load shedding prediction model comprises: an improved fast regression convolutional neural network or an improved joint convolutional neural network.
3. The neural network-based load shedding prediction method of claim 1, wherein the improved fast regression convolutional neural network comprises: three convolutional layers, a fully connected layer and an output layer connected in sequence, and a back propagation algorithm is used to adjust the network parameters of the improved fast regression convolutional neural network.
4. The neural network-based load shedding prediction method of claim 3, wherein the performance of the improved fast regression convolutional neural network model is optimized using the root mean square error function as the loss function.
5. The neural network-based load shedding prediction method of claim 2, wherein the improved joint convolutional neural network comprises: a classifier and a regressor, wherein the result obtained by the output layer of the classifier through a Softmax function and the result obtained by the output layer of the regressor through a linear function are jointly judged to obtain a load shedding result; wherein the standard of the joint judgment is: the judgment given by the classifier is taken as the main basis for deciding whether load is shed, and it is combined with the result given by the regressor to obtain whether load is shed and the amount of load shed.
6. The neural network-based load shedding prediction method of claim 5, wherein the classifier comprises: a 3 × 3 × 8 convolutional layer, a pooling layer, a 3 × 3 × 16 convolutional layer, a pooling layer, a 3 × 3 × 32 convolutional layer, a fully-connected layer, and an output layer, which are connected in this order, the regressor including: a 5 × 5 × 8 convolutional layer, a 5 × 5 × 16 convolutional layer, a 5 × 5 × 32 convolutional layer, a fully-connected layer, and an output layer, which are connected in this order.
7. The neural network-based load shedding prediction method of claim 5, wherein the performance of the classifier is optimized by using a cross entropy loss function, and the performance of the regressor is optimized by using a mean square error loss function.
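To make claims 5-7 concrete, the sketch below pairs the classifier and regressor topologies of claim 6 with the joint judgment of claim 5 and the two loss functions of claim 7. The 1 × 16 × 16 input, the padding choices and the hidden widths are assumptions chosen only so the tensor shapes line up.

```python
import torch
import torch.nn as nn

class JointCNN(nn.Module):
    """Sketch of the claim-5/6 joint network: the classifier decides whether
    load shedding occurs; the regressor estimates the shed amount."""
    def __init__(self):
        super().__init__()
        # Classifier (claim 6): 3x3x8 conv, pool, 3x3x16 conv, pool,
        # 3x3x32 conv, fully-connected layer, output layer (Softmax at inference).
        self.classifier = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 2),                # shed / no-shed logits
        )
        # Regressor (claim 6): 5x5x8, 5x5x16, 5x5x32 convs,
        # fully-connected layer, linear output layer.
        self.regressor = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),                # load-shedding amount
        )

    def forward(self, x):
        return self.classifier(x), self.regressor(x)

def joint_decision(logits, amount):
    # Joint judgment (claim 5): the classifier's verdict is the primary
    # basis; the regressor supplies the amount when shedding is predicted,
    # and zero otherwise.
    shed = torch.softmax(logits, dim=1).argmax(dim=1) == 1
    return shed, torch.where(shed.unsqueeze(1), amount, torch.zeros_like(amount))

# Claim 7: cross-entropy loss for the classifier, MSE loss for the regressor.
cls_criterion, reg_criterion = nn.CrossEntropyLoss(), nn.MSELoss()
```

Separating the two branches lets the cross-entropy-trained classifier gate the regressor's continuous output, so a spurious non-zero amount is suppressed whenever the classifier predicts no shedding.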
8. A load shedding prediction system based on a neural network, comprising:
a dataset partitioning module configured to: divide the preprocessed historical power grid system state scene parameter data set into a test set, a training set and a validation set;
a model training module configured to: perform back-propagation training on the load shedding prediction model with the training set; each time the set number of iterations is reached, verify the training effect of the load shedding prediction model with the validation set, and adjust the network parameters of the load shedding prediction model according to its performance on the validation set, until the adjusted load shedding prediction model meets the set requirement, thereby obtaining a trained load shedding prediction model;
an output module configured to: on the basis of the state scene operation data of the power grid system, test the trained load shedding prediction model, whose accuracy meets the standard, with the test set to obtain a load shedding result.
9. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, carries out the steps of the neural network-based load shedding prediction method according to any one of claims 1 to 7.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the neural network-based load shedding prediction method according to any one of claims 1 to 7.
CN202110990090.3A 2021-08-26 2021-08-26 Load shedding prediction method and system based on neural network Active CN113723593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110990090.3A CN113723593B (en) 2021-08-26 2021-08-26 Load shedding prediction method and system based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110990090.3A CN113723593B (en) 2021-08-26 2021-08-26 Load shedding prediction method and system based on neural network

Publications (2)

Publication Number Publication Date
CN113723593A true CN113723593A (en) 2021-11-30
CN113723593B CN113723593B (en) 2024-01-09

Family

ID=78678265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110990090.3A Active CN113723593B (en) Load shedding prediction method and system based on neural network

Country Status (1)

Country Link
CN (1) CN113723593B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190260204A1 (en) * 2018-02-17 2019-08-22 Electro Industries/Gauge Tech Devices, systems and methods for the collection of meter data in a common, globally accessible, group of servers, to provide simpler configuration, collection, viewing, and analysis of the meter data
US20210133536A1 (en) * 2018-10-22 2021-05-06 Ennew Digital Technology Co., Ltd. Load prediction method and apparatus based on neural network
CN110222878A * 2019-05-17 2019-09-10 Guangdong University of Technology Short-term load forecasting method based on artificial fish-swarm neural network
CN110188720A * 2019-06-05 2019-08-30 Shanghai Yunshen Intelligent Technology Co., Ltd. Object detection method and system based on convolutional neural networks
WO2020248471A1 * 2019-06-14 2020-12-17 South China University of Technology Aggregation cross-entropy loss function-based sequence recognition method
CN110634082A * 2019-09-24 2019-12-31 Yunnan Power Grid Co., Ltd. Low-frequency load shedding system operation stage prediction method based on deep learning
CN110910016A * 2019-11-21 2020-03-24 Qinghai Golmud Luneng New Energy Co., Ltd. New energy storage system scheduling optimization method considering demand response resources
CN111695731A * 2020-06-09 2020-09-22 China Electric Power Research Institute Co., Ltd. Load prediction method, system and equipment based on multi-source data and hybrid neural network
CN111539132A * 2020-07-09 2020-08-14 Nanjing University of Aeronautics and Astronautics Dynamic load time domain identification method based on convolutional neural network
CN112330488A * 2020-11-05 2021-02-05 Guizhou Power Grid Co., Ltd. Power grid frequency situation prediction method based on transfer learning
CN112508684A * 2020-12-04 2021-03-16 China CITIC Bank Co., Ltd. Joint convolutional neural network-based collection risk rating method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Guan Shouping; Lyu Xin; Zhang Yanrui: "Short-term load forecasting based on process neural networks", Journal of Northeastern University (Natural Science), no. 10 *
Ming Tongtong; Wang Kai; Tian Dongdong; Xu Song; Tian Haohan: "State-of-charge estimation of lithium-ion batteries based on an LSTM neural network", Guangdong Electric Power, no. 03 *
Gao Lin; Gao Feng; Guan Xiaohong; Zhou Dianmin: "A multi-neural-network Boosting ensemble model for short-term load forecasting in power systems", Journal of Xi'an Jiaotong University, no. 10 *

Also Published As

Publication number Publication date
CN113723593B (en) 2024-01-09

Similar Documents

Publication Publication Date Title
CN110084610B (en) Network transaction fraud detection system based on twin neural network
CN113905391B (en) Integrated learning network traffic prediction method, system, equipment, terminal and medium
CN112270545A (en) Financial risk prediction method and device based on migration sample screening and electronic equipment
CN110264270A (en) A kind of behavior prediction method, apparatus, equipment and storage medium
CN114490065A (en) Load prediction method, device and equipment
JP2022515941A (en) Generating hostile neuropil-based classification system and method
CN111340245B (en) Model training method and system
CN111461353A (en) Model training method and system
CN111783688B (en) Remote sensing image scene classification method based on convolutional neural network
Urgun et al. Composite power system reliability evaluation using importance sampling and convolutional neural networks
CN117674119A (en) Power grid operation risk assessment method, device, computer equipment and storage medium
KR102152081B1 (en) Valuation method based on deep-learning and apparatus thereof
CN116993513A (en) Financial wind control model interpretation method and device and computer equipment
CN115544033B (en) Method, device, equipment and medium for updating check repeat vector library and checking repeat data
CN117113086A (en) Energy storage unit load prediction method, system, electronic equipment and medium
CN113807541B (en) Fairness repair method, system, equipment and storage medium for decision system
CN116523001A (en) Method, device and computer equipment for constructing weak line identification model of power grid
CN116542701A (en) Carbon price prediction method and system based on CNN-LSTM combination model
CN113723593A (en) Load shedding prediction method and system based on neural network
CN113872703B (en) Method and system for predicting multi-network metadata in quantum communication network
CN115392113A (en) Cross-working condition complex electromechanical system residual life prediction system and method
CN113807005A (en) Bearing residual life prediction method based on improved FPA-DBN
CN116526582B (en) Combined dispatching method and system for electric power unit based on artificial intelligence combined driving
US20220405599A1 (en) Automated design of architectures of artificial neural networks
EP4198831A1 (en) Automated feature engineering for predictive modeling using deep reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant