CN117951577A - Virtual power plant energy state sensing method - Google Patents


Info

Publication number
CN117951577A
CN117951577A (application CN202410085293.1A)
Authority
CN
China
Prior art keywords
data
neural network
fault
model
power plant
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410085293.1A
Other languages
Chinese (zh)
Inventor
楚天丰
刘要博
胡旭光
马大中
于同伟
田野
厍世达
王顺江
闫振宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Liaoning Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Liaoning Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Liaoning Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202410085293.1A
Publication of CN117951577A
Legal status: Pending

Landscapes

  • Testing And Monitoring For Control Systems (AREA)

Abstract

The invention belongs to the technical field of distributed energy state sensing, relates to a virtual power plant energy state sensing method, and particularly relates to a virtual power plant energy state sensing method based on a time-series conditional stacked convolutional neural network. The invention comprises the following steps: collecting production operation data of distributed energy resources in a virtual power plant as a complete data set; constructing a stacked convolutional neural network from the complete data set and training it to obtain a state sensing model of the data set; and testing the trained state sensing model with a test set, further optimizing and improving the stacked convolutional neural network to obtain a safe and reliable state sensing model that accurately detects the location and type of faults. The invention improves energy management efficiency, enables intelligent control, provides fault early warning, and reduces energy cost, and therefore has substantial practical value for optimizing the operation of an energy system and improving energy utilization efficiency.

Description

Virtual power plant energy state sensing method
Technical Field
The invention belongs to the technical field of distributed energy state sensing, relates to a virtual power plant energy state sensing method, and particularly relates to a virtual power plant energy state sensing method based on a time-series conditional stacked convolutional neural network.
Background
A virtual power plant is a system that integrates distributed energy resources, primarily for coordinating and optimizing the operation of individual distributed energy units. Efficient operation of virtual power plants relies on accurate sensing and status monitoring of distributed energy systems. However, due to the variety and complexity of energy resources, including solar energy, wind energy, energy storage systems, etc., resource identification and classification remains a challenging task. Therefore, in a virtual power plant, accurate acquisition of state information of an energy system is important for achieving accurate energy scheduling and optimization.
The prior art scheme is as follows:
First, data acquisition is performed on the energy system to collect data covering energy production, transmission, consumption and other aspects. Such data may come from various sources, such as sensors, equipment monitoring and market transactions.
The acquired data are then preprocessed, including data cleaning, feature extraction and data normalization. Data cleaning removes outliers and noise to ensure data quality. Feature extraction derives useful features from the raw data, such as trends and periodicity in time-series data. Data normalization maps data with different numerical ranges onto a uniform standard range so that the neural network can interpret the data more easily.
Next, a neural network model is designed for energy state sensing. The model may include various types of neural network layers, such as convolutional layers, recurrent layers and fully connected layers. The model design takes the characteristics and practical requirements of the energy system into account so as to extract the features relevant to the energy state as fully as possible.
Finally, the designed neural network model is trained with the preprocessed data. The training process divides the data into a training set and a test set, optimizes the weights and biases of the model on the training set, and then evaluates the performance of the model on the test set. Through repeated training iterations, the model progressively approximates the state of the real energy system.
Once the model training is completed, this trained neural network model can be utilized for energy state sensing. New observation data is input into the model, and the model outputs a prediction result of the states of all components of the energy system. These predictions can be used to monitor the operating state of the energy system in real time, aid in decision making and system optimization.
In general, the neural network-based energy state sensing scheme realizes sensing and prediction of the state of an energy system through data acquisition, preprocessing, neural network model design and training. The method can improve the reliability and efficiency of the energy system and provide decision support for energy management.
Currently, a state sensing method based on a neural network is one of advanced data compensation methods. These methods take advantage of the powerful modeling capabilities of neural networks to enable efficient processing and recovery of unreliable data. Compared with the traditional mathematical modeling and statistical analysis method, the neural network-based method is more suitable for processing the multivariate data sequence and can utilize global information for modeling and prediction. The neural network-based method has certain advantages in processing the data compensation problem, can fully utilize the time sequence and global information of the data, and can be applied to various fields.
However, current neural network methods still face some challenges, such as the design and optimization of the model structure, the demand on computing resources, and the selection of loss functions, so further research and improvement are still needed.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a virtual power plant energy state sensing method that uses a time-series conditional stacked convolutional neural network to achieve accurate sensing and state monitoring of the distributed energy system in a virtual power plant.
In order to achieve the aim of the invention, the invention is realized by adopting the following technical scheme:
A virtual power plant energy state awareness method, comprising:
Collecting production operation data of distributed energy resources in a virtual power plant as a complete data set;
Constructing a stacked convolutional neural network by using the complete data set, and training to obtain a state perception model of the data set;
And testing the trained state sensing model with a test set, further optimizing and improving the stacked convolutional neural network to obtain a safe and reliable state sensing model, and accurately detecting the location and type of faults.
Furthermore, the production operation data of the distributed energy resources in the virtual power plant are collected as a complete data set, wherein the complete data set comprises multiple data types covering both stable operation states and fault operation states, and the fault data are provided with fault information labels; the collection comprises the following steps:
Step 1.1: perform data preprocessing on the acquired complete data, including normalization and correlation analysis;
Step 1.2: generate sample data by sliding a fixed-length time window over the preprocessed data, extracting multiple consecutive subsequence samples from the original data (a sampling sketch is given after Step 1.4);
Step 1.3: combine the sample data with the corresponding fault-type labels and fault-location labels to form a complete state-sensing data set;
Step 1.4: randomly shuffle the state-sensing data set and divide it into a training set, a validation set and a test set according to a set proportion.
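For illustration, a minimal NumPy sketch of the sliding-window sampling and label pairing of Steps 1.2-1.3 follows. The window length, stride, and the convention of labelling each window by its last time step are assumptions, not specified in the text.

```python
import numpy as np

def sliding_window_samples(data, type_labels, loc_labels, window_len=100, stride=1):
    """Extract consecutive fixed-length subsequences from the preprocessed data and
    pair each window with its fault-type and fault-location labels (Steps 1.2-1.3)."""
    samples, y_type, y_loc = [], [], []
    for start in range(0, len(data) - window_len + 1, stride):
        end = start + window_len
        samples.append(data[start:end])
        y_type.append(type_labels[end - 1])   # label taken at the window's last step (assumption)
        y_loc.append(loc_labels[end - 1])
    return np.stack(samples), np.array(y_type), np.array(y_loc)
```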
Further, the data preprocessing of the collected complete data includes:
Step 1.1.1: normalization processing is carried out on the acquired data using the proposed dynamic reference normalization method, shown in the following formula:

x_norm(t) = (x(t) − μ(t)) / σ(t)

wherein x(t) is the data sample at time t in the data set D, μ(t) represents the data mean at the reference time point t, σ(t) represents the data standard deviation at the reference time point t, and x_norm(t) is the normalized value of the data sample x(t);
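The text does not state how μ(t) and σ(t) are computed at each reference time point; the sketch below assumes a trailing reference window ending at t, which is one plausible reading.

```python
import numpy as np

def dynamic_reference_normalize(x, window=50, eps=1e-8):
    """Normalize each sample x(t) with the mean and standard deviation of a
    trailing reference window ending at t (assumed interpretation)."""
    x = np.asarray(x, dtype=float)
    x_norm = np.empty_like(x)
    for t in range(len(x)):
        ref = x[max(0, t - window + 1):t + 1]        # reference segment up to time t
        mu_t, sigma_t = ref.mean(), ref.std()
        x_norm[t] = (x[t] - mu_t) / (sigma_t + eps)  # x_norm(t) = (x(t) - mu(t)) / sigma(t)
    return x_norm
```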
Step 1.1.2: for a given data set, re-rank the variables according to their normalized numerical values, introduce an arrogance coefficient (AC) to adjust the ranking, and multiply the original rank by the arrogance coefficient to adjust the rank of each variable:
where AC denotes the arrogance coefficient, max_rank is the rank of the highest-ranked variable, min_rank is the rank of the lowest-ranked variable, and cur_rank denotes the rank of the current variable;
Step 1.1.3: Spearman correlation analysis is carried out on the data adjusted by the arrogance coefficient:

ρ = cov(X, Y) / √(σ_X² · σ_Y²)

wherein ρ is the correlation coefficient of variables X and Y, X and Y are two different normalized data samples of the data set D, cov(X, Y) is the covariance of X and Y, and σ_X² and σ_Y² represent the variances of X and Y, respectively.
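A minimal sketch of the Spearman correlation as the Pearson correlation of rank-transformed variables; the arrogance-coefficient rank adjustment is omitted because its formula is not reproduced in the text, but adjusted ranks could be substituted for the rankdata(...) calls.

```python
import numpy as np
from scipy.stats import rankdata

def spearman_rho(x, y):
    """Spearman correlation: rho = cov(R(X), R(Y)) / (sigma_RX * sigma_RY),
    computed on the rank-transformed variables."""
    rx, ry = rankdata(x), rankdata(y)
    cov = np.cov(rx, ry)[0, 1]
    return cov / (rx.std(ddof=1) * ry.std(ddof=1))
```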
Further, the constructing a stacked convolutional neural network by using the complete data set, and training to obtain a state sensing model of the data set includes:
Step 2.1: construct a one-dimensional deep convolutional neural network and extract data features from the preprocessed data to capture fault modes; construct a Transformer model for time-series learning to preserve the temporal correlation of the data;
Step 2.2: form a stacked convolutional neural network from the one-dimensional deep convolutional neural network and the Transformer model, and build a loss function to train the network;
Step 2.3: perform state sensing on new, unknown data with the trained stacked neural network model.
Further, constructing the one-dimensional deep convolutional neural network and extracting data features from the preprocessed data to capture fault modes, and constructing the Transformer model for time-series learning to preserve the temporal correlation of the data, comprises:
Step 2.1.1: the convolution layer is constructed to effectively capture and aggregate features within local regions, a multi-head attention mechanism is added to capture and fuse the local features at the global level, and the two captured feature sets are weighted to output a depth feature sequence:
F_out = α·F_att + β·F_conv
wherein F_out represents the weighted output feature sequence combining the convolution-layer and multi-head-attention features, α and β are respectively the weighting proportions of the multi-head-attention features and the convolution-layer features, and F_att and F_conv respectively represent the multi-head-attention features and the convolution-layer features;
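For illustration, a minimal PyTorch sketch of the weighted fusion F_out = α·F_att + β·F_conv follows. The channel width, head count, fixed weights α and β, and the choice to let the attention branch operate on a linear projection of the raw input are assumptions not specified by the text.

```python
import torch
import torch.nn as nn

class LocalGlobalFusion(nn.Module):
    """Weighted fusion of convolutional (local) and multi-head-attention (global) features."""
    def __init__(self, in_ch=1, hidden=64, heads=4, alpha=0.5, beta=0.5):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, hidden, kernel_size=3, padding=1)   # local feature extraction
        self.proj = nn.Conv1d(in_ch, hidden, kernel_size=1)              # channel projection for the attention branch
        self.att = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.alpha, self.beta = alpha, beta

    def forward(self, x):                     # x: (batch, channels, time)
        f_conv = self.conv(x)                 # (batch, hidden, time)
        h = self.proj(x).transpose(1, 2)      # (batch, time, hidden)
        f_att, _ = self.att(h, h, h)          # global self-attention over time
        return self.alpha * f_att + self.beta * f_conv.transpose(1, 2)   # F_out: (batch, time, hidden)
```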
Step 2.1.2: construct a dense embedding layer E(·), convert the weighted output feature sequence F_out into feature embeddings, add a position vector PE to the feature embedding sequence, and feed the embedding sequence H_0 into the Transformer:
H_0 = E(F_out) + PE
wherein E(·) represents the dense embedding layer function;
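A small sketch of Step 2.1.2, assuming a learned linear embedding E(·) and fixed sinusoidal position vectors PE (the text only states that a position vector is added); the dimensions are illustrative.

```python
import math
import torch
import torch.nn as nn

def sinusoidal_pe(seq_len, d_model):
    """Fixed sinusoidal position vectors PE (an assumed choice of positional encoding)."""
    pos = torch.arange(seq_len).unsqueeze(1).float()
    div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

embed = nn.Linear(64, 128)                    # dense embedding layer E(.)
f_out = torch.randn(8, 100, 64)               # weighted feature sequence from the previous step
h0 = embed(f_out) + sinusoidal_pe(100, 128)   # H_0 = E(F_out) + PE, input to the Transformer
```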
Step 2.1.3: a Transformer decoder formed by stacking L encoder blocks is constructed, and a scaled dot-product attention mechanism is adopted to obtain more stable gradients; the self-attention layer ATT in encoder l learns global features from the embedding sequence H_{l-1} output by the previous encoder, and the output of the Transformer decoder is obtained by stacking:

ATT_l^h(H_{l-1}) = softmax( (H_{l-1}·W_Q)(H_{l-1}·W_K)^T / √d_k ) (H_{l-1}·W_V)
MHA_l(H_{l-1}) = [ATT_l^1(H_{l-1}); …; ATT_l^h(H_{l-1})]·W_O

wherein ATT_l(H_{l-1}) denotes the features learned by the self-attention layer of encoder l, W_Q, W_K and W_V are the trainable parameters of the self-attention layer, d_k is the dimension of the embedding sequence H_{l-1}, MHA_l(H_{l-1}) is the attention-layer output after stacking, ATT_l^1 denotes the features of the 1st self-attention head, ATT_l^h denotes the features of the h-th self-attention head, and W_O is the trainable parameter of the stacked multi-head attention layer.
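The following sketch shows one scaled dot-product self-attention head with trainable W_Q, W_K and W_V as named above; concatenating several such heads and projecting with W_O would give MHA_l. Dimensions are illustrative.

```python
import math
import torch
import torch.nn as nn

class ScaledDotProductSelfAttention(nn.Module):
    """One self-attention head: ATT(H) = softmax(Q K^T / sqrt(d_k)) V."""
    def __init__(self, d_model, d_k):
        super().__init__()
        self.w_q = nn.Linear(d_model, d_k, bias=False)   # W_Q
        self.w_k = nn.Linear(d_model, d_k, bias=False)   # W_K
        self.w_v = nn.Linear(d_model, d_k, bias=False)   # W_V
        self.d_k = d_k

    def forward(self, h):                                     # h: (batch, time, d_model)
        q, k, v = self.w_q(h), self.w_k(h), self.w_v(h)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_k)   # scaled dot products
        return torch.softmax(scores, dim=-1) @ v                 # attention-weighted values
```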
Further, forming the stacked convolutional neural network from the one-dimensional deep convolutional neural network and the Transformer model, and building a loss function to train the network, comprises:
Step 2.2.1: each Transformer decoder block adopts an MHA sub-layer and a Feed-Forward Network (FFN) sub-layer, with residual connections and layer normalization (LN) arranged between the sub-layers; the output embedding sequence H_l of encoder l is:

H̃_l = LN( MHA_l(H_{l-1}) + H_{l-1} )
H_l = LN( FFN_l(H̃_l) + H̃_l )

wherein H̃_l is the intermediate output of the sequence features of encoder l, LN(·) represents the value output after layer normalization of the features, and FFN_l(H̃_l) represents the feature transformation of the encoder sequence features learned by the feed-forward network;
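A minimal sketch of one such block, assuming the post-normalization arrangement written above (residual connection followed by layer normalization around both the MHA and FFN sub-layers); the dropout rate and feed-forward width are assumptions.

```python
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One stacked block: H~ = LN(MHA(H) + H); H_out = LN(FFN(H~) + H~)."""
    def __init__(self, d_model=128, heads=4, d_ff=256, dropout=0.1):
        super().__init__()
        self.mha = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, h):
        att, _ = self.mha(h, h, h)
        h_tilde = self.ln1(h + self.drop(att))                    # residual + LN after the MHA sub-layer
        return self.ln2(h_tilde + self.drop(self.ffn(h_tilde)))   # residual + LN after the FFN sub-layer
```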
Step 2.2.2: pass the attention-encoded feature vector sequence in series to two fault classification heads, each employing 4 dense layers with dropout regularization; the first classification head outputs the softmax probability of the fault type, and the second classification head outputs the softmax probability of the fault location;
Step 2.2.3: train the sequence learning model with cross entropy as the loss function, using a binary cross-entropy loss for fault/no-fault classification and a categorical cross-entropy loss for fault location:

D_Loss = −(1/n) Σ_{i=1}^{n} [ y_i·log(ŷ_i) + (1 − y_i)·log(1 − ŷ_i) ]
M_Loss = −(1/n) Σ_{i=1}^{n} Σ_{m} y_(i,m)·log(ŷ_(i,m))

wherein D_Loss and M_Loss represent the binary cross-entropy loss and the categorical cross-entropy loss respectively, y_i represents the true value, taking the value 0 or 1, and ŷ_i is the predicted value; y_(i,m) denotes whether the i-th sample belongs to the m-th fault type, ŷ_(i,m) represents the prediction that the i-th sample belongs to the m-th fault type, and n represents the number of scalar values in the model;
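A short sketch of the two losses using PyTorch's built-in cross-entropy functions; the logits-based variants are used here for numerical stability, and the equal weighting of the two terms is an assumption.

```python
import torch.nn.functional as F

def detection_and_location_loss(det_logit, det_target, loc_logits, loc_target):
    """D_Loss: binary cross entropy for fault / no-fault detection.
    M_Loss: categorical cross entropy over fault-location (or fault-type) classes."""
    d_loss = F.binary_cross_entropy_with_logits(det_logit, det_target.float())
    m_loss = F.cross_entropy(loc_logits, loc_target)   # loc_target: integer class indices
    return d_loss + m_loss                             # combined objective (weighting is an assumption)
```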
Step 2.2.4: according to the value of the loss function, calculating the gradient through a back propagation algorithm, transmitting the gradient back to each layer of the network to adjust network parameters so as to minimize the loss function, and repeatedly training all samples in the training set until the preset training round number is reached or the loss function reaches a satisfactory value.
Furthermore, using the trained stacked neural network model to perform state sensing on new, unknown data includes:
Step 2.3.1: after training is completed, the generalization capability and performance of the model are verified by using a test set;
Step 2.3.2: evaluate the state-sensing neural network model using the root mean square error, with the evaluation index given by:

RMSE = √( (1/N) Σ_{i=1}^{N} (x_i − x′_i)² )

where i indexes the training samples, N is the number of training data points, x_i is the actual fault distance, and x′_i represents the fault distance estimated by the model.
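A one-function sketch of the RMSE index:

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean square error between actual and estimated fault distances."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.sqrt(np.mean((actual - predicted) ** 2))
```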
A virtual power plant energy state sensing device, comprising:
the acquisition module is used for acquiring production operation data of distributed energy resources in the virtual power plant as a complete data set;
the neural network training module is used for constructing a stacked convolutional neural network by utilizing the complete data set and training to obtain a state perception model of the data set;
The testing module is used for testing the trained state sensing model by utilizing the testing set, further optimizing and improving the stacked convolutional neural network to obtain a safe and reliable state sensing model, and accurately detecting the occurrence position and type of the fault.
A computer device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, the processor implementing the steps of any one of the virtual power plant energy state awareness methods when executing the computer program.
A computer storage medium having a computer program stored thereon, which when executed by a processor, implements the steps of any one of the virtual power plant energy state sensing methods.
The beneficial effects of the invention are as follows:
The invention provides a virtual power plant energy state sensing method based on a time-series conditional stacked convolutional neural network. First, time-series data of key parameters in the virtual power plant, such as current, voltage and temperature, are collected through sensors and similar devices. These time-series data are then fed into the time-series conditional stacked convolutional neural network for processing. Using the feature extraction of the convolutional neural network and the time-series conditional stacking approach, the network structure learns the temporal characteristics of the energy state and performs state prediction and classification.
The virtual power plant energy state sensing method based on the time-series conditional stacked convolutional neural network improves energy management efficiency, enables intelligent control, provides fault early warning, and reduces energy cost, and therefore has substantial practical value for optimizing the operation of an energy system and improving energy utilization efficiency.
For data processing, the invention proposes two methods, dynamic reference normalization and the introduction of an arrogance coefficient, to capture the long-term trend and short-term fluctuation characteristics of a dynamic data sequence; a convolution layer is constructed to effectively capture and aggregate features within local regions, while a multi-head attention mechanism is added to capture and fuse local features at the global level; and a stacked convolutional neural network is formed from the one-dimensional deep convolutional neural network and a Transformer model, with 4 dense layers with dropout regularization added to output two fault classification heads. The first classification head outputs the softmax probability of the fault type, and the second head outputs the softmax probability of the fault location.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 is a diagram of a virtual power plant energy state sensing method according to the present invention;
FIG. 2 is a block diagram of the scaled dot-product attention mechanism in accordance with the present invention;
FIG. 3 is a virtual power plant topology for a specific application of the present invention;
fig. 4 is a training flow chart of the method of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, the embodiments of the present invention and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
The invention will be further described with reference to fig. 1-4 and specific examples.
Example 1
The invention provides an embodiment, which is a virtual power plant energy state sensing method, as shown in fig. 1. The invention innovates on data processing and on the convolutional neural network, and proposes an energy state sensing method suited to a virtual power plant that takes the characteristics of distributed energy resource data in the virtual power plant into account, namely a virtual power plant energy state sensing method based on a time-series conditional stacked convolutional neural network. The historical operation data of the invention are derived from the virtual power plant topology shown in fig. 3, which consists of distributed energy resources such as wind power, photovoltaics, energy storage and biomass energy.
In the invention, first, in the data processing part, two methods, dynamic reference normalization and the introduction of the arrogance coefficient, are proposed for data processing so as to capture the long-term trend and short-term fluctuation characteristics of a dynamic data sequence.
Secondly, innovations are made on convolutional neural networks:
(1) The constructed convolution layer effectively captures and aggregates features within local regions, while the added multi-head attention mechanism captures and fuses local features at the global level.
(2) A stacked convolutional neural network is formed from the one-dimensional deep convolutional neural network and a Transformer model, and 4 dense layers with dropout regularization are added to output two fault classification heads. The first classification head outputs the softmax probability of the fault type, and the second head outputs the softmax probability of the fault location.
The method specifically comprises the following steps:
Step 1: acquiring production operation data of distributed energy resources in a virtual power plant in a certain operation time period as a complete data set D, wherein the complete data set D comprises a plurality of data types such as a stable operation state, a fault operation state and the like, and the fault data set is provided with a fault information tag;
Step 2: constructing a stacked convolutional neural network by using the complete data set D and training, wherein the stacked convolutional neural network comprises a fault detection neural network and a fault positioning neural network, so as to obtain a state perception model of the data set D;
Step 3: testing the trained state sensing model with a test set, further optimizing and improving the stacked convolutional neural network to obtain a safe and reliable state sensing model, and accurately detecting the location and type of faults.
Example 2
The invention also provides an embodiment, which is a virtual power plant energy state sensing method, wherein Step 1, collecting production operation data of distributed energy resources in the virtual power plant over a certain operation period as a complete data set D, wherein the complete data set D comprises multiple data types such as stable operation states and fault operation states and the fault data are provided with fault information labels, further comprises the following steps:
Step 1.1: perform data preprocessing on the acquired complete data, including normalization, correlation analysis and similar operations;
Step 1.2: generate sample data by sliding a fixed-length time window over the preprocessed data, extracting multiple consecutive subsequence samples from the original data;
Step 1.3: combine the sample data with the corresponding fault-type labels and fault-location labels to form a complete state-sensing data set;
Specifically, 200 data samples of the fault-free condition, enhanced with random noise, are added to the data, yielding 19×14×15+200=4190 data samples in total. The sample data are combined with the corresponding fault-type labels and fault-location labels to form a complete state-sensing data set, wherein the data set D comprises 5 sensing states in total, namely a normal operation state, an abnormal warning state, a working-condition adjustment state, a topology change state and an emergency alarm state, and each state corresponds to changes in different data types.
Step 1.4: randomly shuffle the complete state-sensing data set and divide it into a training set, a validation set and a test set according to a preset proportion; specifically, the complete state-sensing data set is randomly shuffled and divided into a training set, a validation set and a test set in a 7:2:1 ratio.
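A minimal NumPy sketch of the random shuffling and 7:2:1 split described in this step; the random seed is illustrative.

```python
import numpy as np

def shuffle_and_split_721(x, y, seed=0):
    """Randomly shuffle the complete state-sensing data set and split it
    into training / validation / test sets in a 7:2:1 ratio."""
    idx = np.random.default_rng(seed).permutation(len(x))
    x, y = x[idx], y[idx]
    n_train, n_val = int(0.7 * len(x)), int(0.2 * len(x))
    return ((x[:n_train], y[:n_train]),
            (x[n_train:n_train + n_val], y[n_train:n_train + n_val]),
            (x[n_train + n_val:], y[n_train + n_val:]))
```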
The production process data collected in this example are shown in table 1.
TABLE 1 Production process data
Wind power generation: wind speed, wind direction, wind energy density
Photovoltaics: illumination intensity, temperature, spectral distribution, sunshine duration
Biomass energy: geographical location, amount of available resources, humidity, heating value
Cogeneration: supply water temperature, return water temperature, voltage, phase angle
Energy storage battery: state of charge, state of discharge, temperature, efficiency
In the method for sensing the energy state of the virtual power plant, the step 1.1 further comprises the following steps:
Step 1.1.1: the acquired data are normalized; in order to better adapt to the dynamic changes of time-series data and capture the long-term trend and short-term fluctuation of the data, a dynamic reference normalization method is proposed:

x_norm(t) = (x(t) − μ(t)) / σ(t)

where x(t) is the data sample at time t in the data set D, μ(t) represents the data mean at the reference time point t, σ(t) represents the standard deviation of the data at the reference time point t, and x_norm(t) is the normalized value of the data sample x(t).
Step 1.1.2: for a given data set, re-rank the variables according to their normalized numerical values, introduce an arrogance coefficient (Arrogance Coefficient, AC) to adjust the ranking, and multiply the original rank by the arrogance coefficient to adjust the rank of each variable:
where AC represents the arrogance coefficient, max_rank is the rank of the highest-ranked variable, min_rank is the rank of the lowest-ranked variable, and cur_rank represents the rank of the current variable.
Step 1.1.3: Spearman correlation analysis is carried out on the data adjusted by the arrogance coefficient:

ρ = cov(X, Y) / √(σ_X² · σ_Y²)

wherein ρ is the correlation coefficient of variables X and Y, X and Y are two different normalized data samples of the data set D, cov(X, Y) is the covariance of X and Y, and σ_X² and σ_Y² represent the variances of X and Y, respectively;
Example 3
The invention also provides an embodiment, which is a virtual power plant energy state sensing method, as shown in fig. 4, wherein in the step 2, a stacked convolutional neural network is constructed and trained by using a complete data set D, and the stacked convolutional neural network comprises a fault detection neural network and a fault location neural network, so as to obtain a state sensing model of the data set D; the method specifically comprises the following steps:
Step 2.1: construct a one-dimensional deep convolutional neural network and extract data features from the preprocessed data to capture fault modes; construct a Transformer model for time-series learning to preserve the temporal correlation of the data; this comprises:
Step 2.1.1: construct a convolution layer to effectively capture and aggregate features within local regions; the input of the convolution layer is one-dimensional time-series information X_m with dimension 100×1, the number of convolution kernels is set to 64 and the kernel size to 3×3; at the same time, add a multi-head attention mechanism to capture and fuse local features at the global level, and weight the two captured feature sets to output the depth feature sequence:
F_out = α·F_att + β·F_conv
wherein F_out represents the weighted output feature sequence combining the convolution-layer and multi-head-attention features, α and β are the weighting proportions of the multi-head-attention features and the convolution-layer features respectively, and F_att and F_conv represent the multi-head-attention features and the convolution-layer features respectively.
Step 2.1.2: construct a dense embedding layer E(·), convert the weighted output feature sequence F_out into feature embeddings, add a position vector PE to the feature embedding sequence, and feed the embedding sequence H_0 into the Transformer:
H_0 = E(F_out) + PE
where E(·) represents the dense embedding layer function.
Step 2.1.3: in order to sense the system state more accurately and account for the diversity of system states, a Transformer decoder formed by stacking L encoder blocks is constructed. One-dimensional versions of ResNet-152-v2, Inception-v3, Inception-ResNet-v2, DenseNet-169, Xception, ResNeXt-50 and EfficientNet-B2 are developed and applied in a time-distributed manner to extract features from each slice of the time-series data, and the feature map sequence extracted by the final convolution layer of the time-distributed one-dimensional deep CNN is globally average-pooled to obtain a depth feature vector sequence. Meanwhile, a scaled dot-product attention mechanism is adopted to obtain more stable gradients; the self-attention layer ATT in encoder l learns global features from the embedding sequence H_{l-1} output by the previous encoder, and the output of the Transformer decoder is obtained by stacking. As shown in FIG. 2, FIG. 2 is a block diagram of the scaled dot-product attention mechanism in accordance with the present invention.

ATT_l^h(H_{l-1}) = softmax( (H_{l-1}·W_Q)(H_{l-1}·W_K)^T / √d_k ) (H_{l-1}·W_V)
MHA_l(H_{l-1}) = [ATT_l^1(H_{l-1}); …; ATT_l^h(H_{l-1})]·W_O

wherein ATT_l(H_{l-1}) denotes the features learned by the self-attention layer of encoder l, W_Q, W_K and W_V are the trainable parameters of the self-attention layer, d_k is the dimension of the embedding sequence H_{l-1}, MHA_l(H_{l-1}) is the attention-layer output after stacking, ATT_l^1 denotes the features of the 1st self-attention head, ATT_l^h denotes the features of the h-th self-attention head, and W_O is the trainable parameter of the stacked multi-head attention layer.
Step 2.2: form a stacked convolutional neural network from the one-dimensional deep convolutional neural network and the Transformer model, and build a loss function to train the network; this comprises:
Step 2.2.1: each Transformer decoder block employs an MHA sub-layer and a Feed-Forward Network (FFN) sub-layer, with residual connections and Layer Normalization (LN) between the sub-layers. In the MHA sub-layer, the input data first undergo self-attention computed by multiple attention heads, which allows the model to focus on different positions and key information in the input sequence. Each attention head generates a set of weights used to compute a weighted representation of the input data. After the MHA sub-layer computation is completed, the attention-weighted result is connected to the original input data through a residual connection, and the data are then processed by the FFN sub-layer. The FFN sub-layer consists of two fully connected layers with a nonlinear transformation through an activation function (usually ReLU) between them; the same operations are then performed in each decoder block, gradually extracting and integrating the features of the input data over multiple layers. Thus, the output embedding sequence H_l of encoder l is:

H̃_l = LN( MHA_l(H_{l-1}) + H_{l-1} )
H_l = LN( FFN_l(H̃_l) + H̃_l )

wherein H̃_l is the intermediate output of the sequence features of encoder l, LN(·) represents the value output after layer normalization of the features, and FFN_l(H̃_l) represents the feature transformation of the encoder sequence features learned by the feed-forward network.
Step 2.2.2: the attention-encoded feature vector sequence is passed in series to two fault classification heads, each employing 4 dense layers with dropout regularization. The classification heads are tuned over different numbers of hidden layers, neurons per layer and dropout factors. After hyperparameter tuning, the four-layer dense classification heads have 512, 128 and 32 hidden neurons respectively, with ReLU activation found to work best. Sigmoid activation performs the binary fault/no-fault classification, and SoftMax activation gives the prediction scores for the fault type, phase and fault location. The first classification head outputs the softmax probability of the fault type, and the second head outputs the softmax probability of the fault location.
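A sketch of the two classification heads, under the assumptions that the fourth dense layer is the output layer, that the dropout rate is 0.3, and that the output sizes (1 for fault/no-fault, 5 for the sensing states of embodiment 2) are illustrative; during training the sigmoid/softmax outputs would pair with the losses of Step 2.2.3.

```python
import torch.nn as nn

def make_classification_head(in_dim, out_classes, final_activation):
    """Dense classification head with dropout regularization; the 512/128/32
    hidden widths follow the text, the dropout rate is an assumption."""
    return nn.Sequential(
        nn.Linear(in_dim, 512), nn.ReLU(), nn.Dropout(0.3),
        nn.Linear(512, 128), nn.ReLU(), nn.Dropout(0.3),
        nn.Linear(128, 32), nn.ReLU(), nn.Dropout(0.3),
        nn.Linear(32, out_classes), final_activation)

fault_detection_head = make_classification_head(128, 1, nn.Sigmoid())        # fault / no-fault
fault_location_head = make_classification_head(128, 5, nn.Softmax(dim=-1))   # fault type / location scores
```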
Step 2.2.3: the sequence learning model is trained with cross entropy as the loss function. A binary cross-entropy loss is used for fault/no-fault classification, where D_Loss represents the difference between the model output and the true label; by minimizing this loss, the model learns to accurately predict fault and no-fault conditions from the state information. A categorical cross-entropy loss is used for fault location, where M_Loss represents the loss values corresponding to the different states, so that the fault location is accurately identified from the state information:

D_Loss = −(1/n) Σ_{i=1}^{n} [ y_i·log(ŷ_i) + (1 − y_i)·log(1 − ŷ_i) ]
M_Loss = −(1/n) Σ_{i=1}^{n} Σ_{m} y_(i,m)·log(ŷ_(i,m))

wherein D_Loss and M_Loss represent the binary cross-entropy loss and the categorical cross-entropy loss respectively, y_i represents the true value, taking the value 0 or 1, and ŷ_i is the predicted value; y_(i,m) denotes whether the i-th sample belongs to the m-th fault type, ŷ_(i,m) represents the prediction that the i-th sample belongs to the m-th fault type, and n represents the number of scalar values in the model.
Step 2.2.4: according to the value of the loss function, calculating the gradient through a back propagation algorithm, transmitting the gradient back to each layer of the network to adjust network parameters so as to minimize the loss function, and repeatedly training all samples in the training set until the preset training round number is reached or the loss function reaches a satisfactory value.
Step 2.3: performing state sensing on new unknown data by using the trained stacked neural network model; comprising the following steps:
Step 2.3.1: after training is completed, the test set is used to verify the generalization ability and performance of the model.
Step 2.3.2: the state-sensing neural network model is evaluated using the root mean square error (RMSE), with the evaluation index given by:

RMSE = √( (1/N) Σ_{i=1}^{N} (x_i − x′_i)² )

where i indexes the training samples, N is the number of training data points, x_i is the actual fault distance, and x′_i represents the fault distance estimated by the model.
Example 4
The invention also provides an embodiment, which is an energy state sensing device of a virtual power plant, comprising:
an acquisition module, used for acquiring production operation data of distributed energy resources in the virtual power plant over a certain operation period as a complete data set D, wherein the complete data set D comprises multiple data types such as stable operation states and fault operation states, and the fault data are provided with fault information labels;
the neural network training module is used for constructing a stacked convolutional neural network by utilizing the complete data set and training, wherein the stacked convolutional neural network comprises a fault detection neural network and a fault positioning neural network, so as to obtain a state perception model of the data set D;
a test module, used for testing the trained state sensing model with the test set and further optimizing and improving the stacked convolutional neural network to obtain a safe and reliable state sensing model that accurately detects the location and type of faults.
Example 5
Based on the same inventive concept, the embodiment of the invention also provides a computer device, which comprises a storage medium, a processor and a computer program stored on the storage medium and capable of running on the processor. The steps of any one of the virtual power plant energy status sensing methods described in embodiments 1, 2 or 3 are implemented when the processor executes the computer program.
Example 6
Based on the same inventive concept, the embodiments of the present invention further provide a computer storage medium, where a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the virtual power plant energy status sensing methods described in embodiments 1 or 2 or 3.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical aspects of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the invention without departing from the spirit and scope of the invention, which is intended to be covered by the claims.

Claims (10)

1. A method for sensing the energy status of a virtual power plant, comprising:
Collecting production operation data of distributed energy resources in a virtual power plant as a complete data set;
Constructing a stacked convolutional neural network by using the complete data set, and training to obtain a state perception model of the data set;
and testing the trained state sensing model by using a test set, further optimizing and improving the stacked convolutional neural network to obtain a safe and reliable state sensing model, and accurately detecting the occurrence position and type of the fault.
2. The method for sensing the energy state of the virtual power plant according to claim 1, wherein the production operation data of the distributed energy resources in the virtual power plant are collected as a complete data set, the complete data set comprises multiple data types covering both stable operation states and fault operation states, and the fault data are provided with fault information labels; the collection comprises the following steps:
Step 1.1: perform data preprocessing on the acquired complete data, including normalization and correlation analysis;
Step 1.2: generate sample data by sliding a fixed-length time window over the preprocessed data, extracting multiple consecutive subsequence samples from the original data;
Step 1.3: combine the sample data with the corresponding fault-type labels and fault-location labels to form a complete state-sensing data set;
Step 1.4: randomly shuffle the state-sensing data set and divide it into a training set, a validation set and a test set according to a set proportion.
3. The method for sensing the energy state of the virtual power plant according to claim 2, wherein the data preprocessing of the collected complete data comprises:
Step 1.1.1: normalization processing is carried out on the acquired data using the proposed dynamic reference normalization method, shown in the following formula:

x_norm(t) = (x(t) − μ(t)) / σ(t)

wherein x(t) is the data sample at time t in the data set D, μ(t) represents the data mean at the reference time point t, σ(t) represents the data standard deviation at the reference time point t, and x_norm(t) is the normalized value of the data sample x(t);
Step 1.1.2: for a given data set, re-rank the variables according to their normalized numerical values, introduce an arrogance coefficient (AC) to adjust the ranking, and multiply the original rank by the arrogance coefficient to adjust the rank of each variable:
where AC represents the arrogance coefficient, max_rank is the rank of the highest-ranked variable, min_rank is the rank of the lowest-ranked variable, and cur_rank represents the rank of the current variable;
Step 1.1.3: Spearman correlation analysis is carried out on the data adjusted by the arrogance coefficient:

ρ = cov(X, Y) / √(σ_X² · σ_Y²)

wherein ρ is the correlation coefficient of variables X and Y, X and Y are two different normalized data samples of the data set D, cov(X, Y) is the covariance of X and Y, and σ_X² and σ_Y² represent the variances of X and Y, respectively.
4. The method for sensing the state of energy of a virtual power plant according to claim 1, wherein the constructing a stacked convolutional neural network by using a complete data set and training the stacked convolutional neural network to obtain a state sensing model of the data set comprises:
Step 2.1: construct a one-dimensional deep convolutional neural network and extract data features from the preprocessed data to capture fault modes; construct a Transformer model for time-series learning to preserve the temporal correlation of the data;
Step 2.2: form a stacked convolutional neural network from the one-dimensional deep convolutional neural network and the Transformer model, and build a loss function to train the network;
Step 2.3: perform state sensing on new, unknown data with the trained stacked neural network model.
5. The method for sensing the energy state of the virtual power plant according to claim 4, wherein constructing the one-dimensional deep convolutional neural network and extracting data features from the preprocessed data to capture fault modes, and constructing the Transformer model for time-series learning to preserve the temporal correlation of the data, comprises:
Step 2.1.1: the convolution layer is constructed to effectively capture and aggregate features within local regions, a multi-head attention mechanism is added to capture and fuse the local features at the global level, and the two captured feature sets are weighted to output a depth feature sequence:
F_out = α·F_att + β·F_conv
wherein F_out represents the weighted output feature sequence combining the convolution-layer and multi-head-attention features, α and β are respectively the weighting proportions of the multi-head-attention features and the convolution-layer features, and F_att and F_conv respectively represent the multi-head-attention features and the convolution-layer features;
Step 2.1.2: construct a dense embedding layer E(·), convert the weighted output feature sequence F_out into feature embeddings, add a position vector PE to the feature embedding sequence, and feed the embedding sequence H_0 into the Transformer:
H_0 = E(F_out) + PE
wherein E(·) represents the dense embedding layer function;
Step 2.1.3: a Transformer decoder formed by stacking L encoder blocks is constructed, and a scaled dot-product attention mechanism is adopted to obtain more stable gradients; the self-attention layer ATT in encoder l learns global features from the embedding sequence H_{l-1} output by the previous encoder, and the output of the Transformer decoder is obtained by stacking:

ATT_l^h(H_{l-1}) = softmax( (H_{l-1}·W_Q)(H_{l-1}·W_K)^T / √d_k ) (H_{l-1}·W_V)
MHA_l(H_{l-1}) = [ATT_l^1(H_{l-1}); …; ATT_l^h(H_{l-1})]·W_O

wherein ATT_l(H_{l-1}) denotes the features learned by the self-attention layer of encoder l, W_Q, W_K and W_V are the trainable parameters of the self-attention layer, d_k is the dimension of the embedding sequence H_{l-1}, MHA_l(H_{l-1}) is the attention-layer output after stacking, ATT_l^1 denotes the features of the 1st self-attention head, ATT_l^h denotes the features of the h-th self-attention head, and W_O is the trainable parameter of the stacked multi-head attention layer.
6. The method of claim 4, wherein forming the stacked convolutional neural network from the one-dimensional deep convolutional neural network and the Transformer model, and building a loss function to train the network, comprises:
Step 2.2.1: each Transformer decoder block adopts an MHA sub-layer and a Feed-Forward Network (FFN) sub-layer, with residual connections and layer normalization (LN) arranged between the sub-layers; the output embedding sequence H_l of encoder l is:

H̃_l = LN( MHA_l(H_{l-1}) + H_{l-1} )
H_l = LN( FFN_l(H̃_l) + H̃_l )

wherein H̃_l is the intermediate output of the sequence features of encoder l, LN(·) represents the value output after layer normalization of the features, and FFN_l(H̃_l) represents the feature transformation of the encoder sequence features learned by the feed-forward network;
Step 2.2.2: pass the attention-encoded feature vector sequence in series to two fault classification heads, each employing 4 dense layers with dropout regularization; the first classification head outputs the softmax probability of the fault type, and the second classification head outputs the softmax probability of the fault location;
Step 2.2.3: train the sequence learning model with cross entropy as the loss function, using a binary cross-entropy loss for fault/no-fault classification and a categorical cross-entropy loss for fault location:

D_Loss = −(1/n) Σ_{i=1}^{n} [ y_i·log(ŷ_i) + (1 − y_i)·log(1 − ŷ_i) ]
M_Loss = −(1/n) Σ_{i=1}^{n} Σ_{m} y_(i,m)·log(ŷ_(i,m))

wherein D_Loss and M_Loss represent the binary cross-entropy loss and the categorical cross-entropy loss respectively, y_i represents the true value, taking the value 0 or 1, and ŷ_i is the predicted value; y_(i,m) denotes whether the i-th sample belongs to the m-th fault type, ŷ_(i,m) represents the prediction that the i-th sample belongs to the m-th fault type, and n represents the number of scalar values in the model;
Step 2.2.4: according to the value of the loss function, calculating the gradient through a back propagation algorithm, transmitting the gradient back to each layer of the network to adjust network parameters so as to minimize the loss function, and repeatedly training all samples in the training set until the preset training round number is reached or the loss function reaches a satisfactory value.
7. The method of claim 4, wherein using the trained stacked neural network model to perform state sensing on new unknown data comprises:
Step 2.3.1: after training is completed, the generalization capability and performance of the model are verified by using a test set;
Step 2.3.2: evaluate the state-sensing neural network model using the root mean square error, with the evaluation index given by:

RMSE = √( (1/N) Σ_{i=1}^{N} (x_i − x′_i)² )

where i indexes the training samples, N is the number of training data points, x_i is the actual fault distance, and x′_i represents the fault distance estimated by the model.
8. A virtual power plant energy state sensing device, characterized by comprising:
the acquisition module is used for acquiring production operation data of distributed energy resources in the virtual power plant as a complete data set;
the neural network training module is used for constructing a stacked convolutional neural network by utilizing the complete data set and training to obtain a state perception model of the data set;
The testing module is used for testing the trained state sensing model by utilizing the testing set, further optimizing and improving the stacked convolutional neural network to obtain a safe and reliable state sensing model, and accurately detecting the occurrence position and type of the fault.
9. A computer device comprising a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, characterized by: the processor, when executing the computer program, implements the steps of a method for sensing energy status of a virtual power plant as claimed in any one of claims 1-7.
10. A computer storage medium, characterized by: the computer storage medium has a computer program stored thereon, which when executed by a processor, implements the steps of a method for sensing energy status of a virtual power plant as claimed in any one of claims 1 to 7.
CN202410085293.1A (filed 2024-01-22, priority 2024-01-22) — Virtual power plant energy state sensing method — Pending — CN117951577A

Priority Applications (1)

CN202410085293.1A, priority date 2024-01-22, filing date 2024-01-22: Virtual power plant energy state sensing method (CN117951577A)

Publications (1)

CN117951577A, published 2024-04-30

Family ID: 90800634

Country Status (1)

CN: CN117951577A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination