CN115730635A - Electric vehicle load prediction method - Google Patents


Info

Publication number
CN115730635A
CN115730635A (application CN202211559258.6A)
Authority
CN
China
Prior art keywords
data
gate
electric vehicle
load
sample data
Prior art date
Legal status
Pending
Application number
CN202211559258.6A
Other languages
Chinese (zh)
Inventor
潘庭龙
许德智
杨玮林
董越
刘子博
Current Assignee
Jiangnan University
Original Assignee
Jiangnan University
Priority date
Filing date
Publication date
Application filed by Jiangnan University
Priority to CN202211559258.6A
Publication of CN115730635A
Status: Pending

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to an electric vehicle load prediction method comprising the following steps: acquiring large-sample electric vehicle load data and processing it so that it is normalized; constructing an LSTM model and inputting the normalized large-sample data, which comprises a training set and a test set, into the LSTM model; training the LSTM model with the training set; and processing the test set with the trained LSTM model to obtain the predicted large-sample electric vehicle load. The method can accurately predict the load of large-sample electric vehicles, improves the load prediction accuracy for small-sample electric vehicles, provides a basis for power grid dispatching, and improves electric vehicle charging safety.

Description

Electric vehicle load prediction method
Technical Field
The invention relates to the technical field of load prediction, in particular to a method for predicting the load of an electric vehicle.
Background
In recent years, the electric vehicle industry has developed vigorously. As a key link between the smart grid and the intelligent transportation network, electric vehicles have attracted continuously rising research interest; their number and utilization rate are increasing rapidly in daily life, and their charging load plays a non-negligible role in power grid operation and dispatching. Load prediction for electric vehicles is therefore an important task, with significant implications for power grid dispatching, electricity market trading, charging station planning and construction, and economical, convenient travel for users.
Traditional power load prediction methods include regression analysis, the similar-day method, and the like. The large-scale integration of new load types such as distributed power supplies and electric vehicles poses great challenges to these traditional methods. Compared with traditional power loads, electric vehicles differ in charging mode, travel pattern, charging efficiency, charging frequency, and other characteristics, so the temporal distribution of their charging load does not follow the usual power load patterns. In addition, electric vehicle charging load is influenced by multiple factors such as road conditions, weather, and operating state, and is therefore highly random in time. A method that can accurately predict electric vehicle load is thus needed.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects in the prior art and provide an electric vehicle load prediction method that can accurately predict the load of large-sample electric vehicles, improve the load prediction accuracy for small-sample electric vehicles, provide a basis for power grid dispatching, and improve electric vehicle charging safety.
According to the technical scheme provided by the invention, the electric vehicle load prediction method comprises the following steps:
acquiring large sample data of electric vehicle load, and processing the large sample data to normalize the large sample data;
constructing an LSTM model, inputting the normalized large sample data into the LSTM model, wherein the normalized large sample data comprises a training set and a test set, and training the LSTM model by using the training set; and processing the test set by using the trained LSTM model to obtain the predicted load of the large-sample electric automobile.
In an embodiment of the present invention, the method for predicting the electric vehicle load further includes:
constructing a DANN model, and transferring the trained LSTM model to the DANN model to form an LSTM-DANN model;
acquiring small-sample electric vehicle load data and inputting it into the LSTM-DANN model; and taking the test set as source domain data, introducing a gradient reversal layer for the source domain data, taking the small-sample data as target domain data, and training the LSTM-DANN model so that the prediction accuracy on the target domain data is updated toward the prediction accuracy on the source domain data.
In one embodiment of the invention, the large sample data is processed according to a discretization formula to normalize the large sample data, the discretization formula being:

x_i'(t) = (x_i(t) − x_{i,min}(t)) / (x_{i,max}(t) − x_{i,min}(t))

wherein x_i(t) is the actual load value of the electric vehicle in time period t, x_i'(t) is the normalized load value of the electric vehicle in time period t, x_{i,max}(t) is the maximum load value in the electric vehicle charging load, and x_{i,min}(t) is the minimum load value in the electric vehicle charging load.
In an embodiment of the present invention, the LSTM model includes an input gate, a forget gate, an output gate, and an internal memory unit. The input gate controls how much of the large-sample data input at the current time is stored in the internal memory unit, the forget gate controls how much of the information stored at the previous time is retained at the current time, and the output gate controls how much of the internal memory unit's content is output by the LSTM model at the current time.
In an embodiment of the present invention, the calculation formula of the forget gate is:

f_t = σ(W_f·x_t + U_f·h_{t-1} + b_f)

wherein f_t is the forget gate output, W_f is the weight of the forget gate, U_f is the recurrent weight of the forget gate, b_f is the bias of the forget gate, x_t is the input value at time t, h_{t-1} is the hidden state at time t−1, and σ is the sigmoid function;

the calculation formulas of the input gate are:

i_t = σ(W_i·x_t + U_i·h_{t-1} + b_i)

a_t = tanh(W_c·x_t + U_c·h_{t-1} + b_c)

C_t = f_t ⊙ C_{t-1} + i_t ⊙ a_t

wherein a_t is the candidate input to the internal memory unit at time t, C_t is the internal memory unit state at time t, i_t is the input gate output, W_i, U_i, and b_i are the weight, recurrent weight, and bias of the input gate, W_c, U_c, and b_c are the weight, recurrent weight, and bias of the internal memory unit, tanh is the hyperbolic tangent function, and ⊙ denotes element-wise multiplication.

The calculation formulas of the output gate are:

o_t = σ(W_o·x_t + U_o·h_{t-1} + b_o)

h_t = o_t ⊙ tanh(C_t)

wherein o_t is the output gate output, W_o and U_o are the weight and recurrent weight of the output gate, and b_o is the bias of the output gate.
In an embodiment of the present invention, after the LSTM model is trained, the stability of the LSTM model is verified according to the standard deviation of the root mean square error. The root mean square error and its standard deviation are respectively computed as:

R_MSE = sqrt( (1/n) Σ_{i=1}^{n} (f_i' − f_i)² )

σ_RMSE = sqrt( (1/m) Σ_{j=1}^{m} (r_j − r̄)² )

wherein R_MSE is the root mean square error, σ_RMSE is the standard deviation, f_i' is the predicted load value, f_i is the actual load value, r_j is the root mean square error of the j-th run, and r̄ is the mean root mean square error.
In one embodiment of the invention, the method for training the LSTM-DANN model comprises the following steps:
step 1, extracting the feature vector of the domain classification network from the trained LSTM model, inputting the target domain data into a prediction classifier, and inputting the feature vector of the domain classification network and the target domain data into the domain classifier; constructing a loss function based on a generative adversarial network and setting a threshold;
step 2, the prediction classifier processes the target domain data and outputs a prediction classification result; the domain classifier processes the feature vectors of the domain classification network and outputs a domain classification result;
step 3, substituting the prediction classification result and the domain classification result into the loss function to obtain a loss gradient value; when the loss gradient value reaches the threshold, the required target domain load prediction data is obtained and training is complete; otherwise, return to step 2.
In one embodiment of the present invention, in step 1, feature vectors of domain classification networks in the LSTM model are extracted by a CNN feature extractor.
In one embodiment of the invention, the loss function is:

E(θ_f, θ_y, θ_d) = (1/n) Σ_{i=1}^{n} L_y^i(θ_f, θ_y) − (1/n) Σ_{i=1}^{n} L_d^i(θ_f, θ_d)

wherein E is the loss gradient value, θ_f is the feature vector extracted by the feature extraction network, θ_y is the prediction classification result, θ_d is the domain classification result, L_y^i is the label prediction loss of the i-th sample, L_d^i is the domain discrimination loss of the i-th sample, and n is a constant.
In one embodiment of the invention, the gradient reversal layer expressions are:

R_γ(x) = x (forward propagation)

dR_γ(x)/dx = −γI (backward propagation)

wherein R_γ(x) is the output of the reversal layer, γ is the adaptation factor, and I is the identity matrix.
Compared with the prior art, the technical scheme of the invention has the following advantages:
1. The invention builds an LSTM model to predict large-sample electric vehicle load data. Through a clustering algorithm, electric vehicles with the same behavior characteristics can be divided into the same cluster, and different clusters are modeled and analyzed separately to account for individual differences, improving the accuracy of the total load, providing a basis for power grid dispatching, and improving electric vehicle charging safety.
2. The invention constructs an LSTM-DANN model: knowledge in the trained LSTM model is transferred directly to the DANN model, so learning need not start from scratch and a large amount of training time is saved. The LSTM-DANN model updates the prediction accuracy on small-sample data toward the prediction accuracy on large-sample data, improving prediction accuracy when data samples are insufficient, providing a basis for power grid dispatching, and improving electric vehicle charging safety.
Drawings
In order that the present disclosure may be more readily and clearly understood, reference will now be made in detail to the present disclosure, examples of which are illustrated in the accompanying drawings.
FIG. 1 is a flow chart of a method of load prediction for an electric vehicle in accordance with the present invention;
FIG. 2 is a diagram of the LSTM model structure of the present invention.
Detailed Description
The present invention is further described below in conjunction with the drawings and the embodiments so that those skilled in the art can better understand the present invention and can carry out the present invention, but the embodiments are not to be construed as limiting the present invention.
Referring to fig. 1, in order to improve the accuracy of large-sample electric vehicle load prediction data, facilitate power grid dispatching and improve the charging safety of an electric vehicle, the electric vehicle load prediction method of the invention comprises the following steps:
acquiring large sample data of electric vehicle load, and processing the large sample data to normalize the large sample data;
constructing an LSTM (Long Short-Term Memory) model, inputting the normalized large sample data into the LSTM model, wherein the normalized large sample data comprises a training set and a test set, and training the LSTM model by using the training set; and processing the test set with the trained LSTM model to obtain the predicted load of the large-sample electric vehicle.
Specifically, electric vehicles differ in their driving patterns, which results in large differences in charging load. Current load prediction methods predict the group loads of different types of electric vehicles (for example, shared electric vehicles, electric taxis, and private electric cars), but the charging data are huge and disturbed by user behavior, which leads to inaccurate predictions. Therefore, electric vehicles with the same behavior characteristics can be divided into the same cluster by a clustering algorithm, and different clusters can be modeled and analyzed separately to account for individual differences and improve the accuracy of the total load. When clustering loads, considering the similarity of trend and periodicity of the load curves allows the similarity of shape and contour of the time-varying loads to be measured correctly, so that users' electricity consumption habits and characteristics can be grasped and the loads can be clustered better.
Meanwhile, deep learning methods have clear advantages in data processing. Charging load varies markedly with the time of electricity use and is strongly time-sequential data. Recurrent neural networks are a powerful and useful tool for processing time series, but they learn long-term dependencies poorly and suffer from problems such as vanishing gradients. The LSTM neural network adopted here can capture long-term dependencies and can model and extract sequence features from variable-length sequence data. After the daily electric vehicle load is processed and clustered, a separate LSTM neural network is trained for each cluster, yielding a network structure suited to each group and more accurate charging load prediction.
The embodiment of the invention takes 12 days of urban electric vehicle load data as the large-sample data. The training set is the urban electric vehicle load data of the first 10 days, with two days as one time interval: the LSTM model predicts the urban electric vehicle load of days 3 and 4 from the load data of days 1 and 2, compares the prediction with the actual loads of days 3 and 4 in the training set, and optimizes the LSTM model; it then predicts days 5 and 6 from the loads of days 3 and 4, compares with actual values, and optimizes again, repeating until the trained LSTM model is obtained. The trained LSTM model then processes the urban electric vehicle load data of days 11 and 12 in the prediction set, so that the data of days 13 and 14 can be accurately predicted. In a specific implementation, the large-sample data, training time period, training set, and prediction set can be selected according to actual needs to meet the specific electric vehicle load prediction purpose.
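The sliding two-day training scheme described above can be sketched as follows; `make_windows` and the day-indexed lists are illustrative names, not part of the patent:

```python
def make_windows(daily_loads, window=2):
    """Pair each `window`-day block with the next `window`-day block.

    daily_loads: list of per-day load series (one entry per day).
    Returns (input_block, target_block) pairs for supervised training.
    """
    pairs = []
    for start in range(0, len(daily_loads) - 2 * window + 1, window):
        x = daily_loads[start:start + window]               # e.g. days 1-2
        y = daily_loads[start + window:start + 2 * window]  # e.g. days 3-4
        pairs.append((x, y))
    return pairs

# 10 training days -> (days 1-2 -> 3-4), (3-4 -> 5-6), (5-6 -> 7-8), (7-8 -> 9-10)
days = [[i] for i in range(1, 11)]
pairs = make_windows(days)
```

Each pair is one optimization round of the repeated predict-compare-optimize loop described above.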
Further, the large sample data is processed according to a discretization formula to normalize it, the discretization formula being:

x_i'(t) = (x_i(t) − x_{i,min}(t)) / (x_{i,max}(t) − x_{i,min}(t))

wherein x_i(t) is the actual load value of the electric vehicle in time period t, x_i'(t) is the normalized load value of the electric vehicle in time period t, x_{i,max}(t) is the maximum load value in the electric vehicle charging load, and x_{i,min}(t) is the minimum load value in the electric vehicle charging load.
In particular, x_i(t) is the actual load value of the i-th sample. A discrete normalization method is adopted, that is, the data are scaled so that they fall into a small specific interval; after normalization, all load values satisfy x_i'(t) ∈ [0, 1].
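A minimal sketch of this discrete (min-max) normalization and the inverse transform used after prediction; the function names are hypothetical:

```python
def normalize_load(loads):
    """Min-max discretization: scale each load value into [0, 1]."""
    lo, hi = min(loads), max(loads)
    if hi == lo:  # constant series: map everything to 0
        return [0.0 for _ in loads]
    return [(x - lo) / (hi - lo) for x in loads]

def denormalize_load(scaled, lo, hi):
    """Inverse transform applied to model outputs to recover real loads."""
    return [x * (hi - lo) + lo for x in scaled]
```

The min/max pair must be kept so that predictions can be mapped back to physical load values.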
Furthermore, the LSTM model includes an input gate, a forget gate, an output gate, and an internal memory unit. The input gate controls how much of the large-sample data input at the current time is stored in the internal memory unit, the forget gate controls how much of the information stored at the previous time is retained at the current time, and the output gate controls how much of the internal memory unit's content is output by the LSTM model at the current time.
The calculation formula of the forget gate is:

f_t = σ(W_f·x_t + U_f·h_{t-1} + b_f)

wherein f_t is the forget gate output, W_f is the weight of the forget gate, U_f is the recurrent weight of the forget gate, b_f is the bias of the forget gate, x_t is the input value at time t, h_{t-1} is the hidden state at time t−1, and σ is the sigmoid function;

the calculation formulas of the input gate are:

i_t = σ(W_i·x_t + U_i·h_{t-1} + b_i)

a_t = tanh(W_c·x_t + U_c·h_{t-1} + b_c)

C_t = f_t ⊙ C_{t-1} + i_t ⊙ a_t

wherein a_t is the candidate input to the internal memory unit at time t, C_t is the internal memory unit state at time t, i_t is the input gate output, W_i, U_i, and b_i are the weight, recurrent weight, and bias of the input gate, W_c, U_c, and b_c are the weight, recurrent weight, and bias of the internal memory unit, tanh is the hyperbolic tangent function, and ⊙ denotes element-wise multiplication.

The calculation formulas of the output gate are:

o_t = σ(W_o·x_t + U_o·h_{t-1} + b_o)

h_t = o_t ⊙ tanh(C_t)

wherein o_t is the output gate output, W_o and U_o are the weight and recurrent weight of the output gate, and b_o is the bias of the output gate.
Specifically, as shown in fig. 2, each gate consists of a σ (sigmoid) neural network layer and an element-wise multiplication. Because the sigmoid output lies between 0 and 1, each gate mainly performs a gating function: an output close to 0 or 1 corresponds physically to the gate being closed or open. By adding and multiplying the vectors, the charging load output is finally obtained. The specific structure and operating principles of the input gate, forget gate, output gate, and internal memory unit are consistent with the prior art, known to those skilled in the art, and not described again here.
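A single forward step of the gate equations above can be sketched in numpy; the parameter dictionary `p` and the toy sizes are assumptions for illustration only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, p):
    """One LSTM step following the gate equations above.

    p holds weights W_*, recurrent weights U_*, and biases b_* for the
    forget (f), input (i), candidate (c), and output (o) gates.
    """
    f = sigmoid(p["W_f"] @ x_t + p["U_f"] @ h_prev + p["b_f"])   # forget gate
    i = sigmoid(p["W_i"] @ x_t + p["U_i"] @ h_prev + p["b_i"])   # input gate
    a = np.tanh(p["W_c"] @ x_t + p["U_c"] @ h_prev + p["b_c"])   # candidate
    C = f * C_prev + i * a                                       # memory update
    o = sigmoid(p["W_o"] @ x_t + p["U_o"] @ h_prev + p["b_o"])   # output gate
    h = o * np.tanh(C)                                           # hidden state
    return h, C

# toy sizes: 1 input feature (normalized load), 2 hidden units
rng = np.random.default_rng(0)
p = {}
for g in "fico":
    p[f"W_{g}"] = rng.standard_normal((2, 1))
    p[f"U_{g}"] = rng.standard_normal((2, 2))
    p[f"b_{g}"] = np.zeros(2)
h, C = lstm_step(np.array([0.5]), np.zeros(2), np.zeros(2), p)
```

Since h_t = o_t ⊙ tanh(C_t) with o_t ∈ (0, 1), every hidden component stays strictly inside (−1, 1).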
To obtain accurate large-sample electric vehicle load prediction data, the LSTM model must be trained to solve for the optimal weights and biases of the input gate, forget gate, output gate, and internal memory unit. Given the forward propagation of the LSTM model, its backward propagation can be derived, with all parameters updated iteratively by gradient descent; the key is to compute the partial derivatives of all parameters with respect to the loss function.
For backpropagation of the error, the error is typically propagated backward step by step through the gradients of the hidden state h(t) and memory state C(t). Two gradients are defined, the hidden state gradient δ_h(t) and the internal memory unit gradient δ_c(t) at time t:

δ_h(t) = ∂L/∂h(t)

δ_c(t) = ∂L/∂C(t)
For the derivation, the loss function L(t) is split into two parts: the loss l(t) at time t itself and the accumulated loss L(t+1) from time t+1 onward, i.e.:

L(t) = l(t) + L(t+1) if t < τ, and L(t) = l(t) if t = τ
and the hidden state gradient δ_h(τ) and internal memory unit gradient δ_c(τ) at the last sequence index position τ are:

δ_h(τ) = ∂l(τ)/∂h(τ)

δ_c(τ) = δ_h(τ) ⊙ o(τ) ⊙ (1 − tanh²(C(τ)))
δ_h(t) and δ_c(t) are then derived backward from δ_h(t+1) and δ_c(t+1).
The gradient δ_h(t) is determined by the output gradient error of this layer at time t together with the error passed back from times later than t, i.e.:

δ_h(t) = ∂l(t)/∂h(t) + (∂h(t+1)/∂h(t))ᵀ δ_h(t+1)
the difficulty of the whole LSTM backward propagation is that
Figure BDA0003983918920000094
This part of the calculation. It is observed that since h (t) = o (t) <' > tanh (C (t)), in the first term o (t), a recursion relation including h, the second term tanh (C (t)) is complicated, and the tanh function can be further expressed as:
C(t)=C(t-1)⊙f(t)+i(t)⊙a(t)
in the first term of the tanh function, f (t) contains a recurrence relation of h, and in the second term of the tanh function, i (t) and h (t) both contain a recurrence relation of h, and thus, eventually
Figure BDA0003983918920000095
The result of this part of the calculation consists of four parts. Namely:
ΔC=o(t+1)⊙[1-tanh 2 (C(t+1))]
Figure BDA0003983918920000096
And δ_c(t) consists of two parts: the gradient error δ_c(t+1) passed back from the next time step and the gradient error of this layer passed back through h(t), i.e.:

δ_c(t) = δ_c(t+1) ⊙ f(t+1) + δ_h(t) ⊙ o(t) ⊙ (1 − tanh²(C(t)))
With δ_h(t) and δ_c(t), it is straightforward to compute the parameter gradients of each gate; only the calculation for W_f is given here:

∂L/∂W_f = Σ_{t=1}^{τ} [δ_c(t) ⊙ C(t−1) ⊙ f(t) ⊙ (1 − f(t))] x(t)ᵀ
further, after the LSTM model is trained, the stability of the LSTM model is verified according to the standard deviation of the root mean square error, and the root mean square error and the standard deviation are calculated
The formulas are respectively as follows:
Figure BDA0003983918920000103
Figure BDA0003983918920000104
wherein R is MSE Root mean square error, σ RMSE Is standard deviation, f i ' As predicted load value, f i Is an actual load value, r j Is the root mean square error (rms) of the signal,
Figure BDA0003983918920000105
mean root mean square error.
In particular, the smaller the values of the root mean square error R_MSE and the standard deviation σ_RMSE, the higher the stability of the LSTM model and the more accurate the prediction.
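A minimal sketch of the two stability metrics, with hypothetical function names:

```python
import math

def rmse(pred, actual):
    """Root mean square error R_MSE between predicted and actual loads."""
    n = len(pred)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / n)

def rmse_std(rmses):
    """Standard deviation sigma_RMSE over per-run RMSE values r_j."""
    m = len(rmses)
    mean = sum(rmses) / m
    return math.sqrt(sum((r - mean) ** 2 for r in rmses) / m)
```

`rmse` scores one trained model; `rmse_std` is applied across repeated training runs to judge stability.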
Further, in order to update the prediction accuracy of the small sample data to the prediction accuracy of the large sample data and improve the prediction accuracy in the case of insufficient data samples, the method for predicting the load of the electric vehicle according to the present invention further includes:
constructing a DANN (Domain Adaptive Neural Network) model, and migrating the trained LSTM model to the DANN model to form an LSTM-DANN model;
acquiring small-sample electric vehicle load data and inputting it into the LSTM-DANN model; and taking the test set as source domain data, introducing a gradient reversal layer for the source domain data, taking the small-sample data as target domain data, and training the LSTM-DANN model so that the prediction accuracy on the target domain data is updated toward the prediction accuracy on the source domain data.
The training method of the LSTM-DANN model comprises the following steps:
step 1, extracting the feature vector of the domain classification network from the trained LSTM model, inputting the target domain data into a prediction classifier, and inputting the feature vector of the domain classification network and the target domain data into the domain classifier; constructing a loss function based on a generative adversarial network and setting a threshold;
step 2, the prediction classifier processes the target domain data and outputs a prediction classification result; the domain classifier processes the feature vectors of the domain classification network and outputs a domain classification result;
step 3, substituting the prediction classification result and the domain classification result into the loss function to obtain a loss gradient value; when the loss gradient value reaches the threshold, the required target domain load prediction data is obtained and training is complete; otherwise, return to step 2.
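The step 2 / step 3 control flow (iterate until the loss gradient value reaches the threshold) can be sketched as follows; `train_step` is a stand-in for one pass of the prediction and domain classifiers:

```python
def train_until_threshold(train_step, threshold, max_epochs=500):
    """Repeat step 2 until the loss gradient value reaches the threshold.

    train_step() runs one pass of the prediction classifier and domain
    classifier and returns the current loss gradient value.
    """
    loss = float("inf")
    for epoch in range(1, max_epochs + 1):
        loss = train_step()
        if loss <= threshold:
            return epoch, loss        # step 3: training complete
    return max_epochs, loss           # budget exhausted without converging

# toy example: a loss that halves on each call
state = {"loss": 8.0}
def fake_step():
    state["loss"] *= 0.5
    return state["loss"]

epochs, final = train_until_threshold(fake_step, threshold=1.0)
```

The `max_epochs` guard is an added safeguard, not part of the patent's description.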
Specifically, as shown in fig. 1, transfer learning in the case where the data distributions of the target domain and source domain differ but the task is the same is domain adaptation. The main task in constructing the DANN model is thus to reduce the difference between the source and target domain data distributions so as to transfer knowledge: source domain data and target domain data enter the DANN model and are trained simultaneously, and two tasks are completed in the training stage. The first task is to classify the source domain data set accurately, minimizing the data error; the second is to mix the source and target domain data sets together and maximize the domain classification error, so that the two become indistinguishable. In this way, the prediction accuracy on the small-sample data is updated toward that on the large-sample data, improving prediction accuracy when data samples are insufficient, providing a basis for power grid dispatching, and improving electric vehicle charging safety.
One adaptation method in the DANN model is adversarial adaptation, based on generative adversarial networks. A generative adversarial network is actually a combination of two networks: a generator network responsible for producing simulated data, and a discriminator network responsible for judging whether incoming data is real or generated. The generator continuously optimizes its generated data so that the discriminator cannot tell the difference, while the discriminator is optimized to judge more accurately. This relationship forms an adversarial game: based on their respective loss functions, both networks adjust their parameters via error backpropagation and an optimization method (such as gradient descent), continuously improving in performance until, at maturity, each has learned a reasonable mapping function.
Wherein the loss function of the generated network is:
L G =H(1,D(G(Z)))
wherein L is G Representing the loss of the generating network, G representing the generating network, D representing the discriminating network, H representing the cross entropy and z representing the input random data. Generally, in the judgment of the generated data, 1 represents that the data is absolutely true, and 0 represents that the data is absolutely false. H (1, D (G (Z))) represents the distance from 1 of the judgment result. It is obvious that the generation network is expected to obtain a good effect, and the discriminator is made to discriminate the generated data as true data (i.e., the smaller the distance between D (G (z)) and 1 is, the better).
The loss function of the discriminator network is:

L_D = H(1, D(x)) + H(0, D(G(z)))

wherein L_D denotes the loss of the discriminator network, x is real data, H(1, D(x)) measures the distance of the judgment on real data from 1, and H(0, D(G(z))) measures the distance of the judgment on generated data from 0. Clearly, for the discriminator to perform well, real data should be judged as real and generated data as false, i.e., the distance from 1 for real data and the distance from 0 for generated data should both be small.
Further, the loss function is:

E(θ_f, θ_y, θ_d) = (1/n) Σ_{i=1}^{n} L_y^i(θ_f, θ_y) − (1/n) Σ_{i=1}^{n} L_d^i(θ_f, θ_d)

wherein E is the loss gradient value, θ_f is the feature vector extracted by the feature extraction network, θ_y is the prediction classification result, θ_d is the domain classification result, L_y^i is the label prediction loss of the i-th sample, L_d^i is the domain discrimination loss of the i-th sample, and n is a constant.
When the domain classifier is used, the prediction classification result θ_y is taken as 0; when the prediction classifier is used, the domain classification result θ_d is taken as 0.
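A sketch of this loss with the zeroing rule, under the simplifying assumption that the per-sample losses L_y^i and L_d^i are given as plain lists:

```python
def dann_objective(label_losses, domain_losses, lam=1.0,
                   use_label=True, use_domain=True):
    """E = (1/n) * sum(L_y^i) - lam * (1/n) * sum(L_d^i).

    Zeroing a flag mirrors 'theta_y takes 0 when using the domain
    classifier' (and vice versa) in the text above; lam is an assumed
    weighting factor, taken as 1 by default.
    """
    n = len(label_losses)
    ly = sum(label_losses) / n if use_label else 0.0
    ld = sum(domain_losses) / n if use_domain else 0.0
    return ly - lam * ld
```

Minimizing E over the label term while the domain term is subtracted is what creates the adversarial push toward domain-indistinguishable features.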
Further, in order to delete some extreme data from the test set and make the target domain test results more accurate, in step 1 a CNN (Convolutional Neural Network) feature extractor is used to extract the feature vectors of the domain classification network in the LSTM model.
Specifically, the CNN feature extractor consists of three modules: convolutional layers, which extract features; sampling (pooling) layers, which select features; and fully connected layers, which classify. Each layer has multiple feature maps; each feature map extracts one feature of the input through a convolution filter and contains multiple neurons. Once a local feature is extracted by convolution, its positional relationship to the other features is determined. The input of each neuron is connected to the local receptive field of the previous layer, and each feature extraction layer is followed by a computation layer for local averaging and secondary extraction, i.e., a feature mapping layer. Each computation layer of the network consists of multiple feature mapping planes, and all neurons on one plane share equal weights.
The mapping from the input layer to the hidden layer is generally called a feature mapping: the convolutional layer yields the feature extraction layer, and pooling (which retains the most significant features) yields the feature mapping layer.
The CNN feature extractor works like a cascade of weak classifiers: the weak classifier that best meets the requirements is selected from a pool and used to eliminate unwanted data while retaining wanted data; then the next best weak classifier is chosen from the remainder and applied to the data retained by the previous stage. Finally, by connecting multiple weak classifiers in series and screening the data layer by layer, some extreme data in the test set, such as maximum or minimum loads, are deleted, and the required data is obtained. The specific details and working principle of the CNN feature extractor are consistent with the prior art and are not described again here.
Further, the gradient reversal layer is expressed as:

R_γ(x) = x

dR_γ(x)/dx = -γI

wherein R_γ(x) is the gradient reversal operation, which acts as the identity in the forward pass; I is the identity matrix; and γ is the adaptation factor by which the domain-classifier loss gradient is reversed and scaled in the backward pass.
Specifically, to avoid staged training in which the generator and discriminator parameters are fixed in turn, which makes the code harder to write, a Gradient Reversal Layer (GRL) is introduced into the DANN model. The GRL performs the identity transformation in the forward pass and automatically reverses the gradient direction in the backward pass, which is convenient for programming. The gradient reversal layer is inserted between the CNN feature extractor and the domain classifier, so that the domain classifier's loss gradient is automatically reversed before being back-propagated to the CNN feature extractor.
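A minimal sketch of such a gradient reversal layer, with hand-written forward/backward methods standing in for an autograd framework (identity forward, gradient scaled by -γ backward):

```python
import numpy as np

class GradientReversalLayer:
    """Identity in the forward pass; reverses and scales gradients backward."""

    def __init__(self, gamma=1.0):
        self.gamma = gamma  # adaptation factor

    def forward(self, x):
        return x  # R_gamma(x) = x

    def backward(self, grad_output):
        return -self.gamma * grad_output  # dR_gamma/dx = -gamma * I
```

In a framework such as PyTorch this would be written as a custom autograd function, but the two methods above capture the whole behavior.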
As shown in FIG. 1, the specific workflow of the LSTM-DANN model is as follows. First, normalize the electric vehicle load large sample data, establish an LSTM model, input the normalized large sample data, and train the LSTM model to obtain an accurate large-sample prediction result. Next, construct a DANN model and transfer the trained LSTM model to the LSTM layer of the DANN model; a CNN feature extractor extracts the feature vector of the domain classification network in the LSTM layer. Electric vehicle load small sample data are then acquired and used as target domain data: the target domain data are input into the prediction classifier, while the feature vector of the domain classification network and the target domain data are input into the domain classifier. The prediction classifier clusters the small sample data according to the driving characteristics of the electric vehicles, so that the small sample data can be predicted more accurately. The prediction classification result and the domain classification result are substituted into the loss function, the loss function value is compared with a set threshold, and the LSTM-DANN model is trained to optimize both results until the loss function value meets the threshold; the data are then inverse-normalized to obtain an accurate small-sample prediction. The large sample data may be urban electric vehicle load data and the small sample data may be community microgrid electric vehicle load data, selected according to actual needs.
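The workflow begins by normalizing the load data and ends by inverse-normalizing the prediction; that min-max scaling step (the discretization formula of claim 3) can be sketched as:

```python
import numpy as np

def normalize(load):
    """Min-max scale a load series to [0, 1]; return the scale for later use."""
    lo, hi = load.min(), load.max()
    return (load - lo) / (hi - lo), lo, hi

def denormalize(scaled, lo, hi):
    """Inverse normalization: map predictions back to the original load scale."""
    return scaled * (hi - lo) + lo
```

The minimum and maximum are kept so that model outputs can be mapped back to actual load values at the end of the workflow.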
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here. Obvious variations or modifications may be made without departing from the spirit or scope of the invention.

Claims (10)

1. An electric vehicle load prediction method, characterized by comprising:
acquiring large sample data of electric vehicle load, and processing the large sample data to normalize the large sample data;
constructing an LSTM model, inputting the normalized large sample data into the LSTM model, wherein the normalized large sample data comprises a training set and a test set, and training the LSTM model by using the training set; and processing the test set by using the trained LSTM model to obtain the predicted load of the large-sample electric automobile.
2. The electric vehicle load prediction method according to claim 1, further comprising:
constructing a DANN model, and transferring the trained LSTM model to the DANN model to form an LSTM-DANN model;
acquiring electric vehicle load small sample data and inputting it into the LSTM-DANN model; taking the test set as source domain data, introducing a gradient reversal layer, taking the small sample data as target domain data, and training the LSTM-DANN model to update the target domain prediction accuracy toward the source domain prediction accuracy.
3. The electric vehicle load prediction method according to claim 1, characterized in that: the large sample data are processed according to a discretization formula to normalize the large sample data, the discretization formula being:

x'_i(t) = (x_i(t) - x_{i,min}(t)) / (x_{i,max}(t) - x_{i,min}(t))

wherein x_i(t) is the actual load value of the electric vehicle in time period t, x'_i(t) is the normalized load value of the electric vehicle in time period t, x_{i,max}(t) is the maximum load value in the electric vehicle charging load, and x_{i,min}(t) is the minimum load value in the electric vehicle charging load.
4. The electric vehicle load prediction method according to claim 1, characterized in that: the LSTM model comprises an input gate, a forget gate, an output gate and an internal memory unit, wherein the input gate controls how much of the large sample data input at the current moment is stored into the internal memory unit, the forget gate controls how much of the internal memory unit state at the previous moment is retained at the current moment, and the output gate controls how much of the internal memory unit state is output by the LSTM model at the current moment.
5. The electric vehicle load prediction method according to claim 4, characterized in that:

the calculation formula of the forget gate is: f_t = σ(W_f x_t + U_f h_{t-1} + b_f)

wherein f_t is the forget gate output, W_f is the input weight of the forget gate, U_f is the recurrent weight of the forget gate, b_f is the bias of the forget gate, x_t is the input value at time t, h_{t-1} is the hidden state at time t-1, and σ is the sigmoid function;

the calculation formulas of the input gate are: i_t = σ(W_i x_t + U_i h_{t-1} + b_i)

a_t = tanh(W_c x_t + U_c h_{t-1} + b_c)

C_t = f_t * C_{t-1} + i_t * a_t

wherein a_t is the candidate input of the internal memory unit at time t, C_t is the internal memory unit state at time t, W_i is the input weight of the input gate, U_i is the recurrent weight of the input gate, b_i is the bias of the input gate, W_c is the input weight of the internal memory unit, U_c is the recurrent weight of the internal memory unit, b_c is the bias of the internal memory unit, i_t is the output of the input gate, and tanh is the hyperbolic tangent function;

the calculation formulas of the output gate are: o_t = σ(W_o x_t + U_o h_{t-1} + b_o)

h_t = o_t * tanh(C_t)

wherein o_t is the output of the output gate, W_o is the input weight of the output gate, U_o is the recurrent weight of the output gate, and b_o is the bias of the output gate.
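The gate equations of claim 5 can be sketched as a single NumPy cell step; scalar weights are used here for readability, whereas a real LSTM layer uses weight matrices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM cell step; p holds scalar weights W*, U* and biases b*."""
    f_t = sigmoid(p["Wf"] * x_t + p["Uf"] * h_prev + p["bf"])  # forget gate
    i_t = sigmoid(p["Wi"] * x_t + p["Ui"] * h_prev + p["bi"])  # input gate
    a_t = np.tanh(p["Wc"] * x_t + p["Uc"] * h_prev + p["bc"])  # candidate state
    c_t = f_t * c_prev + i_t * a_t                             # memory update
    o_t = sigmoid(p["Wo"] * x_t + p["Uo"] * h_prev + p["bo"])  # output gate
    h_t = o_t * np.tanh(c_t)                                   # hidden state
    return h_t, c_t
```

With all weights and biases zero, each gate evaluates to sigmoid(0) = 0.5, so the cell retains exactly half of the previous memory state.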
6. The electric vehicle load prediction method according to claim 1, characterized in that: after the LSTM model is trained, the stability of the LSTM model is verified according to the standard deviation of the root mean square error; the root mean square error and its standard deviation are respectively calculated as:

R_MSE = sqrt[ (1/n) Σ_{i=1}^{n} (f'_i - f_i)² ]

σ_RMSE = sqrt[ (1/m) Σ_{j=1}^{m} (r_j - r̄)² ]

wherein R_MSE is the root mean square error, σ_RMSE is its standard deviation, f'_i is the predicted load value, f_i is the actual load value, n is the number of samples, r_j is the root mean square error of the j-th training run, r̄ is the mean root mean square error, and m is the number of runs.
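A sketch of these two metrics, assuming the r_j are RMSE values collected over repeated training runs and using the population form (divide by m) of the standard deviation:

```python
import numpy as np

def rmse(pred, actual):
    """Root mean square error between predicted and actual load values."""
    pred, actual = np.asarray(pred), np.asarray(actual)
    return float(np.sqrt(np.mean((pred - actual) ** 2)))

def rmse_std(rmse_values):
    """Standard deviation of RMSE over repeated runs (stability check)."""
    r = np.asarray(rmse_values)
    return float(np.sqrt(np.mean((r - r.mean()) ** 2)))
```

A small σ_RMSE across runs indicates that the trained model's accuracy is stable rather than an artifact of one lucky initialization.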
7. The method of claim 2, wherein the training of the LSTM-DANN model comprises the steps of:
step 1, extracting a feature vector of a domain classification network in the trained LSTM model, inputting the target domain data into a prediction classifier, and inputting the feature vector of the domain classification network and the target domain data into the domain classifier; constructing a loss function based on the generative adversarial network, and setting a threshold value;
step 2, the prediction classifier processes the target domain data and outputs a prediction classification result; the domain classifier processes the feature vectors of the domain classification network and outputs a domain classification result;
step 3, substituting the prediction classification result and the domain classification result into the loss function to obtain a loss gradient value; when the loss gradient value reaches the threshold, the required target domain load prediction data are obtained and training ends; otherwise, return to step 2.
8. The electric vehicle load prediction method according to claim 7, characterized in that: in step 1, extracting the feature vector of the domain classification network in the LSTM model through a CNN feature extractor.
9. The method of claim 7, wherein the loss function is:

E(θ_f, θ_y, θ_d) = (1/n) Σ_{i=1}^{n} L_y^i(θ_f, θ_y) - γ · (1/n) Σ_{i=1}^{n} L_d^i(θ_f, θ_d)

wherein E is the loss gradient value, θ_f are the parameters of the feature extraction network, θ_y are the parameters of the prediction classifier, θ_d are the parameters of the domain classifier, L_y^i(θ_f, θ_y) is the label prediction loss of the i-th sample, L_d^i(θ_f, θ_d) is the discrimination loss of the i-th sample, n is the number of samples, and γ is the adaptation factor.
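A sketch of this adversarial objective, assuming the per-sample prediction and domain-discrimination losses have already been computed, and taking the γ weighting of the domain term (borrowed from the gradient reversal layer's adaptation factor) as an assumption:

```python
import numpy as np

def dann_loss(pred_losses, domain_losses, gamma=1.0):
    """Mean label-prediction loss minus gamma-weighted mean domain loss.

    Minimizing this trains the predictor while pushing the features to
    confuse the domain classifier (the adversarial part of DANN).
    """
    pred_losses = np.asarray(pred_losses)
    domain_losses = np.asarray(domain_losses)
    return float(pred_losses.mean() - gamma * domain_losses.mean())
```

The subtraction is what the gradient reversal layer implements implicitly: the domain classifier minimizes its own loss, while the feature extractor receives that gradient reversed.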
10. The electric vehicle load prediction method according to claim 2, wherein the gradient reversal layer is expressed as:

R_γ(x) = x

dR_γ(x)/dx = -γI

wherein R_γ(x) is the gradient reversal operation, which acts as the identity in the forward pass, I is the identity matrix, and γ is the adaptation factor by which the loss gradient is reversed and scaled in the backward pass.
CN202211559258.6A 2022-12-06 2022-12-06 Electric vehicle load prediction method Pending CN115730635A (en)


Publications (1)

Publication Number Publication Date
CN115730635A true CN115730635A (en) 2023-03-03


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652425A * 2020-05-29 2020-09-11 重庆工商大学 River water quality prediction method based on rough set and long and short term memory network
CN116359602A * 2023-03-07 2023-06-30 北京智芯微电子科技有限公司 Non-invasive electric vehicle charging identification method, device, medium and intelligent ammeter
CN116359602B * 2023-03-07 2024-05-03 北京智芯微电子科技有限公司 Non-invasive electric vehicle charging identification method, device, medium and intelligent ammeter
CN117932347A * 2024-03-22 2024-04-26 四川大学 Small sample time sequence prediction method and system based on resistance transfer learning

Similar Documents

Publication Publication Date Title
CN115730635A (en) Electric vehicle load prediction method
CN108181591B (en) Battery SOC value prediction method based on improved BP neural network
Huang et al. An intelligent multifeature statistical approach for the discrimination of driving conditions of a hybrid electric vehicle
CN112561156A (en) Short-term power load prediction method based on user load mode classification
Lin et al. An ensemble learning velocity prediction-based energy management strategy for a plug-in hybrid electric vehicle considering driving pattern adaptive reference SOC
CN110220725B (en) Subway wheel health state prediction method based on deep learning and BP integration
Hu et al. Electrochemical-theory-guided modeling of the conditional generative adversarial network for battery calendar aging forecast
CN111999649A (en) XGboost algorithm-based lithium battery residual life prediction method
CN103745110B (en) Method of estimating operational driving range of all-electric buses
CN112327168A (en) XGboost-based electric vehicle battery consumption prediction method
CN112258251A (en) Grey correlation-based integrated learning prediction method and system for electric vehicle battery replacement demand
CN112434848A (en) Nonlinear weighted combination wind power prediction method based on deep belief network
CN114219181A (en) Wind power probability prediction method based on transfer learning
CN113988426A (en) Electric vehicle charging load prediction method and system based on FCM clustering and LSTM
CN113406503A (en) Lithium battery SOH online estimation method based on deep neural network
CN112734094A (en) Smart city intelligent rail vehicle fault gene prediction method and system
CN115907122A (en) Regional electric vehicle charging load prediction method
CN116523177A (en) Vehicle energy consumption prediction method and device integrating mechanism and deep learning model
CN114596726B (en) Parking berth prediction method based on interpretable space-time attention mechanism
CN112327165B (en) Battery SOH prediction method based on unsupervised transfer learning
CN114580262A (en) Lithium ion battery health state estimation method
CN113033898A (en) Electrical load prediction method and system based on K-means clustering and BI-LSTM neural network
CN117150334A (en) Lithium battery multi-condition prediction method and device based on optimized BiLSTM neural network
CN116167465A (en) Solar irradiance prediction method based on multivariate time series ensemble learning
CN112465253B (en) Method and device for predicting links in urban road network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination