CN111736084B - Valve-regulated lead-acid storage battery health state prediction method based on improved LSTM neural network - Google Patents


Publication number
CN111736084B
CN111736084B (application CN202010605779.5A)
Authority
CN
China
Prior art keywords
storage battery
neural network
network
state
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010605779.5A
Other languages
Chinese (zh)
Other versions
CN111736084A (en
Inventor
舒征宇
黄志鹏
许布哲
沈佶源
胡尧
方曼琴
温馨蕊
徐西睿
陈明欣
Current Assignee
China Three Gorges University CTGU
Original Assignee
China Three Gorges University CTGU
Priority date
Filing date
Publication date
Application filed by China Three Gorges University CTGU filed Critical China Three Gorges University CTGU
Priority to CN202010605779.5A priority Critical patent/CN111736084B/en
Publication of CN111736084A publication Critical patent/CN111736084A/en
Application granted granted Critical
Publication of CN111736084B publication Critical patent/CN111736084B/en

Classifications

    • G01R31/392 Determining battery ageing or deterioration, e.g. state of health
    • G01R31/367 Software therefor, e.g. for battery testing using modelling or look-up tables
    • G01R31/379 Arrangements specially adapted for lead-acid batteries
    • G06N3/045 Combinations of networks
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • Y02E60/10 Energy storage using batteries

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Supply And Distribution Of Alternating Current (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The method for predicting the health state of the valve-regulated lead-acid storage battery based on the improved LSTM neural network comprises: measuring the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration of the storage battery every day with an online monitoring device to obtain input data, and measuring the storage battery capacity by a check equalizing charge once every two months. An n-dimensional sample input x(t_i) spanning n days is established. Taking the storage battery capacity data sequence h(t_i) as output and x(t_i) as input, a neural network model containing a plurality of LSTM neural network units is built. In the initial state, decimals randomly generated between 0 and 1 are assigned to the weight matrix W and bias matrix b in the network. A Dropout algorithm is introduced to improve the LSTM neural network model and its training process. The method can mitigate the low prediction accuracy and under-fitting caused by insufficient data samples, accurately predict the health state of substation storage batteries, and improve storage battery utilization.

Description

Valve-regulated lead-acid storage battery health state prediction method based on improved LSTM neural network
Technical Field
The invention belongs to the technical field of artificial intelligent control of a transformer substation valve-controlled lead-acid storage battery, and particularly relates to a method for predicting the health state of a valve-controlled lead-acid storage battery based on an improved LSTM neural network.
Background
The valve-regulated lead-acid storage battery pack is the core of the direct-current power supply system, and its performance bears on the safe and stable operation of the whole transformer substation. In actual operation, however, the health state of the substation storage battery is difficult to estimate; accurate estimation would improve the power supply reliability of the storage battery in an accident state. The sealed valve-regulated lead-acid storage battery offers excellent performance, simple maintenance, convenient installation, high reliability and no environmental pollution, and is therefore widely applied in substation DC systems. Used as a standby power supply and influenced by the substation operation mode, it has unique operating characteristics: (1) during normal substation operation, the valve-regulated lead-acid battery pack is in a float charge state and effectively carries no load; (2) when the substation AC system loses power due to a grid accident, the valve-regulated lead-acid battery pack serves as the substation's emergency power supply and provides DC power to equipment. The battery pack in the substation DC system is therefore in a float charge state for long periods, and its maximum energy storage capacity can only be measured by a check equalizing charge once every two months.
The storage battery in the substation scenario thus stays in a float charge state for long periods, data such as the actual battery capacity are difficult to collect, and problems such as insufficient data samples and low prediction accuracy arise.
The current practice is to replace the substation's valve-regulated lead-acid storage batteries at a fixed interval of two years. This approach, however, wastes batteries and pollutes the environment. Finding a feasible method for estimating the health state of substation valve-regulated lead-acid storage batteries, so as to improve their utilization efficiency while reducing grid accidents caused by battery failure, is the technical problem to be solved.
Disclosure of Invention
The invention provides a method for predicting the health state of a valve-controlled lead-acid storage battery based on an improved LSTM neural network, which can more accurately and quickly predict the health state of the storage battery.
The technical scheme adopted by the invention is as follows:
the method for predicting the health state of the valve-regulated lead-acid storage battery based on the improved LSTM neural network comprises the following steps:
step 1, collecting sample data:
measuring the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration of the storage battery every day with an online monitoring device to obtain the input data, and measuring the storage battery capacity by a check equalizing charge once every two months;
step 2, preprocessing of sample data:
establishing an n-dimensional sample input with n days as the time span:

x(t_i) = [U_f(t_i), I_e(t_i), T_e(t_i), U_d(t_i), T_d(t_i)]

wherein U_f(t_i), I_e(t_i), T_e(t_i), U_d(t_i), T_d(t_i) are vectors respectively representing the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration of the storage battery within n days, and the storage battery capacity data sequence h(t_i) is the series of capacity measurement results;
step 3, constructing an LSTM neural network model:
with the battery capacity data sequence h(t_i) as output and x(t_i) as input, establishing a neural network model containing a plurality of LSTM neural network units, wherein each unit can be regarded as the state of the LSTM network over a different time span, and in the initial state assigning the weight matrix W and bias matrix b in the network with randomly generated decimals between 0 and 1;
and 4, introducing a Dropout algorithm to improve the LSTM neural network model and improving the training process of the LSTM neural network model.
Step 5, substitute the input samples of the test set into the trained model to obtain 12 predicted capacity values of the storage battery, each spaced 2 months apart.
The invention discloses a method for predicting the health state of a valve-controlled lead-acid storage battery based on an improved LSTM neural network, which has the following technical effects:
1) The invention provides a method for predicting the health state of a valve-regulated lead-acid storage battery based on an improved LSTM neural network. The method introduces artificial intelligence into the health state prediction of substation valve-regulated lead-acid storage batteries. Data such as the actual capacity are difficult to acquire and common artificial intelligence methods predict poorly in this setting, so the time span of each group of input data samples is two months, and the input samples are the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration, each a 60-dimensional vector. Because a substation storage battery operates for 2 years, only 12 groups of time-series samples are collected for a single battery; that is, the results of the 12 capacity measurements made over the two-year period serve as sample output data. A multi-level LSTM prediction model is established, and the accuracy of the prediction is improved by the long- and short-term memory characteristic of the LSTM together with the increased complexity of the network model.
2) In order to prevent the overfitting problem caused by the increase of the complexity of the model, a Dropout optimization algorithm is introduced to improve the training process, and the activation state of each neuron is determined according to the connection strength of each neuron, namely the probability that the neuron with higher connection strength is converted into the non-activation state is higher. In this way the dependence of the LSTM prediction model on the partial input features is reduced. The generalization capability of the model is improved, so that the model has high accuracy and good adaptability.
3) The proposed method can mitigate the low prediction accuracy and under-fitting caused by insufficient data samples, while avoiding the overfitting caused by increasing the complexity of the neural network model; it improves the generalization capability of the model and accurately predicts the health state of substation storage batteries. It provides a basis for substation staff to maintain or replace batteries in time, improving storage battery utilization while ensuring reliability, and safeguarding reliable substation operation and grid security. Compared with existing methods, the proposed method predicts the battery health state more accurately and rapidly.
Drawings
Fig. 1 is a flow chart of an improved Dropout optimization method.
FIG. 2 is a flow chart of network training for the improved LSTM.
FIG. 3(a) is a diagram showing the predicted state of health of the storage battery at station A;
FIG. 3(B) is a diagram showing the predicted state of health of the B-station battery;
FIG. 3(C) is a diagram showing the predicted state of health of the storage batteries in the C station;
fig. 3(D) is a diagram showing the D-station battery state of health prediction results.
FIG. 4(a) is a diagram of the results of E-station battery state of health prediction;
FIG. 4(b) is a diagram showing the results of the F-station battery state of health prediction;
fig. 4(c) is a diagram showing the results of the prediction of the state of health of the G-station storage battery.
Detailed Description
The LSTM neural network is a deep neural network with long- and short-term memory. It mainly comprises a forget gate, an input gate and an output gate, which together determine the degree to which new input or historical information is forgotten or retained.
The method for predicting the health state of the valve-regulated lead-acid storage battery based on the improved LSTM neural network comprises the following steps:
step 1, collecting sample data:
the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration input data are measured daily by the online monitoring device, and the storage battery capacity is measured by a check equalizing charge once every two months.
Configuration of the online monitoring device: 2 V/300 Ah storage batteries, 104 cells, 2 battery groups.
Step 2, preprocessing of sample data:
establishing a 60-dimensional sample input with 60 days as the time span:

x(t_i) = [U_f(t_i), I_e(t_i), T_e(t_i), U_d(t_i), T_d(t_i)]

wherein U_f(t_i), I_e(t_i), T_e(t_i), U_d(t_i), T_d(t_i) are vectors respectively representing the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration of the storage battery within 60 days, and the storage battery capacity data sequence h(t_i) comprises the results of the 12 capacity measurements taken once every 2 months over the two-year period.
Step 3, constructing an LSTM neural network model:
with the battery capacity data sequence h(t_i) as output and x(t_i) as input, establishing a neural network model containing 12 LSTM neural network units, wherein each unit can be regarded as the state of the LSTM network over a different time span, and in the initial state assigning the weight matrix W and bias matrix b in the network with randomly generated decimals between 0 and 1;
and 4, introducing a Dropout algorithm to improve the LSTM neural network model and improving the training process of the LSTM neural network model.
The Dropout algorithm is a standard remedy for overfitting in neural network training, mainly suited to networks of high complexity and large scale. Its core is to change the activation states of neurons during training, which weakens the dependence of the prediction on particular local neurons, helps the optimisation escape local-optimum traps, prevents overfitting of the model and improves its generalization capability.

The principle of the Dropout algorithm is that during each training iteration, neurons in the network are randomly selected and their activation states changed, while the training of the network model is completed step by step. However, adopting Dropout can increase model training time by a factor of 2 to 3, and a complex network model may even fail to converge within the allowed number of iterations.
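As a hedged illustration of the standard algorithm just described, the random per-neuron deactivation can be sketched as follows. The keep probability and the 1/(1-p) rescaling ("inverted dropout") are conventional choices, not specified by the patent:

```python
import numpy as np

def standard_dropout(h, p=0.5, rng=None):
    """Classic Dropout sketch: each neuron's activation is independently
    zeroed with probability p during a training iteration; surviving
    activations are rescaled by 1/(1-p) so the expected output is unchanged."""
    rng = rng or np.random.default_rng(0)
    mask = (rng.random(h.shape) >= p).astype(h.dtype)  # 1 = kept, 0 = dropped
    return h * mask / (1.0 - p)
```

Because the mask is re-drawn every iteration independently of the weights, convergence can slow markedly, which is the drawback the improved algorithm below addresses.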
The Dropout optimization algorithm provided by the invention takes the connection strength of a neuron as the probability of changing its activation state, improving the convergence speed of training: the higher the connection strength, the greater the probability that the neuron transitions to the inactive state. In this way the dependence of the LSTM prediction model on part of the input features is reduced.
Step 5, substitute the input samples of the test set into the trained model to obtain 12 predicted capacity values of the storage battery, each spaced 2 months apart.
In the sample data preprocessing of step 2:

x(t_i) is the network input of the LSTM neural network at time t_i, h(t_i) is the network output at time t_i, and C(t_i) is the cell state output of the network at time t_i.

The network input comprises the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration of the storage battery. The network output is the maximum energy storage capacity of the battery, namely:

x(t_i) = [U_f(t_i), I_e(t_i), T_e(t_i), U_d(t_i), T_d(t_i)]

Each element of x(t_i) is a vector of dimension 60, representing the charging and discharging information of day t_i and the 60 days before it, wherein U_f(t_i) is the float charge voltage during [t_i - 60, t_i]; I_e(t_i) and T_e(t_i) respectively represent the equalizing charge current magnitude and charge duration, and if no equalizing charge is performed on day j, the corresponding vector elements take the value 0, i.e.

I_e,j(t_i) = 0,  T_e,j(t_i) = 0

U_d(t_i) and T_d(t_i) are the discharge records of the storage battery during [t_i - 60, t_i]: U_d(t_i) is the vector of discharge cut-off voltages and T_d(t_i) the vector of discharge durations. If the battery is not discharged on day j, the cut-off voltage is numerically equal to the float charge voltage, i.e.

U_d,j(t_i) = U_f,j(t_i)

SOH(t_i) is the measured energy storage capacity of the storage battery at the i-th measurement.
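The zero-fill and cut-off-voltage conventions above can be sketched in Python. The record format, field names and helper name are illustrative assumptions, not from the patent:

```python
import numpy as np

def build_sample(daily_log, n=60):
    """Assemble one n-day input sample x(t_i) from daily monitoring records.

    daily_log: list of n dicts with keys 'u_float', 'i_eq', 't_eq',
    'u_cut', 't_dis'; a value of None means no equalizing charge /
    no discharge occurred that day.
    """
    assert len(daily_log) == n
    u_f = np.array([d['u_float'] for d in daily_log])
    # No equalizing charge on day j -> current and duration elements are 0
    i_e = np.array([d['i_eq'] if d['i_eq'] is not None else 0.0 for d in daily_log])
    t_e = np.array([d['t_eq'] if d['t_eq'] is not None else 0.0 for d in daily_log])
    # No discharge on day j -> cut-off voltage equals that day's float voltage
    u_d = np.array([d['u_cut'] if d['u_cut'] is not None else d['u_float']
                    for d in daily_log])
    t_d = np.array([d['t_dis'] if d['t_dis'] is not None else 0.0 for d in daily_log])
    return np.stack([u_f, i_e, t_e, u_d, t_d])  # shape (5, n)
```

For the patent's setting n would be 60, giving the five 60-dimensional component vectors of x(t_i).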
The step 3 comprises the following steps:
3.1, initializing network hyper-parameters: the hyper-parameters set include the number of input nodes m, the number of hidden nodes k, the number of output nodes n, the learning rate η, the error threshold σ, and the number of LSTM units w.
3.2, weight bias initialization: in the initial state, a decimal between 0 and 1 is randomly generated to assign a weight matrix W and a bias matrix b in the network.
The step 4 comprises the following steps:
step 4.1, forward operation prediction of the storage battery capacity:
Calculate and update the parameters of each gate in the LSTM model from the initially set parameters according to equation (1), then obtain the output of the network from equation (2):

f(t_i) = σ(W_f · [h(t_(i-1)), x(t_i)] + b_f)
i(t_i) = σ(W_i · [h(t_(i-1)), x(t_i)] + b_i)
C̃(t_i) = tanh(W_c · [h(t_(i-1)), x(t_i)] + b_c)
C(t_i) = f(t_i) ⊙ C(t_(i-1)) + i(t_i) ⊙ C̃(t_i)
o(t_i) = σ(W_o · [h(t_(i-1)), x(t_i)] + b_o)        (1)

h(t_i) = o(t_i) ⊙ tanh(C(t_i))        (2)
wherein f(t_i), i(t_i), o(t_i) and C(t_i) respectively denote the forget gate output, the input gate output, the output gate output and the cell state, and h(t_i) is the network output at time t_i. σ and tanh are both activation functions, σ being the sigmoid function and tanh the hyperbolic tangent function, calculated respectively as:
σ(z) = 1 / (1 + e^(-z))        (3)

tanh(z) = (e^z - e^(-z)) / (e^z + e^(-z))        (4)

where e is the natural constant and z is the variable used to express the two activation functions.
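The two activation functions can be written directly, here as a minimal Python sketch:

```python
import numpy as np

def sigmoid(z):
    # sigma(z) = 1 / (1 + e^(-z)); maps any real z into (0, 1),
    # which is why it is used for the three gates
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # tanh(z) = (e^z - e^(-z)) / (e^z + e^(-z)); maps into (-1, 1),
    # used for the candidate cell state and the output squashing
    return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))
```

(The explicit-exponential tanh is for illustration; in practice `np.tanh` is numerically safer for large |z|.)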
W_f, W_i, W_c, W_o denote the weight matrices of the forget gate, input gate, current input cell state and output gate respectively, and b_f, b_i, b_c, b_o the corresponding bias matrices. These 8 parameter matrices are the parameters to be solved and are optimised and updated step by step during model training.
⊙ denotes element-wise multiplication. When ⊙ acts on two vectors:

a ⊙ b = [a_1·b_1, a_2·b_2, …, a_n·b_n]        (5)

When ⊙ acts on a vector and a matrix, the vector is multiplied element-wise against the matrix by broadcasting; when it acts on two matrices, the elements at corresponding positions of the two matrices are multiplied.
W_f, W_i, W_c, W_o, b_f, b_i, b_c, b_o, 8 parameters in total, are obtained by network training; their specific values need not be set manually, but the matrix dimensions must be specified manually, and computer-generated random numbers between 0 and 1 serve as initial values.
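A minimal sketch of one forward step of such an LSTM unit, with the eight parameter matrices initialised to random values in (0, 1) as described. The dimensions and the concatenated-input form [h, x] are conventional LSTM assumptions, not spelled out by the patent:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, P):
    """One forward step following equations (1)-(2). P holds the eight
    parameter matrices W_f, W_i, W_c, W_o and b_f, b_i, b_c, b_o; each W
    acts on the concatenation [h_prev, x]."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    z = np.concatenate([h_prev, x])
    f = sigmoid(P['W_f'] @ z + P['b_f'])          # forget gate
    i = sigmoid(P['W_i'] @ z + P['b_i'])          # input gate
    c_tilde = np.tanh(P['W_c'] @ z + P['b_c'])    # candidate cell state
    c = f * c_prev + i * c_tilde                  # cell state update
    o = sigmoid(P['W_o'] @ z + P['b_o'])          # output gate
    h = o * np.tanh(c)                            # network output h(t_i)
    return h, c

def init_params(m, k, rng=None):
    """Random (0,1) initialisation of the eight parameter matrices,
    for m input nodes and k hidden nodes."""
    rng = rng or np.random.default_rng(0)
    P = {}
    for g in "fico":
        P[f'W_{g}'] = rng.random((k, k + m))
        P[f'b_{g}'] = rng.random(k)
    return P
```

Chaining `lstm_step` over the 12 bimonthly inputs x(t_1)…x(t_12) yields the capacity prediction sequence.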
Step 4.2, correcting the weight and the bias parameters of the neural network according to the error of the prediction result:
After the output value of the network is calculated according to equation (2), the error C between the predicted and actual values is calculated according to equation (6). If C is larger than the error threshold σ, the error is back-propagated and the weights and biases in the network are updated according to equation (7).

C = |h'(t_i) - h(t_i)|        (6)

W' = W - α · ∂C/∂W,  b' = b - α · ∂C/∂b        (7)

In the above, h'(t_i) denotes the predicted capacity value output by the LSTM network, h(t_i) the actual capacity value, α the learning rate, W = [W_f, W_i, W_c, W_o] and b = [b_f, b_i, b_c, b_o] the weights and biases before updating, and W' and b' the weights and biases after updating.
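The update of equation (7) is a plain gradient-descent step over the eight parameter matrices; a sketch (the backpropagation-through-time computation of the gradients themselves is omitted):

```python
import numpy as np

def apply_update(params, grads, alpha=0.01):
    """Equation (7) sketch: move every weight/bias matrix against its
    error gradient, scaled by the learning rate alpha."""
    return {name: params[name] - alpha * grads[name] for name in params}
```

Here `params` would hold W_f…b_o and `grads` the corresponding ∂C/∂W, ∂C/∂b obtained by back-propagating the error C.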
4.3, neuron activation state updating:
The connection strengths of all neurons are calculated according to equation (8), and the activation states of the neurons are updated according to the probabilities calculated by equation (9).

The proposed Dropout optimization algorithm takes the connection strength of a neuron as the probability of changing its activation state, which improves the convergence speed of training. A neuron is either activated or inactivated: S_i(t) takes the value 1 or 0, indicating respectively that neuron i is in the activated or inactivated state in the t-th iteration. The connection strength R_i(t) of neuron i is defined as:

R_i(t) = Σ_(j≠i) w_ij(t) · S_j(t)        (8)

wherein S_j(t) is the activation state of any neuron in the network other than i, and w_ij(t) ∈ W is the weight between neurons i and j in the t-th iteration. The neuron activation states are updated during iteration according to equation (9):
P(S_i(t+1) = 0) = R_i(t) / Σ_j R_j(t)        (9)
That is, the higher the connection strength, the greater the probability that the neuron transitions to the inactive state. In this way the dependence of the LSTM prediction model on part of the input features is reduced.
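A sketch of this improved Dropout update. Normalising R_i into a deactivation probability is an assumption; the patent states only that higher connection strength means a higher chance of deactivation:

```python
import numpy as np

def update_activation_states(W, S, rng=None):
    """Equations (8)-(9) sketch: compute each neuron's connection strength
    R_i = sum_{j != i} w_ij * S_j, then deactivate neuron i with probability
    proportional to R_i (normalised over all neurons, an assumption)."""
    rng = rng or np.random.default_rng(0)
    n = len(S)
    R = np.array([sum(W[i, j] * S[j] for j in range(n) if j != i)
                  for i in range(n)])
    total = R.sum()
    if total <= 0:
        return np.ones(n, dtype=int)  # degenerate case: keep all neurons active
    p_off = R / total
    return (rng.random(n) >= p_off).astype(int)  # 1 = active, 0 = inactive
```

Tying the deactivation probability to the weights, rather than drawing it uniformly, targets the strongly connected neurons whose dominance the patent aims to reduce.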
4.4, check whether data in the sample time series still need to participate in training. If training is finished, go to step 4.5; if data remain, substitute the corresponding input x(t_(i+1)) and go to step 4.1; once all the sample time-series data have participated in training, proceed to the next step.

4.5, check whether storage battery sample data exist that have not participated in training. If so, substitute the new battery sample, i.e. the 12-dimensional battery time-series data, and go to step 4.1. If no new battery sample data exist, stop the LSTM network parameter update iteration and output the trained prediction model.
Example (b):
referring to fig. 1, the network training based on LSTM specifically includes:
S11, calculate the connection strength R_i(t) of neuron i. When the error between the model's predicted value and the true value is smaller than the set threshold, the connection strength is calculated by equation (8).

S12, update the activation state S_i(t+1) of each neuron during iteration according to equation (9): the higher the connection strength, the greater the probability that the neuron transitions to the inactive state. In this way the dependence of the LSTM prediction model on part of the input features is reduced.
Referring to fig. 2, the network training of LSTM based on Dropout optimization method includes:
S21, LSTM network initialization. Give the number of input nodes m, hidden nodes k, output nodes n, the learning rate η and the error threshold σ; specify the dimensions of each weight and bias matrix; and assign each weight and bias matrix computer-generated random numbers between 0 and 1;
S22, preprocess the raw data. Take the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration collected every two months as one group of 60-dimensional network input data samples x(t), take the measured storage battery capacity data as the 12-dimensional battery time-series network output h(t), and start training the LSTM network model;
S23, perform the forward operation according to equations (1) and (2) to obtain the predicted capacity value h'(t_i);
And S24, calculating a prediction error. Calculating an error C from equation (6);
s25, comparing the magnitude of error C and error threshold σ. If the error C is smaller than the threshold σ, the next step S27 is performed; if the error C is greater than the threshold σ, the network parameter is corrected by equation (7).
And S26, updating the neuron activation state. The connection strengths of all neurons are calculated according to equation (8), and the activation states of the neurons are updated according to the probabilities calculated by equation (9).
And S27, whether data in the sample time sequence data need to participate in training. If the training is completed, go to step S28, if there is any training input x (t)i+1) Instead, the process proceeds to step S23. If the sample time series data are all involved in the training, the next step S28;
and S28, checking whether the sample data of the storage battery which does not participate in training exists. If so, a new battery sample, that is, 12-dimensional battery time-series data is substituted, and the process proceeds to step S23. And if no new storage battery sample data exists, stopping LSTM network parameter updating iteration, and outputting the trained prediction model.
Analysis by calculation example:
1) setting scene parameters:
the method is used for testing the actually measured data of the storage battery of the 110-kilovolt transformer substation in a certain jurisdiction of a certain city in Hubei province, and the effectiveness of the improved prediction model provided by the invention is checked. Wherein, the actually measured data of A, B, C, D transformer substation is used as a training set, and E, F, G is a checking set. The storage batteries used by 7 substations are all storage batteries with the specification of 2V/300Ah of GFMD-300C of Santa Yang in Shandong. The specific parameter information is shown in the following table 1:
TABLE 1 Transformer substation Battery sample parameter information
[Table 1 is provided as an image in the original patent document.]
Through the collection of these samples, sample data covering a cumulative 51 years of operation across the 7 substations were obtained. The training set comprises 4 substations and the test set 3 substations.
2) Comparison of prediction accuracy of different models:
in this embodiment, a BP neural network (BP), a long short-term memory neural network (LSTM) and the improved neural network (LSTM-imp) are each used to train the storage battery state-of-health prediction model. The BP neural network predicts the capacity degradation value from operation data, while the LSTM networks before and after improvement directly predict the battery state of health, taking the initial state and historical operation data of the storage battery as input. Each method performs step-by-step prediction, i.e. later steps are predicted from the result of the preceding step. Four storage batteries, one from each of the four training-set substations, are extracted to check the accuracy of the trained prediction model. The results are shown in Fig. 3(a)-3(d) and Table 2.
Because the LSTM-imp and LSTM models have more complex structures and possess "state memory", they are advantageous for predicting long time-series data, and their prediction accuracy is clearly higher than that of the BP neural network model. Table 2 gives the absolute error percentage statistics for different prediction steps:
TABLE 2 statistical results of the mean of absolute error percentages on the training sample set
[Table 2 is provided as an image in the original patent document.]
As shown in Table 2, when the prediction step is less than 2, the absolute error rates of the different models are all below 5%, and the LSTM-imp and LSTM models show no clear accuracy advantage over the BP neural network model. However, as the prediction time step increases, at prediction steps 7-9 the mean absolute error of the BP neural network model reaches 9.71%, with a maximum absolute error of 10.45%. Because the improved LSTM model proposed by the invention uses a network with state memory to process time-series data over a long time span, at prediction steps 7-9 the mean absolute errors of the LSTM-imp and LSTM models are only 2.73% and 5.53% respectively, with a maximum absolute error of 6.22%.
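The mean and maximum absolute error percentages tabulated here can be computed as follows; taking the error relative to the actual capacity value is an assumption, since the tables do not spell out the base of the percentage:

```python
import numpy as np

def error_stats(predicted, actual):
    """Mean and maximum absolute error percentage over a set of predictions.

    The error is taken relative to the actual SOH/capacity value; this
    base is an assumption, as the patent tables do not state it.
    """
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    pct = np.abs(predicted - actual) / actual * 100.0  # per-point |error| %
    return pct.mean(), pct.max()
```

For example, predictions of 95 and 90 against actual values of 100 give per-point errors of 5% and 10%, hence a mean of 7.5% and a maximum of 10%.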
3) Comparing the generalization ability of different prediction models:
a model with a complex structure can improve prediction accuracy, but overfitting easily occurs when the number of samples is insufficient, i.e. the trained prediction model cannot generalize to the test samples. One storage battery is randomly extracted from each of the three substations E-G for testing the prediction results. The results are shown in Fig. 4(a)-4(c). When the trained models are checked on the 3 test-sample substations, only the improved LSTM model proposed by the invention consistently maintains high accuracy. Table 3 below gives the statistical prediction errors of the models on the different verification-sample substations:
TABLE 3 statistical results of the mean absolute error percentage on the test sample set
[Table 3 is provided as an image in the original patent document.]
As shown in Table 3, the BP neural network has low accuracy in long-time-span prediction. The conventional LSTM neural network model maintains high prediction accuracy on the training samples but exhibits overfitting on test-sample substations E and F, with mean absolute error percentages of 14.87% and 11.79% and maximum absolute error percentages reaching 18.58% and 17.67% respectively. The model proposed by the invention, improved with the Dropout optimization algorithm during training, has correspondingly stronger generalization capability: its state-of-health prediction absolute error percentage on the 3 test substations is below 3.0%, with a maximum error percentage below 5.0%.
The invention provides an improved LSTM neural network for predicting the state of health of valve-regulated lead-acid storage batteries in substations, taking the float charge voltage, the uniform charge duration, the discharge cut-off voltage and the discharge duration of the storage battery as input vectors to predict its energy storage capacity. Using long-time-span data as model input greatly increases the complexity of the LSTM neural network and improves the accuracy of the prediction results. Meanwhile, to avoid overfitting of the trained model, the Dropout algorithm is improved and this Dropout optimization algorithm is used to enhance the LSTM neural network, strengthening the generalization capability of the improved model. Experimental comparison and analysis support the following conclusions:
(1) The improved model has higher prediction accuracy. Owing to the state-memory function of the LSTM, the improved model proposed by the invention is advantageous for long-time-span data prediction: the mean absolute error percentage of its predictions is below 3.5% and the maximum absolute error percentage below 5.0%.
(2) The improvement effectively strengthens generalization capability. Compared with the other models, the proposed model obtains accurate prediction results on both the training set and the test set, whereas the conventional LSTM network model exhibits varying degrees of overfitting.
Based on the LSTM neural network and the charge-discharge characteristics of substation storage batteries, the invention builds a multilevel LSTM prediction model that takes long-time-span data as its input, improving prediction accuracy by increasing the complexity of the network model. To prevent the overfitting caused by this added complexity, the Dropout method is introduced to improve the generalization capability during training; however, the standard Dropout algorithm can increase model training time by a factor of 2-3, and with a more complex network the iteration may even fail to converge. The invention therefore proposes a Dropout optimization algorithm based on neuron connection strength, so that the model is both accurate and efficient and has good adaptability.

Claims (3)

1. The method for predicting the state of health of the valve-regulated lead-acid storage battery based on the improved LSTM neural network is characterized by comprising the following steps of:
step 1, collecting sample data:
the floating charge voltage, uniform charge current, uniform charge time, discharge cut-off voltage and discharge time of the storage battery are obtained by measuring the data input by the online monitoring device every day, and the capacity of the storage battery is measured through a check equalizing charge once every two months;
step 2, sample data preprocessing:
an n-dimensional sample input with n days as the time span is established:

x(ti) = [Uf(ti), Ie(ti), Te(ti), Ud(ti), Td(ti)]

wherein Uf(ti), Ie(ti), Te(ti), Ud(ti) and Td(ti) are vectors respectively representing the float charge voltage, the uniform charge current, the uniform charge time, the discharge cut-off voltage and the discharge time of the storage battery within the n days, and the storage battery capacity data sequence h(ti) is the series of capacity actual measurement results;
in the step 2, for the sample data preprocessing, x(ti) is the network input of the LSTM neural network at time ti, h(ti) is the network output at time ti, and C(ti) is the unit state output of the network at time ti;
the network input comprises the float charge voltage, the uniform charge current, the uniform charge duration, the discharge cut-off voltage and the discharge duration of the storage battery; the network output is the maximum energy storage capacity of the battery, namely:

x(ti) = [Uf(ti), Ie(ti), Te(ti), Ud(ti), Td(ti)],  h(ti) = SOH(ti)

wherein each element in x(ti) is a vector of dimension 60, representing the charge and discharge information of the ti-th day and the preceding 60 days; Uf(ti) = [uf(ti-60), ..., uf(ti)] is the float charge voltage during [ti-60, ti]; Ie(ti) and Te(ti) respectively represent the uniform charge current magnitude and charge duration, and if no uniform charge is carried out on the j-th day, the corresponding vector elements take the value 0, namely ie(j) = te(j) = 0; Ud(ti) and Td(ti) are then the discharge records of the storage battery during [ti-60, ti], Ud(ti) being the vector of discharge cut-off voltages and Td(ti) the vector of discharge durations; if the battery is not discharged on the j-th day, the cut-off voltage is numerically equal to the float voltage, namely ud(j) = uf(j); SOH(ti) is the measured energy storage capacity of the storage battery at the ti-th measurement;
step 3, constructing an LSTM neural network model:
with the storage battery capacity data sequence h(ti) as output and x(ti) as input, a neural network model containing a plurality of LSTM neural network units is established, wherein each LSTM unit can be regarded as the state of the LSTM neural network over a different time span; in the initial state, the weight matrix W and the bias matrix b in the network are assigned by randomly generating decimals between 0 and 1;
step 4, introducing a Dropout algorithm to improve an LSTM neural network model and improving the training process of the LSTM neural network model;
the step 4 comprises the following steps:
step 4.1, forward operation prediction of the storage battery capacity:
the parameters of each gate in the LSTM model are calculated and updated from the initially set parameters according to the following formula (1), and the output result of the network is then obtained by further calculation according to formula (2):

f(ti) = σ(Wf·[h(ti-1), x(ti)] + bf)
i(ti) = σ(Wi·[h(ti-1), x(ti)] + bi)
C~(ti) = tanh(Wc·[h(ti-1), x(ti)] + bc)
o(ti) = σ(Wo·[h(ti-1), x(ti)] + bo)    (1)

C(ti) = f(ti) ⊙ C(ti-1) + i(ti) ⊙ C~(ti)
h(ti) = o(ti) ⊙ tanh(C(ti))    (2)
wherein f(ti), i(ti), o(ti), C(ti) respectively represent the forgetting gate output, the input gate output, the output gate output and the unit state; σ and tanh are both activation functions, σ being the sigmoid function and tanh the hyperbolic tangent function, calculated respectively as:

σ(x) = 1/(1 + e^(-x)),  tanh(x) = (e^x - e^(-x))/(e^x + e^(-x))
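These two activation functions can be written directly from their definitions; the following is a straightforward NumPy transcription:

```python
import numpy as np

# Sigmoid: squashes any real input into (0, 1); used by the three gates.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hyperbolic tangent: squashes into (-1, 1); used for the cell state.
# Written out explicitly for clarity; it is equivalent to np.tanh.
def tanh(x):
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
```

Both accept scalars or arrays, so they can be applied element-wise to whole gate pre-activations.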
Wf、Wi、Wc、Woweight matrices representing respectively the forgetting gate, the input gate, the current input cell state and the output gate, bf、bi、bc、boThen representing the bias matrixes of the forgetting gate, the input gate, the current input unit state and the output gate, wherein the 8 parameter matrixes are parameter matrixes to be solved and are gradually optimized and updated in the training process of the model;
⊙ denotes multiplication by element. When ⊙ acts on two vectors, the operation is:

a ⊙ b = [a1·b1, a2·b2, ..., an·bn]    (3)

When ⊙ acts on a vector and a matrix, each column of the matrix is multiplied element-wise by the vector:

a ⊙ [m1, m2, ..., mk] = [a⊙m1, a⊙m2, ..., a⊙mk]    (4)

When ⊙ acts on two matrices, the elements at corresponding positions of the two matrices are multiplied (5);
the 8 parameters Wf, Wi, Wc, Wo, bf, bi, bc and bo are obtained by network training; their specific numerical values need not be set manually, but the matrix dimensions must be specified manually, and random numbers between 0 and 1 generated by the computer serve as initial values;
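A single forward step of formulas (1) and (2), with the 0-1 random initialization of the 8 parameter matrices described above, might be sketched as follows; the dict-based parameter layout and matrix shapes are illustrative choices, not the patent's:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One forward step per formulas (1)-(2).

    W and b are dicts holding the 8 trainable parameters; each W[k] has
    shape (hidden, hidden + input) and acts on the concatenation
    [h(ti-1); x(ti)].
    """
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(W['f'] @ z + b['f'])        # forget gate
    i = sigmoid(W['i'] @ z + b['i'])        # input gate
    C_tilde = np.tanh(W['c'] @ z + b['c'])  # candidate cell state
    o = sigmoid(W['o'] @ z + b['o'])        # output gate
    C = f * C_prev + i * C_tilde            # element-wise products, eq. (2)
    h = o * np.tanh(C)                      # cell output h(ti)
    return h, C

# Random 0-1 initialization of the 8 parameter matrices, as in step 3
rng = np.random.default_rng(0)
n_in, n_hid = 5, 4
W = {k: rng.random((n_hid, n_hid + n_in)) for k in 'fico'}
b = {k: rng.random(n_hid) for k in 'fico'}
h, C = lstm_step(rng.random(n_in), np.zeros(n_hid), np.zeros(n_hid), W, b)
```

Since h = o ⊙ tanh(C) with o in (0, 1) and tanh bounded by 1, every component of the output h stays strictly inside (-1, 1).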
step 4.2, correcting the weight and the bias parameters of the neural network according to the error of the prediction result:
after the output value of the network is calculated according to formula (2), the error C between the predicted value and the actual value is calculated according to formula (6); if the error C is larger than the error threshold σ, the error is back-propagated and the weights and biases in the network are updated according to formula (7):

C = |h'(ti) - h(ti)|    (6)

W' = W - α·∂C/∂W,  b' = b - α·∂C/∂b    (7)

in the above formulas, h'(ti) represents the predicted capacity value output by the LSTM network, h(ti) denotes the actual capacity value, α denotes the learning rate, W = [Wf, Wi, Wc, Wo] and b = [bf, bi, bc, bo] represent the weights and biases before updating, and W' and b' represent the weights and biases after updating;
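Formula (7) is an ordinary gradient-descent update. The sketch below illustrates it with a finite-difference gradient purely for demonstration; in the actual model the gradient ∂C/∂W comes from back-propagation through time, not from finite differences:

```python
import numpy as np

def update_param(w, loss_fn, alpha=0.05, eps=1e-6):
    """One update W' = W - α·∂C/∂W, with a numerical (finite-difference)
    gradient standing in for the back-propagated one.
    """
    base = loss_fn(w)
    grad = np.zeros_like(w)
    for idx in np.ndindex(w.shape):
        w_plus = w.copy()
        w_plus[idx] += eps
        grad[idx] = (loss_fn(w_plus) - base) / eps  # forward difference
    return w - alpha * grad                          # formula (7)
```

Repeated application drives the parameters toward the minimum of the loss, which is exactly the role formula (7) plays inside the training iteration.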
4.3, neuron activation state updating:
calculating the connection strength of all neurons according to a formula (8), and updating the activation state of the neurons according to the probability calculated by the formula (9);
a Dropout optimization algorithm is provided in which the connection strength of a neuron is used as the probability of changing its activation state, improving the convergence speed of the training process; the state of a neuron is either activated or inactivated, Si(t) taking the value 1 or 0 to indicate that neuron i is in the activated or inactivated state in the t-th iteration; the connection strength Ri(t) of neuron i is defined as:

Ri(t) = Σ(j≠i) |wij(t)·Sj(t)|    (8)

wherein Sj(t) is the activation state of any neuron j other than i in the network, and wij(t) is the weight between neurons i and j in the t-th iteration; the neuron activation states are updated in the iterative process according to the following formula (9):
P(Si(t+1) = 0) = Ri(t) / Σk Rk(t)    (9)
that is, a neuron with higher connection strength has a higher probability of being placed in the inactive state; in this way, the dependence of the LSTM prediction model on particular input features is reduced;
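Formulas (8) and (9) might be implemented as below. Note that deactivating each neuron with probability proportional to its normalized connection strength is one plausible reading of formula (9), since the text only states that more strongly connected neurons are more likely to be dropped:

```python
import numpy as np

def dropout_update(S, W, rng):
    """Update activation states per formulas (8)-(9).

    S: 0/1 activation states of the n neurons.
    W: n x n weight matrix, W[i, j] = weight between neurons i and j.
    Ri is the summed |wij * Sj| connection strength, eq. (8); each neuron
    is then deactivated with probability proportional to its normalized
    strength -- an assumed reading of eq. (9).
    """
    n = len(S)
    R = np.array([sum(abs(W[i, j] * S[j]) for j in range(n) if j != i)
                  for i in range(n)])
    p_drop = R / R.sum() if R.sum() > 0 else np.zeros(n)
    # stay active (1) unless the draw falls below the drop probability
    return (rng.random(n) >= p_drop).astype(int)
```

When the off-diagonal weights are all zero, every connection strength Ri is zero and all neurons remain active, which matches the intent that only strongly connected neurons get dropped.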
4.4, checking whether data in the sample time series remain to participate in training; if training on the current sequence is finished, step 4.5 is carried out; if further data are needed, the corresponding input x(ti+1) is substituted and the process returns to step 4.1; if all the sample time series data have participated in training, the process proceeds to the next step;
4.5, checking whether storage battery sample data not yet used in training exist; if so, a new storage battery sample, namely 12-dimensional storage battery time series data, is substituted and the process returns to step 4.1; if no new storage battery sample data exist, the LSTM network parameter update iteration is stopped and the trained prediction model is output.
2. The improved LSTM neural network based valve regulated lead acid battery state of health prediction method of claim 1, further comprising:
and 5, substituting the input samples of the test set into the trained model to obtain 12 predicted capacity values of the storage battery, the values being spaced at intervals of 2 months.
3. The improved LSTM neural network-based valve-regulated lead-acid battery state of health prediction method of claim 1, wherein: the step 3 comprises the following steps:
3.1, initializing network hyper-parameters: the set hyper-parameters include: the number of input nodes m, the number of hidden nodes k, the number of output nodes n, the learning rate η, the error threshold σ, and the number of LSTM cells w;
3.2, weight bias initialization: in the initial state, a decimal between 0 and 1 is randomly generated to assign a weight matrix W and a bias matrix b in the network.
CN202010605779.5A 2020-06-29 2020-06-29 Valve-regulated lead-acid storage battery health state prediction method based on improved LSTM neural network Active CN111736084B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010605779.5A CN111736084B (en) 2020-06-29 2020-06-29 Valve-regulated lead-acid storage battery health state prediction method based on improved LSTM neural network

Publications (2)

Publication Number Publication Date
CN111736084A CN111736084A (en) 2020-10-02
CN111736084B true CN111736084B (en) 2022-05-20

Family

ID=72652142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010605779.5A Active CN111736084B (en) 2020-06-29 2020-06-29 Valve-regulated lead-acid storage battery health state prediction method based on improved LSTM neural network

Country Status (1)

Country Link
CN (1) CN111736084B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418496B (en) * 2020-11-10 2022-04-22 国网四川省电力公司经济技术研究院 Power distribution station energy storage configuration method based on deep learning
CN112381316B (en) * 2020-11-26 2022-11-25 华侨大学 Electromechanical equipment health state prediction method based on hybrid neural network model
CN112763929B (en) * 2020-12-31 2024-03-08 华东理工大学 Method and device for predicting health of battery monomer of energy storage power station system
CN113093021B (en) * 2021-03-22 2022-02-01 复旦大学 Method for improving health state of valve-controlled lead-acid storage battery based on resonant current pulse
CN113447823B (en) * 2021-05-31 2022-06-21 国网山东省电力公司滨州供电公司 Method for health prediction of storage battery pack
CN116298947B (en) * 2023-03-07 2023-11-03 中国铁塔股份有限公司黑龙江省分公司 Storage battery nuclear capacity monitoring device
CN116609676B (en) * 2023-07-14 2023-09-15 深圳先进储能材料国家工程研究中心有限公司 Method and system for monitoring state of hybrid energy storage battery based on big data processing

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101067644A (en) * 2007-04-20 2007-11-07 杭州高特电子设备有限公司 Storage battery performance analytical expert diagnosing method
CN103217651A (en) * 2013-04-18 2013-07-24 中国科学院广州能源研究所 Method and system for estimating charge state of storage battery
CN109410575A (en) * 2018-10-29 2019-03-01 北京航空航天大学 A kind of road network trend prediction method based on capsule network and the long Memory Neural Networks in short-term of nested type

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20200011932A1 (en) * 2018-07-05 2020-01-09 Nec Laboratories America, Inc. Battery capacity fading model using deep learning

Non-Patent Citations (2)

Title
Battery SOC prediction method based on LSTM recurrent neural network; Geng Pan et al.; Journal of Shanghai Maritime University; 30 September 2019; pp. 120-126 *
State-of-charge estimation of lithium-ion batteries based on LSTM neural network; Ming Tongtong et al.; Guangdong Electric Power; 31 March 2020; abstract on p. 26, section 1 (p. 27) through the end of p. 31 *

Also Published As

Publication number Publication date
CN111736084A (en) 2020-10-02

Similar Documents

Publication Publication Date Title
CN111736084B (en) Valve-regulated lead-acid storage battery health state prediction method based on improved LSTM neural network
CN112241608B (en) Lithium battery life prediction method based on LSTM network and transfer learning
CN103018673B (en) Method for predicating life of aerospace Ni-Cd storage battery based on improved dynamic wavelet neural network
CN111680848A (en) Battery life prediction method based on prediction model fusion and storage medium
CN110687452A (en) Lithium battery capacity online prediction method based on K-means clustering and Elman neural network
CN113064093A (en) Energy storage battery state of charge and state of health joint estimation method and system
CN113128672B (en) Lithium ion battery pack SOH estimation method based on transfer learning algorithm
CN112734002B (en) Service life prediction method based on data layer and model layer joint transfer learning
CN112834927A (en) Lithium battery residual life prediction method, system, device and medium
CN114726045B (en) Lithium battery SOH estimation method based on IPEA-LSTM model
CN113361692B (en) Lithium battery remaining life combined prediction method
CN113344288A (en) Method and device for predicting water level of cascade hydropower station group and computer readable storage medium
CN113516271A (en) Wind power cluster power day-ahead prediction method based on space-time neural network
CN115453399A (en) Battery pack SOH estimation method considering inconsistency
CN113917336A (en) Lithium ion battery health state prediction method based on segment charging time and GRU
CN111815039A (en) Weekly scale wind power probability prediction method and system based on weather classification
CN110674460B (en) E-Seq2Seq technology-based data driving type unit combination intelligent decision method
Xu et al. Short-term electricity consumption forecasting method for residential users based on cluster classification and backpropagation neural network
CN117151770A (en) Attention mechanism-based LSTM carbon price prediction method and system
CN115730525A (en) Rail transit UPS storage battery health state prediction method
CN115276067A (en) Distributed energy storage voltage adjusting method adaptive to topological dynamic change of power distribution network
CN115248390A (en) Lithium ion battery SOH estimation method based on random short-term charging data
CN114357865A (en) Hydropower station runoff and associated source load power year scene simulation and prediction method thereof
CN113705086A (en) Ultra-short-term wind power prediction method based on Elman error correction
CN113094989A (en) Unmanned aerial vehicle battery life prediction method based on random configuration network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant