CN111736084A - Valve-regulated lead-acid storage battery health state prediction method based on improved LSTM neural network - Google Patents

Valve-regulated lead-acid storage battery health state prediction method based on improved LSTM neural network

Info

Publication number
CN111736084A
CN111736084A (application number CN202010605779.5A)
Authority
CN
China
Prior art keywords
storage battery
neural network
state
network
input
Prior art date
Legal status
Granted
Application number
CN202010605779.5A
Other languages
Chinese (zh)
Other versions
CN111736084B (en)
Inventor
舒征宇
黄志鹏
许布哲
沈佶源
胡尧
方曼琴
温馨蕊
徐西睿
陈明欣
Current Assignee
Henan Xuji Instrument Co Ltd
Original Assignee
China Three Gorges University CTGU
Priority date
Filing date
Publication date
Application filed by China Three Gorges University CTGU filed Critical China Three Gorges University CTGU
Priority to CN202010605779.5A priority Critical patent/CN111736084B/en
Publication of CN111736084A publication Critical patent/CN111736084A/en
Application granted granted Critical
Publication of CN111736084B publication Critical patent/CN111736084B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/36 - Arrangements for testing, measuring or monitoring the electrical condition of accumulators or electric batteries, e.g. capacity or state of charge [SoC]
    • G01R31/367 - Software therefor, e.g. for battery testing using modelling or look-up tables
    • G01R31/378 - Arrangements specially adapted for the type of battery or accumulator
    • G01R31/379 - Arrangements specially adapted for lead-acid batteries
    • G01R31/392 - Determining battery ageing or deterioration, e.g. state of health
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E - REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E60/00 - Enabling technologies; Technologies with a potential or indirect contribution to GHG emissions mitigation
    • Y02E60/10 - Energy storage using batteries

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Supply And Distribution Of Alternating Current (AREA)

Abstract

A method for predicting the state of health of a valve-regulated lead-acid storage battery based on an improved LSTM neural network. The float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration of the storage battery are measured every day by an online monitoring device as input data, and the capacity of the storage battery is measured by a check equalizing charge once every two months. An n-dimensional sample input x(t_i) is established with n days as the time span. Taking the battery capacity data sequence h(t_i) as the output and x(t_i) as the input, a neural network model containing a plurality of LSTM neural network units is built. In the initial state, randomly generated decimals between 0 and 1 are assigned to the weight matrix W and the bias matrix b in the network. A Dropout algorithm is introduced to improve the LSTM neural network model and its training process. The method can alleviate the low prediction accuracy and under-fitting caused by insufficient data samples, accurately predict the state of health of substation storage batteries, and improve the utilization rate of the storage batteries.

Description

Valve-regulated lead-acid storage battery health state prediction method based on improved LSTM neural network
Technical Field
The invention belongs to the technical field of artificial intelligence control of substation valve-regulated lead-acid storage batteries, and particularly relates to a method for predicting the state of health of a valve-regulated lead-acid storage battery based on an improved LSTM neural network.
Background
The valve-regulated lead-acid storage battery pack is the core of the DC power supply system of a substation, and its performance is directly related to the safe and stable operation of the whole substation; it also determines the reliability of the power supply in an accident state. In actual operation, however, the state of health of substation storage batteries is difficult to estimate. Sealed valve-regulated lead-acid storage batteries offer excellent performance, simple maintenance, convenient installation, high reliability and no environmental pollution, so they are widely applied in substation DC systems. As a standby power supply, the valve-regulated lead-acid storage battery is influenced by the operation mode of the substation and has unique operating characteristics: (1) during normal operation of the substation, the valve-regulated lead-acid storage battery pack is in a float charge state and carries essentially no load; (2) when the AC system of the substation loses power because of a grid accident, the valve-regulated lead-acid storage battery pack serves as the emergency power supply of the substation and provides DC power for the equipment. The valve-regulated lead-acid storage battery pack in the substation DC system is therefore in a float charge state for a long time, and its maximum energy storage capacity can only be measured by a check equalizing charge once every two months. As a result, data such as the actual capacity of the battery are difficult to collect, the data samples are insufficient, and the accuracy of prediction results is low.
The current practice is to replace the valve-regulated lead-acid storage batteries of a substation at a fixed interval of two years, but this practice wastes batteries and pollutes the environment. Finding a feasible method for estimating the state of health of substation valve-regulated lead-acid storage batteries, so as to effectively improve their utilization efficiency while reducing the grid accidents caused by battery failure, is therefore the technical problem to be solved.
Disclosure of Invention
The invention provides a method for predicting the state of health of a valve-regulated lead-acid storage battery based on an improved LSTM neural network, which can predict the state of health of the storage battery more accurately and quickly.
The technical scheme adopted by the invention is as follows:
the method for predicting the health state of the valve-regulated lead-acid storage battery based on the improved LSTM neural network comprises the following steps:
step 1, collecting sample data:
the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration of the storage battery are measured every day by an online monitoring device to obtain the input data, and the capacity of the storage battery is measured by a check equalizing charge once every two months;
step 2, preprocessing of sample data:
establishing an n-dimensional sample input with n days as the time span:
x(t_i) = [U_f(t_i), I_e(t_i), T_e(t_i), U_d(t_i), T_d(t_i)]
wherein U_f(t_i), I_e(t_i), T_e(t_i), U_d(t_i) and T_d(t_i) respectively denote the vectors of the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration of the storage battery over the n days, and the storage battery capacity data sequence h(t_i) is the series of actual capacity measurement results;
step 3, constructing an LSTM neural network model:
taking the battery capacity data sequence h(t_i) as the output and x(t_i) as the input, a neural network model containing a plurality of LSTM neural network units is established, wherein each LSTM neural network unit can be regarded as the state of the LSTM neural network over a different time span, and in the initial state the weight matrix W and the bias matrix b in the network are assigned randomly generated decimals between 0 and 1;
step 4, introducing a Dropout algorithm to improve the LSTM neural network model and its training process.
step 5, substituting the input samples in the test set into the trained model to obtain 12 predicted capacity values of the storage battery, with an interval of 2 months between successive values.
The method for predicting the state of health of a valve-regulated lead-acid storage battery based on an improved LSTM neural network disclosed by the invention has the following technical effects:
1) The invention provides a method for predicting the state of health of a valve-regulated lead-acid storage battery based on an improved LSTM neural network, introducing artificial intelligence technology into the state-of-health prediction of substation valve-regulated lead-acid storage batteries. Because data such as the actual capacity are difficult to acquire and the accuracy of common artificial intelligence prediction methods is low in this setting, the time span of each group of input data samples is two months, and the input samples, i.e. the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration, are all 60-dimensional vectors. Because the operating period of a substation storage battery is 2 years, the number of time-series samples collected from a single battery is only 12 groups, i.e. the results of the 12 capacity measurements made over the two-year period are used as the sample output data. A multi-level LSTM prediction model is established, and the accuracy of the prediction result is improved by means of the long- and short-term memory characteristics of the LSTM and the increased complexity of the network model.
2) To prevent the over-fitting problem caused by the increased model complexity, a Dropout optimization algorithm is introduced to improve the training process, in which the activation state of each neuron is determined according to its connection strength, i.e. the higher the connection strength, the greater the probability that the neuron is switched to the inactive state. In this way the dependence of the LSTM prediction model on particular input features is reduced and the generalization capability of the model is improved, so that the model is both highly accurate and well adapted.
3) The method provided by the invention can alleviate the low prediction accuracy and under-fitting caused by insufficient data samples while avoiding the over-fitting caused by increasing the complexity of the neural network model, improves the generalization capability of the model, and accurately predicts the state of health of substation storage batteries. It can provide a basis for substation staff to maintain or replace batteries in time, thereby improving the utilization rate of the storage batteries while ensuring their reliability and safeguarding the reliable operation of the substation and the safety of the grid. Compared with existing methods, the proposed method predicts the state of health of the storage battery more accurately and quickly.
Drawings
FIG. 1 is a flow chart of the improved Dropout optimization method.
FIG. 2 is a flow chart of the network training of the improved LSTM.
FIG. 3(a) shows the state-of-health prediction results for the substation A storage battery;
FIG. 3(b) shows the state-of-health prediction results for the substation B storage battery;
FIG. 3(c) shows the state-of-health prediction results for the substation C storage battery;
FIG. 3(d) shows the state-of-health prediction results for the substation D storage battery.
FIG. 4(a) shows the state-of-health prediction results for the substation E storage battery;
FIG. 4(b) shows the state-of-health prediction results for the substation F storage battery;
FIG. 4(c) shows the state-of-health prediction results for the substation G storage battery.
Detailed Description
The LSTM neural network is a deep neural network with long- and short-term memory functions. It mainly consists of a forget gate, an input gate and an output gate, which determine the degree to which new input or historical information is forgotten or retained.
The method for predicting the health state of the valve-regulated lead-acid storage battery based on the improved LSTM neural network comprises the following steps:
step 1, collecting sample data:
The float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration of the storage battery are measured every day by an online monitoring device to obtain the input data, and the capacity of the storage battery is measured by a check equalizing charge once every two months.
Configuration of the online monitoring device: the storage battery configuration is 2 V/300 Ah, 104 cells, 2 groups.
Step 2, preprocessing of sample data:
establishing a 60-dimensional sample input with 60 days as the time span:
x(t_i) = [U_f(t_i), I_e(t_i), T_e(t_i), U_d(t_i), T_d(t_i)]
wherein U_f(t_i), I_e(t_i), T_e(t_i), U_d(t_i) and T_d(t_i) respectively denote the vectors of the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration of the storage battery over the 60 days, and the storage battery capacity data sequence h(t_i) consists of the 12 capacity measurements taken every 2 months over the two-year period.
Step 3, constructing an LSTM neural network model:
taking the battery capacity data sequence h(t_i) as the output and x(t_i) as the input, a neural network model containing 12 LSTM neural network units is established, wherein each LSTM neural network unit can be regarded as the state of the LSTM neural network over a different time span, and in the initial state the weight matrix W and the bias matrix b in the network are assigned randomly generated decimals between 0 and 1;
step 4, introducing a Dropout algorithm to improve the LSTM neural network model and its training process.
The Dropout algorithm is a measure for preventing over-fitting in neural network training, mainly suited to cases where the neural network is complex and the network scale is large. Its core idea is to change the activation states of neurons during training of the network model, thereby reducing the dependence of the prediction result on certain local neurons, helping the training escape local optimum traps, preventing over-fitting of the model and improving its generalization capability.
The principle of the Dropout algorithm is that, in each training iteration, neurons in the network are randomly selected and their activation states are changed, and the training of the network model is completed gradually. However, the Dropout algorithm can increase the training time of the model by a factor of 2 to 3, and when the network model is complex it may even fail to converge within the allowed number of iterations.
The Dropout optimization algorithm provided by the invention takes the connection strength of a neuron as the basis of the probability of changing its activation state, which improves the convergence speed of the training process: the higher the connection strength, the greater the probability that the neuron is switched to the inactive state. In this way the dependence of the LSTM prediction model on particular input features is reduced.
step 5, substituting the input samples in the test set into the trained model to obtain 12 predicted capacity values of the storage battery, with an interval of 2 months between successive values.
In the preprocessing of the sample data in step 2,
x(t_i) is the network input of the LSTM neural network at time t_i, h(t_i) is the network output at time t_i, and C(t_i) is the cell state output of the network at time t_i;
the network input comprises the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration of the storage battery; the network output is the maximum energy storage capacity of the battery, namely:
x(t_i) = [U_f(t_i), I_e(t_i), T_e(t_i), U_d(t_i), T_d(t_i)],  h(t_i) = SOH(t_i)
wherein each element of x(t_i) is a 60-dimensional vector representing the charging and discharging information of day t_i and the 60 days preceding it: U_f(t_i) is the float charge voltage during [t_i-60, t_i]; I_e(t_i) and T_e(t_i) represent the equalizing charge current and the equalizing charge duration respectively, and if no equalizing charge is carried out on day j, the corresponding elements of these vectors are 0, i.e. I_e,j(t_i) = 0 and T_e,j(t_i) = 0; U_d(t_i) and T_d(t_i) are the discharge records of the storage battery during [t_i-60, t_i], U_d(t_i) being the vector of discharge cut-off voltages and T_d(t_i) the vector of discharge durations; if the battery is not discharged on day j, the cut-off voltage is numerically equal to the float charge voltage, i.e. U_d,j(t_i) = U_f,j(t_i); SOH(t_i) is the measured energy storage capacity of the battery at time t_i.
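For illustration only, the following Python sketch (not part of the patent) shows how one input sample x(t_i) and its corresponding output could be assembled from 60 days of monitoring records according to the zero-filling rules above; the record field names and the daily_records structure are hypothetical.

    import numpy as np

    def build_sample(daily_records, measured_capacity, span=60):
        """Assemble one (x(t_i), h(t_i)) pair from `span` days of monitoring data.

        daily_records: hypothetical list of `span` dicts, one per day, e.g.
        {"u_float": 2.23, "i_eq": 0.0, "t_eq": 0.0, "u_cut": 2.23, "t_dis": 0.0};
        measured_capacity: capacity obtained from the check equalizing charge.
        """
        assert len(daily_records) == span
        u_f = np.array([r["u_float"] for r in daily_records])        # float charge voltage
        # days without an equalizing charge keep 0 for current and duration
        i_e = np.array([r.get("i_eq", 0.0) for r in daily_records])
        t_e = np.array([r.get("t_eq", 0.0) for r in daily_records])
        # days without a discharge: cut-off voltage equals the float voltage, duration 0
        u_d = np.array([r.get("u_cut", r["u_float"]) for r in daily_records])
        t_d = np.array([r.get("t_dis", 0.0) for r in daily_records])
        x = np.stack([u_f, i_e, t_e, u_d, t_d])   # the five 60-dimensional vectors
        h = float(measured_capacity)              # SOH(t_i): measured maximum capacity
        return x, h

Over the two-year operating period this yields 12 such pairs per battery, one for each check equalizing charge.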
The step 3 comprises the following steps:
3.1, initializing the network hyper-parameters: the hyper-parameters to be set include the number of input nodes m, the number of hidden nodes k, the number of output nodes n, the learning rate η, the error threshold σ and the number of LSTM cells w.
3.2, initializing the weights and biases: in the initial state, decimals between 0 and 1 are randomly generated to assign the weight matrix W and the bias matrix b in the network.
The step 4 comprises the following steps:
step 4.1, forward computation to predict the storage battery capacity:
The parameters of each gate in the LSTM model are calculated and updated from the initially set parameters according to formula (1) below, and the output of the network is then obtained according to formula (2):
f(t_i) = σ(W_f · [h(t_i-1), x(t_i)] + b_f)
i(t_i) = σ(W_i · [h(t_i-1), x(t_i)] + b_i)
C̃(t_i) = tanh(W_c · [h(t_i-1), x(t_i)] + b_c)
C(t_i) = f(t_i) ⊙ C(t_i-1) + i(t_i) ⊙ C̃(t_i)
o(t_i) = σ(W_o · [h(t_i-1), x(t_i)] + b_o)        (1)
h(t_i) = o(t_i) ⊙ tanh(C(t_i))        (2)
wherein f(t_i), i(t_i), o(t_i) and C(t_i) respectively denote the forget gate output, the input gate output, the output gate output and the cell state, C̃(t_i) is the candidate cell state formed from the current input, and h(t_i) is the network output at time t_i. σ and tanh are both activation functions, σ being the sigmoid function and tanh the hyperbolic tangent function, calculated respectively as:
σ(z) = 1 / (1 + e^(-z)),  tanh(z) = (e^z - e^(-z)) / (e^z + e^(-z))
where e is the natural constant and z is the argument of the two activation functions.
W_f, W_i, W_c and W_o denote the weight matrices of the forget gate, the input gate, the current input cell state and the output gate respectively, and b_f, b_i, b_c and b_o denote the corresponding bias matrices; these 8 parameter matrices are the parameters to be solved and are gradually optimized and updated during training of the model.
⊙ denotes element-wise multiplication: when ⊙ acts on two vectors, a ⊙ b = [a_1 b_1, a_2 b_2, …, a_n b_n]; when ⊙ acts on a vector and a matrix, the vector is multiplied element-wise with the matrix in the same manner; when ⊙ acts on two matrices, the elements at the corresponding positions of the two matrices are multiplied.
W_f, W_i, W_c, W_o, b_f, b_i, b_c and b_o are obtained by network training; their specific values do not need to be set manually, but their matrix dimensions must be specified manually, and random numbers between 0 and 1 generated by a computer are used as their initial values.
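As a minimal illustrative sketch (not the patent's implementation), the following Python/NumPy code carries out one forward step of formulas (1) and (2), with the eight parameter matrices initialized to random values between 0 and 1 as described in steps 3.1 and 3.2; the input and hidden dimensions are arbitrary example values, and flattening the five 60-dimensional vectors into one input vector is purely an illustrative choice.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))                  # sigma(z) = 1 / (1 + e^(-z))

    def init_params(input_dim, hidden_dim, seed=0):
        """Random initial values in [0, 1) for the 8 parameter matrices (step 3.2)."""
        rng = np.random.default_rng(seed)
        params = {}
        for gate in ("f", "i", "c", "o"):
            params["W" + gate] = rng.random((hidden_dim, hidden_dim + input_dim))
            params["b" + gate] = rng.random(hidden_dim)
        return params

    def lstm_step(p, x_t, h_prev, c_prev):
        """One forward step according to formulas (1) and (2)."""
        z = np.concatenate([h_prev, x_t])                # [h(t_{i-1}), x(t_i)]
        f = sigmoid(p["Wf"] @ z + p["bf"])               # forget gate output f(t_i)
        i = sigmoid(p["Wi"] @ z + p["bi"])               # input gate output i(t_i)
        c_tilde = np.tanh(p["Wc"] @ z + p["bc"])         # candidate cell state
        c = f * c_prev + i * c_tilde                     # cell state C(t_i)
        o = sigmoid(p["Wo"] @ z + p["bo"])               # output gate output o(t_i)
        h = o * np.tanh(c)                               # network output h(t_i), formula (2)
        return h, c

    # example usage: 5 features x 60 days flattened to a 300-dimensional input, 16 hidden units
    params = init_params(input_dim=300, hidden_dim=16)
    h, c = np.zeros(16), np.zeros(16)
    h, c = lstm_step(params, np.random.default_rng(1).random(300), h, c)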
Step 4.2, correcting the weight and the bias parameters of the neural network according to the error of the prediction result:
After the output value of the network has been calculated according to formula (2), the error C between the predicted value and the actual value is calculated according to formula (6); if the error C is larger than the error threshold σ, the error is back-propagated and the parameters and biases in the network are updated in the reverse direction according to formula (7):
C = |h'(t_i) - h(t_i)|        (6)
W' = W - α · ∂C/∂W,  b' = b - α · ∂C/∂b        (7)
In the above formulas, h'(t_i) denotes the predicted capacity value output by the LSTM network, h(t_i) denotes the actual capacity value, α denotes the learning rate, W = [W_f, W_i, W_c, W_o] and b = [b_f, b_i, b_c, b_o] denote the weights and biases before updating, and W' and b' denote the weights and biases after updating.
4.3, neuron activation state updating:
the connection strengths of all neurons are calculated according to equation (8), and the activation states of the neurons are updated according to the probabilities calculated by equation (9).
In the proposed Dropout optimization algorithm, the connection strength of a neuron is used as the basis of the probability of changing its activation state, which improves the convergence speed of the training process. A neuron is either activated or inactivated: S_i(t) takes the value 1 or 0, indicating respectively that neuron i is in the activated or the inactivated state in the t-th iteration. The connection strength R_i(t) of neuron i is defined as:
R_i(t) = Σ_{j≠i} w_ij(t) · S_j(t)        (8)
wherein S_j(t) is the activation state of any neuron in the network other than i, and w_ij(t) ∈ W is the weight between neurons i and j in the t-th iteration. The activation state of each neuron is updated during the iteration according to formula (9), under which S_i(t+1) is set to the inactivated state with a probability that increases with R_i(t).
That is, the higher the connection strength, the greater the probability that the neuron is switched to the inactive state. In this way the dependence of the LSTM prediction model on particular input features is reduced.
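A schematic NumPy sketch of the connection-strength-based activation update follows. It treats W as a square matrix of weights w_ij between neurons, which is a simplifying assumption, and, because the exact probability mapping of formula (9) is not reproduced in the text, it uses the connection strength normalized to [0, 1] as the deactivation probability; both choices are stand-ins, not the patent's exact formulas.

    import numpy as np

    def update_activation_states(S, W, rng=None):
        """Schematic version of formulas (8)-(9): strongly connected neurons are
        more likely to be switched to the inactive state in the next iteration."""
        if rng is None:
            rng = np.random.default_rng()
        # formula (8): R_i(t) = sum over j != i of w_ij(t) * S_j(t)
        R = W @ S - np.diag(W) * S
        # assumed stand-in for formula (9): deactivation probability grows with R_i(t)
        p_deactivate = np.clip(R / max(R.max(), 1e-12), 0.0, 1.0)
        S_next = (rng.random(S.shape) >= p_deactivate).astype(int)   # 1 = active, 0 = inactive
        return S_next

    # example usage with 8 neurons, all initially active
    rng = np.random.default_rng(0)
    W = rng.random((8, 8))
    S = np.ones(8, dtype=int)
    S = update_activation_states(S, W, rng)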
step 4.4, checking whether any data in the sample time series still need to participate in training: if data remain, the corresponding input x(t_i+1) is substituted and the procedure returns to step 4.1; once all of the sample time-series data have participated in training, the procedure proceeds to step 4.5.
step 4.5, checking whether there are storage battery sample data that have not yet participated in training: if so, a new battery sample, i.e. the 12-dimensional battery time-series data, is substituted and the procedure returns to step 4.1; if no new storage battery sample data exist, the LSTM network parameter update iteration is stopped and the trained prediction model is output.
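To show how steps 4.4 and 4.5 drive the training, the sketch below loops over the 12 time steps of each battery sample and then over the remaining battery samples, applying the error threshold test at every step; the prediction and gradient functions are passed in as placeholders (here a toy linear model), not the LSTM and Dropout machinery described above.

    import numpy as np

    def train(batteries, predict_fn, grad_fn, params, sigma=0.05, alpha=0.01):
        """Schematic driver for steps 4.4-4.5: iterate over the time-series data of
        one battery, then move on to the next battery sample; parameters are updated
        by formula (7) whenever the error of formula (6) exceeds sigma."""
        for inputs, capacities in batteries:              # step 4.5: next battery sample
            for x_t, actual in zip(inputs, capacities):   # step 4.4: next time step x(t_{i+1})
                predicted = predict_fn(params, x_t)       # step 4.1 (stand-in forward pass)
                residual = predicted - actual
                if abs(residual) > sigma:                 # formula (6) against the threshold
                    for name, g in grad_fn(params, x_t, residual).items():
                        params[name] = params[name] - alpha * g      # formula (7)
                # step 4.3 (connection-strength Dropout) would update the activation states here
        return params

    # toy usage: 4 training batteries, 12 samples each, with a linear stand-in model
    rng = np.random.default_rng(0)
    batteries = [(rng.random((12, 300)), rng.random(12)) for _ in range(4)]
    params = {"w": rng.random(300), "b": 0.0}

    def predict_fn(p, x):
        return float(p["w"] @ x + p["b"])

    def grad_fn(p, x, residual):
        s = float(np.sign(residual))                      # gradient of |residual|
        return {"w": s * x, "b": s}

    params = train(batteries, predict_fn, grad_fn, params)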
Example:
Referring to FIG. 1, the improved Dropout optimization in the network training of the LSTM specifically includes:
S11, calculating the connection strength R_i(t) of neuron i: when the error between the predicted value and the true value of the model is smaller than the set threshold, the connection strength is calculated by formula (8).
S12, updating the activation state S_i(t+1) of each neuron during the iteration according to formula (9): the higher the connection strength, the greater the probability that the neuron is switched to the inactive state. In this way the dependence of the LSTM prediction model on particular input features is reduced.
Referring to FIG. 2, the network training of the LSTM based on the Dropout optimization method includes:
S21, LSTM network initialization: the number of input nodes m, the number of hidden nodes k, the number of output nodes n, the learning rate η and the error threshold σ are given, the dimensions of each weight matrix and each bias matrix are specified, and each weight matrix and bias matrix is assigned random numbers between 0 and 1 generated by a computer;
S22, preprocessing of the raw data: the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration data collected over every two-month period are used as a group of 60-dimensional network input samples x(t), the measured capacity data of the storage battery are used as the 12-dimensional battery time-series network output h(t), and training of the LSTM network model begins;
S23, obtaining the predicted capacity value h'(t_i): forward computation is performed according to formulas (1) and (2) to obtain the predicted capacity value h'(t_i);
S24, calculating the prediction error: the error C is calculated from formula (6);
S25, comparing the error C with the error threshold σ: if the error C is smaller than the threshold σ, the procedure skips to step S27; if the error C is greater than the threshold σ, the network parameters are corrected by formula (7);
S26, updating the neuron activation states: the connection strengths of all neurons are calculated according to formula (8), and the activation states of the neurons are updated according to the probabilities given by formula (9);
S27, checking whether any data in the sample time series still need to participate in training: if data remain, the corresponding input x(t_i+1) is substituted and the procedure returns to step S23; once all of the sample time-series data have participated in training, the procedure proceeds to step S28;
S28, checking whether there are storage battery sample data that have not yet participated in training: if so, a new battery sample, i.e. 12-dimensional battery time-series data, is substituted and the procedure returns to step S23; if no new storage battery sample data exist, the LSTM network parameter update iteration is stopped and the trained prediction model is output.
Analysis by calculation example:
1) Scenario parameter settings:
The method is tested on measured storage battery data from 110 kV substations in a district of a city in Hubei Province, in order to verify the effectiveness of the improved prediction model provided by the invention. The measured data of substations A, B, C and D are used as the training set, and those of substations E, F and G as the test set. The storage batteries used in the 7 substations are all Shandong Shengyang GFMD-300C batteries with a specification of 2 V/300 Ah. The specific parameter information is shown in Table 1 below:
TABLE 1 Substation battery sample parameter information
Through the collection of these samples, sample data from the 7 substations, covering a cumulative operating time of 51 years, are obtained. The training set comprises 4 substations and the test set comprises 3 substations.
2) Comparison of prediction accuracy of different models:
In this embodiment, a BP neural network (BP), a long short-term memory neural network (LSTM) and the improved neural network (LSTM-imp) are each used to train the battery state-of-health prediction model. The BP neural network predicts the value of capacity degradation from the operating data, whereas the LSTM networks before and after the improvement directly predict the state of health of the battery, taking the initial state and historical operating data of the battery as input. Each method performs step-by-step prediction, i.e. each further step is predicted from the result of the previous step. In this embodiment, four storage batteries are extracted, one from each of the four substations in the training sample set, to check the accuracy of the trained prediction models. The results are shown in FIG. 3(a), FIG. 3(b), FIG. 3(c), FIG. 3(d) and Table 2.
Because the LSTM-imp and LSTM models have more complex structures and possess 'state memory', they are advantageous for predicting long time-series data, and the accuracy of their prediction results is clearly higher than that of the BP neural network model. Table 2 gives the absolute-error-percentage statistics for different prediction time steps:
TABLE 2 Statistical results of the mean absolute error percentage on the training sample set
As shown in Table 2, when the prediction step is less than 2, the absolute error percentages of the different models are all below 5%, and the LSTM-imp and LSTM models show no obvious advantage in prediction accuracy over the BP neural network model. However, as the prediction time step increases, at prediction steps 7/8/9 the mean absolute error of the BP neural network model reaches 9.71%, with a maximum absolute error of 10.45%. The improved LSTM model provided by the invention uses a network model with state memory to process time-series data over a long time span; at prediction steps 7/8/9 the mean absolute errors of its prediction results are only 2.73% and 5.53%, with a maximum absolute error of 6.22%.
3) Comparing the generalization ability of different prediction models:
A model with a complex structure can improve prediction accuracy, but it is prone to over-fitting when the number of samples is insufficient, i.e. the generalization capability of the trained prediction model is too weak for it to be applicable to the test samples. One storage battery is randomly extracted from each of the three substations E to G to test the prediction results. The results are shown in FIG. 4(a), FIG. 4(b) and FIG. 4(c). When the trained models are checked on the 3 test-sample substations, only the improved LSTM model provided by the invention consistently maintains high accuracy. Table 3 below gives the statistics of the prediction errors of the prediction models on the different verification-sample substations:
TABLE 3 Statistical results of the mean absolute error percentage on the test sample set
As shown in Table 3, the BP neural network has low accuracy for long-time-span prediction. The traditional LSTM neural network model maintains high prediction accuracy on the training samples, but over-fitting appears on test-sample substations E and F, with mean absolute error percentages of 14.87% and 11.79% respectively and maximum absolute error percentages reaching 18.58% and 17.67%. The model provided by the invention is improved with the Dropout optimization algorithm during training and has stronger generalization capability: the absolute error percentage of its state-of-health predictions in the 3 test substations is below 3.0%, and the maximum error percentage is below 5.0%.
The invention provides an improved LSTM neural network for predicting the state of health of valve-regulated lead-acid storage batteries in substations, which takes the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration of the storage battery as input vectors to predict its energy storage capacity. Using long-time-span data as the input of the model greatly increases the complexity of the LSTM neural network and improves the accuracy of the prediction results. At the same time, to avoid over-fitting of the trained model, the Dropout algorithm is improved and the resulting Dropout optimization algorithm is used to improve the LSTM neural network, enhancing the generalization capability of the improved model. The following conclusions are drawn from the experimental comparison and analysis:
(1) The improved model has higher prediction accuracy. Owing to the state-memory function of the LSTM, the improved model provided by the invention has advantages in long-time-span data prediction: the mean absolute error percentage of its prediction results is below 3.5% and the maximum absolute error percentage is below 5.0%.
(2) The improved model effectively improves the generalization capability. Compared with the other models, the model provided by the invention obtains accurate prediction results on both the training set and the test set, whereas the conventional LSTM network model exhibits varying degrees of over-fitting.
On the basis of the LSTM neural network and in combination with the charge and discharge characteristics of substation storage batteries, the invention builds a multi-level LSTM prediction model that takes long-time-span data as its input, and improves the accuracy of the prediction result by increasing the complexity of the network model. Meanwhile, to prevent the over-fitting caused by the increased model complexity, the Dropout method is introduced into the training process to improve the generalization capability of the model; however, the ordinary Dropout algorithm increases the training time of the model by a factor of 2 to 3 and, when the network model is complex, may even fail to converge within the allowed number of iterations. The invention therefore proposes a Dropout optimization algorithm based on neuron connection strength, so that the model is highly accurate and efficient and has good adaptability.

Claims (5)

1. The method for predicting the state of health of the valve-regulated lead-acid storage battery based on the improved LSTM neural network is characterized by comprising the following steps of:
step 1, collecting sample data:
the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration of the storage battery are measured every day by an online monitoring device to obtain the input data, and the capacity of the storage battery is measured by a check equalizing charge once every two months;
step 2, preprocessing of sample data:
establishing an n-dimensional sample input with n days as the time span:
x(t_i) = [U_f(t_i), I_e(t_i), T_e(t_i), U_d(t_i), T_d(t_i)]
wherein U_f(t_i), I_e(t_i), T_e(t_i), U_d(t_i) and T_d(t_i) respectively represent the vectors of the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration of the storage battery over the n days, and the storage battery capacity data sequence h(t_i) is the series of actual capacity measurement results;
step 3, constructing an LSTM neural network model:
taking the battery capacity data sequence h(t_i) as the output and x(t_i) as the input, a neural network model containing a plurality of LSTM neural network units is established, wherein each LSTM neural network unit can be regarded as the state of the LSTM neural network over a different time span, and in the initial state the weight matrix W and the bias matrix b in the network are assigned randomly generated decimals between 0 and 1;
step 4, introducing a Dropout algorithm to improve the LSTM neural network model and its training process.
2. The method for predicting the state of health of the valve-regulated lead-acid battery based on the improved LSTM neural network according to claim 1, further comprising the following steps:
step 5, substituting the input samples in the test set into the trained model to obtain 12 predicted capacity values of the storage battery, with an interval of 2 months between successive values.
3. The improved LSTM neural network-based valve-regulated lead-acid battery state-of-health prediction method of claim 1, wherein: in the preprocessing of the sample data in step 2, x(t_i) is the network input of the LSTM neural network at time t_i, h(t_i) is the network output at time t_i, and C(t_i) is the cell state output of the network at time t_i;
the network input comprises the float charge voltage, equalizing charge current, equalizing charge duration, discharge cut-off voltage and discharge duration of the storage battery; the network output is the maximum energy storage capacity of the battery, namely:
x(t_i) = [U_f(t_i), I_e(t_i), T_e(t_i), U_d(t_i), T_d(t_i)],  h(t_i) = SOH(t_i)
wherein each element of x(t_i) is a 60-dimensional vector representing the charging and discharging information of day t_i and the 60 days preceding it: U_f(t_i) is the float charge voltage during [t_i-60, t_i]; I_e(t_i) and T_e(t_i) represent the equalizing charge current and the equalizing charge duration respectively, and if no equalizing charge is carried out on day j, the corresponding elements of these vectors are 0, i.e. I_e,j(t_i) = 0 and T_e,j(t_i) = 0; U_d(t_i) and T_d(t_i) are the discharge records of the storage battery during [t_i-60, t_i], U_d(t_i) being the vector of discharge cut-off voltages and T_d(t_i) the vector of discharge durations; if the battery is not discharged on day j, the cut-off voltage is numerically equal to the float charge voltage, i.e. U_d,j(t_i) = U_f,j(t_i); SOH(t_i) is the measured energy storage capacity of the battery at time t_i.
4. The improved LSTM neural network-based valve-regulated lead-acid battery state of health prediction method of claim 1, wherein: the step 3 comprises the following steps:
3.1, initializing the network hyper-parameters: the hyper-parameters to be set include the number of input nodes m, the number of hidden nodes k, the number of output nodes n, the learning rate η, the error threshold σ and the number of LSTM cells w;
3.2, initializing the weights and biases: in the initial state, decimals between 0 and 1 are randomly generated to assign the weight matrix W and the bias matrix b in the network.
5. The improved LSTM neural network-based valve-regulated lead-acid battery state of health prediction method of claim 1, wherein: the step 4 comprises the following steps:
step 4.1, forward computation to predict the storage battery capacity:
the parameters of each gate in the LSTM model are calculated and updated from the initially set parameters according to formula (1) below, and the output of the network is then obtained according to formula (2):
f(t_i) = σ(W_f · [h(t_i-1), x(t_i)] + b_f)
i(t_i) = σ(W_i · [h(t_i-1), x(t_i)] + b_i)
C̃(t_i) = tanh(W_c · [h(t_i-1), x(t_i)] + b_c)
C(t_i) = f(t_i) ⊙ C(t_i-1) + i(t_i) ⊙ C̃(t_i)
o(t_i) = σ(W_o · [h(t_i-1), x(t_i)] + b_o)        (1)
h(t_i) = o(t_i) ⊙ tanh(C(t_i))        (2)
wherein f(t_i), i(t_i), o(t_i) and C(t_i) respectively represent the forget gate output, the input gate output, the output gate output and the cell state, and C̃(t_i) is the candidate cell state formed from the current input; σ and tanh are both activation functions, σ being the sigmoid function and tanh the hyperbolic tangent function, calculated respectively as:
σ(z) = 1 / (1 + e^(-z)),  tanh(z) = (e^z - e^(-z)) / (e^z + e^(-z))
W_f, W_i, W_c and W_o represent the weight matrices of the forget gate, the input gate, the current input cell state and the output gate respectively, and b_f, b_i, b_c and b_o represent the corresponding bias matrices; these 8 parameter matrices are the parameters to be solved and are gradually optimized and updated during training of the model;
⊙ denotes element-wise multiplication: when ⊙ acts on two vectors, a ⊙ b = [a_1 b_1, a_2 b_2, …, a_n b_n]; when ⊙ acts on a vector and a matrix, the vector is multiplied element-wise with the matrix in the same manner; when ⊙ acts on two matrices, the elements at the corresponding positions of the two matrices are multiplied;
W_f, W_i, W_c, W_o, b_f, b_i, b_c and b_o are obtained by network training; their specific values do not need to be set manually, but their matrix dimensions must be specified manually, and random numbers between 0 and 1 generated by a computer are used as their initial values;
step 4.2, correcting the weight and the bias parameters of the neural network according to the error of the prediction result:
after the output value of the network has been calculated according to formula (2), the error C between the predicted value and the actual value is calculated according to formula (6); if the error C is larger than the error threshold σ, the error is back-propagated and the parameters and biases in the network are updated in the reverse direction according to formula (7);
C = |h'(t_i) - h(t_i)|        (6)
W' = W - α · ∂C/∂W,  b' = b - α · ∂C/∂b        (7)
in the above formulas, h'(t_i) represents the predicted capacity value output by the LSTM network, h(t_i) represents the actual capacity value, α represents the learning rate, W = [W_f, W_i, W_c, W_o] and b = [b_f, b_i, b_c, b_o] represent the weights and biases before updating, and W' and b' represent the weights and biases after updating;
4.3, neuron activation state updating:
calculating the connection strength of all neurons according to a formula (8), and updating the activation state of the neurons according to the probability calculated by the formula (9);
in the proposed Dropout optimization algorithm, the connection strength of a neuron is used as the basis of the probability of changing its activation state, which improves the convergence speed of the training process; a neuron is either activated or inactivated, S_i(t) taking the value 1 or 0 to indicate respectively that neuron i is in the activated or the inactivated state in the t-th iteration; the connection strength R_i(t) of neuron i is defined as:
R_i(t) = Σ_{j≠i} w_ij(t) · S_j(t)        (8)
wherein S_j(t) is the activation state of any neuron in the network other than i, and w_ij(t) ∈ W is the weight between neurons i and j in the t-th iteration; the activation state of each neuron is updated in the iteration process according to formula (9), under which S_i(t+1) is set to the inactivated state with a probability that increases with R_i(t);
that is, the higher the connection strength of a neuron, the greater the probability that it is switched to the inactive state; in this way, the dependence of the LSTM prediction model on particular input features is reduced;
4.4, checking whether any data in the sample time series still need to participate in training: if data remain, the corresponding input x(t_i+1) is substituted and the procedure returns to step 4.1; once all of the sample time-series data have participated in training, the procedure proceeds to step 4.5;
4.5, checking whether there are storage battery sample data that have not yet participated in training: if so, a new battery sample, i.e. the 12-dimensional battery time-series data, is substituted and the procedure returns to step 4.1; if no new storage battery sample data exist, the LSTM network parameter update iteration is stopped and the trained prediction model is output.
CN202010605779.5A 2020-06-29 2020-06-29 Valve-regulated lead-acid storage battery health state prediction method based on improved LSTM neural network Active CN111736084B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010605779.5A CN111736084B (en) 2020-06-29 2020-06-29 Valve-regulated lead-acid storage battery health state prediction method based on improved LSTM neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010605779.5A CN111736084B (en) 2020-06-29 2020-06-29 Valve-regulated lead-acid storage battery health state prediction method based on improved LSTM neural network

Publications (2)

Publication Number Publication Date
CN111736084A true CN111736084A (en) 2020-10-02
CN111736084B CN111736084B (en) 2022-05-20

Family

ID=72652142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010605779.5A Active CN111736084B (en) 2020-06-29 2020-06-29 Valve-regulated lead-acid storage battery health state prediction method based on improved LSTM neural network

Country Status (1)

Country Link
CN (1) CN111736084B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381316A (en) * 2020-11-26 2021-02-19 华侨大学 Electromechanical equipment health state prediction method based on hybrid neural network model
CN112418496A (en) * 2020-11-10 2021-02-26 国网四川省电力公司经济技术研究院 Power distribution station energy storage configuration method based on deep learning
CN112763929A (en) * 2020-12-31 2021-05-07 华东理工大学 Method and device for predicting health of battery monomer of energy storage power station system
CN113093021A (en) * 2021-03-22 2021-07-09 复旦大学 Method for improving health state of valve-controlled lead-acid storage battery based on resonant current pulse
CN113447823A (en) * 2021-05-31 2021-09-28 国网山东省电力公司滨州供电公司 Method for health prediction of storage battery pack
CN113533965A (en) * 2021-06-18 2021-10-22 天生桥二级水力发电有限公司 Storage battery performance analysis platform and method
CN114896865A (en) * 2022-04-20 2022-08-12 北京航空航天大学 Digital twin-oriented self-adaptive evolutionary neural network health state online prediction method
CN116298947A (en) * 2023-03-07 2023-06-23 中国铁塔股份有限公司黑龙江省分公司 Storage battery nuclear capacity monitoring device
CN116609676A (en) * 2023-07-14 2023-08-18 深圳先进储能材料国家工程研究中心有限公司 Method and system for monitoring state of hybrid energy storage battery based on big data processing

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101067644A (en) * 2007-04-20 2007-11-07 杭州高特电子设备有限公司 Storage battery performance analytical expert diagnosing method
CN103217651A (en) * 2013-04-18 2013-07-24 中国科学院广州能源研究所 Method and system for estimating charge state of storage battery
CN109410575A (en) * 2018-10-29 2019-03-01 北京航空航天大学 A kind of road network trend prediction method based on capsule network and the long Memory Neural Networks in short-term of nested type
US20200011932A1 (en) * 2018-07-05 2020-01-09 Nec Laboratories America, Inc. Battery capacity fading model using deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101067644A (en) * 2007-04-20 2007-11-07 杭州高特电子设备有限公司 Storage battery performance analytical expert diagnosing method
CN103217651A (en) * 2013-04-18 2013-07-24 中国科学院广州能源研究所 Method and system for estimating charge state of storage battery
US20200011932A1 (en) * 2018-07-05 2020-01-09 Nec Laboratories America, Inc. Battery capacity fading model using deep learning
CN109410575A (en) * 2018-10-29 2019-03-01 北京航空航天大学 A kind of road network trend prediction method based on capsule network and the long Memory Neural Networks in short-term of nested type

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MING TONGTONG et al.: "State of charge estimation of lithium-ion batteries based on LSTM neural network", Guangdong Electric Power *
GENG PAN et al.: "Battery SOC prediction method based on LSTM recurrent neural network", Journal of Shanghai Maritime University *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418496A (en) * 2020-11-10 2021-02-26 国网四川省电力公司经济技术研究院 Power distribution station energy storage configuration method based on deep learning
CN112381316A (en) * 2020-11-26 2021-02-19 华侨大学 Electromechanical equipment health state prediction method based on hybrid neural network model
CN112381316B (en) * 2020-11-26 2022-11-25 华侨大学 Electromechanical equipment health state prediction method based on hybrid neural network model
CN112763929B (en) * 2020-12-31 2024-03-08 华东理工大学 Method and device for predicting health of battery monomer of energy storage power station system
CN112763929A (en) * 2020-12-31 2021-05-07 华东理工大学 Method and device for predicting health of battery monomer of energy storage power station system
CN113093021A (en) * 2021-03-22 2021-07-09 复旦大学 Method for improving health state of valve-controlled lead-acid storage battery based on resonant current pulse
CN113093021B (en) * 2021-03-22 2022-02-01 复旦大学 Method for improving health state of valve-controlled lead-acid storage battery based on resonant current pulse
CN113447823A (en) * 2021-05-31 2021-09-28 国网山东省电力公司滨州供电公司 Method for health prediction of storage battery pack
CN113447823B (en) * 2021-05-31 2022-06-21 国网山东省电力公司滨州供电公司 Method for health prediction of storage battery pack
CN113533965A (en) * 2021-06-18 2021-10-22 天生桥二级水力发电有限公司 Storage battery performance analysis platform and method
CN114896865A (en) * 2022-04-20 2022-08-12 北京航空航天大学 Digital twin-oriented self-adaptive evolutionary neural network health state online prediction method
CN114896865B (en) * 2022-04-20 2024-07-23 北京航空航天大学 Digital twinning-oriented self-adaptive evolutionary neural network health state online prediction method
CN116298947B (en) * 2023-03-07 2023-11-03 中国铁塔股份有限公司黑龙江省分公司 Storage battery nuclear capacity monitoring device
CN116298947A (en) * 2023-03-07 2023-06-23 中国铁塔股份有限公司黑龙江省分公司 Storage battery nuclear capacity monitoring device
CN116609676A (en) * 2023-07-14 2023-08-18 深圳先进储能材料国家工程研究中心有限公司 Method and system for monitoring state of hybrid energy storage battery based on big data processing
CN116609676B (en) * 2023-07-14 2023-09-15 深圳先进储能材料国家工程研究中心有限公司 Method and system for monitoring state of hybrid energy storage battery based on big data processing

Also Published As

Publication number Publication date
CN111736084B (en) 2022-05-20

Similar Documents

Publication Publication Date Title
CN111736084B (en) Valve-regulated lead-acid storage battery health state prediction method based on improved LSTM neural network
CN112241608B (en) Lithium battery life prediction method based on LSTM network and transfer learning
CN111680848A (en) Battery life prediction method based on prediction model fusion and storage medium
CN103018673B (en) Method for predicating life of aerospace Ni-Cd storage battery based on improved dynamic wavelet neural network
CN113064093A (en) Energy storage battery state of charge and state of health joint estimation method and system
CN112733417B (en) Abnormal load data detection and correction method and system based on model optimization
CN108960321B (en) Battery fault prediction method for large lithium battery energy storage power station
CN113128672B (en) Lithium ion battery pack SOH estimation method based on transfer learning algorithm
CN109002781B (en) Fault prediction method for energy storage converter
CN112834927A (en) Lithium battery residual life prediction method, system, device and medium
CN111210049B (en) LSTM-based method for predicting degradation trend of lead-acid valve-controlled storage battery of transformer substation
CN113516271A (en) Wind power cluster power day-ahead prediction method based on space-time neural network
CN113361692A (en) Lithium battery residual life combined prediction method
CN113344288A (en) Method and device for predicting water level of cascade hydropower station group and computer readable storage medium
CN113705086A (en) Ultra-short-term wind power prediction method based on Elman error correction
CN115730525A (en) Rail transit UPS storage battery health state prediction method
CN114429248A (en) Transformer apparent power prediction method
CN114357670A (en) Power distribution network power consumption data abnormity early warning method based on BLS and self-encoder
CN115453399A (en) Battery pack SOH estimation method considering inconsistency
CN114726045A (en) Lithium battery SOH estimation method based on IPEA-LSTM model
CN113093014A (en) Online collaborative estimation method and system for SOH and SOC based on impedance parameters
CN117151770A (en) Attention mechanism-based LSTM carbon price prediction method and system
CN114219126B (en) Small hydropower infiltration area network load supply prediction method based on residual error correction
Alharbi et al. Short-term wind speed and temperature forecasting model based on gated recurrent unit neural networks
CN114357865A (en) Hydropower station runoff and associated source load power year scene simulation and prediction method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240722

Address after: 1003, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Tongsheng Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518000

Patentee after: Shenzhen Wanzhida Enterprise Management Co.,Ltd.

Country or region after: China

Address before: 443002 No. 8, University Road, Xiling District, Yichang, Hubei

Patentee before: CHINA THREE GORGES University

Country or region before: China

TR01 Transfer of patent right

Effective date of registration: 20240918

Address after: 461000 in Xuji smart grid industrial park, east of Weiwu Avenue and south of Shangji street, Xuchang City, Henan Province

Patentee after: HENAN XUJI METERING Co.,Ltd.

Country or region after: China

Address before: 1003, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Tongsheng Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518000

Patentee before: Shenzhen Wanzhida Enterprise Management Co.,Ltd.

Country or region before: China