CN115236522A - End-to-end capacity estimation method of energy storage battery based on hybrid deep neural network - Google Patents


Info

Publication number: CN115236522A
Application number: CN202210859282.5A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 汤爱华, 蒋依汗, 张志刚, 黄渝坤, 伍心雨
Original and current assignee: Chongqing University of Technology
Application filed by Chongqing University of Technology, with priority to CN202210859282.5A
Prior art keywords: neural network, representing, energy storage, capacity estimation, storage battery

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 Arrangements for testing electric properties; arrangements for locating electric faults; arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/36 Arrangements for testing, measuring or monitoring the electrical condition of accumulators or electric batteries, e.g. capacity or state of charge [SoC]
    • G01R31/367 Software therefor, e.g. for battery testing using modelling or look-up tables
    • G01R31/385 Arrangements for measuring battery or accumulator variables
    • G01R31/387 Determining ampere-hour charge capacity or SoC
    • G01R31/388 Determining ampere-hour charge capacity or SoC involving voltage measurements
    • G01R31/396 Acquisition or processing of data for testing or for monitoring individual cells or groups of cells within a battery
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods


Abstract

The invention relates to an end-to-end capacity estimation method for an energy storage battery based on a hybrid deep neural network, comprising the following steps: acquiring battery charging voltage data of the energy storage battery; inputting the charging voltage data into a trained battery capacity estimation model and outputting the corresponding battery capacity estimate. The battery capacity estimation model includes: a convolutional neural network layer, used for extracting battery ageing features from the input data and generating a corresponding initial feature map; an attention mechanism layer, used for sequentially extracting the channel attention weight and spatial attention weight of the initial feature map and generating a corresponding weighted feature map; and a recurrent neural network layer, trained with the weighted feature map generated by the attention mechanism layer as input and outputting the corresponding battery capacity estimate. The method automatically extracts battery ageing features to realize end-to-end capacity estimation of the energy storage battery, and alleviates the tendency of deep neural networks to fall into local optima and to suffer gradient vanishing (dispersion).

Description

End-to-end capacity estimation method of energy storage battery based on hybrid deep neural network
Technical Field
The invention relates to the technical field of electrochemical energy storage management, and in particular to an end-to-end capacity estimation method for an energy storage battery based on a hybrid deep neural network.
Background
As the global energy crisis becomes more severe, the development of clean energy grows increasingly important. Owing to their high specific energy and wide operating-temperature range, lithium-ion energy storage batteries are widely applied in the field of electrochemical energy storage. However, under the influence of external environment and internal factors, a lithium battery inevitably degrades gradually during use, manifested as capacity fade and performance decline. Accurate estimation of the battery capacity therefore supports health-state evaluation and remaining-useful-life prediction, and is very important for battery state monitoring, fault diagnosis, safety early warning and other work in the electrochemical energy storage field.
In the prior art, physical-model-based estimation methods are often used to estimate the capacity of an energy storage battery; such methods are modeled for a certain type of energy storage battery using specific health factors. For example, Chinese patent publication No. CN105891715A discloses a method for estimating the health status of a lithium-ion battery, which comprises: establishing an equivalent physical model of the battery; obtaining the battery SOC-OCV curve through pulse discharge and solving the relation between the battery SOC and the open-circuit voltage; collecting battery voltage, current and temperature parameters; identifying the model parameters online on the basis of a recursive least squares method to obtain the ohmic internal resistance of the battery; and calculating the health state of the battery from the estimated value of the ohmic internal resistance. That is, the existing scheme realizes capacity estimation of the energy storage battery with a physical-model-based estimation method.
The existing physical-model-based estimation methods can deliver high precision, but they suffer from complex parameters and high computational cost. Moreover, the actual operating conditions of energy storage batteries are complex, complete charge-discharge data are difficult to obtain, and battery capacity estimation must span multiple charge-discharge cycles, all of which makes applying a physical-model-based estimation method very difficult. The prior art has therefore begun to adopt estimation methods based on deep learning models. However, most existing deep-learning-based estimation methods require manual extraction of battery ageing feature data and then complete the capacity estimation on preprocessed data; that is, they cannot realize end-to-end estimation, which limits the practical application scenarios of energy storage battery capacity estimation. Meanwhile, because the actual operating conditions of energy storage batteries are complex, existing deep learning models easily fall into local optima and suffer gradient vanishing (dispersion) when estimating battery capacity from energy-storage-battery data, so the estimation accuracy is low. How to design a method that improves both the practicability and the accuracy of energy storage battery capacity estimation is therefore an urgent technical problem to be solved.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the technical problem to be solved by the invention is: how to design an end-to-end capacity estimation method for an energy storage battery based on a hybrid deep neural network that automatically extracts battery ageing features to realize end-to-end capacity estimation and alleviates the tendency of deep neural networks to fall into local optima and to suffer gradient vanishing (dispersion), so as to improve the practicability and accuracy of energy storage battery capacity estimation.
In order to solve the technical problems, the invention adopts the following technical scheme:
an end-to-end capacity estimation method of an energy storage battery based on a hybrid deep neural network comprises the following steps:
s1: acquiring battery charging voltage data of an energy storage battery;
s2: inputting the battery charging voltage data of the energy storage battery into the trained battery capacity estimation model, and outputting a corresponding battery capacity estimation value;
the battery capacity estimation model includes:
the convolutional neural network layer is used for extracting the battery aging characteristics of the input data and generating a corresponding initial characteristic diagram;
the attention mechanism layer is used for sequentially extracting the channel attention weight and the space attention weight of the initial feature map and generating a corresponding weighted feature map;
the recurrent neural network layer is used for training with the weighted feature map generated by the attention mechanism layer as input and outputting the corresponding battery capacity estimate;
s3: and taking the battery capacity estimated value of the energy storage battery as the battery capacity estimated result.
Preferably, in step S2, the convolutional neural network layer is constructed based on a one-dimensional convolutional neural network.
Preferably, the one-dimensional convolutional neural network generates the initial feature map by the following formula:

y_i^(k) = Σ_{l=1}^{L} w_l^(k) x_{c,i+l-1}, i = 1, 2, …, T - L + 1;

in the formula: y_i^(k) represents the output of the input data for the k-th convolution kernel, and the outputs of all convolution kernels of the one-dimensional convolutional neural network together form the initial feature map; w^(k) represents the weight matrix of the k-th convolution kernel; x_c represents the original input data sequence; L represents the convolution kernel size; i represents the convolution kernel receptive field; T represents the sequence length.
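As a concrete illustration of the convolution formula above, the following NumPy sketch computes the valid one-dimensional convolution of a toy charging-voltage-like sequence with two kernels; the sequence and kernel values are illustrative only and are not taken from the patent:

```python
import numpy as np

def conv1d_single_channel(x, w, b=0.0):
    """Valid 1-D convolution (cross-correlation) of sequence x with kernel w.

    For a sequence of length T and a kernel of size L, the output of one
    convolution kernel has T - L + 1 positions, matching the formula above.
    """
    T, L = len(x), len(w)
    return np.array([np.dot(w, x[i:i + L]) + b for i in range(T - L + 1)])

def conv1d_layer(x, kernels):
    """Stack the outputs of K kernels into a (K, T - L + 1) initial feature map."""
    return np.stack([conv1d_single_channel(x, w) for w in kernels])

# toy charging-voltage-like sequence and two illustrative kernels
x = np.array([3.0, 3.2, 3.5, 3.9, 4.1, 4.2])
kernels = [np.array([0.5, 0.5]),      # moving average of the voltage
           np.array([-1.0, 1.0])]     # first difference (voltage rise per step)
F = conv1d_layer(x, kernels)
```

Each kernel yields T - L + 1 = 5 outputs here, and stacking the K = 2 kernel outputs gives the (K, T - L + 1) initial feature map.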
Preferably, in step S2, the attention mechanism layer includes a channel attention module and a spatial attention module for extracting a channel attention weight and a spatial attention weight, respectively;
generating a weighted feature map by:
s201: extracting a channel attention weight of the initial feature map through a channel attention module;
s202: multiplying the initial characteristic diagram by the channel attention weight to obtain a channel attention characteristic diagram;
s203: extracting, by a spatial attention module, a spatial attention weight of a channel attention feature map;
s204: and multiplying the initial feature map by the spatial attention weight to obtain a weighted feature map.
Preferably, in step S201, the channel attention module extracts the channel attention weight by the following formulas:

M_c(F) = softmax(Mean(F_{c_out}));

softmax(z_j) = e^{z_j} / Σ_{c=1}^{C} e^{z_c};

Mean(F_{c_out}) = (1/K) Σ_{c_out=1}^{K} F_{c_out};

in the formula: M_c(F) represents the channel attention weight; F represents the initial feature map output by the convolutional neural network layer; softmax represents the normalization of the output values by the activation function; Mean represents the tensor averaging operation; F_{c_out} represents the features on an output channel; z_j represents the output value of the j-th node; C represents the number of output nodes; z_c represents the output value of the c-th node; c_out represents the output value of an output channel; K represents the number of convolution kernels of the convolutional neural network layer, i.e. the number of output channels.
Preferably, in step S202, the channel attention feature map is calculated by the following formula:

F' = M_c(F) ⊗ F;

in the formula: F' represents the channel attention feature map; M_c(F) represents the channel attention weight; F represents the initial feature map; ⊗ represents element-wise multiplication.
Preferably, in step S203, the spatial attention module extracts the spatial attention weight by the following formula:

M_s(F') = σ(f^{n×n}([AvgPool(F'); MaxPool(F')])) = σ(f^{n×n}([F'_avg; F'_max]));

in the formula: M_s(F') represents the spatial attention weight; f^{n×n} represents a convolution operation with a kernel of size n; F' represents the channel attention feature map; AvgPool(F') represents average pooling of the channel attention feature map; MaxPool(F') represents maximum pooling of the channel attention feature map; σ represents the sigmoid activation function; [;] represents concatenation of the features of the two channels; F'_avg and F'_max represent the feature maps after average pooling and maximum pooling of the channel attention feature map, respectively.
Preferably, in step S204, the weighted feature map is calculated by the following formula:

F'' = M_s(F') ⊗ F;

in the formula: F'' represents the weighted feature map; M_s(F') represents the spatial attention weight; F represents the initial feature map; ⊗ represents element-wise multiplication.
Preferably, in step S2, the recurrent neural network layer includes a bidirectional long short-term memory (BiLSTM) neural network and a fully connected layer connected to the output of the BiLSTM network;
the BiLSTM network comprises a forget gate, an input gate and an output gate;
the forget gate is represented by the following formula:

f_t = σ(W_f h_{t-1} + W_f x_t + b_f);

in the formula: f_t represents the forget gate output; σ represents the sigmoid activation function; W_f represents the forget gate weight; h_{t-1} represents the hidden layer information at time t-1; x_t represents the input feature at time t; b_f represents the forget gate bias;
the input gate is represented by the following formulas:

i_t = σ(W_i h_{t-1} + W_i x_t + b_i);

c̃_t = tanh(W_c h_{t-1} + W_c x_t + b_c);

c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t;

in the formula: i_t represents the information retained under the control of the input gate; σ represents the sigmoid activation function; W_i and W_c represent input gate weights; h_{t-1} represents the hidden layer information at time t-1; x_t represents the input feature at time t; b_i and b_c represent input gate biases; c̃_t represents the candidate cell state, i.e. the information brought by the new input; c_t represents the cell state at time t, i.e. the updated information; tanh represents the hyperbolic tangent function; c_{t-1} represents the cell state at time t-1;
the output gate is represented by the following formulas:

o_t = σ(W_o h_{t-1} + W_o x_t + b_o);

h_t = o_t ⊙ tanh(c_t);

in the formula: o_t represents the output feature of the output gate; h_t represents the hidden layer information at time t; h_{t-1} represents the hidden layer information at time t-1; σ represents the sigmoid activation function; W_o represents the output gate weight; x_t represents the input feature at time t; b_o represents the output gate bias; tanh represents the hyperbolic tangent function; c_t represents the cell state at time t, i.e. the updated information;
the output of the fully connected layer is represented by the following formula:

y = aA^T + b;

in the formula: y represents the output of the fully connected layer, i.e. the battery capacity estimate; a represents the neuron output vector of the preceding layer; A represents the weight matrix; T represents the transpose symbol; b represents the bias.
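The gate and fully connected equations above can be sketched for a scalar hidden state as follows; the weights are illustrative values rather than trained parameters, and the shared weight per gate (the same W multiplying both h_{t-1} and x_t) mirrors the patent's notation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One step of the gate equations above, for a scalar hidden state.

    W and b hold the forget ('f'), input ('i'), candidate ('c') and output
    ('o') weights and biases; each gate sees h_{t-1} and x_t.
    """
    f_t = sigmoid(W['f'] * h_prev + W['f'] * x_t + b['f'])       # forget gate
    i_t = sigmoid(W['i'] * h_prev + W['i'] * x_t + b['i'])       # input gate
    c_tilde = np.tanh(W['c'] * h_prev + W['c'] * x_t + b['c'])   # candidate state
    c_t = f_t * c_prev + i_t * c_tilde                           # updated cell state
    o_t = sigmoid(W['o'] * h_prev + W['o'] * x_t + b['o'])       # output gate
    h_t = o_t * np.tanh(c_t)                                     # hidden state
    return h_t, c_t

# illustrative (untrained) parameters and toy input sequence
W = {'f': 0.5, 'i': 0.5, 'c': 1.0, 'o': 0.5}
b = {'f': 0.0, 'i': 0.0, 'c': 0.0, 'o': 0.0}
h, c = 0.0, 0.0
for x_t in [1.0, 0.5, -0.5]:
    h, c = lstm_step(x_t, h, c, W, b)

# fully connected output layer y = a A^T + b, with a = final hidden state here
A, b_fc = 2.0, 0.1
y = h * A + b_fc                                                 # capacity estimate
```

Because h_t = o_t ⊙ tanh(c_t) with o_t in (0, 1) and |tanh| < 1, the hidden state is always bounded in magnitude by 1.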
Preferably, the loss function of the recurrent neural network layer adopts the following mean square error loss function:

Loss = (1/n) Σ_{i=1}^{n} (y_i - ŷ_i)²;

in the formula: y_i represents the battery capacity target value at time i; ŷ_i represents the battery capacity estimate at time i; n represents the number of samples;
the optimizer of the recurrent neural network layer adopts the following adaptive moment estimation (Adam) algorithm:

m_t = β1 m_{t-1} + (1 - β1) g_t;

v_t = β2 v_{t-1} + (1 - β2) g_t²;

m̂_t = m_t / (1 - β1^t), v̂_t = v_t / (1 - β2^t);

w_{t+1} = w_t - α_t m̂_t / (√v̂_t + ε);

in the formula: m_t and m_{t-1} represent the smoothed sliding averages of the gradient at iterations t and t-1; t represents the number of iterations; β1 and β2 represent smoothing constants; g_t represents the gradient of the objective function; v_t represents the sliding average of the squared gradient; w_t and w_{t+1} represent the updated parameters at iterations t and t+1, respectively; α_t represents the learning rate at iteration t; ε = 10^{-8} avoids a divisor of 0; β1^t and β2^t represent the smoothing constants raised to the power t; m̂_t and v̂_t represent the bias-corrected sliding averages.
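A minimal NumPy sketch of the adaptive moment estimation updates above, used here to minimise a toy mean square error loss of the form just defined (the learning rate, data and iteration count are illustrative):

```python
import numpy as np

def adam_update(w, g, m, v, t, alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One adaptive-moment-estimation step following the formulas above."""
    m = beta1 * m + (1 - beta1) * g            # smoothed gradient average m_t
    v = beta2 * v + (1 - beta2) * g ** 2       # smoothed squared gradient v_t
    m_hat = m / (1 - beta1 ** t)               # bias corrections
    v_hat = v / (1 - beta2 ** t)
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# minimise the MSE loss (1/n) * sum (y_i - w x_i)^2 on a toy problem (true w = 2)
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 2001):
    g = np.mean(2 * (w * x - y) * x)           # gradient of the MSE loss w.r.t. w
    w, m, v = adam_update(w, g, m, v, t)
loss = np.mean((y - w * x) ** 2)
```

The parameter converges close to the true value because Adam normalises the step by the square root of the smoothed squared gradient, giving roughly constant-size steps regardless of the gradient scale.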
The method for estimating the end-to-end capacity of the energy storage battery based on the hybrid deep neural network has the following beneficial effects:
according to the method, the battery charging voltage data is used as input, the battery aging characteristics are automatically extracted through the convolutional neural network layer to generate a characteristic diagram (time sequence characteristic diagram), the battery capacity estimation value is predicted through the cyclic neural network layer based on the characteristic diagram, and the battery capacity is directly estimated through the battery charging voltage data through the structure of the convolutional neural network layer and the cyclic neural network layer, so that the end-to-end energy storage battery capacity estimation can be realized, the processing difficulty of the energy storage battery data can be reduced, the energy storage battery capacity estimation can be guaranteed to be optimal under the full life cycle and different aging paths of the energy storage battery, and the practicability of the energy storage battery capacity estimation can be improved.
Meanwhile, an attention mechanism layer is arranged between the convolutional neural network layer and the recurrent neural network layer, and the channel attention weight and spatial attention weight of the initial feature map are extracted in sequence by the attention mechanism layer to generate a weighted feature map. Using the weighted feature map generated by the attention mechanism layer as the training input of the recurrent neural network layer avoids the tedious manual extraction of battery ageing feature data as model input. Weighting the extracted features further alleviates the problems of gradient vanishing and dispersion in the hidden layers of the recurrent neural network and allows better weights to be obtained for each hidden layer of the deep neural network, so the performance of the recurrent neural network layer and the accuracy of the energy storage battery capacity estimation are improved.
Drawings
For purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made in detail to the present invention as illustrated in the accompanying drawings, in which:
FIG. 1 is a logic block diagram of a hybrid deep neural network-based energy storage battery end-to-end capacity estimation method;
FIG. 2 is a modeling flow chart of a method for estimating the capacity of an energy storage battery based on an end-to-end structure;
FIG. 3 is a schematic of the logic of the attention mechanism layer;
FIG. 4 is a comparison of the battery capacity estimation value and the experimental value of the battery capacity estimation model of the present invention;
fig. 5 shows the result of analysis of the capacity estimation error under the constant-current constant-voltage charging condition.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings or the orientations or positional relationships that the products of the present invention are conventionally placed in use, and are only used for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the devices or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another, and are not to be construed as indicating or implying relative importance. Furthermore, the terms "horizontal", "vertical" and the like do not imply that the components are required to be absolutely horizontal or pendant, but rather may be slightly inclined. For example, "horizontal" merely means that the direction is more horizontal than "vertical" and does not mean that the structure must be perfectly horizontal, but may be slightly inclined. 
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly and may, for example, be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in a specific case to those of ordinary skill in the art.
The following is further detailed by the specific embodiments:
example (b):
the embodiment discloses an end-to-end capacity estimation method for an energy storage battery based on a hybrid deep neural network.
As shown in fig. 1 and fig. 2, the method for estimating the end-to-end capacity of the energy storage battery based on the hybrid deep neural network includes:
s1: acquiring battery charging voltage data of an energy storage battery;
s2: inputting the battery charging voltage data of the energy storage battery into the trained battery capacity estimation model, and outputting a corresponding battery capacity estimation value;
the battery capacity estimation model includes:
the convolutional neural network layer is used for extracting the battery aging characteristics of the input data and generating a corresponding initial characteristic diagram;
the attention mechanism layer is used for sequentially extracting the channel attention weight and the space attention weight of the initial feature map and generating a corresponding weighted feature map;
the recurrent neural network layer is used for training with the weighted feature map generated by the attention mechanism layer as input and outputting the corresponding battery capacity estimate;
in this embodiment, when the battery capacity estimation model is trained, the energy storage battery is selected to develop a cycle aging life experiment, data such as terminal voltage and capacity of a battery test are collected as training samples to construct a corresponding training set, a corresponding testing set and a corresponding verification set, and the training sets are trained by the existing training means.
S3: the battery capacity estimation value of the energy storage battery is used as the battery capacity estimation result, and the health state of the energy storage battery can be analyzed based on the battery capacity estimation result of the energy storage battery, so that the battery capacity estimation value can be used as a reference index for maintaining and replacing the energy storage battery.
The method takes battery charging voltage data as input, automatically extracts battery ageing features through the convolutional neural network layer to generate a feature map (a time-series feature map), and predicts the battery capacity estimate through the recurrent neural network layer based on that feature map. The convolutional and recurrent layers together estimate the battery capacity directly from the charging voltage data, i.e. end-to-end capacity estimation of the energy storage battery is realized, which reduces the difficulty of processing energy storage battery data and keeps the capacity estimation effective over the full life cycle and different ageing paths of the battery, thereby improving the practicability of energy storage battery capacity estimation.
Meanwhile, an attention mechanism layer is arranged between the convolutional neural network layer and the recurrent neural network layer, and the channel attention weight and spatial attention weight of the initial feature map are extracted in sequence by the attention mechanism layer to generate a weighted feature map. Using the weighted feature map generated by the attention mechanism layer as the training input of the recurrent neural network layer avoids the tedious manual extraction of battery ageing feature data as model input. Weighting the extracted features further alleviates the problems of gradient vanishing and dispersion in the hidden layers of the recurrent neural network and allows better weights to be obtained for each hidden layer of the deep neural network, so the performance of the recurrent neural network layer and the accuracy of the energy storage battery capacity estimation are improved.
The battery capacity estimation model in this embodiment is a one-dimensional convolution, bidirectional long short-term memory neural network model with a two-dimensional (channel and spatial) attention mechanism (hereinafter referred to as 1DCNN-CBAM-BiLSTM).
In a specific implementation process, the convolutional neural network layer is constructed based on a one-dimensional convolutional neural network (1 DCNN). The convolution neural network layer performs one-dimensional convolution operation on input multi-channel one-dimensional input data by using a one-dimensional convolution kernel with a specified size.
The one-dimensional convolutional neural network generates the initial feature map by the following formula:

y_i^(k) = Σ_{l=1}^{L} w_l^(k) x_{c,i+l-1}, i = 1, 2, …, T - L + 1;

in the formula: y_i^(k) represents the output of the input data for the k-th convolution kernel, and the outputs of all convolution kernels of the one-dimensional convolutional neural network together form the initial feature map; w^(k) represents the weight matrix of the k-th convolution kernel; x_c represents the original input data sequence; L represents the convolution kernel size; i represents the convolution kernel receptive field; T represents the sequence length.
In a specific implementation process, the attention mechanism layer comprises a Channel Attention Module (CAM) and a Spatial Attention Module (SAM) for extracting a channel attention weight and a spatial attention weight respectively; wherein the channel attention module and the spatial attention module are capable of being embedded in different convolutional layers of a one-dimensional convolutional neural network.
As shown in fig. 3, the weighted feature map is generated by:
s201: extracting a channel attention weight of the initial feature map through a channel attention module;
s202: multiplying the initial characteristic diagram by the channel attention weight to obtain a channel attention characteristic diagram;
s203: extracting a spatial attention weight of the channel attention feature map through a spatial attention module;
s204: and multiplying the initial feature map by the spatial attention weight to obtain a weighted feature map.
Specifically, the channel attention module performs tensor averaging operation on the output values of the output channels, then performs normalization operation on the output values obtained by tensor averaging through the activation function, normalizes the attention weight of each channel of the feature map to 0-1, and the normalized weight is the channel attention weight.
The channel attention module extracts the channel attention weight by the following formulas:

M_c(F) = softmax(Mean(F_cout))

Mean(F_cout) = (1/T) Σ_{t=1}^{T} F_{cout, t},  cout = 1, …, K

softmax(z_j) = e^{z_j} / Σ_{c=1}^{C} e^{z_c}

in the formula: M_c(F) represents the channel attention weight; F represents the initial feature map output by the convolutional neural network layer; softmax represents normalization of the output values by the activation function; Mean represents the tensor-averaging operation; F_cout represents the features on output channel cout; z_j represents the output value of the j-th node; C represents the number of output nodes; z_c represents the output value of the c-th node; K represents the number of convolution kernels of the convolutional neural network layer, i.e. the number of output channels.
The channel attention feature map is calculated by the following formula:

F' = M_c(F) ⊗ F

in the formula: F' represents the channel attention feature map; M_c(F) represents the channel attention weight; F represents the initial feature map.
The spatial attention module performs spatial-domain processing on the channel attention feature map. First, max pooling and average pooling are applied to the input feature map along the channel dimension, and the two pooled feature maps are stacked in the channel dimension. Channel information is then extracted through a convolution kernel, merging the channels of the feature map into one. Finally, the convolved result is normalized by a sigmoid function to obtain the spatial weight of the feature map; the normalized weights are the spatial attention weights.
The spatial attention module extracts the spatial attention weight by the following formula:

M_s(F') = σ( f^{n×n}( [AvgPool(F'); MaxPool(F')] ) ) = σ( f^{n×n}( [F'_avg; F'_max] ) )

in the formula: M_s(F') represents the spatial attention weight; f^{n×n} represents a convolution operation with kernel size n; F' represents the channel attention feature map; AvgPool(F') represents average pooling of the channel attention feature map; MaxPool(F') represents max pooling of the channel attention feature map; σ represents the sigmoid activation function; [;] indicates that the features of the two channels are concatenated; F'_avg and F'_max represent the feature maps after average pooling and max pooling of the channel attention feature map, respectively.
The weighted feature map is calculated by the following formula:

F'' = M_s(F') ⊗ F

in the formula: F'' represents the weighted feature map; M_s(F') represents the spatial attention weight; F represents the initial feature map.
According to the method, the attention mechanism layer adds weights to the output channels of each convolutional layer of the convolutional neural network layer and applies them to the initial feature map. By learning the importance of each channel of the initial feature map, it allows the subsequent deep neural network to focus on the most informative feature channels, and by exploiting the internal spatial relationships among features it generates a weighted feature map that emphasizes strongly relevant features. This alleviates the tendency of deep neural networks to fall into local optima and further improves the effectiveness of the feature map.
In the specific implementation process, the recurrent neural network layer comprises a bidirectional long short-term memory neural network (BiLSTM) and a fully connected layer connected to its output;
the bidirectional long short-term memory neural network comprises a forget gate, an input gate and an output gate;
the forget gate is represented by the following formula:
f_t = σ(W_f · h_{t−1} + W_f · x_t + b_f)

in the formula: f_t represents the forget gate output; σ represents the sigmoid activation function; W_f represents the forget gate weight; h_{t−1} represents the hidden layer information at time t−1; x_t represents the input feature at time t; b_f represents the forget gate bias;
the input gate is represented by the following formula:
i_t = σ(W_i · h_{t−1} + W_i · x_t + b_i)

c̃_t = tanh(W_c · h_{t−1} + W_c · x_t + b_c)

c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t

in the formula: i_t represents the retained-information control of the input gate; σ represents the sigmoid activation function; W_i and W_c represent input gate weights; h_{t−1} represents the hidden layer information at time t−1; x_t represents the input feature at time t; b_i and b_c represent input gate biases; c̃_t represents the candidate cell state, i.e. the information brought by the new input; c_t represents the cell state at time t, i.e. the updated information; tanh represents the hyperbolic tangent function; c_{t−1} represents the cell state at time t−1;
the output gate is represented by the following formula:
o_t = σ(W_o · h_{t−1} + W_o · x_t + b_o)

h_t = o_t ⊙ tanh(c_t)

in the formula: o_t represents the output feature of the output gate; h_t represents the hidden layer information at time t; h_{t−1} represents the hidden layer information at time t−1; σ represents the sigmoid activation function; W_o represents the output gate weight; x_t represents the input feature at time t; b_o represents the output gate bias; tanh represents the hyperbolic tangent function; c_t represents the cell state at time t, i.e. the updated information;
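The three gates above can be sketched as a single LSTM time step in NumPy (toy dimensions and random weights; as in the formulas, one weight matrix per gate is applied to both h_{t−1} and x_t). A BiLSTM runs this recurrence once forward and once backward over the sequence and concatenates the two hidden states:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM time step with the forget/input/output gates above."""
    f_t = sigmoid(p["Wf"] @ h_prev + p["Wf"] @ x_t + p["bf"])    # forget gate
    i_t = sigmoid(p["Wi"] @ h_prev + p["Wi"] @ x_t + p["bi"])    # input gate
    c_hat = np.tanh(p["Wc"] @ h_prev + p["Wc"] @ x_t + p["bc"])  # candidate cell state
    c_t = f_t * c_prev + i_t * c_hat                             # cell state update
    o_t = sigmoid(p["Wo"] @ h_prev + p["Wo"] @ x_t + p["bo"])    # output gate
    h_t = o_t * np.tanh(c_t)                                     # hidden state
    return h_t, c_t

rng = np.random.default_rng(42)
d = 2                                          # toy hidden/input size
p = {k: rng.standard_normal((d, d)) for k in ("Wf", "Wi", "Wc", "Wo")}
p.update({k: np.zeros(d) for k in ("bf", "bi", "bc", "bo")})
x_t = np.array([0.5, -0.3])                    # toy input feature at time t
h, c = lstm_step(x_t, np.zeros(d), np.zeros(d), p)
```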
the activation function of the neural network during the two-way long-short memory is a sigmoid function and a tanh function, and the formulas are respectively as follows:
Figure BDA0003755624110000104
Figure BDA0003755624110000105
the output of the fully-connected layer is represented by the following formula:
y = a · A^T + b

in the formula: y represents the output of the fully-connected layer, i.e. the battery capacity estimate; a represents the activation vector of the layer's input neurons; A represents the weight matrix; T denotes transposition; b represents the bias.
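Reading a as the hidden activation vector entering the layer (its length being the number of neurons), the fully-connected output is a single dense layer mapping the BiLSTM hidden state to a scalar capacity estimate. The numbers below are invented:

```python
import numpy as np

a = np.array([0.2, -0.1, 0.4])    # hidden activations from the BiLSTM, 3 neurons
A = np.array([[0.5, 1.0, -0.5]])  # (1, 3) weight matrix of the dense layer
b = 0.05                          # scalar bias

y = a @ A.T + b                   # y = a * A^T + b, shape (1,)
print(y.item())                   # scalar battery capacity estimate
```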
In a specific implementation process, the following mean square error (MSE) loss function is adopted as the loss function of the recurrent neural network layer:

MSE = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)^2

in the formula: y_i represents the battery capacity target value at time i; ŷ_i represents the battery capacity estimate at time i; n represents the number of samples;
the optimizer of the recurrent neural network layer adopts an adaptive moment estimation (Adam) algorithm as follows;
m t =β 1 m t-1 +(1-β 1 )g t
Figure BDA0003755624110000113
Figure BDA0003755624110000114
Figure BDA0003755624110000115
in the formula: m is t And m t-1 Representing the smoothed running average of iterations t and t-1; t represents the number of iterations; beta is a 1 And beta 2 Represents a smoothing constant; g is a radical of formula t Representing an objective function gradient; v t A sliding mean representing the square of the gradient; w is a t And w t+1 Respectively representing updating parameters of iteration t times and iteration t +1 times; alpha is alpha t Representing the learning rate of t iterations; ε =10 -8 Indicating that the avoidance divisor is 0;
Figure BDA0003755624110000116
and
Figure BDA0003755624110000117
represents the smoothing constant to the power of t.
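A compact NumPy sketch of the optimizer: one Adam update following the formulas above, used here to minimise a toy mean-square-error loss for a one-parameter linear model (all data invented):

```python
import numpy as np

def adam_step(w, g, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam parameter update: running averages, bias correction, step."""
    m = beta1 * m + (1 - beta1) * g          # smoothed gradient average m_t
    v = beta2 * v + (1 - beta2) * g**2       # smoothed squared-gradient average v_t
    m_hat = m / (1 - beta1**t)               # bias corrections
    v_hat = v / (1 - beta2**t)
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Fit y = w * x by minimising the MSE loss (1/n) * sum (y_i - w*x_i)^2.
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x                                  # target slope: w = 2
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 2001):
    g = np.mean(2 * (w * x - y) * x)         # d(MSE)/dw
    w, m, v = adam_step(w, g, m, v, t, alpha=0.01)
print(round(w, 4))                           # converges toward 2.0
```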
According to the method, the recurrent neural network layer is constructed from the bidirectional long short-term memory neural network and the fully connected layer, with the mean square error loss function as the loss function and the adaptive moment estimation algorithm as the optimizer, so that the battery capacity estimation model can accurately estimate battery capacity over the full life cycle and under different aging paths, further improving the practicality of energy storage battery capacity estimation. Meanwhile, the bidirectional long short-term memory neural network is trained with the weighted feature map as input, which avoids the tedious manual extraction of battery aging features as model input. Weighting the extracted features further alleviates the gradient vanishing and dispersion problems of the hidden layers of the recurrent neural network, helps obtain the optimal weights of each hidden layer of the deep neural network, and further improves the accuracy of energy storage battery capacity estimation.
In order to better illustrate the advantages of the present invention, the following experiments are disclosed in this example.
Fig. 4 compares the battery capacity estimates of the battery capacity estimation model proposed by the present invention, i.e. the one-dimensional convolution-attention mechanism-bidirectional long short-term memory neural network model (1DCNN-CBAM-BiLSTM), the estimates of the existing convolution-long short-term memory neural network (CNN-LSTM), and the experimental values.
As can be seen from fig. 4, the battery capacity estimation model (1DCNN-CBAM-BiLSTM) of the present invention produces a smoother estimate than the conventional convolution-long short-term memory neural network (CNN-LSTM). In the early use cycles of the battery, the capacity estimates of the two models differ little; but after a certain number of cycles, i.e. when capacity fade becomes more strongly nonlinear as the battery ages, the battery capacity estimation model (1DCNN-CBAM-BiLSTM) can still extract deep battery capacity features and maintain high capacity estimation accuracy.
FIG. 5 shows the battery capacity estimation error analysis of the battery capacity estimation model (1DCNN-CBAM-BiLSTM) and the conventional convolution-long short-term memory neural network (CNN-LSTM) under the energy storage battery cyclic aging experimental condition.
Fig. 5(a) shows statistics of the root mean square error of the capacity estimates of the two models: the root mean square error of the battery capacity estimation model (1DCNN-CBAM-BiLSTM) is 0.0015, which is 36% lower than the root mean square error of the battery capacity estimate of the conventional convolution-long short-term memory neural network (CNN-LSTM).
Fig. 5(b) shows statistics of the mean absolute error of the capacity estimates of the two models: the mean absolute error of the battery capacity estimation model (1DCNN-CBAM-BiLSTM) is 0.0011, which is 41.6% lower than the mean absolute error of the battery capacity estimate of the conventional convolution-long short-term memory neural network (CNN-LSTM).
Simulation results show that, under the energy storage battery cyclic aging experimental condition, the battery capacity estimation model (1DCNN-CBAM-BiLSTM) is superior overall to the existing neural network model in estimation accuracy.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention and not for limiting the technical solutions, and those skilled in the art should understand that modifications or equivalent substitutions can be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all that should be covered by the claims of the present invention.

Claims (10)

1. An end-to-end capacity estimation method of an energy storage battery based on a hybrid deep neural network is characterized by comprising the following steps:
s1: acquiring battery charging voltage data of an energy storage battery;
s2: inputting the battery charging voltage data of the energy storage battery into the trained battery capacity estimation model, and outputting a corresponding battery capacity estimation value;
the battery capacity estimation model includes:
the convolutional neural network layer is used for extracting the battery aging characteristics of the input data and generating a corresponding initial characteristic diagram;
the attention mechanism layer is used for sequentially extracting the channel attention weight and the space attention weight of the initial feature map and generating a corresponding weighted feature map;
the cyclic neural network layer is used for training by taking the weighted characteristic diagram generated by the attention mechanism layer as input and outputting a corresponding battery capacity estimation value;
s3: and taking the battery capacity estimated value of the energy storage battery as the battery capacity estimated result.
2. The hybrid deep neural network-based energy storage battery end-to-end capacity estimation method of claim 1, wherein: in step S2, a convolutional neural network layer is constructed based on a one-dimensional convolutional neural network.
3. The hybrid deep neural network-based energy storage battery end-to-end capacity estimation method of claim 2, wherein: the one-dimensional convolutional neural network generates an initial feature map by the following formula:

y_c^(k) = Σ_{i=1}^{L} w_i^(k) · x_{c+i−1},  c = 1, …, T − L + 1

in the formula: y_c^(k) represents the output of the input data for the k-th convolution kernel at position c, and the outputs of all convolution kernels of the one-dimensional convolutional neural network form the initial feature map; w^(k) represents the weight matrix of the k-th convolution kernel; x_c represents the original input data sequence; L represents the convolution kernel size; i indexes the receptive field of the convolution kernel; T represents the sequence length.
4. The hybrid deep neural network-based energy storage battery end-to-end capacity estimation method of claim 1, characterized by: in step S2, the attention mechanism layer includes a channel attention module and a spatial attention module for extracting a channel attention weight and a spatial attention weight, respectively;
generating a weighted feature map by:
s201: extracting a channel attention weight of the initial feature map through a channel attention module;
s202: multiplying the initial characteristic diagram by the channel attention weight to obtain a channel attention characteristic diagram;
s203: extracting a spatial attention weight of the channel attention feature map through a spatial attention module;
s204: and multiplying the initial feature map by the spatial attention weight to obtain a weighted feature map.
5. The hybrid deep neural network-based energy storage battery end-to-end capacity estimation method of claim 4, wherein: in step S201, the channel attention module extracts the channel attention weight by the following formulas:

M_c(F) = softmax(Mean(F_cout))

Mean(F_cout) = (1/T) Σ_{t=1}^{T} F_{cout, t},  cout = 1, …, K

softmax(z_j) = e^{z_j} / Σ_{c=1}^{C} e^{z_c}

in the formula: M_c(F) represents the channel attention weight; F represents the initial feature map output by the convolutional neural network layer; softmax represents normalization of the output values by the activation function; Mean represents the tensor-averaging operation; F_cout represents the features on output channel cout; z_j represents the output value of the j-th node; C represents the number of output nodes; z_c represents the output value of the c-th node; K represents the number of convolution kernels of the convolutional neural network layer, i.e. the number of output channels.
6. The hybrid deep neural network-based energy storage battery end-to-end capacity estimation method of claim 4, wherein: in step S202, the channel attention feature map is calculated by the following formula:

F' = M_c(F) ⊗ F

in the formula: F' represents the channel attention feature map; M_c(F) represents the channel attention weight; F represents the initial feature map.
7. The hybrid deep neural network-based energy storage battery end-to-end capacity estimation method of claim 4, wherein: in step S203, the spatial attention module extracts the spatial attention weight by the following formula:

M_s(F') = σ( f^{n×n}( [AvgPool(F'); MaxPool(F')] ) ) = σ( f^{n×n}( [F'_avg; F'_max] ) )

in the formula: M_s(F') represents the spatial attention weight; f^{n×n} represents a convolution operation with kernel size n; F' represents the channel attention feature map; AvgPool(F') represents average pooling of the channel attention feature map; MaxPool(F') represents max pooling of the channel attention feature map; σ represents the sigmoid activation function; [;] indicates that the features of the two channels are concatenated; F'_avg and F'_max represent the feature maps after average pooling and max pooling of the channel attention feature map, respectively.
8. The hybrid deep neural network-based energy storage battery end-to-end capacity estimation method of claim 4, wherein: in step S204, the weighted feature map is calculated by the following formula:

F'' = M_s(F') ⊗ F

in the formula: F'' represents the weighted feature map; M_s(F') represents the spatial attention weight; F represents the initial feature map.
9. The hybrid deep neural network-based energy storage battery end-to-end capacity estimation method of claim 1, wherein: in step S2, the recurrent neural network layer comprises a bidirectional long short-term memory neural network and a fully connected layer connected to its output;
the bidirectional long short-term memory neural network comprises a forget gate, an input gate and an output gate;
the forget gate is represented by the following formula:
f_t = σ(W_f · h_{t−1} + W_f · x_t + b_f)

in the formula: f_t represents the forget gate output; σ represents the sigmoid activation function; W_f represents the forget gate weight; h_{t−1} represents the hidden layer information at time t−1; x_t represents the input feature at time t; b_f represents the forget gate bias;
the input gate is represented by the following formula:
i_t = σ(W_i · h_{t−1} + W_i · x_t + b_i)

c̃_t = tanh(W_c · h_{t−1} + W_c · x_t + b_c)

c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t

in the formula: i_t represents the retained-information control of the input gate; σ represents the sigmoid activation function; W_i and W_c represent input gate weights; h_{t−1} represents the hidden layer information at time t−1; x_t represents the input feature at time t; b_i and b_c represent input gate biases; c̃_t represents the candidate cell state, i.e. the information brought by the new input; c_t represents the cell state at time t, i.e. the updated information; tanh represents the hyperbolic tangent function; c_{t−1} represents the cell state at time t−1;
the output gate is represented by the following formula:
o_t = σ(W_o · h_{t−1} + W_o · x_t + b_o)

h_t = o_t ⊙ tanh(c_t)

in the formula: o_t represents the output feature of the output gate; h_t represents the hidden layer information at time t; h_{t−1} represents the hidden layer information at time t−1; σ represents the sigmoid activation function; W_o represents the output gate weight; x_t represents the input feature at time t; b_o represents the output gate bias; tanh represents the hyperbolic tangent function; c_t represents the cell state at time t, i.e. the updated information;
the output of the fully-connected layer is represented by the following formula:
y = a · A^T + b

in the formula: y represents the output of the fully connected layer, i.e. the battery capacity estimate; a represents the activation vector of the layer's input neurons; A represents the weight matrix; T denotes transposition; b represents the bias.
10. The hybrid deep neural network-based energy storage battery end-to-end capacity estimation method of claim 9, wherein: the loss function of the recurrent neural network layer adopts the following mean square error loss function;
MSE = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)^2

in the formula: y_i represents the battery capacity target value at time i; ŷ_i represents the battery capacity estimate at time i; n represents the number of samples;
the optimizer of the recurrent neural network layer adopts the following adaptive moment estimation algorithm;
m_t = β_1 · m_{t−1} + (1 − β_1) · g_t

v_t = β_2 · v_{t−1} + (1 − β_2) · g_t^2

m̂_t = m_t / (1 − β_1^t),  v̂_t = v_t / (1 − β_2^t)

w_{t+1} = w_t − α_t · m̂_t / (√(v̂_t) + ε)

in the formula: m_t and m_{t−1} represent the smoothed running averages of the gradient at iterations t and t−1; t represents the number of iterations; β_1 and β_2 represent smoothing constants; g_t represents the gradient of the objective function; v_t represents the running average of the squared gradient; w_t and w_{t+1} represent the parameters at iterations t and t+1, respectively; α_t represents the learning rate at iteration t; ε = 10^{−8} avoids a divisor of 0; β_1^t and β_2^t represent the smoothing constants raised to the power t.
CN202210859282.5A 2022-07-20 2022-07-20 End-to-end capacity estimation method of energy storage battery based on hybrid deep neural network Pending CN115236522A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210859282.5A CN115236522A (en) 2022-07-20 2022-07-20 End-to-end capacity estimation method of energy storage battery based on hybrid deep neural network

Publications (1)

Publication Number Publication Date
CN115236522A true CN115236522A (en) 2022-10-25

Family

ID=83672711


Country Status (1)

Country Link
CN (1) CN115236522A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117054891A (en) * 2023-10-11 2023-11-14 中煤科工(上海)新能源有限公司 Method and device for predicting service life of battery
CN117590260A (en) * 2024-01-18 2024-02-23 武汉船用电力推进装置研究所(中国船舶集团有限公司第七一二研究所) Method and device for estimating state of charge of marine lithium ion power battery and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination