CN111638465B - Lithium battery health state estimation method based on convolutional neural network and transfer learning - Google Patents


Info

Publication number
CN111638465B
CN111638465B (application CN202010475482.1A)
Authority
CN
China
Prior art keywords: layer, value, model, neural network, parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010475482.1A
Other languages
Chinese (zh)
Other versions
CN111638465A (en)
Inventor
陶吉利
李央
马龙华
白杨
乔志军
谢亮
Current Assignee
Zhejiang University of Science and Technology ZUST
Original Assignee
Zhejiang University of Science and Technology ZUST
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Science and Technology ZUST
Priority to CN202010475482.1A
Publication of CN111638465A
Application granted
Publication of CN111638465B

Classifications

    • G01R 31/392: Determining battery ageing or deterioration, e.g. state of health
    • G01R 31/367: Software therefor, e.g. for battery testing using modelling or look-up tables
    • G01R 31/3842: Arrangements for monitoring battery or accumulator variables, e.g. SoC, combining voltage and current measurements
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods


Abstract

The invention discloses a lithium battery state-of-health estimation method based on a convolutional neural network and transfer learning. Based on transfer learning, a base model is pre-trained offline using the complete cycle data of an accelerated aging experiment together with the final 7.5% of cycle data from the life cycle of a waste battery, and the parameters of the base model are then fine-tuned using only the first 15% of normal-rate aging cycles of a new battery, so that the state of health of the battery can be estimated online at any time. Because the accelerated aging experiment greatly shortens the battery's life, the last few cycles of a waste battery are easy to collect, and the first 15% of a new battery's cycles are likewise easy to collect; the method therefore saves a large amount of training-data collection time, reduces the size of the model input, and speeds up computation.

Description

Lithium battery health state estimation method based on convolutional neural network and transfer learning
Technical Field
The invention belongs to the technical field of automation, and relates to a lithium battery health state estimation method based on a convolutional neural network and transfer learning.
Background
Existing lithium battery state-of-health estimation methods fall into two main categories: model-based and data-driven. Model-based approaches demand detailed knowledge of the complex physical mechanisms inside the battery. Data-driven methods typically rely on manually extracting salient features from the battery's raw voltage, current, capacity and other data, which then serve as inputs to conventional machine learning models. One such method converts the voltage plateau, which reflects the first-order phase change of the battery on the original charge-discharge voltage-capacity curve, into a clearly identifiable ΔQ/ΔV peak on the incremental capacity curve, and then extracts the ΔQ/ΔV peak, the corresponding voltage value and other features as model inputs; studies show that the physical changes of battery aging are correspondingly reflected in the voltage-capacity curve.
In recent years, deep learning has been applied across many fields; it can automatically extract salient features from large amounts of data, overcoming the information loss and heavy workload of the traditional manual feature extraction used in machine learning. S. Shen et al., in "A deep learning method for online capacity estimation of lithium-ion batteries" (Journal of Energy Storage, vol. 25, p. 100817, 2019), first introduced deep learning into the study of battery state-of-health estimation; however, that method rests on 10 years of cycling experiment data, and collecting such data is extremely time-consuming. The present invention aims to improve the accuracy of the state-of-health estimation model while overcoming this excessive dependence on data.
Disclosure of Invention
To overcome drawbacks of the prior art, such as the huge amount of data required to build a model, a method based on a convolutional neural network and transfer learning is provided. The transfer learning pre-trains a base model offline using the complete cycle data of an accelerated aging experiment and the final 7.5% of cycle data from the life cycle of a waste battery, and then fine-tunes the parameters of the base model using only the first 15% of normal-rate aging cycles of a new battery, so that the state of health of the battery can be estimated at any time. Because the accelerated aging experiment greatly shortens the battery's life, the last few cycles of a waste battery are easy to collect, and the first 15% of a new battery's cycles are likewise easy to collect; the method therefore saves a large amount of training-data collection time, reduces the size of the model input, and speeds up computation.
The technical scheme of the invention is that a lithium battery health state estimation method based on a convolutional neural network and transfer learning is established through means of data acquisition, model establishment, fine tuning and the like. The method can effectively improve the accuracy of the estimation of the health state of the battery.
The specific technical scheme of the invention is as follows:
the lithium battery health state estimation method based on the convolutional neural network and the transfer learning comprises the following steps:
s1: the method for acquiring the input data of the convolutional neural network comprises the following steps:
s11: selecting a plurality of brand-new lithium batteries of different models, respectively carrying out an accelerated aging experiment to acquire cycle data, and continuously consuming the battery capacity according to the cycles of constant-current charging, constant-voltage charging and constant-current discharging until the health state is reduced to below 80%;
meanwhile, waste lithium batteries with the same type and close to the end of service life are obtained, normal speed aging experiments are respectively carried out to collect cycle data, and the battery capacity is consumed according to the processes of constant current charging, constant voltage charging and constant current discharging until the health state is reduced to be below 80%;
acquiring brand new lithium batteries with the same model, respectively carrying out a normal speed aging experiment to acquire cycle data, carrying out charge-discharge cycle according to the processes of constant current charging, constant voltage charging and constant current discharging, and acquiring the first 15% cycle data of the service life of the batteries;
s12: calculating to obtain the battery capacity according to the voltage and current values of the battery in the constant current charging stage in different aging experiments collected in S11, and forming a matrix by using numerical values of three variables of the voltage, the current and the battery capacity as input data of a convolutional neural network;
s2: constructing a convolutional neural network model, wherein the whole network comprises a convolutional layer, a pooling layer and a full-connection layer, and selecting a correction linear unit as an activation function to be connected with the output of each convolutional layer and the output of each pooling layer;
s3: pre-training the model constructed in the S2, wherein the specific method is S31-S32:
s31: dividing input data obtained by a brand-new lithium battery accelerated aging experiment in S1 into a plurality of small batches of training samples, inputting the training samples into a neural network constructed in S2 in batches, updating parameters through a random gradient descent method in an iterative learning process to obtain a first pre-training model, and storing parameter values of the first pre-training model, including a value k of a convolution kernel a,b,c,k Offset value b k Weight W of full connection layer l And bias b l
S32: dividing input data obtained by the normal speed aging experiment of the waste lithium battery in the S1 into a plurality of small batches of training samples, inputting the training samples into a neural network trained in the S31 according to the batches, performing iterative learning on the basis of model parameter values stored in a first pre-training model, further adjusting parameters through a random gradient descent method to obtain a second pre-training model, storing parameter values of the second pre-training model, and including a value k 'of a new convolution kernel' a,b,c,k Offset value b' k Weight W of full connection layer l ' and offset b l ';
The forward propagation and parameter updates at this point are as follows:
C'_{i,j,k} = f( Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} Σ_{c=1}^{c_k} k'_{a,b,c,k} · x'_{i',j',c} + b'_k )    (16)
a'_l = f(z'_l) = f(W'_l · a'_{l-1} + b'_l)    (17)
θ'_{j+1} = θ'_j + γ · Δθ'_j - (α/m) Σ_{i=1}^{m} ∂( ŷ_i(x)_j - y_i(x)_j )² / ∂θ'_j    (18)
wherein: the model internal parameters θ'_j comprise the convolution kernel values k'_{a,b,c,k}, the bias values b'_k, and the fully connected layer weights W'_l and biases b'_l; the prime (') on a parameter in equations (16)-(18) denotes its forward-propagation update value in the pre-training stage;
s4: dividing input data obtained in the normal speed aging experiment of the brand-new lithium battery in the S1 into a plurality of small batches of training samples, inputting the training samples into the pre-training model obtained in the S3 according to the batches for iterative learning, and fixing convolution layer parameters of the pre-training model in the iterative learning process to be constant, namely keeping k' a,b,c,k And b' k Not changing, only the weight W of the full connection layer l ' and offset b l Update to W l "and b l ", saving the updated parameters to obtain the final estimation model;
the forward propagation and parameter updates at this point are as follows:
C''_{i,j,k} = f( Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} Σ_{c=1}^{c_k} k'_{a,b,c,k} · x''_{i',j',c} + b'_k )    (19)
a''_l = f(z''_l) = f(W''_l · a''_{l-1} + b''_l)    (20)
θ''_{j+1} = θ''_j + γ · Δθ''_j - (α/m) Σ_{i=1}^{m} ∂( ŷ_i(x)_j - y_i(x)_j )² / ∂θ''_j    (21)
wherein: the model internal parameters θ''_j comprise the fully connected layer weights W''_l and biases b''_l; the double prime ('') on a parameter in equations (19)-(21) denotes its forward-propagation update value in the fine-tuning stage;
s5: carrying out a constant current charging experiment on the lithium battery to be estimated to obtain the voltage, current and capacity test values of the lithium battery, and taking a matrix formed by the three as the estimation value obtained in S4The input X of the calculation model uses the parameter k 'saved in S4 in the calculation process of network forward propagation' a,b,c,k 、b' k 、W l "and b l ", the forward propagation and parameter update at this time are as follows:
Figure BDA0002515692820000041
a l ”'=f(z l ”')=f(W l ”a l-1 ”'+b l ”) (23)
the superscript ""' "of the parameter in equations (22) to (23) indicates the forward propagation update value of the parameter in the estimation stage;
finally, the estimation model outputs the state of health of the battery at that time.
Preferably, in S1, the accelerated aging experiment ages the battery by overcharge and overdischarge, that is, by setting a higher constant-current charging upper cut-off voltage and a lower constant-current discharging lower cut-off voltage.
Preferably, in S1, the normal rate aging test of the waste lithium battery is performed for 35 to 40 charge-discharge cycles.
Preferably, in S1, the normal rate aging test of a brand-new lithium battery is performed by 75 charge-discharge cycles.
Preferably, in S1, the data collected in different aging experiments are respectively constructed as model input X:
    X = [ V_1  I_1  C_1
          V_2  I_2  C_2
          ⋮    ⋮    ⋮
          V_k  I_k  C_k ]    (1)
wherein: k is the number of sampling points in the constant-current charging stage, and V_i, I_i and C_i are respectively the voltage, current and capacity values at the i-th sampling point.
Preferably, in the convolutional neural network model, the forward propagation of the convolutional layer is calculated as follows:
C_{i,j,k} = f( Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} Σ_{c=1}^{c_k} k_{a,b,c,k} · x_{i',j',c} + b_k )    (2)
i' = (i - 1) × h_s + a    (3)
j' = (j - 1) × w_s + b    (4)
where k is the index of the convolution kernel, the number of kernels equaling the number of channels of the output matrix; C_{i,j,k} is the value at row i, column j of the k-th channel of the output matrix; b_k is the bias value; h_k, w_k and c_k are respectively the height, width and number of channels of the convolution kernel; h_s and w_s are the strides of the convolution kernel in the height and width directions when scanning the input matrix; x_{i',j',c} is the value at row i', column j' of channel c of the input matrix; k_{a,b,c,k} is the value at row a, column b of channel c of the k-th convolution kernel; and f is the activation function;
the dimensions of the convolutional layer were calculated as follows:
w_out = ⌊(w_in - w_k + 2·w_p) / w_s⌋ + 1    (5)
h_out = ⌊(h_in - h_k + 2·h_p) / h_s⌋ + 1    (6)
wherein w_k and h_k are the width and height of the convolution kernel; w_s and h_s are its strides in the width and height directions when scanning the input matrix; w_in and w_out denote the widths of the input and output matrices; h_in and h_out denote their heights; and w_p and h_p denote the numbers of zero elements padded symmetrically on the left/right and top/bottom of the input matrix, which prevents the boundary information of the matrix from being lost as convolutions are applied repeatedly;
the forward propagation for the max pooling layer is calculated as:
M_{i,j,k} = max_{0 ≤ σ_1 < e_1, 0 ≤ σ_2 < e_2} C_{ei+σ_1, ej+σ_2, k}    (7)
Equation (7) divides each channel of the feature map into i × j regions of size e_1 × e_2 and performs one max-pooling operation on the feature points of each e_1 × e_2 region; M_{i,j,k} is the value at row i, column j of the k-th channel of the pooling layer output; C_{ei+σ_1, ej+σ_2, k} is the value at row ei+σ_1, column ej+σ_2 of the k-th channel of the preceding convolutional layer; and (ei, ej) is the upper-left coordinate of the e_1 × e_2 region corresponding to row i, column j of the feature map;
the forward propagation of the fully connected layer is calculated as:
a_l = f(z_l) = f(W_l · a_{l-1} + b_l)    (8)
f(x) = max(0, x)    (9)
where f(x) is the activation function (the rectified linear unit), W_l and b_l are respectively the weight and bias value of the l-th layer, and a_{l-1} is the input to the l-th layer;
the back propagation of the convolutional layer is:
δ_{l-1} = δ_l ∗ rot180(k_l) ⊙ f'(z_{l-1})    (10)
where rot180 denotes rotating the convolution kernel by 180 degrees, ∗ denotes convolution, ⊙ denotes element-wise multiplication, and δ_l is the differential of the objective function with respect to the output of the l-th layer;
the objective function J for establishing the neural network is:
J = (1/2n) Σ_{i=1}^{n} ( ŷ_i(x) - y_i(x) )² + (λ/2n) Σ_W W²    (12)
wherein ŷ_i(x) is the output of the output layer, y_i(x) is the true label value, n is the number of samples, and λ is the regularization parameter.
The internal network parameters θ_j, including the weight W and the bias b, are updated according to objective function (12) as follows:
Δθ_{j+1} = γ · Δθ_j - (α/m) Σ_{i=1}^{m} ∂( ŷ_i(x)_j - y_i(x)_j )² / ∂θ_j    (13)
θ_{j+1} = θ_j + Δθ_{j+1}    (14)
where m is the number of samples in a mini-batch, ŷ_i(x)_j is the output value for the i-th input of the mini-batch at the j-th iteration, y_i(x)_j is the corresponding true value, θ_j is the internal parameter at the j-th iteration, α is the learning rate, and γ is the momentum value.
Preferably, in the convolutional neural network model, a dropout strategy is added to the network to prevent overfitting: a certain proportion of hidden neurons are temporarily and randomly deleted while the input and output neurons are kept unchanged; the input is propagated forward through the modified network, and the resulting loss is propagated backward through the same modified network; after a batch of training samples has completed this process, the parameters of the neurons that were not deleted are updated by stochastic gradient descent.
Preferably, in the convolutional neural network model, a piecewise constant attenuation strategy is adopted, and the learning rate is adjusted in the training process instead of being fixed, so that the model is converged quickly.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention utilizes the convolutional neural network to automatically extract the characteristics from the data of voltage, current and capacity, thereby saving the step of manual extraction and avoiding the loss of important information possibly brought by the manual extraction of the characteristics.
(2) Because the service life of the battery is greatly shortened by an accelerated aging experiment, the last small part of cycle data of the waste battery is easy to obtain, and the previous 15% cycle data of the new battery is also easy to obtain, the method saves a large amount of time for collecting training data, reduces the size of model input data, and enables the calculation process to be faster.
(3) The model based on the acceleration mode can be quickly transferred to the normal speed mode, and has good generalization.
(4) The invention only takes the data in the constant current charging stage as input, reduces the size of the model input data and accelerates the calculation process.
(5) The method and the device improve the accuracy of the online estimation of the health state of the lithium battery.
Drawings
FIG. 1 is a schematic diagram of a convolutional neural network architecture;
FIG. 2 is a schematic diagram of the voltage, current, and capacity variations during a single charge-discharge cycle;
FIG. 3 is a diagram of transfer learning;
FIG. 4 is a graph of the online estimation results of experiment 2 in Table 1, corresponding to the SONY US18650VTC6 cell;
FIG. 5 is a graph of the online estimation results of experiment 4 in Table 1, corresponding to the FST2000 cell.
Detailed Description
The invention is further illustrated and described below with reference to the drawings and the detailed description. The technical characteristics of the embodiments of the invention can be correspondingly combined without mutual conflict.
The invention provides a lithium battery health state estimation method based on a convolutional neural network and transfer learning, which comprises the following steps of:
s1: the method for acquiring the input data of the convolutional neural network comprises the following specific steps:
s11: selecting a plurality of brand-new lithium batteries with different models, respectively carrying out an accelerated aging experiment to acquire cycle data, and continuously consuming the battery capacity according to the cycle of constant current charging, constant voltage charging and constant current discharging until the health state is reduced to below 80%. The accelerated aging refers to the overcharge and overdischarge of the battery, namely, the higher upper limit of the constant current charging voltage and the lower limit of the constant current discharging voltage are set, and the same is carried out below.
Meanwhile, waste lithium batteries of the same type and close to the end of service life are obtained, normal speed aging experiments are respectively carried out to collect cycle data, the battery capacity is consumed according to the processes of constant current charging, constant voltage charging and constant current discharging, and the cycle is carried out for 35-40 times approximately until the health state is reduced to below 80%. Since the full life of the cell under normal aging conditions is approximately 500 cycles, 35-40 cycles are around 7.5% of the full life.
Obtaining brand-new lithium batteries with the same model, respectively carrying out a normal speed aging experiment to acquire cycle data, carrying out 75 times of charge-discharge cycles according to the processes of constant current charging, constant voltage charging and constant current discharging, and obtaining the first 15% cycle data of the service life of the batteries. Since the full life of a battery under normal aging conditions is approximately 500 cycles, 75 cycles is about 15% of the full life.
S12: according to the voltage and current values of the battery in the constant-current charging stage in different aging experiments collected in S11, the capacity of the battery is calculated by using a coulomb counting method, numerical values of three variables of the voltage, the current and the capacity of the battery form a matrix and serve as input data of a convolutional neural network, and the input X form of the model is as follows:
    X = [ V_1  I_1  C_1
          V_2  I_2  C_2
          ⋮    ⋮    ⋮
          V_k  I_k  C_k ]    (1)
wherein: k is the number of sampling points in the constant-current charging stage, and V_i, I_i and C_i are respectively the voltage, current and capacity values at the i-th sampling point.
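The coulomb-counting construction of the input matrix X described above can be sketched in Python/NumPy as follows; the function name, sampling interval and sample values are illustrative assumptions, not part of the invention:

```python
import numpy as np

def build_input_matrix(voltage, current, dt):
    """Assemble the k x 3 input X from constant-current-charge samples.

    The capacity column is obtained by coulomb counting: the charge current
    is integrated over time (dt seconds per sample) and converted to Ah.
    """
    voltage = np.asarray(voltage, dtype=float)
    current = np.asarray(current, dtype=float)
    capacity = np.cumsum(current * dt) / 3600.0  # A*s -> Ah
    return np.stack([voltage, current, capacity], axis=1)  # rows: (V_i, I_i, C_i)

# Hypothetical 1 Hz samples from a 2 A constant-current charging stage
V = [3.60, 3.61, 3.62, 3.63]
I = [2.0, 2.0, 2.0, 2.0]
X = build_input_matrix(V, I, dt=1.0)
```

Each row of the returned array corresponds to one row (V_i, I_i, C_i) of the matrix X in equation (1).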
S2: and constructing a convolutional neural network model, wherein the whole network comprises a convolutional layer, a pooling layer and a full-connection layer, and a correction linear unit is selected as an activation function and connected with the output of each convolutional layer and the output of each pooling layer.
Forward propagation of convolutional layers is calculated as follows:
C_{i,j,k} = f( Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} Σ_{c=1}^{c_k} k_{a,b,c,k} · x_{i',j',c} + b_k )    (2)
i' = (i - 1) × h_s + a    (3)
j' = (j - 1) × w_s + b    (4)
where k is the index of the convolution kernel, the number of kernels equaling the number of channels of the output matrix; C_{i,j,k} is the value at row i, column j of the k-th channel of the output matrix; b_k is the bias value; h_k, w_k and c_k are respectively the height, width and number of channels of the convolution kernel; h_s and w_s are the strides of the convolution kernel in the height and width directions when scanning the input matrix; x_{i',j',c} is the value at row i', column j' of channel c of the input matrix; k_{a,b,c,k} is the value at row a, column b of channel c of the k-th convolution kernel; and f is the activation function;
the dimensions of the convolutional layer were calculated as follows:
w_out = ⌊(w_in - w_k + 2·w_p) / w_s⌋ + 1    (5)
h_out = ⌊(h_in - h_k + 2·h_p) / h_s⌋ + 1    (6)
wherein w_k and h_k are the width and height of the convolution kernel; w_s and h_s are its strides in the width and height directions when scanning the input matrix; w_in and w_out denote the widths of the input and output matrices; h_in and h_out denote their heights; and w_p and h_p denote the numbers of zero elements padded symmetrically on the left/right and top/bottom of the input matrix, which prevents the boundary information of the matrix from being lost as convolutions are applied repeatedly;
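The convolutional forward pass of equations (2)-(6) can be illustrated with a minimal NumPy sketch; the function names and test tensors are hypothetical, and the loop-based implementation favors clarity over speed:

```python
import numpy as np

def conv_out_size(n_in, n_kernel, pad, stride):
    # Eq. (5)/(6): output width/height of the convolutional layer
    return (n_in - n_kernel + 2 * pad) // stride + 1

def conv_forward(x, kernels, biases, stride=(1, 1)):
    """Forward pass of one convolutional layer, eq. (2)-(4), with ReLU.

    x:       (H, W, C) input feature map
    kernels: (hk, wk, C, K) stack of K kernels, indexed k_{a,b,c,k}
    biases:  (K,) one bias b_k per output channel
    """
    H, W, C = x.shape
    hk, wk, _, K = kernels.shape
    hs, ws = stride
    out = np.zeros((conv_out_size(H, hk, 0, hs), conv_out_size(W, wk, 0, ws), K))
    for k in range(K):
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = x[i * hs:i * hs + hk, j * ws:j * ws + wk, :]
                out[i, j, k] = np.sum(kernels[..., k] * patch) + biases[k]
    return np.maximum(out, 0.0)  # rectified linear unit f

x = np.ones((4, 3, 1))  # 4x3 single-channel input, e.g. a slice of X
y = conv_forward(x, np.ones((2, 2, 1, 1)), np.zeros(1))
```

With a 2x2 kernel of ones on a 4x3 input of ones, every output element is the sum of a 2x2 patch, and the output spatial size follows equations (5)-(6) with zero padding.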
the forward propagation for the maximum pooling layer is calculated as:
M_{i,j,k} = max_{0 ≤ σ_1 < e_1, 0 ≤ σ_2 < e_2} C_{ei+σ_1, ej+σ_2, k}    (7)
Equation (7) divides each channel of the feature map into i × j regions of size e_1 × e_2 and performs one max-pooling operation on the feature points of each e_1 × e_2 region; M_{i,j,k} is the value at row i, column j of the k-th channel of the pooling layer output; C_{ei+σ_1, ej+σ_2, k} is the value at row ei+σ_1, column ej+σ_2 of the k-th channel of the preceding convolutional layer; and (ei, ej) is the upper-left coordinate of the e_1 × e_2 region corresponding to row i, column j of the feature map;
the forward propagation of the fully connected layer is calculated as:
a_l = f(z_l) = f(W_l · a_{l-1} + b_l)    (8)
f(x) = max(0, x)    (9)
where f(x) is the activation function (the rectified linear unit), W_l and b_l are respectively the weight and bias value of the l-th layer, and a_{l-1} is the input to the l-th layer;
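The fully connected forward pass of equations (8)-(9) can be sketched as follows (illustrative names and weights, assuming the rectified linear unit as the activation f):

```python
import numpy as np

def relu(x):
    # Rectified linear unit, eq. (9): f(x) = max(0, x)
    return np.maximum(x, 0.0)

def fc_forward(a_prev, W, b):
    # Fully connected layer forward pass, eq. (8): a_l = f(W_l a_{l-1} + b_l)
    return relu(W @ a_prev + b)

W = np.array([[1.0, -1.0],
              [0.5, 0.5]])
b = np.array([0.0, -1.0])
a = fc_forward(np.array([2.0, 1.0]), W, b)
```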
the back propagation of the convolutional layer is:
δ_{l-1} = δ_l ∗ rot180(k_l) ⊙ f'(z_{l-1})    (10)
where rot180 denotes rotating the convolution kernel by 180 degrees, ∗ denotes convolution, ⊙ denotes element-wise multiplication, and δ_l is the differential of the objective function with respect to the output of the l-th layer;
the objective function J for establishing the neural network is:
J = (1/2n) Σ_{i=1}^{n} ( ŷ_i(x) - y_i(x) )² + (λ/2n) Σ_W W²    (12)
wherein ŷ_i(x) is the output of the output layer, y_i(x) is the true label value, n is the number of samples, and λ is the regularization parameter.
The internal network parameters θ_j, including the weight W and the bias b, are updated according to objective function (12) as follows:
Δθ_{j+1} = γ · Δθ_j - (α/m) Σ_{i=1}^{m} ∂( ŷ_i(x)_j - y_i(x)_j )² / ∂θ_j    (13)
θ_{j+1} = θ_j + Δθ_{j+1}    (14)
where m is the number of samples in a mini-batch, ŷ_i(x)_j is the output value for the i-th input of the mini-batch at the j-th iteration, y_i(x)_j is the corresponding true value, θ_j is the internal parameter at the j-th iteration, α is the learning rate, and γ is the momentum value.
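One common form of the mini-batch update with momentum, consistent with the roles of α (learning rate) and γ (momentum value) above, can be sketched as follows; the exact momentum formulation is not fully legible in the source, so this is a stated assumption:

```python
import numpy as np

def sgd_momentum_step(theta, grad, delta, alpha, gamma):
    """One mini-batch update in the spirit of eq. (13)-(14): the step delta
    keeps a fraction gamma of the previous step and subtracts the
    learning-rate-scaled mean mini-batch gradient."""
    delta = gamma * delta - alpha * grad
    return theta + delta, delta

theta = np.array([1.0, 2.0])
delta = np.zeros(2)
grad = np.array([0.5, -0.5])  # mean gradient of the loss over m samples
theta, delta = sgd_momentum_step(theta, grad, delta, alpha=0.1, gamma=0.9)
```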
S3: pre-training the model constructed in S2, wherein the specific method is S31-S32:
s31: dividing input data obtained by the brand-new lithium battery accelerated aging experiment in the S1 into a plurality of small batches of training samples, inputting the training samples into the neural network constructed in the S2 according to the batches, updating parameters through a random gradient descent method in the iterative learning process to obtain a first pre-training model, and storing parameter values of the first pre-training model, including the value k of a convolution kernel a,b,c,k Offset value b k Weight W of full connection layer l And bias b l
S32: dividing input data obtained by the normal speed aging experiment of the waste lithium battery in the S1 into a plurality of small batches of training samples, inputting the training samples into a neural network trained in the S31 in batches, performing iterative learning on the basis of model parameter values stored in the first pre-training model, further adjusting parameters by a random gradient descent method to obtain a second pre-training model, storing parameter values of the second pre-training model, and obtaining a value k 'of a new convolution kernel' a,b,c,k And a bias value of b' k Weight W of full connection layer l ' and offset b l ';
The forward propagation and parameter update at this time are as follows:
C'_{i,j,k} = f( Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} Σ_{c=1}^{c_k} k'_{a,b,c,k} · x'_{i',j',c} + b'_k )    (16)
a'_l = f(z'_l) = f(W'_l · a'_{l-1} + b'_l)    (17)
θ'_{j+1} = θ'_j + γ · Δθ'_j - (α/m) Σ_{i=1}^{m} ∂( ŷ_i(x)_j - y_i(x)_j )² / ∂θ'_j    (18)
wherein: the model internal parameters θ'_j comprise the convolution kernel values k'_{a,b,c,k}, the bias values b'_k, and the fully connected layer weights W'_l and biases b'_l; the prime (') on a parameter in equations (16)-(18) denotes its forward-propagation update value in the pre-training stage;
in the convolutional neural network model, a strategy is added in the network to prevent overfitting, a certain proportion of hidden neurons in the network are temporarily deleted randomly, and input and output neurons are kept unchanged; the input is propagated forward through the modified network, and then the obtained loss result is propagated backward through the modified network; after a batch of training samples completes the process, parameters are updated on the neurons which are not deleted according to a random gradient descent method.
In the convolutional neural network model, a piecewise constant attenuation strategy is adopted, and the learning rate is adjusted in the training process instead of being fixed, so that the model is rapidly converged.
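A piecewise-constant learning-rate schedule of the kind described can be sketched as follows; the boundary epochs and rates are hypothetical values, not those of the invention:

```python
def piecewise_constant_lr(epoch, boundaries, rates):
    """Piecewise-constant decay: hold rates[i] until epoch reaches
    boundaries[i], then drop to the next rate."""
    for boundary, rate in zip(boundaries, rates):
        if epoch < boundary:
            return rate
    return rates[-1]

# Hypothetical schedule: 1e-3 for the first 50 epochs, then 1e-4, then 1e-5
schedule = [piecewise_constant_lr(e, [50, 100], [1e-3, 1e-4, 1e-5])
            for e in (10, 75, 150)]
```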
S4: dividing input data obtained in the normal speed aging experiment of the brand-new lithium battery in the S1 into a plurality of small batches of training samples, inputting the training samples into the pre-training model obtained in the S3 according to the batches for iterative learning, and fixing convolution layer parameters of the pre-training model in the iterative learning process to be constant, namely keeping k' a,b,c,k And b' k Not changing, only the weight W of the full connection layer l ' and offset b l Update to W l "and b l ", saving the updated parameters to obtain the final estimation model;
the forward propagation and parameter updates at this point are as follows:
C''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k}·x''_{i',j',c} + b'_k )    (19)
a_l'' = f(z_l'') = f(W_l''·a_{l-1}'' + b_l'')    (20)
θ''_{j+1} = θ''_j − v''_{j+1},  v''_{j+1} = γ·v''_j + (α/m)·Σ_{i=1}^{m} ∂J(ŷ_i(x)_j, y_i(x)_j)/∂θ''_j    (21)
wherein: internal parameter of model theta' j Weight W including full connection layer l "and bias b l "; the superscript "" of a parameter in equations (19) - (21) indicates that the parameter is propagating forward to update values during the fine-tuning phase;
S5: performing a constant-current charging experiment on the lithium battery to be estimated to obtain its voltage, current and capacity test values; the matrix formed by these three quantities is used as the input X of the estimation model obtained in S4, and the parameters k'_{a,b,c,k}, b'_k, W_l'' and b_l'' saved in S4 are used in the forward-propagation calculation; the forward propagation at this time is as follows:
C'''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k}·x'''_{i',j',c} + b'_k )    (22)
a_l''' = f(z_l''') = f(W_l''·a_{l-1}''' + b_l'')    (23)
the superscript ''' on a parameter in equations (22)-(23) denotes the forward-propagation value of that parameter in the estimation stage;
finally, the estimation model outputs the state of health of the battery at that time.
The method is applied to a specific embodiment to show the specific implementation process and technical effect.
Examples
In this embodiment, the specific steps are as follows:
and (1) acquiring input data of the convolutional neural network.
a. Three brand-new SONY US18650VTC6 and three brand-new FST2000 lithium batteries are selected, and a complete overcharge/overdischarge aging experiment and 75 normal-speed aging cycles are carried out on each. One SONY US18650VTC6 and one FST2000 waste battery close to the end of service life are selected, and normal-speed aging experiments are carried out on each: the battery capacity is consumed through cycles of constant-current charging, constant-voltage charging and constant-current discharging until the state of health drops below 80%, taking 35-40 cycles. A total of 8 data sets are obtained. The upper and lower cut-off voltages for normal-speed aging are 4.2 V and 2.75 V respectively; the upper cut-off voltage for overcharge is 4.4 V, and the lower cut-off voltage for overdischarge is 2 V.
The voltage, current, and capacity changes during a single charge-discharge cycle are shown in fig. 2.
b. The voltage and current values of the battery during the constant-current charging stage of each charge-discharge cycle are collected, and the capacity is obtained by coulomb counting; the values of these three variables form a matrix used as the input X of the convolutional neural network, where X has dimension 4000 × 3.
X = [ V_1  I_1  C_1
      V_2  I_2  C_2
      ⋮    ⋮    ⋮
      V_K  I_K  C_K ]    (1)
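As an illustration of step b, a minimal numpy sketch of assembling X by coulomb counting is given below; the function name and the fixed sampling interval `dt` are assumptions for the example, not part of the patent.

```python
import numpy as np

def build_input_matrix(voltage, current, dt=1.0):
    """Assemble the K x 3 network input X = [V, I, C] for one
    constant-current charging stage. Capacity C_i is obtained by
    coulomb counting: the running integral of current over time,
    converted from ampere-seconds to ampere-hours."""
    voltage = np.asarray(voltage, dtype=float)
    current = np.asarray(current, dtype=float)
    capacity = np.cumsum(current) * dt / 3600.0  # Ah accumulated so far
    return np.stack([voltage, current, capacity], axis=1)

# three samples of a 2 A constant-current stage, 1 s apart
X = build_input_matrix([3.60, 3.61, 3.62], [2.0, 2.0, 2.0], dt=1.0)
print(X.shape)  # (3, 3)
```

In the embodiment the same construction would be applied to 4000 samples per cycle, giving the stated 4000 × 3 input.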
And (2) designing a convolutional neural network algorithm.
a. The whole network comprises convolution layers, pooling layers and fully connected layers. A rectified linear unit (ReLU) is selected as the activation function, connected to the output of each convolution and pooling layer.
Calculation of forward propagation of convolutional layers:
C_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k_{a,b,c,k}·x_{i',j',c} + b_k )    (2)
i' = (i − 1)·h_s + a    (3)
j' = (j − 1)·w_s + b    (4)
where k is the number of convolution kernels in the convolution layer, i.e. the number of channels of the output matrix; C_{i,j,k} is the value at row i, column j in the kth layer of the output matrix; b_k is the bias value; h_k, w_k and c_k are the height, width and number of channels of the convolution kernel respectively; w_s and h_s are the step sizes in the width and height directions when the convolution kernel scans the input matrix; x_{i',j',c} is the value at row i', column j' in the cth layer of the input matrix; k_{a,b,c,k} is the value at layer c, row a, column b of the kth convolution kernel; and f is the activation function.
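A direct numpy transcription of equations (2)-(4) (stride-only, no padding; zero-based indices, so i' = i·h_s + a) might read as follows; this is an illustrative sketch, not the patent's implementation:

```python
import numpy as np

def relu(x):                       # the rectified linear activation f
    return np.maximum(0.0, x)

def conv_forward(x, kernels, bias, h_s=1, w_s=1):
    """x: (H_in, W_in, c_k) input; kernels: (h_k, w_k, c_k, K);
    bias: (K,). Returns the (H_out, W_out, K) activated output."""
    h_k, w_k, c_k, K = kernels.shape
    H_out = (x.shape[0] - h_k) // h_s + 1
    W_out = (x.shape[1] - w_k) // w_s + 1
    out = np.empty((H_out, W_out, K))
    for i in range(H_out):
        for j in range(W_out):
            patch = x[i * h_s:i * h_s + h_k, j * w_s:j * w_s + w_k, :]
            # triple sum over a, b, c of k_{a,b,c,k} * x_{i',j',c}, plus b_k
            out[i, j, :] = np.tensordot(patch, kernels, axes=3) + bias
    return relu(out)
```

For instance, a (5, 3, 1) input convolved with two 2 × 2 × 1 kernels of all ones yields a (4, 2, 2) output whose entries are sums of 2 × 2 patches.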
The dimensions of the convolutional layer were calculated as follows:
w_out = (w_in − w_k + 2·w_p)/w_s + 1    (5)
h_out = (h_in − h_k + 2·h_p)/h_s + 1    (6)
where w_k and h_k are the width and height of the convolution kernel respectively; w_s and h_s are the step sizes in the width and height directions when the convolution kernel scans the input matrix; w_in, w_out, h_in and h_out are the widths and heights of the input and output matrices respectively; and w_p and h_p are the numbers of zero elements symmetrically padded on the left/right and top/bottom of the input matrix, which prevents boundary information of the matrix from being lost as convolutions proceed;
calculation of the forward propagation of the maximum pooling layer:
M_{i,j,k} = max_{1≤σ_1≤e_1, 1≤σ_2≤e_2} x_{e_1(i−1)+σ_1, e_2(j−1)+σ_2, k}    (7)
where M_{i,j,k} is the value at row i, column j of the kth layer of the pooling layer output, and x_{e_1(i−1)+σ_1, e_2(j−1)+σ_2, k} is the value at row e_1(i−1)+σ_1, column e_2(j−1)+σ_2 of the kth layer of the preceding convolution layer output; the formula above performs one max-pooling operation on each e_1 × e_2 region of the feature map.
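The max-pooling operation above amounts to taking the maximum over non-overlapping e_1 × e_2 regions; a numpy sketch (assuming the feature-map height and width divide evenly by e_1 and e_2) could be:

```python
import numpy as np

def max_pool(x, e1, e2):
    """x: (H, W, K) feature map; returns (H//e1, W//e2, K), taking the
    maximum over each non-overlapping e1 x e2 region of every channel."""
    H, W, K = x.shape
    blocks = x.reshape(H // e1, e1, W // e2, e2, K)
    return blocks.max(axis=(1, 3))
```

The reshape groups each e1 × e2 region into its own pair of axes, so a single axis-wise `max` implements the pooling without explicit loops.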
Calculation of forward propagation for fully connected layers:
a_l = f(z_l) = f(W_l·a_{l-1} + b_l)    (8)
f(x) = max(0, x)    (9)
where f(x) is the activation function, W_l and b_l are the weight and bias value of the lth layer respectively, a_l is the input of the lth layer, and for the convolution layers the corresponding convolution calculation is used.
The size of the convolution kernels in each convolution layer, the max pooling layers, and the dimension changes of the fully connected layers can be seen in Fig. 1. The input X has dimension 4000 × 3 × 1. First, 6 convolution kernels of size 5 × 2 × 1 perform the convolution operation with w_p = 1 and h_p = 0; the convolution layer output dimension calculated from equations (5) and (6) is 3996 × 4 × 6. Then a max pooling layer performs the max-pooling operation on each 4 × 2 region, giving an output dimension of 999 × 2 × 6. Next, 16 convolution kernels of size 5 × 1 × 6 perform a convolution giving 995 × 2 × 16, and max pooling over each 5 × 1 region gives an output dimension of 199 × 2 × 16, which is flattened into a column vector of dimension 6368. This then passes through 3 fully connected layers, whose dimensions change successively to 80 and 40, and finally the network outputs one neuron.
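The dimension chain above can be checked mechanically with equations (5) and (6); the helper below is an illustrative sketch:

```python
def conv_out(n_in, n_k, n_p, n_s):
    """Output size along one axis, per equations (5)-(6)."""
    return (n_in - n_k + 2 * n_p) // n_s + 1

h, w = 4000, 3                                     # input 4000 x 3 x 1
h, w = conv_out(h, 5, 0, 1), conv_out(w, 2, 1, 1)  # conv1: 6 kernels, 5x2x1
assert (h, w) == (3996, 4)
h, w = h // 4, w // 2                              # 4 x 2 max pooling
assert (h, w) == (999, 2)
h, w = conv_out(h, 5, 0, 1), conv_out(w, 1, 0, 1)  # conv2: 16 kernels, 5x1x6
assert (h, w) == (995, 2)
h, w = h // 5, w // 1                              # 5 x 1 max pooling
assert (h, w) == (199, 2)
print(h * w * 16)                                  # flattened length
```

Flattening the final 199 × 2 × 16 feature map gives 199 · 2 · 16 = 6368 inputs to the first fully connected layer.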
Back propagation of convolutional layer:
δ_{l−1} = (δ_l ∗ rot180(k_l)) ⊙ f'(z_{l−1})    (10)
where rot180 denotes rotating the convolution kernel by 180 degrees, ∗ denotes convolution, ⊙ denotes the element-wise product, and δ_l denotes the derivative of the objective function with respect to the output of the lth layer.
Establishing an objective function J:
J = (1/2n)·Σ_{i=1}^{n} (ŷ_i(x) − y_i(x))² + (λ/2n)·Σ W²    (12)
where W and b are the weights and bias values inside the network respectively, ŷ_i(x) is the output of the output layer, y_i(x) is the true label value, n is the number of samples, and λ is the regularization parameter; λ = 0.001 is taken.
The weights and offsets are updated as follows:
v_{j+1} = γ·v_j + (α/m)·Σ_{i=1}^{m} ∂J(ŷ_i(x)_j, y_i(x)_j)/∂θ_j    (13)
θ_{j+1} = θ_j − v_{j+1}    (14)
where m represents the number of samples contained in a small batch, ŷ_i(x)_j represents the output value of the ith input of the small batch at the jth iteration, y_i(x)_j is the corresponding true value, θ_j are the internal parameters at the jth iteration, v_j is the momentum term at the jth iteration, α is the learning rate, and γ is the momentum value. m = 64 is taken, the learning-rate setting can be seen in step c, and γ = 0.9.
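One reading of the momentum update rule above, with the mini-batch-averaged gradient supplied by back-propagation, is sketched below (illustrative, not the patent's code):

```python
import numpy as np

def momentum_step(theta, v, grad, alpha=1e-5, gamma=0.9):
    """theta: current parameters; v: momentum buffer; grad: gradient of
    the objective J averaged over the m samples of the mini-batch."""
    v_new = gamma * v + alpha * grad   # accumulate velocity
    theta_new = theta - v_new          # descend along it
    return theta_new, v_new
```

Each mini-batch contributes one such step; γ = 0.9 carries 90% of the previous step's velocity into the next update.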
The model evaluation indexes adopt accuracy and root mean square error:
Accuracy = (1 − (1/n)·Σ_{i=1}^{n} |ŷ_i(x) − y_i(x)| / y_i(x)) × 100%    (15)
RMSE = √( (1/n)·Σ_{i=1}^{n} (ŷ_i(x) − y_i(x))² )
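The two evaluation indexes can be computed as below; the relative-error form of the accuracy is an assumed reading of the formulas above, which are reproduced as images in the original:

```python
import numpy as np

def accuracy_pct(y_pred, y_true):
    """Mean relative accuracy in percent."""
    return (1.0 - np.mean(np.abs(y_pred - y_true) / y_true)) * 100.0

def rmse(y_pred, y_true):
    """Root mean square error."""
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```

For example, predictions of 0.9 and 1.1 against true values of 1.0 give an accuracy of 90% and an RMSE of 0.1.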
b. A strategy is added to the network to prevent overfitting: a proportion p of the hidden neurons in the network is temporarily and randomly deleted, while the input and output neurons are kept unchanged. The input is propagated forward through the modified network, and the resulting loss is then propagated backward through the modified network. After a small batch of training samples has completed this process, the parameters are updated on the non-deleted neurons by stochastic gradient descent. Hidden neurons stop computation with probability p = 0.5.
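The deletion strategy described in b is dropout; a minimal forward-pass sketch is given below (the inverted 1/(1−p) scaling is an implementation choice assumed here, not stated in the patent):

```python
import numpy as np

def dropout_forward(a, p=0.5, training=True, rng=None):
    """Randomly silence a proportion p of hidden activations during
    training; at test time the layer is left unchanged. Kept units are
    scaled by 1/(1-p) so expected activations match at test time."""
    if not training:
        return a
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(a.shape) >= p  # True for the kept neurons
    return a * mask / (1.0 - p)
```

Back-propagation then flows only through the kept neurons, matching the update rule described above.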
c. A piecewise-constant decay strategy is adopted so that the learning rate is adjusted during training rather than held fixed, allowing the model to converge quickly. The initial learning rate is 1 × 10⁻⁵, and it is decayed to 0.7 times its previous value after every fixed number of iterations.
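Piecewise-constant decay can be expressed as a simple schedule; the decay period of 1000 iterations below is an assumed value, since the text only says "every certain iteration period":

```python
def piecewise_constant_lr(iteration, base_lr=1e-5, decay=0.7, period=1000):
    """Learning rate after `iteration` steps: the base rate multiplied
    by `decay` once per completed `period` of iterations."""
    return base_lr * decay ** (iteration // period)
```

The rate thus stays constant within each period and drops by the factor 0.7 at each period boundary.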
And (3) pre-training the model.
a. In transfer learning, a base network is trained on a base data set and task; the learned features are then transferred to a second, target network, which is trained with the target data set and task and fine-tuned. Since battery aging under accelerated conditions has some similarity to aging at normal speed, a base model for normal-speed aging can be pre-trained using the accelerated aging data. The specific transfer-learning strategy of the invention can be seen in Fig. 3.
The accelerated aging experiments performed in step (1) yield cycle data of the batteries over their shortened lifetimes under overcharge and overdischarge conditions. All cycles are divided into a series of small batches of training samples, which are input into the network batch by batch; the parameters are updated by stochastic gradient descent to obtain a preliminary pre-training model. After iterative learning the parameters inside the network are fixed and their values saved, namely: the convolution kernel values k_{a,b,c,k}, the bias values b_k, and the fully connected layer weights W_l and biases b_l.
b. If only the accelerated aging data were used for pre-training and the resulting model were applied directly in the fine-tuning and testing of step (4), the model would perform poorly during testing on the last portion of cycles of the battery life, and the SOH estimate would be very inaccurate. This is probably because battery aging near the end of life differs markedly between accelerated and normal conditions, so the learned features do not generalize well. Therefore, after the pre-training phase using the accelerated aging data, a second pre-training phase continues with the normal-speed aging data of some waste batteries.
Similarly, the normal-speed aging experiments on the waste batteries performed in step (1) yield data of the last portion of cycles in their life. These cycles are divided into a series of small batches of training samples and input into the network batch by batch; the parameters are updated by stochastic gradient descent, further adjusting the original preliminary pre-training model to obtain a new pre-training model. Specifically, the parameters saved in the previous stage are used to initialize the new model; after iterative learning the parameters inside the network are again fixed and their values saved, namely: k'_{a,b,c,k}, b'_k, W_l' and b_l'.
Forward propagation and parameter updating at this point are as follows, where θ'_j comprises the convolution kernel values k'_{a,b,c,k}, the bias values b'_k, and the fully connected layer weights W_l' and biases b_l':
C'_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k}·x_{i',j',c} + b'_k )    (16)
a_l' = f(z_l') = f(W_l'·a_{l-1}' + b_l')    (17)
θ'_{j+1} = θ'_j − v'_{j+1},  v'_{j+1} = γ·v'_j + (α/m)·Σ_{i=1}^{m} ∂J(ŷ_i(x)_j, y_i(x)_j)/∂θ'_j    (18)
c. Using the data obtained in step (1), 4 groups of experiments are designed; each group goes through the preliminary pre-training stage and the further pre-training stage, and the data used at each stage are shown in Table 1.
Table 1: data used at each stage of 4 experiments
(The contents of Table 1 are reproduced as images in the original document.)
And (4) fine adjustment and online estimation of the model.
a. The normal-speed aging experiments in step (1) yield cycle data for the first 15% of the life of the brand-new batteries. These cycles are divided into a series of small batches of training samples and input into the network batch by batch; the parameters are adjusted on the basis of the pre-training model obtained in step (3), and the fine-tuned model is used to estimate the state of health of the battery online at any moment. In this case the fine-tuning stage does not update all parameters as the second pre-training stage did; instead the convolution layer parameters are held fixed, i.e. k'_{a,b,c,k} and b'_k are kept unchanged, and only the fully connected layer weights W_l' and biases b_l' are updated, to W_l'' and b_l''. The values of these parameters are saved as the model for the final test. The data used in the fine-tuning stage are shown in Table 1.
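The selective update of the fine-tuning stage (convolution parameters frozen, fully connected parameters trained) can be sketched as below; the dictionary layout and parameter names are hypothetical:

```python
import numpy as np

def finetune_step(params, grads, alpha=1e-5, frozen=("k_conv", "b_conv")):
    """Plain-SGD fine-tuning step that leaves the frozen (convolution)
    parameters untouched and updates only the remaining (fully
    connected) weights and biases."""
    return {name: value if name in frozen else value - alpha * grads[name]
            for name, value in params.items()}

params = {"k_conv": np.ones(3), "b_conv": np.zeros(1),
          "W_fc": np.ones(2), "b_fc": np.zeros(2)}
grads = {name: np.ones_like(value) for name, value in params.items()}
new = finetune_step(params, grads, alpha=0.1)
```

Freezing the early layers preserves the general aging features learned during pre-training while the task-specific fully connected head adapts to the normal-speed data.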
The forward propagation and parameter updating at this point are as follows, where θ''_j comprises the fully connected layer weights W_l'' and biases b_l'':
C''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k}·x''_{i',j',c} + b'_k )    (19)
a_l'' = f(z_l'') = f(W_l''·a_{l-1}'' + b_l'')    (20)
θ''_{j+1} = θ''_j − v''_{j+1},  v''_{j+1} = γ·v''_j + (α/m)·Σ_{i=1}^{m} ∂J(ŷ_i(x)_j, y_i(x)_j)/∂θ''_j    (21)
b. For online estimation, a constant-current charging experiment is performed on the battery at a given moment, and the matrix formed by its voltage, current and capacity is used as the input X of the model. The parameters saved in the fine-tuning stage, k'_{a,b,c,k}, b'_k, W_l'' and b_l'', are used in the forward-propagation calculation; after the calculations of equations (2)-(9), the state of health of the battery at that moment is output. No back-propagation process is needed during testing.
The forward propagation at this time is as follows:
C'''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k}·x'''_{i',j',c} + b'_k )    (22)
a_l''' = f(z_l''') = f(W_l''·a_{l-1}''' + b_l'')    (23)
Fig. 4 and Fig. 5 show partial experimental results, corresponding to experiment 2 and experiment 4 in Table 1 respectively; the triangles represent the true SOH and the circles the online estimates. The results show that the accuracy of the invention reaches 99.56% and 99.01% respectively, with root mean square errors of 0.435% and 1.120%.
The above-described embodiments are merely preferred embodiments of the present invention, which should not be construed as limiting the invention. Various changes and modifications may be made by one of ordinary skill in the pertinent art without departing from the spirit and scope of the present invention. Therefore, the technical scheme obtained by adopting the mode of equivalent replacement or equivalent transformation is within the protection scope of the invention.

Claims (8)

1. A lithium battery health state estimation method based on a convolutional neural network and transfer learning is characterized by comprising the following steps:
s1: the method for acquiring the input data of the convolutional neural network comprises the following steps:
s11: selecting a plurality of brand-new lithium batteries with different models, respectively carrying out an accelerated aging experiment to acquire cycle data, and continuously consuming the battery capacity according to the cycles of constant-current charging, constant-voltage charging and constant-current discharging until the health state is reduced to below 80%;
meanwhile, waste lithium batteries with the same type and close to the end of service life are obtained, normal speed aging experiments are respectively carried out to collect cycle data, and the battery capacity is consumed according to the processes of constant current charging, constant voltage charging and constant current discharging until the health state is reduced to be below 80%;
acquiring brand new lithium batteries with the same model, respectively carrying out a normal speed aging experiment to acquire cycle data, carrying out charge-discharge cycle according to the processes of constant current charging, constant voltage charging and constant current discharging, and acquiring the first 15% cycle data of the service life of the batteries;
s12: calculating to obtain the battery capacity according to the voltage and current values of the battery in the constant current charging stage in different aging experiments collected in S11, and forming a matrix by using numerical values of three variables of the voltage, the current and the battery capacity as input data of a convolutional neural network;
s2: constructing a convolutional neural network model, wherein the whole network comprises convolution layers, pooling layers and fully connected layers, and a rectified linear unit is selected as the activation function connected to the output of each convolution layer and each pooling layer;
s3: pre-training the model constructed in S2, wherein the specific method is S31-S32:
s31: dividing the input data obtained from the accelerated aging experiments on the brand-new lithium batteries in S1 into a number of small batches of training samples, inputting them batch by batch into the neural network constructed in S2, updating the parameters by stochastic gradient descent during iterative learning to obtain a first pre-training model, and saving the parameter values of the first pre-training model, including the convolution kernel values k_{a,b,c,k}, the bias values b_k, and the fully connected layer weights W_l and biases b_l;
S32: dividing the input data obtained from the normal-speed aging experiment on the waste lithium batteries in S1 into a number of small batches of training samples, inputting them batch by batch into the neural network trained in S31, performing iterative learning starting from the model parameter values saved for the first pre-training model, and further adjusting the parameters by stochastic gradient descent to obtain a second pre-training model; the parameter values of the second pre-training model are saved, including the new convolution kernel values k'_{a,b,c,k}, the bias values b'_k, and the fully connected layer weights W_l' and biases b_l';
The forward propagation and parameter update at this time are as follows:
C'_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k}·x_{i',j',c} + b'_k )    (16)
a_l' = f(z_l') = f(W_l'·a_{l-1}' + b_l')    (17)
θ'_{j+1} = θ'_j − v'_{j+1},  v'_{j+1} = γ·v'_j + (α/m)·Σ_{i=1}^{m} ∂J(ŷ_i(x)_j, y_i(x)_j)/∂θ'_j    (18)
wherein: k in equation (16) is the number of convolution kernels in the convolutional layer, i.e., the number of channels in the output matrix, C i,j,k Is the value of the ith row and jth column, x, in the output matrix i',j',c Is the value of the ith 'row and the jth' column in the c-th layer of the input matrix, b k Is an offset value, h k 、w k And c k Respectively the height, width and channel number of the convolution kernel, and f is an activation function; in the formula (17), f (x) is an activation function, W l And b l Respectively, the weight and offset value of the l-th layer, a l Is an input to the l-th layer;
model internal parameter θ' j Value k 'comprising convolution kernel' a,b,c,k Offset value b' k Weight W of full connection layer l ' and offset b l '; the superscript "'" of the parameter in equations (16) - (18) indicates the forward propagation update value of the parameter in the pre-training phase; λ is a regularization parameter, α is a learning rate, and γ is a momentum value;
s4: dividing the input data obtained from the normal-speed aging experiment on the brand-new lithium batteries in S1 into a number of small batches of training samples, inputting them batch by batch into the pre-training model obtained in S3 for iterative learning; during iterative learning the convolution layer parameters of the pre-training model are held fixed, i.e. k'_{a,b,c,k} and b'_k are kept unchanged, and only the fully connected layer weights W_l' and biases b_l' are updated, to W_l'' and b_l''; the updated parameters are saved to obtain the final estimation model;
the forward propagation and parameter update at this time are as follows:
C''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k}·x''_{i',j',c} + b'_k )    (19)
a_l'' = f(z_l'') = f(W_l''·a_{l-1}'' + b_l'')    (20)
θ''_{j+1} = θ''_j − v''_{j+1},  v''_{j+1} = γ·v''_j + (α/m)·Σ_{i=1}^{m} ∂J(ŷ_i(x)_j, y_i(x)_j)/∂θ''_j    (21)
wherein: internal parameter of model theta' j Weight W including full connection layer l And bias b l "; the superscript "" of a parameter in equations (19) - (21) indicates that the parameter is propagating forward to update values during the fine-tuning phase;
s5: performing a constant-current charging experiment on the lithium battery to be estimated to obtain its voltage, current and capacity test values; the matrix formed by these three quantities is used as the input X of the estimation model obtained in S4, and the parameters k'_{a,b,c,k}, b'_k, W_l'' and b_l'' saved in S4 are used in the forward-propagation calculation; the forward propagation at this time is as follows:
C'''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k}·x'''_{i',j',c} + b'_k )    (22)
a_l''' = f(z_l''') = f(W_l''·a_{l-1}''' + b_l'')    (23)
the superscript ''' on a parameter in equations (22)-(23) denotes the forward-propagation value of that parameter in the estimation stage;
finally, the estimation model outputs the state of health of the battery at that time.
2. The method for estimating the state of health of the lithium battery based on the convolutional neural network and the transfer learning as claimed in claim 1, wherein in S1, the accelerated aging test refers to overcharging and overdischarging the battery by setting a higher upper constant-current charging voltage limit and a lower constant-current discharging voltage limit.
3. The method for estimating the health state of a lithium battery based on a convolutional neural network and transfer learning as claimed in claim 1, wherein in S1, the normal speed aging test of the waste lithium battery is performed for 35-40 charge and discharge cycles.
4. The method for estimating the health state of a lithium battery based on a convolutional neural network and transfer learning as claimed in claim 1, wherein in S1, 75 charge and discharge cycles are performed in a normal speed aging experiment of a brand new lithium battery.
5. The lithium battery health state estimation method based on the convolutional neural network and the transfer learning of claim 1, wherein in S1, data collected by different aging experiments are respectively constructed as model inputs X:
X = [ V_1  I_1  C_1
      V_2  I_2  C_2
      ⋮    ⋮    ⋮
      V_K  I_K  C_K ]    (1)
wherein: K is the number of sampling points in the constant-current charging stage, and V_i, I_i and C_i are the voltage, current and capacity at the ith sampling point respectively.
6. The lithium battery state of health estimation method based on convolutional neural network and transfer learning of claim 1, wherein in the convolutional neural network model, the calculation of forward propagation of convolutional layer is as follows:
C_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k_{a,b,c,k}·x_{i',j',c} + b_k )    (2)
i' = (i − 1)·h_s + a    (3)
j' = (j − 1)·w_s + b    (4)
where k is the number of convolution kernels in the convolution layer, i.e. the number of channels of the output matrix; C_{i,j,k} is the value at row i, column j in the kth layer of the output matrix; b_k is the bias value; h_k, w_k and c_k are the height, width and number of channels of the convolution kernel respectively; w_s and h_s are the step sizes in the width and height directions when the convolution kernel scans the input matrix; x_{i',j',c} is the value at row i', column j' in the cth layer of the input matrix; k_{a,b,c,k} is the value at layer c, row a, column b of the kth convolution kernel; and f is the activation function;
the dimensions of the convolutional layer were calculated as follows:
w_out = (w_in − w_k + 2·w_p)/w_s + 1    (5)
h_out = (h_in − h_k + 2·h_p)/h_s + 1    (6)
where w_k and h_k are the width and height of the convolution kernel respectively; w_s and h_s are the step sizes in the width and height directions when the convolution kernel scans the input matrix; w_in and w_out are the widths of the input and output matrices respectively; h_in and h_out are the heights of the input and output matrices respectively; and w_p and h_p are the numbers of zero elements symmetrically padded on the left/right and top/bottom of the input matrix, which prevents boundary information of the matrix from being lost as convolutions proceed;
the forward propagation for the maximum pooling layer is calculated as:
M_{i,j,k} = max_{1≤σ_1≤e_1, 1≤σ_2≤e_2} x_{e_1(i−1)+σ_1, e_2(j−1)+σ_2, k}    (7)
the above equation (7) indicates that the feature map is divided into i × j regions of size e_1 × e_2, and one max-pooling operation is performed on the feature points of each e_1 × e_2 region; wherein M_{i,j,k} is the value at row i, column j of the kth layer of the pooling layer output, x_{e_1(i−1)+σ_1, e_2(j−1)+σ_2, k} is the value at row e_1(i−1)+σ_1, column e_2(j−1)+σ_2 of the kth layer of the preceding convolution layer output, and (e_1(i−1)+1, e_2(j−1)+1) are the coordinates of the upper-left corner of the e_1 × e_2 region corresponding to row i, column j of the feature map;
the forward propagation of the fully connected layer is calculated as:
a_l = f(z_l) = f(W_l·a_{l-1} + b_l)    (8)
f(x) = max(0, x)    (9)
where f(x) is the activation function, W_l and b_l are the weight and bias value of the lth layer respectively, and a_l is the input of the lth layer;
the back propagation of the convolutional layer is:
δ_{l−1} = (δ_l ∗ rot180(k_l)) ⊙ f'(z_{l−1})    (10)
where rot180 denotes rotating the convolution kernel by 180 degrees, ∗ denotes convolution, ⊙ denotes the element-wise product, and δ_l denotes the derivative of the objective function with respect to the output of the lth layer;
the objective function J for establishing the neural network is:
J = (1/2n)·Σ_{i=1}^{n} (ŷ_i(x) − y_i(x))² + (λ/2n)·Σ W²    (12)
wherein ŷ_i(x) is the output of the output layer, y_i(x) is the true label value, n is the number of samples, and λ is the regularization parameter; W and b are the weights and bias values inside the network respectively;
updating the network internal parameters θ_j, which include the weights W and biases b, according to the objective function (12):
v_{j+1} = γ·v_j + (α/m)·Σ_{i=1}^{m} ∂J(ŷ_i(x)_j, y_i(x)_j)/∂θ_j    (13)
θ_{j+1} = θ_j − v_{j+1}    (14)
where m represents the number of samples contained in a small batch, ŷ_i(x)_j represents the output value of the ith input of the small batch at the jth iteration, y_i(x)_j is the corresponding true value, θ_j are the internal parameters at the jth iteration, v_j is the momentum term at the jth iteration, α is the learning rate, and γ is the momentum value.
7. The lithium battery health state estimation method based on the convolutional neural network and transfer learning of claim 1, wherein in the convolutional neural network model a strategy is added to the network to prevent overfitting: a certain proportion of the hidden neurons in the network is temporarily and randomly deleted, while the input and output neurons are kept unchanged; the input is propagated forward through the modified network, and the resulting loss is then propagated backward through the modified network; after a batch of training samples completes this process, the parameters are updated on the non-deleted neurons by stochastic gradient descent.
8. The lithium battery health state estimation method based on the convolutional neural network and transfer learning of claim 1, wherein a piecewise-constant decay strategy is adopted in the convolutional neural network model: the learning rate is adjusted during training rather than held fixed, so that the model converges quickly.
CN202010475482.1A 2020-05-29 2020-05-29 Lithium battery health state estimation method based on convolutional neural network and transfer learning Active CN111638465B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010475482.1A CN111638465B (en) 2020-05-29 2020-05-29 Lithium battery health state estimation method based on convolutional neural network and transfer learning


Publications (2)

Publication Number Publication Date
CN111638465A CN111638465A (en) 2020-09-08
CN111638465B true CN111638465B (en) 2023-02-28

Family

ID=72332399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010475482.1A Active CN111638465B (en) 2020-05-29 2020-05-29 Lithium battery health state estimation method based on convolutional neural network and transfer learning

Country Status (1)

Country Link
CN (1) CN111638465B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112345952A (en) * 2020-09-23 2021-02-09 上海电享信息科技有限公司 Power battery aging degree judging method
CN112444748A (en) * 2020-10-12 2021-03-05 武汉蔚来能源有限公司 Battery abnormality detection method, battery abnormality detection device, electronic apparatus, and storage medium
CN112231975A (en) * 2020-10-13 2021-01-15 中国铁路上海局集团有限公司南京供电段 Data modeling method and system based on reliability analysis of railway power supply equipment
CN112083337B (en) * 2020-10-22 2023-06-16 重庆大学 Predictive operation and maintenance-oriented power battery health prediction method
CN112666480B (en) * 2020-12-02 2023-04-28 西安交通大学 Battery life prediction method based on characteristic attention of charging process
CN112666479B (en) * 2020-12-02 2023-05-16 西安交通大学 Battery life prediction method based on charge cycle fusion
CN112684346B (en) * 2020-12-10 2023-06-20 西安理工大学 Lithium battery health state estimation method based on genetic convolutional neural network
CN112834945A (en) * 2020-12-31 2021-05-25 东软睿驰汽车技术(沈阳)有限公司 Evaluation model establishing method, battery health state evaluation method and related product
CN112798960B (en) * 2021-01-14 2022-06-24 重庆大学 Battery pack residual life prediction method based on migration deep learning
CN113406496B (en) * 2021-05-26 2023-02-28 广州市香港科大霍英东研究院 Battery capacity prediction method, system, device and medium based on model migration
CN113612269B (en) * 2021-07-02 2023-06-27 国网山东省电力公司莱芜供电公司 Method and system for controlling charge and discharge of battery monomer of lead-acid storage battery energy storage station
CN113536676B (en) * 2021-07-15 2022-09-27 重庆邮电大学 Lithium battery health condition monitoring method based on feature transfer learning
JP7269999B2 (en) * 2021-07-26 2023-05-09 本田技研工業株式会社 Battery model construction method and battery deterioration prediction device
CN113740736B (en) * 2021-08-31 2024-04-02 哈尔滨工业大学 Electric vehicle lithium battery SOH estimation method based on deep network self-adaption
CN113777499A (en) * 2021-09-24 2021-12-10 山东浪潮科学研究院有限公司 Lithium battery capacity estimation method based on convolutional neural network
CN113721151B (en) * 2021-11-03 2022-02-08 杭州宇谷科技有限公司 Battery capacity estimation model and method based on double-tower deep learning network
CN114578250B (en) * 2022-02-28 2022-09-02 广东工业大学 Lithium battery SOH estimation method based on double-triangular structure matrix
CN115015760A (en) * 2022-05-10 2022-09-06 香港中文大学(深圳) Lithium battery health state evaluation method based on neural network and migration integrated learning
CN114720882B (en) * 2022-05-20 2023-02-17 东南大学溧阳研究院 Reconstruction method of maximum capacity fading curve of lithium ion battery
CN115184805A (en) * 2022-06-21 2022-10-14 东莞新能安科技有限公司 Battery health state acquisition method, device, equipment and computer program product
CN117054892B (en) * 2023-10-11 2024-02-27 特变电工西安电气科技有限公司 Evaluation method, device and management method for battery state of energy storage power station

Citations (4)

Publication number Priority date Publication date Assignee Title
CN109034045A (en) * 2018-07-20 2018-12-18 中南大学 Automatic white blood cell identification method based on convolutional neural networks
CN109523013A (en) * 2018-10-15 2019-03-26 西北大学 Airborne particulate pollution level estimation method based on a shallow convolutional neural network
CN109784480A (en) * 2019-01-17 2019-05-21 武汉大学 Power system state estimation method based on convolutional neural networks
CN109918752A (en) * 2019-02-26 2019-06-21 华南理工大学 Mechanical fault diagnosis method, device and medium based on transfer convolutional neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11637331B2 (en) * 2017-11-20 2023-04-25 The Trustees Of Columbia University In The City Of New York Neural-network state-of-charge and state of health estimation

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Danhua Zhou, Zhanying Li, Jiali Zhu, Haichuan Zhang, Lin Hou. State of Health Monitoring and Remaining Useful Life Prediction of Lithium-Ion Batteries Based on Temporal Convolutional Network. IEEE Access, 2020, Vol. 8. *
Microstrong0305. A Survey of Convolutional Neural Networks (CNN). CSDN Blog, 2018. *
Sheng Shen, Mohammadkazem Sadoughi, Xiangyi Chen, Mingyi Hong, Chao Hu. A deep learning method for online capacity estimation of lithium-ion batteries. Journal of Energy Storage, 2019. *
Yohwan Choi, Seunghyoung Ryu, Kyungnam Park, Hongseok Kim. Machine Learning-Based Lithium-Ion Battery Capacity Estimation Exploiting Multi-Channel Charging Profiles. IEEE Access, 2019, Vol. 7. *
蜜丝特湖. The Pooling Layer and Its Formula Derivation. CSDN Blog, 2018. *

Also Published As

Publication number Publication date
CN111638465A (en) 2020-09-08

Similar Documents

Publication Publication Date Title
CN111638465B (en) Lithium battery health state estimation method based on convolutional neural network and transfer learning
Yang et al. State-of-charge estimation of lithium-ion batteries based on gated recurrent neural network
CN110888058B (en) Algorithm for joint estimation of power battery SOC and SOH
CN110068774A (en) Estimation method, device and the storage medium of lithium battery health status
CN113917337A (en) Battery health state estimation method based on charging data and LSTM neural network
CN110146822A (en) Online estimation method for vehicle power battery capacity based on the constant-current charging process
CN113702843B (en) Lithium battery parameter identification and SOC estimation method based on the coyote optimization algorithm
CN108537337A (en) Lithium-ion battery SOC prediction method based on an optimized deep belief network
Li et al. CNN and transfer learning based online SOH estimation for lithium-ion battery
CN110704790A (en) Lithium battery SOC estimation method based on IFA-EKF
CN112163372B (en) SOC estimation method of power battery
CN110703113A (en) Power battery pack SOC estimation method based on Gaussian process regression
CN112782594B (en) Method for estimating SOC (state of charge) of lithium battery by data-driven algorithm considering internal resistance
CN114726045B (en) Lithium battery SOH estimation method based on IPEA-LSTM model
CN115808633A (en) POA-SVR-based method for monitoring state of energy storage battery in island environment
Hasan et al. Performance comparison of machine learning methods with distinct features to estimate battery SOC
CN113406503A (en) Lithium battery SOH online estimation method based on deep neural network
CN114167295B (en) Lithium ion battery SOC estimation method and system based on multi-algorithm fusion
CN113917336A (en) Lithium ion battery health state prediction method based on segment charging time and GRU
CN115963407A (en) Lithium battery SOC estimation method based on ICGWO-optimized ELM
CN116643196A (en) Battery state-of-health estimation method integrating mechanism-based and data-driven models
CN113534938B (en) Method for estimating the remaining battery charge of a notebook computer based on an improved Elman neural network
CN110232432B (en) Lithium battery pack SOC prediction method based on artificial life model
CN114779103A (en) Lithium ion battery SOC estimation method based on time-lag convolutional neural network
CN112114254B (en) Power battery open-circuit voltage model fusion method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant