CN114791571B - Lithium ion battery life prediction method and device based on improved CSO-LSTM network

Info

Publication number
CN114791571B
CN114791571B (application CN202210406592.1A)
Authority
CN
China
Prior art keywords
data
lithium ion battery
value
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210406592.1A
Other languages
Chinese (zh)
Other versions
CN114791571A (en)
Inventor
周欣欣
高志蕊
李心月
王相雨
黄宇宁
李茂源
薛青常
孟炫宇
郭月晨
衣雪婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Electric Power University
Original Assignee
Northeast Dianli University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Dianli University filed Critical Northeast Dianli University
Priority to CN202210406592.1A
Publication of CN114791571A
Application granted
Publication of CN114791571B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G01R31/392 Determining battery ageing or deterioration, e.g. state of health
    • G01R31/367 Software therefor, e.g. for battery testing using modelling or look-up tables
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods
    • Y02E60/10 Energy storage using batteries

Abstract

The invention provides a lithium ion battery life prediction method and device based on an improved CSO-LSTM network, comprising the following steps: (1) acquiring lithium ion battery data; (2) preprocessing the battery data by ensemble empirical mode decomposition; (3) normalizing the preprocessed data and dividing it into a training set and a test set; (4) selecting the optimal LSTM hyperparameters with the improved CSO and establishing a lithium ion battery life prediction model based on the optimized LSTM; (5) training the model on the training set to obtain a lithium ion battery life prediction model based on the improved CSO-LSTM; (6) inputting the test set into the trained model to obtain the prediction result. The method and device effectively improve the accuracy of lithium ion battery life prediction, which is of practical engineering significance for improving the stability and safety of lithium ion batteries.

Description

Lithium ion battery life prediction method and device based on improved CSO-LSTM network
Technical Field
The invention relates to the technical field of new energy, in particular to a lithium ion battery service life prediction method and device based on an improved CSO-LSTM network.
Background
With the rapid development of new energy technology, the construction of photovoltaic charging stations is accelerating. The energy storage system is the core of a photovoltaic charging station, and a good energy storage battery ensures the station's normal operation and monitoring. Lithium ion batteries offer long service life, a low self-discharge rate, no memory effect, a wide operating temperature range, zero pollution and good safety, and can be deployed in different regions with a certain adaptability; they are therefore an ideal energy storage battery for photovoltaic charging stations.
However, the lithium ion batteries in a photovoltaic charging station age over time. If an aged battery is not replaced in time, equipment and instruments may be damaged or fail to work normally, and insufficient power distribution capacity at the charging station can cause serious safety accidents. Accurately predicting the service life of the lithium ion batteries of a photovoltaic charging station therefore improves their reliability and safety, and is of practical importance to the economic operation and safety of the station.
Disclosure of Invention
The invention provides a lithium ion battery life prediction method and device based on an improved CSO-LSTM network. The LSTM network is optimized with an improved Chicken Swarm Optimization (CSO) algorithm, so that the service life of the lithium ion battery is predicted accurately, the threat the battery poses to personal safety and to the energy storage system is reduced, and the reliability and safety of the lithium ion battery are improved.
In order to achieve the above object, the technical solution provided by the present invention comprises the following steps:
step 1000: acquiring lithium ion battery data to form a first data set;
step 2000: performing data preprocessing on the lithium ion battery data of the first data set by ensemble empirical mode decomposition (EEMD) to form a second data set, further comprising steps 2100 to 2600:
step 2100: initializing parameters, and setting the number N of noise additions and the intensity α of the Gaussian white noise;
step 2200: according to formula (1), adding white noise n_i(t) obeying the Gaussian distribution N(0, δ²) to each data parameter sequence x(t) in the first data set of step 1000 to form a new data sequence x_i(t); according to formula (2), computing the standard deviation δ of the original data sequence, which is used as the standard deviation of the Gaussian white noise;
x_i(t) = x(t) + n_i(t)   (1)
δ = std[x(t)]   (2)
wherein x_i(t) is the new data sequence formed by adding Gaussian white noise, n_i(t) is the Gaussian white noise added at the i-th time, x(t) is the original data sequence, i = 1, 2, ..., N, N is the number of noise additions, and δ is the standard deviation used for the Gaussian distribution;
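As an illustration of step 2200, the sketch below adds one realization of Gaussian white noise to a parameter sequence; the function name add_white_noise and the use of α as a multiplier on the standard deviation of x(t) are illustrative assumptions, not part of the patent text.

```python
import numpy as np

def add_white_noise(x: np.ndarray, alpha: float, rng: np.random.Generator) -> np.ndarray:
    """Formulas (1)-(2): x_i(t) = x(t) + n_i(t), with noise std delta = alpha * std(x)."""
    delta = np.std(x)                                    # formula (2)
    noise = rng.normal(0.0, alpha * delta, size=x.shape)
    return x + noise                                     # formula (1)

rng = np.random.default_rng(0)
x = np.linspace(1.0, 0.7, 200)                           # e.g. a fading capacity sequence
x_noisy = add_white_noise(x, alpha=0.2, rng=rng)
```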
step 2300: decomposing the new data sequence x_i(t) by the empirical mode method, further comprising steps 2310 to 2340:
step 2310: regarding the variation trend of each parameter in the new data sequence as a signal, denoted H(t); connecting all maximum points of the data variation curve into a line to obtain the upper envelope, denoted A_1(t), and connecting all minimum points into a line to obtain the lower envelope B_1(t); computing the mean curve of the upper and lower envelopes A_1(t) and B_1(t) by cubic spline interpolation according to formula (3), denoted S_1(t);
S_1(t) = (A_1(t) + B_1(t)) / 2   (3)
wherein A_1(t) is the upper envelope formed by all maximum points of the new data sequence, B_1(t) is the lower envelope formed by all minimum points of the new data sequence, and S_1(t) is the mean curve of the upper and lower envelopes;
step 2320: according to formula (4), subtracting the mean curve S_1(t) from the variation trend H(t) of the new data sequence to obtain M_1(t); judging whether M_1(t) satisfies the requirements of an intrinsic mode component: if so, saving it as IMF_1(t); otherwise, returning to step 2310 until the requirements of the intrinsic mode component are met;
M_1(t) = H(t) - S_1(t)   (4)
wherein H(t) is the variation trend of each parameter in the new data sequence, S_1(t) is the mean curve of the upper and lower envelopes, and M_1(t) is the difference between the variation trend H(t) and the mean curve S_1(t);
step 2330: subtracting the intrinsic mode component IMF_1(t) of step 2320 from the data trend H(t) according to formula (5) to obtain a new H_1(t); taking H_1(t) as the new trend, jumping to step 2310 and executing in sequence until a second component IMF_2(t) satisfying the requirements of an intrinsic mode component is obtained, then subtracting it from H_1(t) to obtain H_2(t), as shown in formula (6);
H_1(t) = H(t) - IMF_1(t)   (5)
H_2(t) = H_1(t) - IMF_2(t)   (6)
wherein H(t) is the variation trend of each parameter in the lithium ion battery data set, H_1(t) is the new data trend obtained by subtracting IMF_1(t) from the previous trend H(t), IMF_1(t) is the first intrinsic mode component, IMF_2(t) is the second intrinsic mode component, and H_2(t) is the new trend obtained by subtracting the intrinsic mode component IMF_2(t) from the previous trend H_1(t);
step 2340: by repeating steps 2310 to 2330, n components IMF_n(t) and a residual C(t) are finally obtained, so the final data trend can be expressed as n components IMF_i(t) plus a residual term C(t), as shown in formula (7); the components IMF_i(t) are sorted by frequency;
H(t) = Σ_{i=1}^{n} IMF_i(t) + C(t)   (7)
wherein IMF_i(t) is the i-th intrinsic mode component, C(t) is the residual term, H(t) is the original data variation trend, n is the number of empirical mode runs, and H_n(t) is the new data trend obtained at each run;
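The sifting loop of steps 2310 to 2340 can be sketched as follows. This is a deliberately minimal EMD: a fixed sifting count stands in for a full intrinsic-mode-component test, and the helper names envelope_mean and emd are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def envelope_mean(h: np.ndarray) -> np.ndarray:
    """Formula (3): mean of the cubic-spline upper and lower envelopes."""
    t = np.arange(len(h))
    hi = argrelextrema(h, np.greater)[0]
    lo = argrelextrema(h, np.less)[0]
    if len(hi) < 2 or len(lo) < 2:       # too few extrema to build envelopes
        return np.zeros_like(h)
    return (CubicSpline(hi, h[hi])(t) + CubicSpline(lo, h[lo])(t)) / 2.0

def emd(x: np.ndarray, n_imfs: int = 4, n_sift: int = 10):
    """Steps 2310-2340: extract IMFs by sifting; returns (imfs, residual C(t))."""
    h, imfs = x.copy(), []
    for _ in range(n_imfs):
        if len(argrelextrema(h, np.greater)[0]) < 2:  # only a trend left: stop
            break
        m = h.copy()
        for _ in range(n_sift):           # fixed sifting count in place of the IMF test
            m = m - envelope_mean(m)      # formula (4)
        imfs.append(m)
        h = h - m                         # formulas (5)-(6)
    return np.array(imfs), h              # H(t) = sum of IMFs + C(t), formula (7)
```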
step 2400: through step 2300, a set of intrinsic mode components IMF is finally obtained, as shown in formula (8);
x_i(t) = Σ_{k=1}^{K} c_{i,k}(t) + r_i(t)   (8)
wherein r_i(t) is the data residual component remaining after the IMFs are extracted by the decomposition; c_{i,k}(t) is the k-th modal component obtained by empirical mode decomposition of x_i(t), k = 1, 2, ..., K, i = 1, 2, ..., N, N is the number of algorithm iterations, IMF denotes an intrinsic mode component, and K is the number of intrinsic mode components;
step 2500: repeating steps 2200 to 2400 N times to obtain the following set of modes containing the intrinsic mode components IMF and the residual components, as shown in formula (9);
[{c_{1,1}(t)}, {c_{1,2}(t)}, ..., {c_{1,K}(t)}, ..., {c_{N,1}(t)}, {c_{N,2}(t)}, ..., {c_{N,K}(t)}]   (9)
wherein c_{N,k}(t) is the k-th modal component obtained by empirical mode decomposition of x_i(t), K is the number of intrinsic mode components, and N is the number of algorithm iterations;
step 2600: in order to reduce the influence of the noise, averaging the intrinsic mode components IMF obtained at each decomposition according to formula (10) to obtain the final modal components;
c_k(t) = (1/N) Σ_{i=1}^{N} c_{i,k}(t)   (10)
wherein c_k(t) is the k-th final modal component, c_{i,k}(t) is the k-th modal component obtained by empirical mode decomposition of x_i(t), k = 1, 2, ..., K, i = 1, 2, ..., N, N is the number of cycles, and K is the number of intrinsic mode components;
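Steps 2200 to 2600 together form the EEMD ensemble. A minimal sketch, assuming the add_white_noise and emd helpers from the sketches above:

```python
import numpy as np

def eemd(x: np.ndarray, n_trials: int, alpha: float, n_imfs: int = 4) -> np.ndarray:
    """Steps 2200-2600: decompose N noisy copies and average the IMFs, formula (10)."""
    rng = np.random.default_rng(0)
    acc = np.zeros((n_imfs, len(x)))
    for _ in range(n_trials):                                   # steps 2200-2500
        imfs, _ = emd(add_white_noise(x, alpha, rng), n_imfs=n_imfs)
        acc[: len(imfs)] += imfs
    return acc / n_trials                                       # step 2600
```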
step 3000: normalizing the data in the second data set to form a third data set, and dividing the third data set into a training set and a test set;
the data normalization is carried out according to formula (11);
y = (x - MinValue) / (MaxValue - MinValue)   (11)
wherein MaxValue represents the maximum value of the lithium ion battery data; MinValue represents the minimum value of the lithium ion battery data; x represents the original lithium ion battery data; y represents the normalized data;
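Formula (11) is the usual min-max scaling; a one-function sketch:

```python
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    """Formula (11): y = (x - MinValue) / (MaxValue - MinValue), mapped into [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

y = min_max_normalize(np.array([1.86, 1.72, 1.55, 1.41]))  # toy capacity values
```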
step 4000: optimizing the hyperparameters of the LSTM neural network with the improved CSO algorithm and establishing the optimized LSTM lithium ion battery life prediction model; specifically comprising steps 4010 to 4140:
step 4010: determining the topology of the LSTM network, the structure comprising: the number m of input layer nodes, the number h of hidden layer nodes and the number d of output layer nodes; the number m of input layer nodes depends on the number of data parameters in the lithium ion battery data that indirectly characterize the performance degradation of the battery; the number d of output layer nodes depends on the number of data parameters that directly characterize the performance degradation; the number h of hidden layer nodes is determined by an empirical rule combined with trial and error; h, m and d satisfy the functional relationship shown in formula (12):
h = sqrt(m + d) + a   (12)
wherein m is the number of input layer nodes; h is the number of hidden layer nodes; d is the number of output layer nodes; a is a random number with value range [1, 10];
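Formula (12) is rendered only as an image in the source; under the reading assumed above, the classic empirical rule h = sqrt(m + d) + a, it reduces to this sketch:

```python
import math
import random

def hidden_nodes(m: int, d: int) -> int:
    """Assumed formula (12): h = sqrt(m + d) + a, with a random in [1, 10]."""
    return round(math.sqrt(m + d) + random.randint(1, 10))

h = hidden_nodes(m=4, d=1)  # e.g. 4 indirect degradation indicators in, capacity out
```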
step 4020: initializing the chicken swarm algorithm parameters w_min and w_max, and determining the upper and lower bounds of the LSTM hyperparameters to be optimized, the hyperparameters comprising the general learning rate ε, the number of neurons and the number of training iterations;
step 4030: taking the value ranges of the general learning rate ε, the number of neurons and the number of training iterations as the lower and upper bounds of the chicken swarm search space, initializing the population, and assigning the values of the hyperparameters to be optimized in the LSTM model to each chicken in the population; according to formula (13), computing the average relative error of the LSTM prediction result as the fitness value of each chicken in the chicken swarm algorithm, establishing a hierarchy by sorting the fitness values, taking the Nr individuals with the best fitness values as roosters and the Nc individuals with the worst fitness values as chicks, and taking the remaining individuals as hens;
fitness = (1/n) Σ_{x=1}^{n} |Y(x) - Ŷ(x)| / Y(x)   (13)
wherein Y(x) represents the original lithium ion battery data sequence; Ŷ(x) represents the data sequence predicted by the prediction model; n represents the number of samples used for prediction;
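The fitness of formula (13), an average relative error over the prediction samples, in code form:

```python
import numpy as np

def fitness(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Formula (13): mean relative error of the LSTM prediction; lower is better."""
    return float(np.mean(np.abs(y_true - y_pred) / y_true))
```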
step 4040: computing mod(t, G) according to formula (14); when its value is 1, reordering the flock according to the fitness value of each individual at the t-th iteration and establishing a new hierarchy;
mod(t, G) = 1   (14)
wherein t is the current iteration count and G is the update period of the hierarchy, taken as 10;
step 4050: updating the rooster positions according to formula (15), introducing a nonlinear adaptive weight strategy w to improve the convergence accuracy of the algorithm; in the rooster position update, a larger weight w strengthens the global search ability of the algorithm, while a smaller weight w strengthens its local search ability; the nonlinear adaptive weight w is given by formula (17);
x_{i,j}(t+1) = x_{i,j}(t) * (1 + randn(0, σ²)) * w(t)   (15)
wherein x_{i,j}(t+1) is the position of the rooster at the (t+1)-th iteration; x_{i,j}(t) is the position of the rooster at the t-th iteration; randn(0, σ²) is a Gaussian random number with mean 0 and variance σ², the variance σ² being computed according to formula (16); w(t) is the adaptive weight at time t, computed according to formula (17);
σ² = 1 if f_i ≤ f_a, otherwise σ² = exp((f_a - f_i) / (|f_i| + ε))   (16)
wherein f_a is the fitness value of individual a; a is any individual in the rooster population with value range [1, Nr] and a ≠ i; ε is the smallest constant in the computer; f denotes a fitness value; Nr is the number of roosters; f_i is the fitness value of individual i;
w(t) = exp(-(w_start - (log(w_start) - log(w_end)) * (X - 1)²))   (17)
wherein w(t) is the adaptive weight at time t; w_start and w_end are the initial and final values of w; X is the degree of difference between the current individual position and the population optimum, computed according to formula (18), where g is the population optimum at time t, x is the current individual position at time t, x_max and x_min are the maximum and minimum values of the population, T is the maximum iteration count, and n is the number of data contained in the individual's position;
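A sketch of the rooster update of formulas (15) to (17). Formula (18) appears only as an image in the source, so the difference degree X is passed in precomputed; the default w_start and w_end values and all names here are illustrative assumptions.

```python
import numpy as np

EPS = np.finfo(float).eps  # the "smallest constant in the computer"

def rooster_sigma2(f_i: float, f_a: float) -> float:
    """Formula (16): variance of the rooster's Gaussian step."""
    return 1.0 if f_i <= f_a else float(np.exp((f_a - f_i) / (abs(f_i) + EPS)))

def adaptive_w(X: float, w_start: float = 0.9, w_end: float = 0.4) -> float:
    """Formula (17): nonlinear adaptive weight; X comes from formula (18)."""
    return float(np.exp(-(w_start - (np.log(w_start) - np.log(w_end)) * (X - 1.0) ** 2)))

def update_rooster(x: np.ndarray, f_i: float, f_a: float, X: float,
                   rng: np.random.Generator) -> np.ndarray:
    """Formula (15): x(t+1) = x(t) * (1 + randn(0, sigma^2)) * w(t)."""
    sigma = np.sqrt(rooster_sigma2(f_i, f_a))
    return x * (1.0 + rng.normal(0.0, sigma, size=x.shape)) * adaptive_w(X)
```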
step 4060: updating the hen positions according to formula (19), introducing a Gaussian mutation operator and a normal-distribution learning update strategy to improve the convergence speed of the algorithm; the Gaussian mutation operator M, computed according to formula (20), realizes information exchange between the population-optimal individual and the current individual; the value of the normal-distribution learning update strategy is computed according to formula (21): when w_1 rises, the search range of the algorithm widens, and when w_1 falls, the local search ability of the algorithm strengthens, which changes the search mode of the hens and expands their search range;
w_1 = w_min * (w_max / w_min)^(1 / (1 + 10t/T)) + Randn()   (21)
wherein M is the Gaussian mutation operator of formula (20); p_min and p_max are the minimum and maximum values taken at the Gaussian mutation point; r_i is a random number in the range (0, 1); dim is the maximum dimension of the population; x_{best,j} is the position of the optimal solution in the population; T is the total iteration count; w_1 is the normally distributed random learning coefficient; t is the current iteration count of the algorithm; w_max and w_min are the maximum and minimum values of w_1; Randn() is a normally distributed random number; rand is a random number in (0, 1); C_1 and C_2 are following coefficients, computed according to formulas (22) and (23); x_{i,j}(t+1) is the position of the hen at the (t+1)-th iteration; x_{i,j}(t) is the position of the hen at the t-th iteration; x_{r1,j}(t) is the position of the rooster followed by the hen's group at the t-th iteration; x_{r2,j}(t) is the position of a randomly selected rooster or hen in the population at the t-th iteration;
C_1 = exp((f_i - f_{r1}) / (|f_i| + ε))   (22)
C_2 = exp(f_{r2} - f_i)   (23)
wherein r_1 is the index of the randomly selected rooster, with value range [1, Nr]; r_2 is the index of a random rooster or hen, with value range [1, Nr + Nh], and r_1 ≠ r_2; f_i is the fitness value of individual i; f_{r2} is the fitness value of individual r_2; f_{r1} is the fitness value of individual r_1; Nr is the number of roosters; Nh is the number of hens; ε is the smallest constant in the computer;
step 4070: updating the chick positions according to formula (24), introducing the normal-distribution learning update strategy computed by formula (21) of step 4060, which increases the chances for a chick to learn from other individuals and changes the search mode of the chicks;
x_{i,j}(t+1) = w_1 * (x_{i,j}(t) + F * (x_{m,j}(t) - x_{i,j}(t)) + (x_{s,j}(t) - x_{i,j}(t)))   (24)
wherein x_{m,j}(t) is the position of the mother hen followed by the chick at the t-th iteration, with m in the range [1, Nh]; Nh is the number of hens; F is the foraging coefficient of the chick following its mother hen; x_{s,j}(t) is the position of a random rooster or hen in the population at the t-th iteration, with s in the range (1, Nr + Nh) and s ≠ m; Nr is the number of roosters; w_1 is the normally distributed random learning coefficient; x_{i,j}(t+1) is the position of the chick at the (t+1)-th iteration; x_{i,j}(t) is the position of the chick at the t-th iteration;
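The coefficients of formulas (21) to (23) and the chick update of formula (24), written out; the hen update of formula (19) and the mutation operator of formula (20) appear only as images in the source and are therefore not implemented. The default w_min and w_max values are illustrative assumptions.

```python
import numpy as np

EPS = np.finfo(float).eps
rng = np.random.default_rng()

def w1(t: int, T: int, w_min: float = 0.4, w_max: float = 0.9) -> float:
    """Formula (21): w1 = w_min * (w_max / w_min)^(1 / (1 + 10 t / T)) + Randn()."""
    return w_min * (w_max / w_min) ** (1.0 / (1.0 + 10.0 * t / T)) + rng.normal()

def follow_coeffs(f_i: float, f_r1: float, f_r2: float):
    """Formulas (22)-(23), as reconstructed from the standard CSO hen update."""
    c1 = float(np.exp((f_i - f_r1) / (abs(f_i) + EPS)))
    c2 = float(np.exp(f_r2 - f_i))
    return c1, c2

def update_chick(x_i, x_m, x_s, F: float, t: int, T: int):
    """Formula (24): the chick follows its mother hen, scaled by w1."""
    return w1(t, T) * (x_i + F * (x_m - x_i) + (x_s - x_i))
```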
step 4080: cyclically updating the positions of the roosters, hens and chicks through steps 4050 to 4070 to obtain the best updated positions and fitness values during the loop;
step 4090: determining the individual historical optimum and the population historical optimum from the individual fitness values at each moment of the loop; when an individual's fitness value is better than its value at the previous moment, the update is accepted; otherwise, the update is discarded;
step 4100: terminating the algorithm when it reaches the maximum iteration count or satisfies the average relative error accuracy requirement; otherwise, returning to step 4030;
step 4110: after the iteration ends, recording the optimal position in the flock and assigning the obtained optimal values to the general learning rate ε, the number of neurons and the number of training iterations;
step 4120: inputting the hyperparameter value obtained in step 4110 into the LSTM network, specifically comprising steps 4121 to 4125;
step 4121: the lithium ion battery data pass through the forget gate, which determines how much of the neuron memory cell state C_{t-1} of the previous moment is retained in the current state C_t; the input comprises the neuron output Y_{t-1} of the previous moment and the current neuron input Z_t, and f_t is computed according to formula (25);
f_t = σ(W_f * [Y_{t-1}, Z_t] + b_f)   (25)
wherein Y_{t-1} is the neuron output value of the previous moment; Z_t is the current neuron input value; f_t is the output value of the forget gate at the current moment; W_f is the weight matrix of the forget gate; b_f is the forget gate bias term; σ is the sigmoid activation function with range [0, 1], where 0 means completely discarded and 1 means completely retained;
step 4122: the data f_t retained by the forget gate are sent to the input gate, which stores the newly input data in the neuron; it consists of two parts, the new input data i_t selected by the gate and the candidate data state v_t; i_t is computed according to formula (26) and v_t according to formula (27);
i_t = σ(W_i * [Y_{t-1}, Z_t] + b_i)   (26)
v_t = tanh(W_c * [Y_{t-1}, Z_t] + b_c)   (27)
wherein W_i and W_c are the weight matrices of the input gate; tanh is the hyperbolic tangent function; b_i and b_c are the input gate bias terms; i_t is the new input data selected by the input gate; v_t is the candidate data state; Y_{t-1} is the neuron output value of the previous moment; Z_t is the current neuron input value; σ is the sigmoid activation function with range [0, 1], where 0 means completely discarded and 1 means completely retained;
step 4123: according to formula (28), the input gate output i_t is multiplied by the candidate data state v_t, the forget gate output f_t at the current moment is multiplied by the neuron memory cell state C_{t-1} of the previous moment to retain the data information of the previous neuron state, and the two products are added to generate the new memory cell information C_t;
C_t = f_t * C_{t-1} + i_t * v_t   (28)
wherein f_t is the forget gate output value at the current moment; C_{t-1} is the neuron memory cell state of the previous moment; i_t is the selected new input data; v_t is the candidate data state; C_t is the new cell information;
step 4124: the state information obtained in steps 4121 to 4123 is passed to the output gate, which controls the output state of the neuron at the current moment; the updated forget gate and input gate information controls how many features of the current state are removed, the removed data are passed to the next neuron, and the output gate value o_t is computed according to formula (29);
o_t = σ(W_o * [Y_{t-1}, Z_t] + b_o)   (29)
wherein o_t is the output gate value; W_o is the weight matrix of the output gate; b_o is the output gate bias term; Y_{t-1} is the neuron output value of the previous moment; Z_t is the current neuron input; σ is the sigmoid activation function with range [0, 1], where 0 means completely discarded and 1 means completely retained;
step 4125: the predicted value is computed according to formula (30);
Y_t = o_t * tanh(C_t)   (30)
wherein o_t is the output gate output; Y_t is the neuron output value at the current moment; C_t is the new memory cell information; tanh is the hyperbolic tangent function;
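Formulas (25) to (30) amount to one standard LSTM cell step; a self-contained numpy sketch (the concatenated-input weight shapes and all names are illustrative conventions, not the patent's notation):

```python
import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(Y_prev, C_prev, Z, W_f, b_f, W_i, b_i, W_c, b_c, W_o, b_o):
    """One LSTM cell step, formulas (25)-(30); each W_* has shape (h, h + m)."""
    concat = np.concatenate([Y_prev, Z])   # [Y_{t-1}, Z_t]
    f = sigmoid(W_f @ concat + b_f)        # (25) forget gate
    i = sigmoid(W_i @ concat + b_i)        # (26) input gate
    v = np.tanh(W_c @ concat + b_c)        # (27) candidate state
    C = f * C_prev + i * v                 # (28) new memory cell state
    o = sigmoid(W_o @ concat + b_o)        # (29) output gate
    Y = o * np.tanh(C)                     # (30) output / predicted value
    return Y, C
```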
step 5000: inputting the training set of the step 3000 into the optimized LSTM network of the step 4000 for model training, and if the accuracy requirement is met, obtaining a lithium ion battery life prediction model of the improved CSO-LSTM network;
step 6000: inputting the test set in the step 3000 into the lithium ion battery life prediction model of the improved CSO-LSTM network in the step 5000 for testing to obtain a final lithium ion battery life prediction model based on the improved CSO-LSTM network.
An apparatus implementing the lithium ion battery life prediction method based on the improved CSO-LSTM network, the apparatus comprising:
a data processing module: collecting lithium ion battery operating data to obtain a first data set; performing data preprocessing on the first data set, specifically comprising ensemble empirical mode decomposition and data normalization, to finally obtain the third data set, and dividing the third data set obtained by the preprocessing into a training set and a test set;
a model training module: optimizing the hyperparameters of the LSTM network with the improved CSO algorithm, establishing a lithium ion battery life prediction model based on the improved CSO-LSTM network, and training the optimized LSTM prediction model with the training set of the third data set to obtain the trained lithium ion battery life prediction model based on the improved CSO-LSTM network;
a lithium ion battery life prediction module: preprocessing the lithium ion battery data to be predicted through the data processing module to obtain the test set of the third data set, inputting this test set into the trained lithium ion battery life prediction model of the model training module, and predicting the service life of the lithium ion battery to finally obtain the prediction result.
Preferably, in the data processing module, an intrinsic mode component in the ensemble empirical mode decomposition must satisfy two requirements: at every moment the mean of the selected maximum and minimum envelopes is 0, and the number of local extreme points and the number of zero crossings are equal or differ by at most 1; only then is an intrinsic mode component obtained, and the residual term finally becomes a constant or a monotonic curve.
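The two intrinsic-mode-component requirements just stated can be checked numerically; a minimal sketch, assuming the envelope_mean helper from the EMD sketch above and an illustrative tolerance tol:

```python
import numpy as np
from scipy.signal import argrelextrema

def is_imf(m: np.ndarray, tol: float = 1e-2) -> bool:
    """Check that the envelope mean is ~0 everywhere and that the counts of
    local extrema and zero crossings are equal or differ by at most 1."""
    n_extrema = (len(argrelextrema(m, np.greater)[0])
                 + len(argrelextrema(m, np.less)[0]))
    n_zero_cross = int(np.sum(np.diff(np.sign(m)) != 0))
    mean_ok = bool(np.max(np.abs(envelope_mean(m))) < tol)
    return mean_ok and abs(n_extrema - n_zero_cross) <= 1
```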
Compared with the prior art, the invention has the following beneficial effects:
the invention uses an improved chicken swarm algorithm based on nonlinear transformation to select the hyperparameters of the LSTM network. It solves the problems that the LSTM hyperparameters (the learning rate ε, the number of neurons, the number of training iterations and so on) are otherwise set randomly, that their tuning is complex and somewhat blind, and that the resulting prediction accuracy in lithium ion battery life prediction is low; it thus provides a new artificial-intelligence method for lithium ion battery life prediction and improves the accuracy of the prediction and the reliability and safety of the lithium ion battery.
Drawings
FIG. 1 is a flow chart of a lithium ion battery life prediction method based on an improved CSO-LSTM network;
Detailed Description
In order to more clearly explain the above-mentioned aspects of the present invention, the present invention is further described in detail with reference to the attached drawings, and it should be noted that the specific implementation described herein is only for explaining the present application and is not used to limit the present application.
Fig. 1 is a flow chart of a lithium ion battery life prediction method based on an improved CSO-LSTM network according to the present invention, and the specific steps are as follows:
steps 1000 to 6000 are carried out as described in the disclosure above: the lithium ion battery data are acquired to form the first data set (step 1000); the data are preprocessed by ensemble empirical mode decomposition into the second data set (steps 2100 to 2600); the data are normalized to form the third data set, which is divided into a training set and a test set (step 3000); the improved CSO algorithm selects the optimal LSTM hyperparameters, namely the general learning rate ε, the number of neurons and the number of training iterations, and the optimized LSTM prediction model is established (steps 4010 to 4125); the training set is used to train the model (step 5000); and the test set is input into the trained model to obtain the final lithium ion battery life prediction model based on the improved CSO-LSTM network (step 6000). The apparatus of the embodiment likewise comprises the data processing module, the model training module and the lithium ion battery life prediction module described above, with the intrinsic mode component requirements of the ensemble empirical mode decomposition as stated in the disclosure.
The above description is only an example of the present invention and is not intended to limit the scope of the present invention, and it will be understood by those skilled in the art that various changes and modifications may be made, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention are included in the scope of the present invention.

Claims (3)

1. A lithium ion battery life prediction method based on an improved CSO-LSTM network is characterized by comprising the following steps:
step 1000: acquiring lithium ion battery data to form a first data set;
step 2000: performing data preprocessing on the lithium ion battery data of the first data set by using a set empirical mode decomposition method to form a second data set, further comprising steps 2100 to 2600:
step 2100: initializing parameters, and setting the times N of externally added noise and the intensity alpha of Gaussian white noise;
step 2200: will obey a Gaussian distribution [0, δ ] according to equation (1)]White noise n of i (t) adding to each data parameter sequence x (t) in the first data set in step 1000 to form a new data sequence x i (t) calculating the standard deviation of the original data sequence according to the formula (2) to obtain the variance delta for white noise of Gaussian distribution;
x i (t)=x(t)+n i (t) (1)
δ=std[x(t)] (2)
wherein x is i (t) is a new data sequence formed by adding Gaussian white noise, n i (t) is high for the ith additionWhite gaussian noise, x (t) is the original data sequence, i =1,2.., N is the number of noise added, δ is the variance used for the gaussian distribution;
step 2300: decomposing the new data sequence x_i(t) by the empirical mode decomposition method, further comprising steps 2310 to 2340:
step 2310: regarding the variation trend of each parameter data in the new data sequence as a signal, denoted H(t); connecting all maximum value points in the data variation curve into a line to obtain the upper envelope, denoted A_1(t); connecting all minimum value points in the data variation curve into a line to obtain the lower envelope, denoted B_1(t); calculating the mean curve of the upper and lower envelopes A_1(t) and B_1(t) by cubic spline interpolation according to formula (3), denoted S_1(t);
S_1(t) = (A_1(t) + B_1(t)) / 2  (3)
wherein A_1(t) is the upper envelope formed by all maximum value points of the new data sequence, B_1(t) is the lower envelope formed by all minimum value points of the new data sequence, and S_1(t) is the mean curve of the upper and lower envelopes;
step 2320: according to formula (4), subtracting the mean curve S_1(t) from the variation trend H(t) of the new data sequence to obtain M_1(t); judging whether M_1(t) satisfies the requirements of an intrinsic mode component; if so, saving it as IMF_1(t); otherwise, returning to step 2310 until the requirements of the intrinsic mode component are met;
M_1(t) = H(t) - S_1(t)  (4)
wherein H(t) is the variation trend of each parameter data in the new data sequence, S_1(t) is the mean curve of the upper and lower envelopes, and M_1(t) is the difference between the variation trend H(t) and the mean curve S_1(t);
step 2330: subtracting the intrinsic mode component IMF_1(t) of step 2320 from the data variation trend H(t) according to formula (5) to obtain a new H_1(t); taking H_1(t) as the new variation trend, jumping to step 2310 and executing sequentially until a second component IMF_2(t) satisfying the requirements of the intrinsic mode component is obtained, and subtracting it from H_1(t) to obtain H_2(t), as shown in formula (6);
H_1(t) = H(t) - IMF_1(t)  (5)
H_2(t) = H_1(t) - IMF_2(t)  (6)
wherein H(t) is the variation trend of each parameter data in the lithium ion battery data set; H_1(t) is the new data variation trend obtained by subtracting IMF_1(t) from the previous trend H(t); IMF_1(t) is the first intrinsic mode component obtained; IMF_2(t) is the second intrinsic mode component obtained; H_2(t) is the new variation trend obtained by subtracting the intrinsic mode component IMF_2(t) from the previous trend H_1(t);
step 2340: repeating steps 2310 to 2330 to finally obtain n trends H_n(t), n components IMF_n(t), and a residual term C(t); the final data variation trend can be expressed as the sum of the n components IMF_n(t) and the residual term C(t), as shown in formula (7); the IMF_n(t) components are sorted by frequency;
H(t) = Σ_{i=1}^{n} IMF_i(t) + C(t)  (7)
wherein IMF_i(t) is the ith intrinsic mode component, C(t) is the residual term, H(t) is the original data variation trend, n is the number of empirical mode sifting runs, and H_n(t) is the new data variation trend obtained at each run;
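A minimal Python sketch of one sifting pass (steps 2310 to 2320), using SciPy cubic splines for the envelopes; the fixed sifting count stands in for the intrinsic-mode-component test and is a simplification:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def mean_envelope(h):
    """Mean curve S(t) of the upper/lower cubic-spline envelopes, formula (3)."""
    t = np.arange(len(h))
    imax = argrelextrema(h, np.greater)[0]   # local maxima -> A_1(t)
    imin = argrelextrema(h, np.less)[0]      # local minima -> B_1(t)
    upper = CubicSpline(imax, h[imax])(t)
    lower = CubicSpline(imin, h[imin])(t)
    return 0.5 * (upper + lower)             # formula (3)

def sift_one_imf(h, n_sift=10):
    """Iterate M(t) = H(t) - S(t), formula (4), until an IMF emerges."""
    m = h.copy()
    for _ in range(n_sift):
        m = m - mean_envelope(m)
    return m
```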
step 2400: through step 2300, a set of intrinsic mode components IMF is finally obtained, as shown in formula (8);
x_i(t) = Σ_{k=1}^{K} c_{i,k}(t) + r_i(t)  (8)
wherein r_i(t) is the data residual component remaining after the IMFs are extracted by decomposition; c_{i,k}(t) is the kth modal component obtained by empirical mode decomposition of x_i(t), k = 1, 2, ..., K, i = 1, 2, ..., N; N is the number of algorithm iterations; IMF denotes the intrinsic mode components; K is the number of intrinsic mode components;
step 2500: repeating steps 2200 to 2400 N times to obtain the following mode set containing the intrinsic mode components IMF and the residual components, as shown in formula (9);
[{c_{1,1}(t)}, {c_{1,2}(t)}, ..., {c_{1,K}(t)}, ..., {c_{N,1}(t)}, {c_{N,2}(t)}, ..., {c_{N,K}(t)}]  (9)
wherein c_{N,k}(t) is the kth modal component obtained by empirical mode decomposition of the Nth noisy sequence; K is the number of intrinsic mode components; N is the number of algorithm iterations;
step 2600: in order to reduce the influence of noise, averaging the intrinsic mode components IMF obtained from each decomposition according to formula (10) to obtain the final modal components;
c_k(t) = (1/N) Σ_{i=1}^{N} c_{i,k}(t)  (10)
wherein c_k(t) is the kth modal component after ensemble averaging; c_{i,k}(t) is the kth modal component obtained by empirical mode decomposition of x_i(t), k = 1, 2, ..., K, i = 1, 2, ..., N; N is the number of cycles; K is the number of intrinsic mode components;
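A minimal Python sketch of the full ensemble loop (steps 2200 to 2600), reusing add_white_noise and sift_one_imf from the sketches above; extracting a fixed number of IMFs per trial is an assumption:

```python
import numpy as np

def emd(x, n_imfs):
    """Extract n_imfs components by repeated sifting, formulas (5)-(6)."""
    h = x.copy()
    imfs = []
    for _ in range(n_imfs):
        imf = sift_one_imf(h)
        imfs.append(imf)
        h = h - imf          # subtract the component, as in step 2330
    return np.stack(imfs)

def eemd(x, n_trials=100, alpha=0.2, n_imfs=5):
    """Ensemble average of formula (10): c_k(t) = (1/N) * sum_i c_{i,k}(t)."""
    rng = np.random.default_rng(0)
    acc = np.zeros((n_imfs, len(x)))
    for _ in range(n_trials):
        xi = add_white_noise(x, rng, alpha)   # step 2200
        acc += emd(xi, n_imfs)                # steps 2300-2400
    return acc / n_trials                     # step 2600, formula (10)
```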
step 3000: carrying out normalization processing on the data in the second data set by adopting a normalization method to form a third data set, and dividing the third data set into a training set and a test set;
data normalization is carried out according to formula (11);
y = (x - MinValue) / (MaxValue - MinValue)  (11)
wherein MaxValue represents the maximum value of the lithium ion battery data; MinValue represents the minimum value of the lithium ion battery data; x represents the original lithium ion battery data; y represents the normalized data;
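A one-function Python sketch of formula (11); returning the bounds so that predictions can later be mapped back to physical units is a practical addition, not claim language:

```python
import numpy as np

def min_max_normalize(x):
    """Formula (11): y = (x - MinValue) / (MaxValue - MinValue)."""
    min_v, max_v = float(np.min(x)), float(np.max(x))
    y = (x - min_v) / (max_v - min_v)
    return y, min_v, max_v   # keep the bounds to invert the scaling later
```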
step 4000: optimizing the hyperparameters of the LSTM neural network by adopting the improved CSO algorithm, and establishing the optimized LSTM lithium ion battery life prediction model; specifically comprising steps 4010 to 4120:
step 4010: determining the topology of the LSTM network, the structure comprising: the number m of input layer nodes, the number h of hidden layer nodes, and the number d of output layer nodes of the LSTM network; the number m of input layer nodes depends on the number of data parameters in the lithium ion battery data that can indirectly characterize the performance degradation of the lithium ion battery; the number d of output layer nodes depends on the number of data parameters that can directly characterize the performance degradation of the lithium ion battery; the appropriate number of hidden layer nodes is determined by an empirical method combined with trial and error; the number h of hidden layer nodes, the number m of input layer nodes, and the number d of output layer nodes satisfy the functional relationship shown in formula (12):
h = √(m + d) + a  (12)
wherein m is the number of input layer nodes; h is the number of hidden layer nodes; d is the number of output layer nodes; a is a random number in the range [1, 10];
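A tiny Python sketch of formula (12); rounding the result to an integer node count is an assumption:

```python
import numpy as np

def hidden_nodes(m, d, rng=None):
    """Formula (12): h = sqrt(m + d) + a, with a drawn from [1, 10]."""
    rng = rng or np.random.default_rng()
    a = rng.uniform(1.0, 10.0)
    return round(float(np.sqrt(m + d)) + a)  # integer node count: an assumption
```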
step 4020: initializing the chicken swarm algorithm parameters w_min and w_max, and determining the upper and lower bounds of the LSTM model hyperparameters to be optimized, the hyperparameters comprising the learning rate ε, the number of neurons, and the number of training iterations;
step 4030: the upper and lower boundaries of the chicken swarm algorithm search space are respectively the value ranges of the hyperparameters, namely the learning rate ε, the number of neurons, and the number of training iterations; the swarm is initialized, and the values of the hyperparameters to be optimized in the LSTM model are assigned to each chicken in the swarm; according to formula (13), the average relative error of the LSTM model prediction result is calculated as the fitness value of each chicken in the chicken swarm algorithm; a hierarchy is established by sorting in descending order, the Nr individuals with the best fitness values are taken as cocks, the Nc individuals with the worst fitness values are taken as chicks, and after removing the cocks and chicks the rest are hens;
fitness = (1/n) Σ_{x=1}^{n} |Y(x) - Ŷ(x)| / Y(x)  (13)
wherein Y(x) represents the original lithium ion battery data sequence; Ŷ(x) represents the data sequence predicted by the prediction model; n represents the number of samples used for prediction;
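A minimal Python sketch of the average relative error of formula (13):

```python
import numpy as np

def fitness(y_true, y_pred):
    """Average relative error, formula (13), used as the CSO fitness value."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))
```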
step 4040: evaluating formula (14); when the value equals 1, the chicken swarm is reordered according to the fitness value of each individual at the tth iteration and a new hierarchy is established;
mod(t, G) = 1  (14)
wherein t is the current iteration number and G is the update period of the hierarchy, taken as 10;
step 4050: updating the position of the cock according to formula (15); the convergence precision of the algorithm is improved by introducing the nonlinear adaptive weight strategy w; in the cock position update formula, when the weight w increases, the global search capability of the algorithm is enhanced, and conversely, when the weight w decreases, the local search capability of the algorithm is enhanced; the nonlinear adaptive weight strategy w is shown in formula (17);
x_{i,j}(t+1) = x_{i,j}(t) * (1 + randn(0, σ²)) * w(t)  (15)
wherein x_{i,j}(t+1) is the position of the cock at the (t+1)th iteration; x_{i,j}(t) is the position of the cock at the tth iteration; randn(0, σ²) is a Gaussian distribution with mean 0 and variance σ²; the variance σ² is calculated according to formula (16); w(t) is the adaptive weight at time t, calculated according to formula (17);
σ² = 1 if f_i ≤ f_a; σ² = exp((f_a - f_i) / (|f_i| + ε)) otherwise  (16)
wherein f_a is the fitness value of individual a; a is any individual in the cock population, a ∈ [1, Nr] and a ≠ i; ε is the smallest constant in the computer; Nr is the number of cocks; f_i is the fitness value of individual i;
w(t) = exp(-(w_start - (log(w_start) - log(w_end)) * (X - 1)²))  (17)
wherein w(t) is the adaptive weight at time t; w_start and w_end are the initial and final values of w, respectively; X is the degree of difference between the current individual position and the optimal position of the population, calculated according to formula (18);
(18) [formula rendered only as an image in the source; it computes X from g, x, x_max, x_min, T, and n as defined below]
wherein X is the degree of difference between the current individual position and the optimal position of the population; g is the optimal position of the population at time t; x is the current individual position at time t; x_max and x_min are the maximum and minimum values of the population, respectively; T is the maximum number of iterations; n is the number of data contained in the individual's position;
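A minimal Python sketch of the cock update (formulas (15) to (17)); the variance rule follows the standard CSO form matching the definitions above, and w_start = 0.9, w_end = 0.4 are assumed values:

```python
import numpy as np

def adaptive_weight(X, w_start=0.9, w_end=0.4):
    """Nonlinear adaptive weight, formula (17)."""
    return np.exp(-(w_start - (np.log(w_start) - np.log(w_end)) * (X - 1.0) ** 2))

def update_cock(x, f_i, f_a, X, rng):
    """Cock position update, formula (15), with the variance of formula (16)."""
    eps = np.finfo(float).eps
    sigma2 = 1.0 if f_i <= f_a else np.exp((f_a - f_i) / (abs(f_i) + eps))
    noise = rng.normal(0.0, np.sqrt(sigma2), size=x.shape)
    return x * (1.0 + noise) * adaptive_weight(X)
```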
step 4060: updating the positions of the hens according to formula (19); the convergence rate of the algorithm is increased by introducing a Gaussian mutation operator and a normal-distribution learning update strategy; the Gaussian mutation operator is calculated according to formula (20) and realizes information exchange between the population-optimal individual and the current individual; the value of the normal-distribution learning update strategy is calculated according to formula (21); when the value w_1 of the normal-distribution learning update strategy rises, the search range of the algorithm becomes larger, and when w_1 falls, the local search capability of the algorithm is enhanced; the search mode of the hens is thereby changed and their search range is expanded;
(19) [formula rendered only as an image in the source: the hen position update, combining x_{i,j}(t), the following coefficients C_1 and C_2, the Gaussian mutation operator M, the weight w_1, and the population optimum x_{best,j}]
(20) [formula rendered only as an image in the source: the Gaussian mutation operator M, defined from p_min, p_max, r_i, and dim]
w_1 = w_min * (w_max / w_min)^(1/(1 + 10t/T)) + Randn()  (21)
wherein M is the Gaussian mutation operator; p_min and p_max are the minimum and maximum values taken at the Gaussian mutation point; r_i is a random number in the range (0, 1); dim is the maximum dimension of the population; x_{best,j} is the position of the optimal solution in the population; T is the total number of iterations; w_1 is the normally distributed random learning coefficient; t is the current iteration number of the algorithm; w_max and w_min are the maximum and minimum values of w_1, respectively; Randn() is a normally distributed random number; rand is a random number between (0, 1); C_1 and C_2 are the following coefficients, calculated according to formulas (22) and (23); x_{i,j}(t+1) is the position of the hen at the (t+1)th iteration; x_{i,j}(t) is the position of the hen at the tth iteration; x_{r1,j}(t) is the position of the cock r_1 in the population at the tth iteration; x_{r2,j}(t) is the position of the cock or hen r_2 in the population at the tth iteration;
C_1 = exp((f_i - f_{r1}) / (|f_i| + ε))  (22)
C_2 = exp(f_{r2} - f_i)  (23)
wherein r_1 is the index of a randomly selected cock, r_1 ∈ [1, Nr]; r_2 is the index of a randomly selected cock or hen, r_2 ∈ [1, Nr + Nh], r_1 ≠ r_2; f_i is the fitness value of individual i; f_{r2} is the fitness value of individual r_2; f_{r1} is the fitness value of individual r_1; Nr is the number of cocks; Nh is the number of hens; ε is the smallest constant in the computer;
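A Python sketch of the recoverable pieces of this step: the learning coefficient of formula (21) and the following coefficients in the standard CSO form of formulas (22)-(23); the full hen update and mutation operator of formulas (19)-(20) appear only as images in the source, so they are not reproduced, and the default w_min = 0.4, w_max = 0.9 are assumptions:

```python
import numpy as np

def w1(t, T, w_min=0.4, w_max=0.9, rng=None):
    """Normal-distribution learning coefficient, formula (21)."""
    rng = rng or np.random.default_rng()
    return w_min * (w_max / w_min) ** (1.0 / (1.0 + 10.0 * t / T)) + rng.normal()

def follow_coeffs(f_i, f_r1, f_r2):
    """Following coefficients C_1 and C_2, formulas (22)-(23)."""
    eps = np.finfo(float).eps
    c1 = np.exp((f_i - f_r1) / (abs(f_i) + eps))
    c2 = np.exp(f_r2 - f_i)
    return c1, c2
```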
step 4070: updating the position of the chicks according to formula (24); the normal-distribution learning update strategy, calculated by formula (21) in step 4060, is introduced to increase the chicks' chances of learning from other individuals and to change the search mode of the chicks;
x_{i,j}(t+1) = w_1 * (x_{i,j}(t) + F * (x_{m,j}(t) - x_{i,j}(t)) + (x_{s,j}(t) - x_{i,j}(t)))  (24)
wherein x_{m,j}(t) is the position of the mother hen followed by the chick at the tth iteration, m ∈ [1, Nh], and Nh is the number of hens; F is the foraging coefficient of the chick following its mother hen; x_{s,j}(t) is the position of a random cock or hen in the population at the tth iteration, s ∈ (1, Nr + Nh), and s ≠ m; Nr is the number of cocks; w_1 is the normally distributed random learning coefficient; x_{i,j}(t+1) is the position of the chick at the (t+1)th iteration; x_{i,j}(t) is the position of the chick at the tth iteration;
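A minimal Python sketch of formula (24), reusing w1 from the previous sketch; the foraging coefficient value F = 0.5 is an assumption:

```python
def update_chick(x_i, x_m, x_s, t, T, F=0.5, rng=None):
    """Chick position update, formula (24)."""
    w = w1(t, T, rng=rng)   # learning coefficient, formula (21)
    return w * (x_i + F * (x_m - x_i) + (x_s - x_i))
```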
step 4080: cyclically updating the positions of the cocks, hens, and chicks in steps 4050 to 4070 to obtain the optimal updated position and fitness value during the cycle;
step 4090: determining each individual's historical optimum and the population's historical optimum according to the fitness values at a given moment during the cycle; when an individual's fitness value is better than its fitness value at the previous moment, the optimum is updated; otherwise, the update is discarded;
step 4100: when the algorithm reaches the maximum number of iterations or meets the average relative error precision requirement, the algorithm terminates; otherwise, returning to step 4030;
step 4110: after the iteration is completed, recording the optimal position in the chicken swarm and assigning the obtained optimal position values to the learning rate ε, the number of neurons, and the number of training iterations;
step 4120: inputting the hyperparameter values obtained in step 4110 into the LSTM network, specifically comprising steps 4121 to 4125;
step 4121: the lithium ion battery data passes through the forget gate, which determines how much of the neuron memory cell state C_{t-1} at the previous moment can be retained in the current state C_t; the input comprises the neuron output Y_{t-1} at the previous moment and the current neuron input Z_t; finally, f_t is calculated according to formula (25);
f_t = σ(W_f * [Y_{t-1}, Z_t] + b_f)  (25)
wherein Y_{t-1} is the neuron output value at the previous moment; Z_t is the current neuron input value; f_t is the output value of the forget gate at the current moment; W_f is the weight matrix of the forget gate; b_f is the bias term of the forget gate; σ is the sigmoid activation function with value range [0, 1], where 0 means completely discarded and 1 means completely retained;
step 4122: the data f_t obtained from the forget gate is sent to the input gate, which stores newly input data in the neuron; it consists of two parts: i_t, which selects the new input data, and v_t, the candidate data state; i_t is calculated according to formula (26), and v_t according to formula (27);
i_t = σ(W_i * [Y_{t-1}, Z_t] + b_i)  (26)
v_t = tanh(W_c * [Y_{t-1}, Z_t] + b_c)  (27)
wherein W_i and W_c are the weight matrices of the input gate; tanh is the hyperbolic tangent function; b_i and b_c are the bias terms of the input gate; i_t is the new input data selected by the input gate; v_t is the candidate data state; Y_{t-1} is the neuron output value at the previous moment; Z_t is the current neuron input value; σ is the sigmoid activation function with value range [0, 1], where 0 means completely discarded and 1 means completely retained;
step 4123: according to formula (28), the input gate output i_t is multiplied by the candidate data state v_t, the forget gate output f_t at the current moment is multiplied by the neuron memory cell state C_{t-1} at the previous moment to retain the data information of the previous neuron state, and the two products are added to generate the new memory cell information C_t;
C_t = f_t * C_{t-1} + i_t * v_t  (28)
wherein f_t is the output value of the forget gate at the current moment; C_{t-1} is the state of the neuron memory cell at the previous moment; i_t is the selected new input data; v_t is the candidate data state; C_t is the new cell information;
step 4124: the state information obtained in steps 4121 to 4123 is transmitted to the output gate, which controls the output state of the neuron at the current moment; the forget gate and input gate updates control how many features of the current state are passed on, and the output data is transferred to the next neuron; the output gate o_t is calculated according to formula (29);
o_t = σ(W_o * [Y_{t-1}, Z_t] + b_o)  (29)
wherein o_t is the output value of the output gate; W_o is the weight matrix of the output gate; b_o is the bias term of the output gate; Y_{t-1} is the neuron output value at the previous moment; Z_t is the neuron input at the current moment; σ is the sigmoid activation function with value range [0, 1], where 0 means completely discarded and 1 means completely retained;
step 4125: the predicted value is calculated according to formula (30);
Y_t = o_t * tanh(C_t)  (30)
wherein o_t is the output of the output gate; Y_t is the neuron output value at the current moment; C_t is the new cell information; tanh is the hyperbolic tangent function;
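A minimal NumPy sketch of one LSTM cell step implementing formulas (25)-(30); the dictionaries W and b holding the four gates' parameters are an illustrative packaging, not claim language:

```python
import numpy as np

def lstm_cell_step(z_t, y_prev, c_prev, W, b):
    """One LSTM step, formulas (25)-(30)."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    x = np.concatenate([y_prev, z_t])    # [Y_{t-1}, Z_t]
    f_t = sigmoid(W["f"] @ x + b["f"])   # forget gate, formula (25)
    i_t = sigmoid(W["i"] @ x + b["i"])   # input gate, formula (26)
    v_t = np.tanh(W["c"] @ x + b["c"])   # candidate state, formula (27)
    c_t = f_t * c_prev + i_t * v_t       # cell state update, formula (28)
    o_t = sigmoid(W["o"] @ x + b["o"])   # output gate, formula (29)
    y_t = o_t * np.tanh(c_t)             # neuron output, formula (30)
    return y_t, c_t
```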
step 5000: inputting the training set of step 3000 into the optimized LSTM network of step 4000 for model training; if the accuracy requirement is met, the lithium ion battery life prediction model of the improved CSO-LSTM network is obtained;
step 6000: inputting the test set of step 3000 into the lithium ion battery life prediction model of the improved CSO-LSTM network of step 5000 for testing, so as to obtain the final lithium ion battery life prediction model based on the improved CSO-LSTM network.
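A hedged end-to-end sketch of steps 5000 to 6000 using Keras as one possible backend; the patent names no framework, and the layer sizes, loss, and optimizer here are assumptions, with the CSO-optimized hyperparameters of step 4110 passed in:

```python
import tensorflow as tf

def build_lstm(n_features, n_hidden, learning_rate):
    """Assemble the LSTM with the hyperparameters found in step 4110."""
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(n_hidden, input_shape=(None, n_features)),
        tf.keras.layers.Dense(1),   # the directly degradation-characterizing output
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate), loss="mae")
    return model

# model = build_lstm(n_features=4, n_hidden=12, learning_rate=1e-3)
# model.fit(train_X, train_y, epochs=n_train_iters)   # step 5000
# predictions = model.predict(test_X)                 # step 6000
```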
2. An apparatus adopting the method for predicting the lifetime of a lithium ion battery based on an improved CSO-LSTM network according to claim 1, wherein the apparatus comprises:
a data processing module: collecting lithium ion battery operation data to obtain a first data set; performing data preprocessing on the first data set, specifically including ensemble empirical mode decomposition and data normalization, to finally obtain the third data set; and dividing the third data set obtained by data preprocessing into a training set and a test set;
a model training module: optimizing the hyperparameters of the LSTM network by adopting the improved CSO algorithm, establishing a lithium ion battery life prediction model based on the improved CSO-LSTM network, and training the prediction model of the optimized LSTM network with the training set of the third data set to obtain a trained lithium ion battery life prediction model based on the improved CSO-LSTM network;
the lithium ion battery life prediction module: preprocessing the lithium ion battery data to be predicted through the data processing module to obtain the test set of the third data set, inputting the test set into the lithium ion battery life prediction model based on the improved CSO-LSTM network trained by the model training module, and predicting the life of the lithium ion battery to finally obtain a lithium ion battery life prediction result.
3. The device of claim 2, wherein, in the data processing module, in the ensemble empirical mode decomposition, an intrinsic mode component must satisfy two requirements: at any time, the mean of the upper envelope of the local maxima and the lower envelope of the local minima is 0, and the number of local extreme points and the number of zero-crossing points must be equal or differ by at most 1; a component satisfying both requirements is an intrinsic mode component, and the residual term finally becomes a constant or a monotonic curve.
CN202210406592.1A 2022-04-18 2022-04-18 Lithium ion battery life prediction method and device based on improved CSO-LSTM network Active CN114791571B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210406592.1A CN114791571B (en) 2022-04-18 2022-04-18 Lithium ion battery life prediction method and device based on improved CSO-LSTM network


Publications (2)

Publication Number Publication Date
CN114791571A CN114791571A (en) 2022-07-26
CN114791571B true CN114791571B (en) 2023-03-24

Family

ID=82461782


Country Status (1)

Country Link
CN (1) CN114791571B (en)





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant