CN113534938B - Method for estimating residual electric quantity of notebook computer based on improved Elman neural network - Google Patents


Info

Publication number: CN113534938B (application CN202110733278.XA)
Authority: CN (China)
Prior art keywords: neural network, layer, data, network model, battery
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN113534938A
Inventors: 柯春凯, 陈思哲, 王玉乐, 王裕, 常乐, 章云
Current and original assignee: Guangdong University of Technology (the listed assignee may be inaccurate)

Events:
- Application filed by Guangdong University of Technology
- Priority to CN202110733278.XA
- Publication of CN113534938A
- Application granted
- Publication of CN113534938B
- Legal status: Active; anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/26 — Power supply means, e.g. regulation thereof
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/006 — Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06N 3/045 — Combinations of networks
    • G06N 3/047 — Probabilistic or stochastic networks
    • G06N 3/08 — Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method for estimating the remaining capacity of a notebook computer based on an improved Elman neural network, suitable for estimating the remaining battery capacity when the notebook computer enters a sleep state. The method comprises the following steps: construct an original data set; preprocess the data set; divide the data set; construct a neural network model structure; train the neural network model; optimize the neural network model; evaluate the neural network model and embed it into a battery management system; and estimate the remaining capacity of the notebook computer. Compared with the prior art, the method increases the amount of input data, which avoids the adverse effect of any single measurement on the estimation accuracy, and establishes an attention mechanism layer that assigns reasonable weights to the input feature data, counteracting the dispersal of the model's attention; this improves the accuracy of the estimation result and makes it insensitive to changes in the battery discharge current.

Description

Method for estimating residual electric quantity of notebook computer based on improved Elman neural network
Technical Field
The invention relates to the technical field of batteries, and in particular to a method for estimating the remaining capacity of a notebook computer based on an improved Elman neural network.
Background
A notebook computer is an important portable office device, and its battery capacity is limited. Accurate estimation of the remaining battery capacity is therefore key for the battery management system and the user of the notebook computer to schedule tasks and running time reasonably, and it helps alleviate "battery anxiety".
At present, the main methods for estimating the remaining battery capacity are the ampere-hour integration method, the open-circuit voltage method, the Kalman filtering method and data-driven methods. Given the strong computing power of a notebook computer, data-driven estimation of the remaining battery capacity has good application prospects. Existing data-driven methods generally take the battery voltage, current and temperature at a single instant as the input of a neural network to estimate the remaining capacity, but this has the following problems: first, the amount of input data is too small, so the measurement error of a single quantity can strongly affect the estimation accuracy; second, the weight distribution of the input feature data is not considered, which limits the estimation accuracy; third, the methods depend heavily on the time series, making it difficult to estimate the remaining capacity accurately when the discharge current changes.
Disclosure of Invention
Aiming at the problems of estimation accuracy and feature weight distribution in existing data-driven methods for estimating the remaining battery capacity, the invention provides a method for estimating the remaining capacity of a notebook computer based on an improved Elman neural network, which exploits the transient characteristics of the battery terminal voltage while the notebook computer enters a sleep state.
The technical scheme of the invention is as follows:
a method for estimating the residual electric quantity of a notebook computer based on an improved Elman neural network is suitable for estimating the residual electric quantity of a battery when the notebook computer enters a sleep state, and comprises the following specific processes:
S1: construct an original data set D_raw: discharge the batteries of several notebook computers of the same model periodically; at the end of each discharge, record the battery current before the end of discharge, the battery terminal voltage within a period after the end of discharge, and the average temperature as input feature data of the original data set, and record the remaining battery capacity at the end of discharge as the target value of the original data set. The specific steps are:
S101: select the batteries of M notebook computers of the same model; construct an arithmetic sequence of N elements within the interval from 0 to the rated current I_max to form the discharge current set I_dis = [i_1, i_2, …, i_N]; divide the battery capacity interval [0, 100%] evenly into P regions;
S102: select the battery of the 1st notebook computer for an intermittent discharge experiment;
S103: select the 1st element of the discharge current set I_dis as the discharge current i_dis;
S104: discharge the selected battery at the selected current i_dis at constant current; each time 1/P of the rated capacity has been discharged, stop discharging and hold the rest state for T seconds; store the discharge current i_dis, the terminal voltages [u_1, u_2, …, u_T] within the T seconds after discharge stops, and the average temperature T_p as input feature data, and store the current remaining capacity SOC_p as the target value, forming one sample d_p as follows:

d_p = [SOC_p, i_dis, u_1, u_2, …, u_T, T_p]

where p is the number of times the battery has discharged 1/P of its rated capacity at the current i_dis, and u_t is the terminal voltage at the t-th sampling instant after the battery stops discharging;
S105: repeat step S104 P times until the remaining battery capacity is zero, and collect all data saved during execution into a data set D:

D = [d_1; d_2; …; d_P]

S106: charge the battery to full capacity in constant-current constant-voltage mode;
S107: select the 2nd to Nth elements of the discharge current set I_dis in turn as the discharge current i_dis and execute steps S104 to S106 in a loop until every discharge current in I_dis has been selected; save all data sets D into the original data set D_raw;
S108: select the batteries of the 2nd to Mth notebook computers in turn and execute steps S103 to S107 in a loop until the batteries of all M notebook computers have completed the discharge experiment; save all data sets D into the original data set D_raw;
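As a rough illustration of how steps S101 to S108 assemble D_raw, the sketch below loops over batteries, discharge currents and discharge pauses; the `fake_measure` stand-in for the physical experiment, the function names, and the toy voltage-relaxation model are assumptions for illustration, not part of the patent.

```python
import numpy as np

def build_raw_dataset(M, N, P, i_max, measure):
    """Sketch of S101-S108: assemble the raw data set D_raw by looping over
    M batteries, N discharge currents and P discharge pauses.  `measure` is
    a stand-in for the physical experiment: given (battery m, current i_dis,
    pause index p) it returns the terminal voltages over the T-second rest,
    the average temperature and the remaining capacity."""
    I_dis = np.linspace(i_max / N, i_max, N)   # S101: arithmetic sequence of N currents
    rows = []
    for m in range(M):                         # S108: loop over the M batteries
        for i_dis in I_dis:                    # S107: loop over discharge currents
            for p in range(1, P + 1):          # S104-S105: P discharge pauses
                u, T_p, soc_p = measure(m, i_dis, p)
                rows.append([soc_p, i_dis, *u, T_p])   # one sample d_p
    return np.asarray(rows)

def fake_measure(m, i_dis, p, T=5):
    """Toy stand-in: exponential voltage relaxation after discharge stops."""
    soc = 1.0 - p / 10.0                       # remaining capacity after p pauses (P=10)
    u = 3.0 + 0.2 * soc + 0.05 * i_dis * np.exp(-np.arange(1, T + 1))
    return u, 25.0, soc

D_raw = build_raw_dataset(M=2, N=3, P=10, i_max=2.0, measure=fake_measure)
print(D_raw.shape)  # (2*3*10, 1 + 1 + 5 + 1) = (60, 8)
```

Each row holds one sample d_p = [SOC_p, i_dis, u_1…u_T, T_p]; the real experiment would replace `fake_measure` with instrument readings.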
S2: preprocess the data set: perform data cleaning, data expansion and data normalization on the data in the original data set to obtain a data matrix D_new;
S3: divide the data set: split the data matrix into a training set and a validation set;
S4: construct the neural network model structure: build an attention mechanism layer and an Elman neural network and combine them into an Elman neural network model with an attention mechanism layer;
S5: train the neural network model: import the training-set data into the neural network model for network training, with the attention mechanism layer assigning weights to the input feature data in the training set;
S6: optimize the neural network model: optimize the Elman neural network model with the ant colony algorithm;
S7: evaluate the neural network model and embed it into a battery management system: evaluate the model on the validation set; if it meets the accuracy requirement, embed it into the battery management system of the notebook computer; otherwise re-execute S5 to S6 to retrain and re-optimize the model;
S8: estimate the remaining capacity of the notebook computer: each time the notebook computer enters the sleep state, collect the battery current before sleep and the battery terminal voltage and average temperature within a period after sleep begins, normalize the collected data, input it into the neural network model in the battery management system, and estimate the remaining battery capacity.
In this scheme, the data set is preprocessed in step S2 as follows:
S201: perform data cleaning on the battery original data set D_raw acquired in step S1 to obtain a first data set;
S202: record the first column of the first data set as the target value L_SOC and the second to last columns as the eigenvalue matrix F, each row of which is a feature vector:

F = [f_1; f_2; …; f_P],  f_p = [i_dis, u_1, u_2, …, u_T, T_p]

where f_p is the feature vector for the case in which the battery is discharged at i_dis, the remaining capacity after discharge stops is SOC_p, and the average temperature is T_p;
S203: expand the discharge current i_dis and the average temperature T_p within the T seconds after discharge stops in the feature vector f_p, i.e. make c−1 copies of i_dis and of T_p and insert them back into the original feature vector, obtaining a vector in which i_dis and T_p each appear c times:

f'_p = [i_dis, …, i_dis, u_1, u_2, …, u_T, T_p, …, T_p]

S204: normalize all feature vectors, mapping the data into the range 0–1, to obtain new feature vectors f*_p;
S205: pair each new feature vector f*_p with its target value L_SOC one-to-one to form a new data matrix D_new.
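Steps S203 and S204 can be sketched as follows; the column layout [i_dis, u_1…u_T, T_p], the function name, and the guard for constant columns are assumptions made for illustration.

```python
import numpy as np

def expand_and_normalise(F, c):
    """S203-S204 sketch: replicate i_dis (assumed column 0) and T_p (assumed
    last column) so each appears c times, then min-max normalise every
    column into the range [0, 1]."""
    i_dis_cols = np.repeat(F[:, :1], c, axis=1)   # i_dis plus c-1 copies
    T_p_cols = np.repeat(F[:, -1:], c, axis=1)    # T_p plus c-1 copies
    F_exp = np.hstack([i_dis_cols, F[:, 1:-1], T_p_cols])
    lo, hi = F_exp.min(axis=0), F_exp.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)        # guard against constant columns
    return (F_exp - lo) / span

# Two toy feature vectors [i_dis, u_1, u_2, T_p]:
F = np.array([[1.0, 3.9, 3.8, 25.0],
              [2.0, 3.7, 3.6, 26.0]])
F_new = expand_and_normalise(F, c=3)
print(F_new.shape)  # (2, 3 + 2 + 3) = (2, 8)
```

The replication gives the attention layer more channels carrying the current and temperature, which is the stated purpose of the expansion.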
In this scheme, the neural network model structure constructed in step S4 consists of the following parts:
S401: construct the attention mechanism layer: determine c fully connected layers and their corresponding activation function layers and combine them one by one, where the activation function layer corresponding to the last fully connected layer is a softmax layer;
S402: determine the Elman neural network and the neurons of its input layer, hidden layer, context layer and output layer, where the number of input channels of the input layer matches the number of output channels of the attention mechanism layer and the value of the output layer is the remaining battery capacity;
S403: determine the weights and thresholds in the Elman neural network according to the neurons of its input, hidden, context and output layers;
S404: combine the attention mechanism layer built in S401 with the Elman neural network built in S402 and S403 to form an Elman neural network model with an attention mechanism layer.
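A minimal forward pass of the combined structure of S401 to S404 might look like the following; the layer sizes, random initialization, and the exact wiring of the context layer are illustrative assumptions rather than the patent's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

class AttentionElman:
    """Minimal forward pass of an Elman network preceded by an attention
    mechanism layer (two stacked fully connected layers, tanh then softmax),
    as in S401-S404.  Sizes and initialization are illustrative."""
    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(size=(n_in, n_in)); self.b1 = np.zeros(n_in)
        self.W2 = rng.normal(size=(n_in, n_in)); self.b2 = np.zeros(n_in)
        self.Wx = rng.normal(size=(n_hidden, n_in))      # input -> hidden
        self.Wc = rng.normal(size=(n_hidden, n_hidden))  # context -> hidden
        self.Wo = rng.normal(size=(1, n_hidden))         # hidden -> output
        self.context = np.zeros(n_hidden)                # context (carry) layer

    def forward(self, x):
        e = self.W2 @ np.tanh(self.W1 @ x + self.b1) + self.b2   # scores (S503)
        a = softmax(e)                                           # weights (S504)
        h = np.tanh(self.Wx @ (a * x) + self.Wc @ self.context)  # hidden layer
        self.context = h              # context layer stores the last hidden state
        return float(self.Wo @ h)     # scalar SOC estimate from the output layer

net = AttentionElman(n_in=8, n_hidden=4)
soc_hat = net.forward(rng.random(8))
```

The context layer feeding back the previous hidden state is what distinguishes the Elman network from a plain feed-forward network.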
In this scheme, the neural network model is trained in step S5 as follows:
S501: use a one-dimensional convolutional layer as an embedding layer to generate an embedding function that captures the dependency between different input feature data; pass the data matrix D_new obtained in step S2 through the one-dimensional convolutional layer to obtain the reduced-dimension features x_1, x_2, …;
S502: import the training-set data obtained in step S3 into the network model for training, with the attention mechanism layer assigning weights to the input feature data in the training set;
S503: for the attention mechanism layer described in step S4, let the output of the last fully connected layer be e_t, i.e.

e_t = W_t × θ(W_{t−1} × x_{t−1} + b_{t−1}) + b_t

where W_t is the weight of the last fully connected layer, b_t its bias, W_{t−1} the weight of the penultimate fully connected layer, b_{t−1} its bias, x_{t−1} the input of the penultimate fully connected layer, and θ(·) the activation function corresponding to the penultimate fully connected layer;
S504: let the attention weight output by the softmax layer corresponding to the last fully connected layer be a_t:

a_t = exp(e_t) / Σ_k exp(e_k)

S505: aggregate the reduced features x_t obtained in S501 with the attention weights a_t of S504 to obtain the final output of the attention mechanism layer:

x̃_t = a_t · x_t

S506: input the weighted feature data x̃_t into the Elman neural network.
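The chain e_t → softmax → weighted aggregation of S503 to S505 can be checked numerically; the vector sizes and random weights below are made up purely for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(6)                          # reduced feature vector (as in S501)
W1, b1 = rng.normal(size=(6, 6)), np.zeros(6)
W2, b2 = rng.normal(size=(6, 6)), np.zeros(6)

e = W2 @ np.tanh(W1 @ x + b1) + b2         # e_t = W_t . theta(W_{t-1} x + b_{t-1}) + b_t
a = np.exp(e - e.max()) / np.exp(e - e.max()).sum()  # a_t = exp(e_t) / sum_k exp(e_k)
x_weighted = a * x                         # aggregation feeding the Elman network

print(round(a.sum(), 6))  # 1.0 -- the attention weights form a distribution
```

Subtracting `e.max()` before exponentiating is a standard numerical-stability trick and does not change the softmax result.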
In this scheme, the Elman neural network model is optimized in step S6 with the ant colony algorithm as follows:
S601: take the weights and thresholds of the hidden and output layers of the Elman neural network and the number of hidden-layer neurons as the parameters to be optimized by the ant colony algorithm;
S602: set the initialization parameters of the ant colony algorithm, including the maximum iteration number G_max, the number of ants K and the pheromone intensity τ_ij;
S603: at the start of the algorithm, K ants are randomly placed at K position points, the elements at each position comprising the weights and thresholds of the hidden and output layers and the number of hidden-layer neurons; at this time the pheromone on every path is equal and is set to:

τ_ij(0) = δ

where δ is a small constant;
S604: each ant independently selects the next position point according to the residual pheromone and heuristic information on the paths, i.e. updates its position, where the probability that ant k moves from point i to point j is:

p_ij^k(t) = [τ_ij(t)]^α · [η_ij]^β / Σ_{s∈J_k} [τ_is(t)]^α · [η_is]^β,  j ∈ J_k

where J_k denotes the nodes not yet visited by ant k, τ_ij(t) is the pheromone intensity from position i to position j at time t, η_ij is the heuristic factor (the reciprocal of the distance between position points i and j) expressing the expected desirability of moving from i to j, and α and β are constants weighting the pheromone and the heuristic factor respectively;
S605: when all ants have completed the search, the pheromone is updated:

τ_ij(t+1) = (1 − ρ) · τ_ij(t) + Σ_{k=1}^{K} Δτ_ij^k

where K is the number of ants and ρ is the evaporation coefficient of the pheromone on the paths, set here to 0.5; Δτ_ij^k, the pheromone left by the k-th ant on the path from i to j, is defined as:

Δτ_ij^k = Q / C_k if ant k traverses the path from i to j, and 0 otherwise

where Q is a constant and C_k is the total length of the complete path travelled by ant k;
S606: when all ants have finished selecting the next position point by the transition probability, record the best search result and update the pheromone amount at that position;
S607: use the sum of squared errors (SSE) as the evaluation function of the algorithm:

SSE = Σ (SOC_pre − SOC_real)²

where SOC_pre is the remaining-capacity estimate output by the network and SOC_real is the corresponding actual remaining capacity;
S608: if the termination condition is met, end the search, output the optimal weights, thresholds and number of neurons, and obtain the optimized Elman neural network model.
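The transition probability of S604 and the pheromone update of S605 can be sketched numerically as follows; the node count, parameter values and the use of a simple per-edge pheromone vector are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def transition_probs(tau_i, eta_i, alpha, beta, unvisited):
    """S604 sketch: p_ij proportional to tau_ij^alpha * eta_ij^beta over the
    nodes j that ant k has not yet visited; visited nodes get probability 0."""
    w = np.zeros_like(tau_i)
    w[unvisited] = tau_i[unvisited] ** alpha * eta_i[unvisited] ** beta
    return w / w.sum()

def evaporate_and_deposit(tau, rho, deposits):
    """S605 sketch: tau(t+1) = (1 - rho) * tau(t) + sum_k delta_tau_k."""
    return (1 - rho) * tau + deposits

rng = np.random.default_rng(2)
n = 5
tau = np.full(n, 0.1)                 # S603: equal initial pheromone, tau_ij(0) = delta
eta = 1.0 / rng.uniform(1.0, 5.0, n)  # heuristic factor = reciprocal of distance

p = transition_probs(tau, eta, alpha=1.0, beta=2.0, unvisited=[1, 3, 4])
tau = evaporate_and_deposit(tau, rho=0.5, deposits=np.zeros(n))

print(round(float(p.sum()), 6))  # 1.0
print(float(tau[0]))             # 0.05 -- pure evaporation halves the pheromone
```

With zero deposits the update reduces to evaporation, matching ρ = 0.5 in the text; a full run would add Q / C_k on each edge of every ant's tour.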
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
according to the method, the discharge current data at the sampling moment before dormancy, the transient process data of the battery terminal voltage during dormancy and the average battery temperature during dormancy are used as the input of the neural network by utilizing the dormancy state of the notebook computer, so that the quantity of input data is increased, and the adverse effect of single data on the estimation precision of the residual electric quantity is avoided; an attention mechanism layer is built, reasonable weight distribution can be carried out on input characteristic data, the problem of distraction of the model is effectively solved, and the accuracy of an estimation result is improved; and terminal voltage data of the battery during the sleep period of the notebook computer is collected, so that the estimation of the residual electric quantity of the battery is not influenced by the discharge current change of the battery.
Drawings
FIG. 1 is a flowchart of a method for estimating the remaining power of a notebook computer based on an improved Elman neural network according to the present invention;
FIG. 2 is a schematic diagram of the Elman neural network with an attention mechanism layer according to the present invention;
FIG. 3 is a schematic diagram of an Elman neural network proposed by the present invention;
fig. 4 is a flowchart for optimizing the Elman neural network by using the ant colony algorithm according to the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
Example 1
In a specific embodiment, as shown in fig. 1, a method for estimating remaining power of a notebook computer based on an improved Elman neural network includes the following steps:
S1: construct an original data set D_raw: discharge the batteries of several notebook computers of the same model periodically; at the end of each discharge, record the battery current before the end of discharge, the battery terminal voltage within a period after the end of discharge, and the average temperature as input feature data of the original data set, and record the remaining battery capacity at the end of discharge as the target value of the original data set. The specific steps are:
S101: select the batteries of M notebook computers of the same model; construct an arithmetic sequence of N elements within the interval from 0 to the rated current I_max to form the discharge current set I_dis = [i_1, i_2, …, i_N]; divide the battery capacity interval [0, 100%] evenly into P regions;
S102: select the battery of the 1st notebook computer for an intermittent discharge experiment;
S103: select the 1st element of the discharge current set I_dis as the discharge current i_dis;
S104: discharge the selected battery at the selected current i_dis at constant current; each time 1/P of the rated capacity has been discharged, stop discharging and hold the rest state for T seconds; store the discharge current i_dis, the terminal voltages [u_1, u_2, …, u_T] within the T seconds after discharge stops, and the average temperature T_p as input feature data, and store the current remaining capacity SOC_p as the target value, forming one sample d_p as follows:

d_p = [SOC_p, i_dis, u_1, u_2, …, u_T, T_p]

where p is the number of times the battery has discharged 1/P of its rated capacity at the current i_dis, and u_t is the terminal voltage at the t-th sampling instant after the battery stops discharging;
S105: repeat step S104 P times until the remaining battery capacity is zero, and collect all data saved during execution into a data set D:

D = [d_1; d_2; …; d_P]

S106: charge the battery to full capacity in constant-current constant-voltage mode;
S107: select the 2nd to Nth elements of the discharge current set I_dis in turn as the discharge current i_dis and execute steps S104 to S106 in a loop until every discharge current in I_dis has been selected; save all data sets D into the original data set D_raw;
S108: select the batteries of the 2nd to Mth notebook computers in turn and execute steps S103 to S107 in a loop until the batteries of all M notebook computers have completed the discharge experiment; save all data sets D into the original data set D_raw;
S2: preprocess the data set: perform data cleaning, data expansion and data normalization on the data in the original data set to obtain a data matrix D_new;
S3: divide the data set: split the data matrix into a training set and a validation set;
S4: construct the neural network model structure: build an attention mechanism layer and an Elman neural network and combine them into an Elman neural network model with an attention mechanism layer;
S5: train the neural network model: import the training-set data into the neural network model for network training, with the attention mechanism layer assigning weights to the input feature data in the training set;
S6: optimize the neural network model: optimize the Elman neural network model with the ant colony algorithm;
S7: evaluate the neural network model and embed it into a battery management system: evaluate the model on the validation set; if it meets the accuracy requirement, embed it into the battery management system of the notebook computer; otherwise re-execute S5 to S6 to retrain and re-optimize the model;
S8: estimate the remaining capacity of the notebook computer: each time the notebook computer enters the sleep state, collect the battery current before sleep and the battery terminal voltage and average temperature within a period after sleep begins, normalize the collected data, input it into the neural network model in the battery management system, and estimate the remaining battery capacity.
In this embodiment, the data preprocessing method described in step S2 includes the following steps:
S201: perform data cleaning on the battery original data set D_raw acquired in step S1 to obtain a first data set;
S202: record the first column of the first data set as the target value L_SOC and the second to last columns as the eigenvalue matrix F, each row of which is a feature vector:

F = [f_1; f_2; …; f_P],  f_p = [i_dis, u_1, u_2, …, u_T, T_p]

where f_p is the feature vector for the case in which the battery is discharged at i_dis, the remaining capacity after discharge stops is SOC_p, and the average temperature is T_p;
S203: expand the discharge current i_dis and the average temperature T_p within the T seconds after discharge stops in the feature vector f_p, i.e. make c−1 copies of i_dis and of T_p and insert them back into the original feature vector, obtaining a vector in which i_dis and T_p each appear c times:

f'_p = [i_dis, …, i_dis, u_1, u_2, …, u_T, T_p, …, T_p]

S204: normalize all feature vectors, mapping the data into the range 0–1, to obtain new feature vectors f*_p;
S205: pair each new feature vector f*_p with its target value L_SOC one-to-one to form a new data matrix D_new.
In this scheme, in step S3 the data matrix is divided into a training set and a validation set as follows:
80% of the data in the data matrix is used as the training set, and the remaining 20% as the validation set.
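The 80/20 split can be sketched as below; whether the rows are shuffled before splitting is not stated in the text, so the random permutation and the matrix size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D_new = rng.random((100, 8))            # stand-in for the preprocessed matrix D_new
idx = rng.permutation(len(D_new))       # shuffle row indices (assumption)
cut = int(0.8 * len(D_new))
train, val = D_new[idx[:cut]], D_new[idx[cut:]]
print(train.shape, val.shape)  # (80, 8) (20, 8)
```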
In this scheme, fig. 2 is a block diagram of the attention mechanism layer and fig. 3 is a structural diagram of the Elman neural network; the neural network model structure of step S4 is built as follows:
S401: construct the attention mechanism layer: determine 2 fully connected layers and their corresponding activation function layers and combine them one by one, where the activation function layer corresponding to the 1st fully connected layer is a tanh layer and that corresponding to the 2nd (last) fully connected layer is a softmax layer;
S402: determine the Elman neural network and the neurons of its input layer, hidden layer, context layer and output layer;
S403: determine the weights and thresholds in the Elman neural network according to the neurons of its input, hidden, context and output layers;
S404: combine the attention mechanism layer built in S401 with the Elman neural network built in S402 and S403 to form an Elman neural network model with an attention mechanism layer.
In this solution, the training of the neural network model in step S5 includes the following specific steps:
s501: using the one-dimensional convolutional layer as an embedding layer, generating an embedding function to capture the dependency relationship between different input characteristic data, and using the data matrix D obtained in step S2newObtaining a data matrix after dimension reduction through a one-dimensional convolution layer
Figure GDA0003420445310000091
S502: importing the data of the training set obtained in the step S3 into a network model for training, and performing weight distribution on the input characteristic data in the training set through an attention mechanism layer;
s503: for the attention mechanism layer described in step S4, the output of the last fully-connected layer is set as etI.e. by
et=Wt×θ(Wt-1×xt-1+bt-1)+bt
Wherein, WtIs the weight of the last full link layer, btFor biasing of the last fully-connected layer, Wt-1Is the weight of the penultimate full link layer, bt-1Is the bias of the penultimate fully-connected layer, xt-1The input of the last but one full connection layer is theta (-) which is the activation function layer corresponding to the last but one full connection layer;
s504: setting the weight output of the softmax function layer corresponding to the last full-connection layer as
Figure GDA0003420445310000092
Figure GDA0003420445310000093
S505: aggregating the dimension-reduced data matrix D̃_new obtained in S501 with the attention weight α_t obtained in S504 to give the final output of the attention mechanism layer

X̃ = α_t ⊙ D̃_new;

S506: inputting the weighted feature data X̃ into the Elman neural network.
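Steps S503 and S504 describe a small stack of fully-connected layers whose final output is passed through softmax to produce attention weights, which then scale the input features before they enter the Elman network. A minimal sketch of those equations (the layer count, sizes, tanh activation and elementwise aggregation are assumptions for illustration):

```python
import numpy as np

def softmax(e):
    e = e - e.max()                  # numerical stability
    return np.exp(e) / np.exp(e).sum()

def attention_weights(x, layers):
    """layers: list of (W, b) pairs; tanh between layers (theta in the patent),
    softmax after the last layer, as in e_t = W_t * theta(W_{t-1} x + b_{t-1}) + b_t."""
    a = x
    for W, b in layers[:-1]:
        a = np.tanh(W @ a + b)
    W_t, b_t = layers[-1]
    return softmax(W_t @ a + b_t)

rng = np.random.default_rng(1)
x = np.array([0.2, 0.9, 0.4])        # reduced feature vector from the 1-D conv layer
layers = [(rng.normal(size=(3, 3)), np.zeros(3)) for _ in range(2)]
alpha = attention_weights(x, layers)
x_weighted = alpha * x               # weighted features fed to the Elman network
```

Here the weighting is elementwise, so features the softmax deems informative dominate the Elman network's input.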
In this solution, as shown in fig. 4, the step S6 of optimizing the network model by using the ant colony algorithm specifically includes the following steps:
s601: taking the weights and thresholds of the hidden layer and the output layer of the Elman neural network and the number of neurons of the hidden layer as parameters to be optimized of the ant colony algorithm;
S602: setting the initialization parameters of the ant colony algorithm, including the maximum number of iterations G_max, the number of ants K, and the pheromone intensity τ_ij;
S603: at the beginning of the algorithm, K ants are randomly placed on K position points, where each position encodes the weights and thresholds of the hidden and output layers and the number of hidden-layer neurons; at this point the pheromone on every path is equal and is set to

τ_ij(0) = δ

where δ is a small constant;
S604: each ant independently selects its next position point according to the residual pheromone and heuristic information on the paths, i.e. updates its position; the probability that ant k moves from point i to point j is

p_ij^k(t) = [τ_ij(t)]^α · [η_ij]^β / Σ_{s∈J_k} [τ_is(t)]^α · [η_is]^β, for j ∈ J_k (and 0 otherwise)

where J_k denotes the nodes not yet visited by ant k, τ_ij(t) is the pheromone intensity from position i to position j at time t, η_ij is the heuristic factor (the reciprocal of the distance between points i and j) expressing the expected desirability for ant k of moving from i to j, and α and β are two constants weighting the pheromone and the heuristic factor respectively;
S605: when all ants have completed their search, the pheromone is updated as follows:

τ_ij(t+1) = (1 − ρ)·τ_ij(t) + Σ_{k=1}^{K} Δτ_ij^k

where K is the number of ants, ρ is the evaporation coefficient of the pheromone on the path (set to 0.5), and Δτ_ij^k, the pheromone left by the k-th ant on the path from i to j, is defined as

Δτ_ij^k = Q / C_k if ant k traverses the path (i, j), and Δτ_ij^k = 0 otherwise

where Q is a constant and C_k is the total length of the complete path from i to j travelled by ant k;
S606: when all ants have selected their next position point using the transition probability, recording the best search result and updating the pheromone amount at that position;
S607: using the sum of squared errors (SSE) as the evaluation function of the algorithm:

SSE = Σ_n (SOC_pre(n) − SOC_real(n))²

where SOC_pre is the remaining-capacity estimate output by the network and SOC_real is the corresponding true remaining capacity;
s608: and if the termination condition is met, ending the search process, outputting the optimal values of the weight, the threshold and the neuron number, and obtaining the optimized Elman neural network model.
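Steps S601 to S608 can be sketched as a discrete ant colony search: each parameter to optimize gets a set of candidate values, ants pick one value per parameter with probability proportional to pheromone, pheromone evaporates by ρ each iteration, and each ant deposits Q/C_k on the values it chose. The sketch below is a simplified variant (no heuristic factor η, a toy SSE-style cost in place of network training error); all names and values are illustrative:

```python
import numpy as np

def aco_minimize(candidates, cost_fn, n_ants=10, n_iter=30,
                 alpha=1.0, rho=0.5, Q=1.0, delta=0.1, seed=0):
    """candidates: one 1-D array of discrete values per parameter.
    Ants choose values with probability proportional to pheromone**alpha;
    pheromone evaporates by rho per iteration; each ant deposits Q/cost."""
    rng = np.random.default_rng(seed)
    tau = [np.full(len(c), delta) for c in candidates]   # tau_ij(0) = delta
    best, best_cost = None, np.inf
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            idx = [int(rng.choice(len(c), p=(t ** alpha) / (t ** alpha).sum()))
                   for c, t in zip(candidates, tau)]
            sol = [c[i] for c, i in zip(candidates, idx)]
            cost = cost_fn(sol)
            tours.append((idx, cost))
            if cost < best_cost:
                best, best_cost = sol, cost
        for t in tau:
            t *= (1.0 - rho)                             # evaporation
        for idx, cost in tours:                          # deposit Q / C_k
            for t, i in zip(tau, idx):
                t[i] += Q / (cost + 1e-9)
    return best, best_cost

# toy use: two "weights" plus a hidden-neuron count, SSE-style cost
target = np.array([0.3, -0.2, 5.0])
cands = [np.linspace(-1, 1, 21), np.linspace(-1, 1, 21), np.arange(1, 11.0)]
sol, sse = aco_minimize(cands, lambda s: float(((np.array(s) - target) ** 2).sum()))
```

In the patent the cost function would be the validation SSE of an Elman network trained with the candidate weights, thresholds and hidden-layer size.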
In this embodiment, the step S7 of evaluating the neural network model and embedding it into the battery management system includes the following specific steps:
setting an error reference value ε; inputting the input feature data of the validation set into the trained neural network model to obtain the remaining-capacity estimate L_pre_soc, and comparing it with the true value L_SOC: if the condition |L_pre_soc − L_SOC| < ε is satisfied, outputting the neural network model and embedding it into the battery management system, otherwise returning to step S4;
in this solution, the step S8 of estimating the remaining power of the notebook computer specifically includes the following steps:
the battery management system monitors the cell voltage, current and temperature in real time; when the current is detected to drop abruptly to 0, the battery enters the sleep state, and the battery management system records the battery current i_dis at the last sampling instant before sleep, the terminal voltages u_1, u_2, …, u_T within the T seconds after discharge stops, and the average temperature T_p; then i_dis, u_1, …, u_T and T_p are processed according to the preprocessing method of step S2;
and finally, inputting the processed data into the neural network model output in the step S7 to obtain an estimated value of the residual electric quantity of the notebook computer.
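The inference path of step S8 — record i_dis, the relaxation voltages and the average temperature, preprocess as in S2, then query the embedded model — can be sketched as follows (the feature layout, replication count c, stored min/max bounds and the DummyModel stand-in are all illustrative assumptions, not the patent's trained network):

```python
import numpy as np

def estimate_soc(model, i_dis, voltages, temp, feat_min, feat_max, c=3):
    """Sketch of S8: replicate current/temperature c times (as in the data
    expansion of S203), min-max normalize with training-set bounds, and query
    the model. `model` is any object exposing predict(x); names are illustrative."""
    x = np.concatenate([[i_dis] * c, voltages, [temp] * c])
    x_norm = (x - feat_min) / (feat_max - feat_min + 1e-12)   # map to [0, 1]
    return float(model.predict(x_norm))

class DummyModel:                      # stand-in for the trained Elman model
    def predict(self, x):
        return x.mean()

v = np.array([3.9, 3.92, 3.95])        # relaxation voltages after discharge stops
soc = estimate_soc(DummyModel(), i_dis=1.5, voltages=v, temp=25.0,
                   feat_min=np.zeros(9), feat_max=np.full(9, 50.0))
```

In deployment, feat_min/feat_max would be the normalization bounds saved from step S2 so that sleep-time measurements are scaled exactly like the training data.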
It should be understood that the above embodiments are merely examples given to illustrate the present invention clearly and are not intended to limit its embodiments; the description is neither necessary nor exhaustive of all embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (5)

1. A method for estimating the residual capacity of a notebook computer based on an improved Elman neural network is suitable for estimating the residual capacity of a battery when the notebook computer enters a sleep state, and is characterized by comprising the following steps of:
S1: constructing an original data set D_raw: periodically discharging the batteries of a plurality of notebook computers of the same model; at the end of each discharge, recording the battery current before the end of discharge, the battery terminal voltage within a period of time after the end of discharge, and the average temperature as the input feature data of the original data set, and recording the remaining battery capacity at the end of discharge as the target value of the original data set; the specific steps comprise:
S101: selecting the batteries of M laptops of the same model; constructing an arithmetic sequence of N elements in the interval from 0 to the rated current I_max to form the discharge current set I_dis = [i_1, i_2, …, i_N]; evenly dividing the battery capacity interval [0, 100%] into P regions;
S102: selecting the battery of the 1st notebook computer to perform an intermittent discharge experiment;
S103: selecting the 1st element of the discharge current set I_dis as the discharge current i_dis;
S104: discharging the selected battery at the selected constant current i_dis, stopping the discharge each time 1/P of the rated capacity of the battery has been discharged and maintaining the stopped state for T seconds; storing the discharge current i_dis, the terminal voltages u_1, u_2, …, u_T within the T seconds after discharge stops, and the average temperature T_p as input feature data, and storing the current remaining capacity SOC_p as the target value, forming one sample

d_p = [SOC_p, i_dis, u_1, u_2, …, u_T, T_p]

where p is the number of times the battery has discharged 1/P of its rated capacity at current i_dis, and u_t is the terminal voltage at the t-th instant after the battery stops discharging;
S105: repeating step S104 P times until the remaining battery capacity is zero, and assembling all the data saved during this process into a data set D:

D = [d_1, d_2, …, d_P]ᵀ

where d_p is the sample saved at the p-th discharge stop;
s106: the battery is charged to full capacity by adopting a constant-current constant-voltage charging mode;
S107: selecting the 2nd to N-th elements of the discharge current set I_dis in turn as the discharge current i_dis, and cyclically executing steps S104 to S106 until all discharge currents in I_dis have been selected; saving all data sets D into the original data set D_raw;
S108: selecting the batteries of the 2nd to M-th notebook computers in turn, and cyclically executing steps S103 to S107 until the batteries of all M notebook computers have completed the discharge experiment; saving all data sets D into the original data set D_raw;
S2: preprocessing the data set: performing data cleaning, data expansion and data normalization on the data in the original data set to obtain a data matrix D_new;
S3: dividing a data set, namely dividing the data matrix into a training set and a verification set;
s4: constructing a neural network model structure, namely constructing an attention mechanism layer and an Elman neural network, and forming an Elman neural network model with the attention mechanism layer;
s5: training a neural network model, namely importing the data in the training set into the neural network model for network training, and performing weight distribution on the input characteristic data in the training set through an attention mechanism layer;
s6: optimizing a neural network model, namely optimizing the Elman neural network model by adopting an ant colony algorithm;
s7: evaluating a neural network model and embedding the neural network model into a battery management system, namely evaluating the neural network model by using the verification set, embedding the neural network model into the battery management system of a notebook computer if the neural network model meets the precision requirement, and re-executing S5 to S6 and re-training and optimizing the model if the neural network model does not meet the precision requirement;
S8: estimating the remaining capacity of the notebook computer: each time the notebook computer enters the sleep state, acquiring the battery current before sleep and the battery terminal voltage and average temperature within a period of time after sleep, normalizing the acquired data, inputting it into the neural network model in the battery management system, and estimating the remaining battery capacity.
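The data-collection protocol of steps S101 to S105 can be sketched with a toy battery simulator (SimBattery and its methods are hypothetical stand-ins, not a real cell model; only the loop structure mirrors the claim):

```python
import numpy as np

def build_discharge_currents(i_max, n):
    """S101: N-element arithmetic sequence of currents in (0, I_max]."""
    return np.linspace(i_max / n, i_max, n)

class SimBattery:
    """Toy stand-in for a real cell; exposes only what the loop needs."""
    def __init__(self):
        self.soc, self.temperature = 1.0, 25.0
    def discharge_fraction(self, i_dis, frac):
        self.soc = max(0.0, self.soc - frac)
    def relaxation_voltages(self, T):
        return [3.0 + self.soc] * T          # crude relaxation-voltage stand-in

def run_experiment(battery, i_dis, P=10, T=3):
    """S104-S105: discharge 1/P of capacity per step; after each stop record
    (i_dis, relaxation voltages over T seconds, temperature, SOC)."""
    data = []
    for _ in range(P):
        battery.discharge_fraction(i_dis, 1.0 / P)
        u = battery.relaxation_voltages(T)
        data.append((i_dis, u, battery.temperature, battery.soc))
    return data

currents = build_discharge_currents(i_max=2.0, n=4)   # 4-element current set
samples = run_experiment(SimBattery(), i_dis=currents[0])
```

Iterating run_experiment over every current in the set and over every battery (S107, S108) yields the raw data set D_raw.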
2. The method for estimating the remaining power of a notebook computer based on the improved Elman neural network as claimed in claim 1, wherein the step S2 of preprocessing the data set specifically comprises the following steps:
S201: performing data cleaning on the raw battery data set D_raw acquired in step S1 to obtain a first data set;
S202: recording the first column of the first data set as the target value L_SOC and the second to last columns as the feature matrix F, each row of which is a feature vector:

F = [f_1, f_2, …]ᵀ, with f_p = [i_dis, u_1, u_2, …, u_T, T_p]

where f_p is the feature vector for the case in which the battery is discharged at i_dis, the remaining capacity after stopping discharge is SOC_p, and the average temperature is T_p;
S203: expanding, in each feature vector f_p, the discharge current i_dis and the average temperature T_p within T seconds after stopping discharge, i.e. copying i_dis and T_p c−1 times each and inserting the copies back into the original feature vector to obtain:

f_p′ = [i_dis, …, i_dis, u_1, u_2, …, u_T, T_p, …, T_p]

in which i_dis and T_p each appear c times;
S204: normalizing all feature vectors, mapping the data into the range 0–1 to obtain new feature vectors f̃_p;
S205: pairing the new feature vectors f̃_p one-to-one with the target values L_SOC to form a new data matrix D_new.
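The expansion and normalization of S203 and S204 can be sketched as follows (the replication count c and per-sample min–max scaling are illustrative simplifications; the patent normalizes over the whole data set rather than per sample):

```python
import numpy as np

def expand_and_normalize(i_dis, voltages, temp, c=3):
    """S203: replicate i_dis and T_p c times alongside the voltage samples.
    S204: min-max normalize the resulting vector into [0, 1]."""
    f = np.concatenate([[i_dis] * c, voltages, [temp] * c])
    return (f - f.min()) / (f.max() - f.min())

f = expand_and_normalize(i_dis=2.0, voltages=np.array([3.7, 3.8, 3.9]), temp=24.0)
```

Replicating the scalar current and temperature balances their influence against the T voltage samples before the attention layer assigns weights.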
3. The method for estimating the remaining power of the notebook computer based on the improved Elman neural network as claimed in claim 1, wherein the step S4 of constructing the neural network model structure specifically comprises the following steps:
s401: constructing an attention mechanism layer, determining c full-connection layers and activation function layers corresponding to the c full-connection layers, and combining the c full-connection layers and the activation function layers one by one, wherein the activation function layer corresponding to the last full-connection layer is a softmax function layer;
S402: determining the Elman neural network and the neurons of its input layer, hidden layer, context layer and output layer, wherein the number of input channels of the input layer corresponds to the number of output channels of the attention mechanism layer, and the value of the output layer is the remaining battery capacity;
S403: determining the weights and thresholds of the Elman neural network according to the neurons of its input layer, hidden layer, context layer and output layer;
S404: combining the attention mechanism layer built in S401 with the Elman neural network built in S402 and S403 to form an Elman neural network model with an attention mechanism layer.
4. The method for estimating the remaining power of the notebook computer based on the improved Elman neural network as claimed in claim 1, wherein the step S5 of training the neural network model specifically comprises the following steps:
S501: using a one-dimensional convolutional layer as an embedding layer to generate an embedding function that captures the dependency between different input feature data: the data matrix D_new obtained in step S2 is passed through the one-dimensional convolutional layer to obtain the dimension-reduced data matrix D̃_new;
S502: importing the data of the training set obtained in the step S3 into a network model for training, and performing weight distribution on the input characteristic data in the training set through an attention mechanism layer;
S503: for the attention mechanism layer described in step S4, denote the output of the last fully-connected layer as e_t, i.e.

e_t = W_t × θ(W_{t−1} × x_{t−1} + b_{t−1}) + b_t

where W_t and b_t are the weight and bias of the last fully-connected layer, W_{t−1} and b_{t−1} are the weight and bias of the penultimate fully-connected layer, x_{t−1} is the input of the penultimate fully-connected layer, and θ(·) is the activation function corresponding to the penultimate fully-connected layer;
S504: setting the attention weight output by the softmax function layer corresponding to the last fully-connected layer as

α_t = exp(e_t) / Σ_k exp(e_k);
S505: aggregating the dimension-reduced data matrix D̃_new obtained in S501 with the attention weight α_t obtained in S504 to give the final output of the attention mechanism layer

X̃ = α_t ⊙ D̃_new;

S506: inputting the weighted feature data X̃ into the Elman neural network.
5. The method for estimating the remaining power of the notebook computer based on the improved Elman neural network as claimed in claim 1, wherein the step S6 of optimizing the neural network model specifically comprises the following steps:
s601: taking the weights and thresholds of the hidden layer and the output layer of the Elman neural network and the number of neurons of the hidden layer as parameters to be optimized of the ant colony algorithm;
S602: setting the initialization parameters of the ant colony algorithm, including the maximum number of iterations G_max, the number of ants K, and the pheromone intensity τ_ij;
S603: when the algorithm starts, K ants are randomly placed on K position points, where each position encodes the weights and thresholds of the hidden and output layers and the number of hidden-layer neurons; at this point the pheromone on every path is equal and is set to

τ_ij(0) = δ

where δ is a small constant;
S604: each ant independently selects its next position point according to the residual pheromone and heuristic information on the paths, i.e. updates its position; the probability that ant k moves from point i to point j is

p_ij^k(t) = [τ_ij(t)]^α · [η_ij]^β / Σ_{s∈J_k} [τ_is(t)]^α · [η_is]^β, for j ∈ J_k (and 0 otherwise)

where J_k denotes the nodes not yet visited by ant k, τ_ij(t) is the pheromone intensity from position i to position j at time t, η_ij is the heuristic factor (the reciprocal of the distance between points i and j) expressing the expected desirability for ant k of moving from i to j, and α and β are two constants weighting the pheromone and the heuristic factor respectively;
S605: when all ants have completed their search, the pheromone is updated as follows:

τ_ij(t+1) = (1 − ρ)·τ_ij(t) + Σ_{k=1}^{K} Δτ_ij^k

where K is the number of ants, ρ is the evaporation coefficient of the pheromone on the path (set to 0.5), and Δτ_ij^k, the pheromone left by the k-th ant on the path from i to j, is defined as

Δτ_ij^k = Q / C_k if ant k traverses the path (i, j), and Δτ_ij^k = 0 otherwise

where Q is a constant and C_k is the total length of the complete path from i to j travelled by ant k;
S606: when all ants have selected their next position point using the transition probability, recording the best search result and updating the pheromone amount at that position;
S607: using the sum of squared errors (SSE) as the evaluation function of the algorithm:

SSE = Σ_n (SOC_pre(n) − SOC_real(n))²

where SOC_pre is the remaining-capacity estimate output by the network and SOC_real is the corresponding true remaining capacity;
s608: and if the termination condition is met, ending the search process, outputting the optimal values of the weight, the threshold and the neuron number, and obtaining the optimized Elman neural network model.
CN202110733278.XA 2021-06-29 2021-06-29 Method for estimating residual electric quantity of notebook computer based on improved Elman neural network Active CN113534938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110733278.XA CN113534938B (en) 2021-06-29 2021-06-29 Method for estimating residual electric quantity of notebook computer based on improved Elman neural network


Publications (2)

Publication Number Publication Date
CN113534938A CN113534938A (en) 2021-10-22
CN113534938B true CN113534938B (en) 2022-04-01

Family

ID=78126277


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115598557B (en) * 2022-08-26 2023-08-25 广东工业大学 Lithium battery SOH estimation method based on constant-voltage charging current
CN116937752B (en) * 2023-09-14 2023-12-26 广州德姆达光电科技有限公司 Charging and discharging control method for outdoor mobile energy storage power supply

Citations (2)

Publication number Priority date Publication date Assignee Title
CN107037373A (en) * 2017-05-03 2017-08-11 广西大学 Battery dump energy Forecasting Methodology based on neutral net
CN112949610A (en) * 2021-04-21 2021-06-11 华南理工大学 Improved Elman neural network prediction method based on noise reduction algorithm

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN109738807B (en) * 2019-01-03 2021-03-30 温州大学 Method for estimating SOC (State of Charge) based on BP (Back propagation) neural network optimized by ant colony algorithm
CN110687452B (en) * 2019-09-05 2022-05-20 南京理工大学 Lithium battery capacity online prediction method based on K-means clustering and Elman neural network


Non-Patent Citations (1)

Title
"Elman neural network using ant colony optimization algorithm for estimating of state of charge of lithium-ion battery"; Xiaobo Zhao et al.; Journal of Energy Storage; 2020-08-28; abstract, sections 2–4 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant