CN112329352A - Supercapacitor service life prediction method and device applied to power system - Google Patents
Supercapacitor service life prediction method and device applied to power system Download PDFInfo
- Publication number
- CN112329352A (application number CN202110010643.4A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/25—Design optimisation, verification or simulation using particle-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2119/00—Details relating to the type or aim of the analysis or the optimisation
- G06F2119/04—Ageing analysis or optimisation against ageing
Abstract
The embodiment of the application discloses a supercapacitor service life prediction method and device applied to a power system. The method is data-driven: it combines the heuristic Kalman filter (HKF) algorithm with an extreme learning machine (ELM) to predict the remaining service life of a supercapacitor. The technical scheme provided by the embodiment of the application overcomes the defects of existing model-based methods, such as high complexity and poor generalization capability: while prediction accuracy is ensured, model complexity and run time are reduced, and the generalization capability of the model is improved by the data-driven method. In addition, because a traditional extreme-learning-machine-based method tends to produce a singular matrix when the input weights and offsets are generated randomly, the embodiment of the application combines the heuristic Kalman filter algorithm to resolve this matrix singularity, obtaining a more accurate prediction of the remaining service life of the supercapacitor.
Description
Technical Field
The application relates to the technical field of super capacitors, in particular to a super capacitor service life prediction method and device applied to a power system.
Background
The super capacitor is a novel energy storage device between a traditional capacitor and a rechargeable battery, has the characteristics of rapid charging and discharging of the capacitor and the energy storage characteristic of the battery, and is a novel energy storage element with high efficiency, practicability and environmental protection. The super capacitor is widely applied to the power system by virtue of the advantages of small volume, low production cost, simple internal structure and the like.
In the actual operation process, the aging and end-of-life states of the super capacitor have serious influence on the safety and reliability of the power system. In order to ensure stable and safe power supply of the power system and accurately predict the residual service life of the super capacitor, the replacement or maintenance of the super capacitor before the super capacitor reaches the end-of-life state has important significance on the operation quality of the power system.
At present, commonly used methods for predicting the remaining service life of the supercapacitor fall into two classes: model-driven methods and data-driven methods. A model-driven method establishes an equivalent circuit model according to the energy storage principle and internal structure of the supercapacitor, completes model parameter identification from charge-discharge experiments, and then predicts the remaining service life. Such prediction models include: aging mechanism models, particle filter models, Arrhenius-law models, Weibull failure statistics models and RC equivalent circuit models. A data-driven method trains a model from a historical data set by means of machine learning or the like to predict the unknown part; provided the historical data are accurate, it can also reach high accuracy. Common data-driven methods currently include: support vector machines, relevance vector machines, autoregressive models and artificial neural networks.
Because the internal electrochemical structure of the supercapacitor is complex, model-based methods for predicting its remaining service life are highly complex and difficult to operate. Compared with model-based prediction, data-driven prediction has the advantages of a simple structure and low complexity. Provided the training data set is accurate, a data-driven method can reach high accuracy without studying the internal structure and operating mechanism of the supercapacitor.
The artificial neural network has obvious advantages in prediction owing to its self-learning capability, but a traditional artificial neural network usually needs a large number of training parameters to be set and easily falls into local optima. A better method for predicting the remaining service life of the supercapacitor is therefore urgently needed.
Disclosure of Invention
The embodiment of the application provides a super capacitor service life prediction method and device applied to a power system, and is beneficial to solving the technical problems in the prior art.
In a first aspect, an embodiment of the present application provides a method for predicting remaining service life of a supercapacitor, including:
step S101: initializing parameters of an ELM model of an extreme learning machine;
step S102: assigning, for the heuristic Kalman filter HKF algorithm, a specified radius ρ, a particle number N, an optimal particle number N_ζ, a deceleration coefficient α and an iteration count k, wherein the initial iteration count is set to k = 0 and a maximum iteration count is set;
step S103: constructing a Gaussian generator N(m_k, S_k²), the Gaussian generator N(m_k, S_k²) being used for generating, in an iteration, a particle swarm obeying a normal distribution with mean m_k and standard deviation S_k;
step S104: taking a capacitance aging detection data set as an original data set, and dividing the original data set into a training data set and a test data set;
step S105: generating N groups of particles by the Gaussian generator N(m_k, S_k²);
step S106: taking the N groups of particles as random parameters of the HKF algorithm, and introducing the training data set into the ELM model for data training;
step S107: taking the AE function as the loss function of the ELM model and the HKF algorithm, and performing correction through the AE value generated by the training data set in the ELM model; taking the MSE function, the RMSE function and the R² coefficient of determination as cost functions of the ELM model;
step S108: selecting N_ζ groups of particles from the N groups of particles according to the loss function as optimal candidate values;
step S109: calculating the optimal mean ζ_k and the observation noise v_k of the N_ζ groups of particles;
step S110: performing the Kalman update in the HKF algorithm to calculate m_{k+1} and S_{k+1};
step S111: letting m_k = m_{k+1} and S_k = S_{k+1}, completing the initialization of the (k+1)-th iteration;
step S112: judging whether m_k meets a preset condition; if the preset condition is met or the iteration count reaches the maximum, ending the iteration and proceeding to step S113; if the preset condition is not met, returning to step S105;
step S113: taking the value of m_k as the input weights and offsets of the ELM model;
step S114: inputting the independent variables of the original data set into the ELM model to start prediction;
step S115: obtaining a capacitance aging prediction data set comprising the independent variables and predicted values of the original data set.
Preferably, the step S105 specifically includes:
the Gaussian generator N(m_k, S_k²) generates N groups of particles obeying a normal distribution, recorded as x_k = {x_k(1), x_k(2), …, x_k(N)}, where x_k(i) denotes the i-th particle, i = 1, 2, 3, … N, and N is the number of particles.
Preferably, the step S108 specifically includes:
calculating, by the loss function, the error between each particle x_k(i) and the mean m_k; in order of error from small to large, selecting the first N_ζ particles to form the optimal candidate particle group, recorded as ξ_k = {ξ_k(1), ξ_k(2), …, ξ_k(N_ζ)}, where N_ζ is the optimal particle number.
Preferably, the step S109 specifically includes:
calculating the optimal mean of the N_ζ groups of particles as ζ_k = (1/N_ζ) Σ_{i=1}^{N_ζ} ξ_k(i), and calculating the observation noise as v_k = (1/N_ζ) Σ_{i=1}^{N_ζ} (ξ_k(i) − ζ_k)².
Preferably, the step S110 specifically includes:
in the HKF algorithm, the expression of the Kalman estimation is: x̂_k = A_k·x̂_k⁻ + L_k·ζ_k, where x̂_k is the posterior estimate of the particle; x̂_k⁻ is the optimal value of the prior estimate of the particle; and A_k and L_k are state transition matrices.
Preferably, the step S110 further includes:
the state transition matrices are used to minimize the particle error in the Kalman estimation; the error expression is e_k = x_k^opt − x̂_k,
where e_k is the posterior estimation error between the observed value x_k^opt and the posterior particle estimate x̂_k;
and where the observed value satisfies x_k^opt = x_k + v_k, v_k being the system observation noise.
Preferably, the step S110 further includes:
when the posterior estimation error e_k is minimal, its expectation satisfies E[e_k] = 0,
where x̂_k⁻ is the optimal value of the prior estimate of the particle and satisfies E[x_k^opt − x̂_k⁻] = 0; the state transition matrices A_k and L_k then satisfy A_k + L_k = I, where I is the identity matrix.
Preferably, the step S110 further includes: computing the gain L_k = P_k⁻(P_k⁻ + V_k)⁻¹ and the posterior covariance P_k = (I − L_k)P_k⁻, where P_k⁻ is the prior estimation error covariance;
taking the posterior estimate x̂_k obtained in the k-th iteration as the mean of the particle swarm in the (k+1)-th iteration: m_{k+1} = x̂_k;
substituting the diagonal matrix W_k of the posterior estimation error covariance P_k into the variance: S_{k+1} = √(diag(W_k)); and outputting the mean m_{k+1} and variance S²_{k+1} for the (k+1)-th cycle.
Preferably, the variance update specifically includes: S_{k+1}(i) = S_k(i) + a_k(√(W_k(i,i)) − S_k(i)), where a_k is the deceleration factor, α is the deceleration coefficient, and α ∈ (0, 1].
In a second aspect, an embodiment of the present application provides an apparatus for predicting remaining service life of a super capacitor, including:
a processor;
a memory for storing instructions for execution by the processor;
wherein the processor is configured to perform the method of any of the first aspect.
By adopting the technical scheme provided by the embodiment of the application, the defects of existing model-based methods, such as high complexity and poor generalization capability, are overcome: while prediction accuracy is ensured, model complexity and run time are reduced, and the generalization capability of the model is improved by the data-driven method. In addition, because a traditional extreme-learning-machine-based method tends to produce a singular matrix when the input weights and offsets are generated randomly, the embodiment of the application combines the heuristic Kalman filter algorithm to resolve this matrix singularity, obtaining a more accurate prediction of the remaining service life of the supercapacitor.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of a capacitance aging prediction model based on an extreme learning machine ELM according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a capacitance aging prediction method based on an extreme learning machine ELM according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a heuristic kalman filtering HKF algorithm according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a heuristic kalman filtering HKF algorithm according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a capacitor aging prediction model based on HKF-ELM according to an embodiment of the present disclosure;
FIG. 6 is a schematic flowchart of a capacitor aging prediction method based on HKF-ELM according to an embodiment of the present disclosure;
fig. 7A is a simulation diagram for predicting the remaining service life of a super capacitor based on an ELM model according to an embodiment of the present disclosure;
FIG. 7B is a simulation diagram of predicting the remaining service life of an ultracapacitor based on an HKF-ELM model according to an embodiment of the application;
fig. 7C is a graph illustrating a simulation of predicting the remaining service life of an ultracapacitor based on the PSO-ELM according to an embodiment of the present disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Among data-driven prediction methods, the artificial neural network has obvious advantages in prediction owing to its self-learning capability, but a traditional artificial neural network usually needs a large number of training parameters to be set and easily falls into local optima. The extreme learning machine ELM is a simple, easy-to-use and effective learning algorithm for single-hidden-layer feedforward neural networks. The ELM only requires the number of hidden-layer neurons to be set; it needs no adjustment of the network's input weights or hidden-layer neuron offsets during execution and produces a unique optimal solution, so it has the advantages of fast learning and good generalization performance.
Fig. 1 is a schematic diagram of a capacitance aging prediction model based on an extreme learning machine ELM according to an embodiment of the present disclosure; fig. 2 is a schematic flowchart of a capacitance aging prediction method based on an extreme learning machine ELM according to an embodiment of the present disclosure. Therein, reference numerals S201-S208 in fig. 1 correspond to the steps in fig. 2 for characterizing the data stream flow direction in the model. As shown in fig. 1 in conjunction with fig. 2, the method mainly comprises the following steps.
Step S201: and taking the capacitance aging detection data set as an original data set, and dividing the original data set into a training data set and a testing data set.
Step S202: determining a hidden layer excitation function, a loss function, a cost function, and a number of hidden layer neurons.
Specifically, the Sigmoid function is determined as the hidden-layer excitation function f(x); the AE function is determined as the loss function; and the MSE function, the RMSE function and the R² coefficient of determination are determined as the ELM model cost functions. According to the dimensions of the training set, the number of hidden-layer neurons L is determined by the formula L = √(m + n) + k, where m is the input-layer neuron dimension, equal to the dimension of the input matrix; n is the output-layer neuron dimension, equal to the dimension of the output matrix; and k is an additional constant term with k ∈ [0, 10], whose specific value is set by the operator.
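The empirical sizing rule above can be sketched as follows (an illustrative sketch: the function name and the rounding up to an integer are assumptions, since the formula itself does not specify them):

```python
import math

def hidden_neuron_count(m, n, k=4):
    """Empirical hidden-layer size: L = sqrt(m + n) + k, with k in [0, 10]."""
    if not (0 <= k <= 10):
        raise ValueError("the additional constant k must lie in [0, 10]")
    # round up so the neuron count is an integer
    return math.ceil(math.sqrt(m + n) + k)

# e.g. a training set with 3 input features and 1 output variable
L = hidden_neuron_count(3, 1)
```

With m = 3, n = 1 and k = 4 this gives L = 6.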
Step S203: randomly generating input weights W_i and offsets B_i.
Step S204: taking the independent variables of the training data set together with the input weights W_i and offsets B_i as the input matrix, calculating the output matrix F through the excitation function.
Step S205: and calculating the error between the dependent variable of the output matrix F and the dependent variable of the training data set by using a loss function, and updating the output weight beta according to the error between the dependent variable of the output matrix F and the dependent variable of the training data set to obtain a correction model.
In particular, the output weight β is updated according to the formula β = F⁺Y, based on the error between the dependent variable of the output matrix F and the dependent variable of the training data set, to obtain a corrected model.
The corrected model is the corrected ELM model, where F⁺ is the Moore-Penrose generalized inverse of the output matrix F, and Y is the dependent variable of the training data set.
Step S206: and predicting through the correction model by taking the independent variable of the training data set and the independent variable of the testing data set as input values to obtain a predicted data set.
Step S207: a predicted image is rendered and output through the prediction data set.
Step S208: and calculating and outputting an error between the prediction data set and the original data set through a cost function.
And finishing prediction.
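Steps S201-S208 can be sketched in a few lines of NumPy (a hedged illustration, not the patented implementation; the function names, the standard-normal initialization of W and B, and the toy fading curve are assumptions):

```python
import numpy as np

def elm_train(X, Y, L, seed=0):
    """Single-hidden-layer ELM: random input weights/offsets (S203),
    Sigmoid hidden-layer output F (S204), and output weights beta solved
    via the Moore-Penrose generalized inverse of F (S205)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], L))   # input weights W_i
    B = rng.standard_normal(L)                 # offsets B_i
    F = 1.0 / (1.0 + np.exp(-(X @ W + B)))     # hidden-layer output matrix
    beta = np.linalg.pinv(F) @ Y               # beta = pinv(F) @ Y
    return W, B, beta

def elm_predict(X, W, B, beta):                # S206: predict on new inputs
    F = 1.0 / (1.0 + np.exp(-(X @ W + B)))
    return F @ beta

# toy capacitance-fading curve over normalized cycle count (illustrative only)
t = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y = 1.0 - 0.2 * t[:, 0] ** 1.5
W, B, beta = elm_train(t, y, L=20)
mse = float(np.mean((elm_predict(t, W, B, beta) - y) ** 2))
```

The least-squares solution through `np.linalg.pinv` is what makes the ELM fast: only β is fitted, while W and B stay random.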
Although the extreme learning machine ELM has certain advantages in predicting the remaining useful life of a supercapacitor, the inventor found during research that a multicollinear matrix is generated with a certain probability while the conventional ELM model randomly generates its input weights and offsets, so that singularity occurs when the model computes the generalized inverse F⁺ (i.e., FᵀF becomes singular), and random fluctuation then occurs when solving the output weight β. Random fluctuation in the output weight negatively affects ELM prediction accuracy. Analysis shows that, compared with other KF algorithms such as the traditional KF, the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), the heuristic Kalman filter HKF algorithm can resolve the matrix singularity arising when the extreme learning machine randomly generates the input weights and offsets, and has the advantage of handling non-convex optimization; the HKF algorithm also needs fewer initialization parameters and, compared with other heuristic algorithms, has lower time cost and implementation difficulty. Thus, the input weights and offsets of the ELM model can be optimized by the HKF algorithm. The application therefore constructs a heuristic-Kalman-filter-optimized extreme learning machine for predicting the aging life of the supercapacitor.
The heuristic kalman filter HKF algorithm is first explained below.
Fig. 3 is a schematic structural diagram of a heuristic kalman filtering HKF algorithm provided in the embodiment of the present application, and fig. 4 is a schematic flow diagram of a heuristic kalman filtering HKF algorithm provided in the embodiment of the present application. As shown in fig. 3 in conjunction with fig. 4, it mainly includes the following steps.
Step S401: in the k-th iteration, initializing the HKF algorithm parameters and defining the number of particles N generated by the Gaussian generator, the optimal particle number N_ζ and the deceleration coefficient α.
Step S402: generating, through the probability density function (PDF) of the Gaussian generator, N groups of normally distributed particles with mean m_k and standard deviation S_k.
Specifically, the Gaussian generator N(m_k, S_k²) generates N groups of particles satisfying a normal distribution, recorded as x_k = {x_k(1), x_k(2), …, x_k(N)},
where x_k(i) denotes the i-th particle, i = 1, 2, 3, … N, and N is the number of particles.
Step S403: selecting a loss function, by which the error between each particle x_k(i) and the observed value x_opt is calculated.
Step S404: selecting, from the N groups of normally distributed particles in order of error from small to large, N_ζ groups of particles to form the optimal candidate particle group.
Specifically, the loss function calculates the error between each particle x_k(i) and the mean m_k; in order of error from small to large, the first N_ζ particles are selected to form the optimal candidate particle group, recorded as ξ_k = {ξ_k(1), ξ_k(2), …, ξ_k(N_ζ)},
where N_ζ is the optimal particle number.
Step S405: computing an optimal mean of the best candidate population of particlesζkAnd a disturbance matrix Vk。
step S406: and performing Kalman estimation on the optimal candidate particle swarm, and outputting the covariance of the posterior estimation and the posterior error.
Specifically, in the Kalman estimation module of the HKF algorithm, the expression of the Kalman estimation is: x̂_k = A_k·x̂_k⁻ + L_k·ζ_k,
where x̂_k is the posterior estimate of the particle; x̂_k⁻ is the optimal value of the prior estimate of the particle; and A_k and L_k are state transition matrices.
The state transition matrices are used to minimize the particle error in the Kalman estimation; the error expression is e_k = x_k^opt − x̂_k,
where e_k is the posterior estimation error between the observed value x_k^opt and the posterior particle estimate x̂_k, and the observed value satisfies x_k^opt = x_k + v_k.
Here v_k is the system observation noise, which can be estimated according to the actual operating conditions of the system.
When the posterior estimation error e_k is minimal, its expectation satisfies E[e_k] = 0,
where x̂_k⁻ is the optimal value of the prior estimate of the particle and satisfies E[x_k^opt − x̂_k⁻] = 0.
To guarantee this, the state transition matrices A_k and L_k satisfy A_k + L_k = I,
where I is the identity matrix.
Substituting A_k = I − L_k into the Kalman estimation module expression and rearranging gives: x̂_k = x̂_k⁻ + L_k(ζ_k − x̂_k⁻); L_k is determined by L_k = P_k⁻(P_k⁻ + V_k)⁻¹, which minimizes the covariance of the posterior error.
Step S407: setting m_{k+1} and S_{k+1} for the (k+1)-th iteration, i.e. initializing the parameters of the (k+1)-th iteration.
Specifically, the posterior estimate x̂_k obtained in the k-th iteration is taken as the mean of the particle swarm in the (k+1)-th iteration: m_{k+1} = x̂_k; and the diagonal matrix W_k of the posterior estimation error covariance P_k is substituted into the variance: S_{k+1} = √(diag(W_k)).
The k-th Kalman estimation thus ends, and the mean m_{k+1} and variance S²_{k+1} are output for the (k+1)-th cycle.
In the iterative process, continuous iteration is prone to premature convergence; to solve this problem, a deceleration factor a_k is introduced: S_{k+1}(i) = S_k(i) + a_k(√(W_k(i,i)) − S_k(i)).
Here α is the deceleration coefficient, α ∈ (0, 1], which can be set independently, and W_k(i,i) is the i-th diagonal element of the diagonal matrix W_k. After the deceleration factor is added, the posterior error covariance is rewritten accordingly.
step S408: judging whether the particles in the optimal candidate particle group meet a preset condition after the k-th iteration; if the preset condition is met, terminating the iteration and outputting m_{k+1} and S_{k+1}; if not, returning to step S402.
The preset condition may be that the best candidate particles fall within the specified radius ρ of the observed value, ρ being set before the HKF algorithm runs. In the iterative process, the HKF algorithm performs approximate estimation by repeatedly selecting predicted values within radius ρ of the observed value (ρ is usually set to a small value); with the parameters appropriately adjusted, the HKF algorithm converges to a low-variance optimal solution.
The embodiment of the application combines the HKF algorithm with the ELM model to provide a heuristic Kalman filtering HKF-extreme learning machine ELM model.
FIG. 5 is a schematic diagram of a capacitor aging prediction model based on HKF-ELM according to an embodiment of the present disclosure; fig. 6 is a schematic flowchart of a capacitor aging prediction method based on HKF-ELM according to an embodiment of the present disclosure. Wherein reference numerals S101-S115 in fig. 5 correspond to the steps in fig. 6 for characterizing the data flow direction in the model. As shown in fig. 5 in combination with fig. 6, the method mainly includes the following steps.
Step S101: initializing parameters of an ELM model of an extreme learning machine;
step S102: assigning, for the heuristic Kalman filter HKF algorithm, a specified radius ρ, a particle number N, an optimal particle number N_ζ, a deceleration coefficient α and an iteration count k, wherein the initial iteration count is set to k = 0 and a maximum iteration count is set;
step S103: constructing a Gaussian generator N(m_k, S_k²), the Gaussian generator N(m_k, S_k²) being used for generating, in an iteration, a particle swarm obeying a normal distribution with mean m_k and standard deviation S_k;
step S104: taking a capacitance aging detection data set as an original data set, and dividing the original data set into a training data set and a test data set;
step S105: generating N groups of particles by the Gaussian generator N(m_k, S_k²);
step S106: taking the N groups of particles as random parameters of the HKF algorithm, and introducing the training data set into the ELM model for data training;
step S107: taking the AE function as the loss function of the ELM model and the HKF algorithm, and performing correction through the AE value generated by the training data set in the ELM model; taking the MSE function, the RMSE function and the R² coefficient of determination as cost functions of the ELM model;
step S108: selecting N_ζ groups of particles from the N groups of particles according to the loss function as optimal candidate values;
step S109: calculating the optimal mean ζ_k and the observation noise v_k of the N_ζ groups of particles;
step S110: performing the Kalman update in the HKF algorithm to calculate m_{k+1} and S_{k+1};
step S111: letting m_k = m_{k+1} and S_k = S_{k+1}, completing the initialization of the (k+1)-th iteration;
step S112: judging whether m_k meets a preset condition; if the preset condition is met or the iteration count reaches the maximum, ending the iteration and proceeding to step S113; if the preset condition is not met, returning to step S105;
step S113: taking the value of m_k as the input weights and offsets of the ELM model;
step S114: inputting the independent variables of the original data set into the ELM model to start prediction;
step S115: obtaining a capacitance aging prediction data set comprising the independent variables and predicted values of the original data set.
Specifically, after step S115, an error between the dependent variable of the original data set and the dependent variable of the predicted data set may also be calculated through a cost function, so as to obtain a prediction error, and further verify the prediction effect.
It should be noted that for the sake of brevity, the specific algorithm formula is omitted in the HKF-ELM capacitance aging prediction method section, and the details can be found in the description of the HKF algorithm and the ELM model section.
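For concreteness, the combined pipeline (the HKF searching the ELM's input weights and offsets, with the training absolute error as the shared loss) can be sketched as below; the flattening of W and B into one particle vector, the toy data, and all hyperparameter values are illustrative assumptions, not details fixed by the application:

```python
import numpy as np

def hkf_elm_fit(X, Y, L=10, N=40, N_best=8, alpha=0.7, iters=40, seed=0):
    """HKF-ELM sketch: each particle is a flattened (W, B) pair; the loss
    is the training absolute error (AE) of the ELM induced by that particle."""
    rng = np.random.default_rng(seed)
    m_in = X.shape[1]
    dim = m_in * L + L                       # W entries plus L offsets

    def unpack(p):
        return p[:m_in * L].reshape(m_in, L), p[m_in * L:]

    def ae(p):                               # loss function (step S107)
        W, B = unpack(p)
        F = 1.0 / (1.0 + np.exp(-(X @ W + B)))
        beta = np.linalg.pinv(F) @ Y
        return float(np.abs(F @ beta - Y).sum())

    mean, S = np.zeros(dim), np.ones(dim)
    for _ in range(iters):                   # steps S105-S112
        P = mean + S * rng.standard_normal((N, dim))
        best = P[np.argsort([ae(p) for p in P])[:N_best]]
        zeta = best.mean(axis=0)
        V = ((best - zeta) ** 2).mean(axis=0)
        K = S**2 / (S**2 + V + 1e-12)
        mean = mean + K * (zeta - mean)
        S = S + alpha * (np.sqrt((1.0 - K) * S**2) - S)
    W, B = unpack(mean)                      # step S113: m_k -> weights/offsets
    F = 1.0 / (1.0 + np.exp(-(X @ W + B)))
    return W, B, np.linalg.pinv(F) @ Y

# toy capacitance-fading training data (illustrative only)
t = np.linspace(0.0, 1.0, 40).reshape(-1, 1)
y = 1.0 - 0.25 * t[:, 0]
W, B, beta = hkf_elm_fit(t, y)
```

Because the HKF settles on a deterministic mean m_k rather than a fresh random draw, the (W, B) fed to the final ELM no longer suffers the random fluctuation described above.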
By adopting the technical scheme provided by the embodiment of the application, the defects of existing model-based methods, such as high complexity and poor generalization capability, are overcome: while prediction accuracy is ensured, model complexity and run time are reduced, and the generalization capability of the model is improved by the data-driven method. In addition, because a traditional extreme-learning-machine-based method tends to produce a singular matrix when the input weights and offsets are generated randomly, the embodiment of the application combines the heuristic Kalman filter algorithm to resolve this matrix singularity, obtaining a more accurate prediction of the remaining service life of the supercapacitor.
Fig. 7A is a simulation diagram for predicting the remaining service life of a super capacitor based on an ELM model according to an embodiment of the present disclosure; FIG. 7B is a simulation diagram for predicting the remaining service life of an ultracapacitor based on an HKF-ELM model according to an embodiment of the application. Comparing fig. 7A and fig. 7B, it is found that the coincidence degree between the actual capacitance value and the predicted capacitance value is higher in fig. 7B, which shows that a better prediction effect can be obtained by using the remaining service life prediction scheme of the supercapacitor based on the HKF-ELM model.
Fig. 7C is a graph illustrating a simulation of predicting the remaining service life of an ultracapacitor based on the PSO-ELM according to an embodiment of the present disclosure.
PSO is the particle swarm optimization algorithm. In the search for an optimal solution, each particle's initial position and search direction are random. Over time, the initially random particles organize themselves into a swarm through experience accumulation, information sharing and mutual learning. By sharing local optima among individuals, the swarm's search center gradually moves from local optima toward the global optimum, finally yielding the globally optimal solution. The PSO algorithm completes the search using particle velocities and positions, only needs to track the best particles during iteration, and avoids crossover and mutation operations, so it has a simple structure, low complexity and few tuning parameters. Optimizing the ELM with the PSO algorithm offers fast search, easy implementation and fast convergence. However, the PSO algorithm lacks dynamic adjustment of particle velocity and depends strongly on its parameters, so its local search capability and accuracy are low, and the particles may overshoot the global optimum during fast descent.
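For comparison, the velocity/position update described above can be sketched as textbook PSO (the inertia weight w, acceleration coefficients c1 and c2, and the test function are assumptions, as the application does not specify them):

```python
import numpy as np

def pso_minimize(loss, dim, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization: no crossover or mutation, only
    velocity/position updates driven by personal and global bests."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))     # random initial positions
    v = np.zeros((n, dim))                   # initial velocities
    pbest = x.copy()                         # personal bests
    pcost = np.array([loss(p) for p in x])
    g = pbest[pcost.argmin()].copy()         # global best (shared optimum)
    for _ in range(iters):
        r1 = rng.random((n, dim))
        r2 = rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        cost = np.array([loss(p) for p in x])
        improved = cost < pcost
        pbest[improved], pcost[improved] = x[improved], cost[improved]
        g = pbest[pcost.argmin()].copy()
    return g

# minimize a simple quadratic with optimum at (1, 1)
g = pso_minimize(lambda p: float(((p - 1.0) ** 2).sum()), dim=2)
```

Note the contrast with the HKF: the velocity update has no covariance-driven step-size control, which is the parameter sensitivity the paragraph above points out.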
Comparing Fig. 7B with Fig. 7C shows that the actual and predicted capacitance values agree more closely in Fig. 7B, again indicating that the HKF-ELM-based scheme for predicting the remaining service life of the supercapacitor achieves a better prediction effect.
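The degree of agreement between actual and predicted capacitance values compared above is quantified in the embodiments with the AE loss function and the MSE, RMSE and R² cost functions (step S107 of the claims). A minimal sketch of these standard metrics, with hypothetical function names:

```python
import numpy as np

def ae(y_true, y_pred):
    """Per-sample absolute error (the AE loss named in step S107)."""
    return np.abs(np.asarray(y_true) - np.asarray(y_pred))

def mse(y_true, y_pred):
    """Mean squared error."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(mse(y_true, y_pred)))

def r2(y_true, y_pred):
    """Coefficient of determination R^2 (1.0 means perfect agreement)."""
    y_true = np.asarray(y_true, dtype=float)
    ss_res = np.sum((y_true - np.asarray(y_pred)) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

A higher R² (closer to 1) and lower MSE/RMSE correspond to the tighter agreement between actual and predicted capacitance seen in Fig. 7B.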
Corresponding to the method embodiments, the present application further provides a device for predicting the remaining service life of a supercapacitor. The device may include a processor, a memory, and a communication unit, which communicate via one or more buses. Those skilled in the art will appreciate that the server architecture shown in the figures does not limit the application: it may be a bus architecture or a star architecture, may include more or fewer components than shown, or may arrange the components differently.
The communication unit is configured to establish a communication channel so that the storage device can communicate with other devices, receiving data sent by other devices or sending data to them.
The processor, as the control center of the storage device, connects the various parts of the electronic device through various interfaces and lines, and executes the functions of the electronic device and/or processes data by running or executing software programs and/or modules stored in the memory and invoking data stored in the memory. The processor may be composed of integrated circuits (ICs), for example a single packaged IC, or several packaged ICs with the same or different functions connected together. For example, the processor may include only a central processing unit (CPU). In the embodiments of the present application, the CPU may have a single arithmetic core or multiple arithmetic cores.
The memory, which is used to store instructions for execution by the processor, may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The execution instructions in the memory, when executed by the processor, enable the apparatus to perform some or all of the steps in the above-described method embodiments.
In specific implementations, the present application further provides a computer storage medium, where the computer storage medium may store a program that, when executed, performs some or all of the steps of the embodiments provided in the present application. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random-access memory (RAM).
Those skilled in the art will clearly understand that the techniques in the embodiments of the present application may be implemented by software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in some parts of the embodiments, of the present application.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The same and similar parts in the various embodiments in this specification may be referred to each other. In particular, as for the device and electronic apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the description in the method embodiments.
The above-described embodiments of the present application do not limit the scope of the present application.
Claims (10)
1. A super capacitor life prediction method applied to a power system is characterized by comprising the following steps:
step S101: initializing parameters of an extreme learning machine (ELM) model;
step S102: assigning, for the heuristic Kalman filtering (HKF) algorithm, a radius ρ, a particle number N, an optimal particle number Nζ, a deceleration coefficient α, and an iteration number k, wherein the initial iteration number is set to k = 0 and a maximum iteration number is set;
step S103: constructing a Gaussian generator N(mk, Sk²), the Gaussian generator N(mk, Sk²) being used to generate, in an iteration, a particle swarm following a normal distribution with mean mk and standard deviation Sk;
step S104: taking a capacitance aging detection data set as an original data set, and dividing the original data set into a training data set and a test data set;
step S105: generating N groups of particles by the Gaussian generator N(mk, Sk²);
step S106: taking the N groups of particles as random parameters of the HKF algorithm, and introducing the training data set into the ELM model for data training;
step S107: taking an AE function as the loss function of the ELM model and the HKF algorithm, and performing function correction through the AE value generated by the training data set in the ELM model; taking an MSE function, an RMSE function and an R² determination coefficient as cost functions of the ELM model;
step S108: selecting Nζ groups of particles from the N groups of particles according to the loss function as optimal candidate values;
step S109: calculating the optimal mean value ζk and the observation noise vk of the Nζ groups of particles;
step S110: performing a Kalman update in the HKF algorithm to calculate mk+1 and Sk+1;
step S111: letting mk = mk+1 and Sk = Sk+1, completing the initialization of the (k+1)-th iteration;
step S112: judging whether mk satisfies a preset condition; if the preset condition is satisfied or the iteration number reaches the maximum value, ending the iteration and proceeding to step S113; if the preset condition is not satisfied, returning to step S105;
step S113: taking the value of mk as the input weights and biases of the ELM model;
step S114: inputting the independent variable of the original data set into the ELM model to start prediction;
step S115: obtaining a capacitance aging prediction data set comprising the independent variables of the original data set and the predicted values.
3. The method according to claim 2, wherein the step S108 specifically includes:
5. The method according to claim 4, wherein the step S110 specifically includes:
6. The method according to claim 5, wherein the step S110 further comprises:
the state transition matrix is used for ensuring a minimum particle error in the Kalman estimation, wherein the error is the posterior estimation error between the observed value and the a-posteriori estimate of the particles, and vk is the system observation noise.
7. The method according to claim 6, wherein the step S110 further comprises:
the current posterior estimation error is at a minimum, and the expectation of the posterior estimation error is minimized accordingly;
8. The method according to claim 7, wherein the step S110 further comprises calculating the prior estimation error covariance;
the posterior estimated value obtained in the k-th iteration is taken as the mean value of the particle swarm in the (k+1)-th iteration.
10. A supercapacitor life prediction device applied to a power system, characterized by comprising:
a processor;
a memory for storing instructions for execution by the processor;
wherein the processor is configured to perform the method of any one of claims 1-9.
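As an illustration only, the iterative loop of steps S102-S112 might be sketched as below, following the heuristic Kalman algorithm of Toscano and Lyonnet cited in this publication. This is a non-authoritative sketch: a generic `loss` callable stands in for the ELM training error (the AE loss of step S107), the covariance is taken diagonal, the slowdown rule is a simplified stand-in for the paper's adaptive coefficient, and the function name and defaults are hypothetical.

```python
import numpy as np

def hkf_optimize(loss, dim, n=50, n_best=10, rho=2.0, alpha=0.9,
                 max_iter=100, tol=1e-6, seed=0):
    """Sketch of the heuristic Kalman algorithm loop (steps S102-S112)."""
    rng = np.random.default_rng(seed)
    m = np.zeros(dim)                 # mean mk of the Gaussian generator
    s = np.full(dim, rho)             # std-dev Sk of the Gaussian generator
    for _ in range(max_iter):
        # S105: draw N candidate particles from N(mk, Sk^2)
        particles = rng.normal(m, s, (n, dim))
        # S107-S108: rank by loss, keep the N_zeta best particles
        costs = np.array([loss(p) for p in particles])
        best = particles[np.argsort(costs)[:n_best]]
        # S109: best-particle mean (zeta_k) and measurement-noise variance (vk)
        zeta = best.mean(axis=0)
        v = best.var(axis=0) + 1e-12
        # S110: Kalman-style update of mean and variance
        gain = s**2 / (s**2 + v)
        m_next = m + gain * (zeta - m)
        p_next = (1.0 - gain) * s**2
        # simplified slowdown: blend old and updated variance to avoid collapse
        s = np.sqrt(alpha * s**2 + (1.0 - alpha) * p_next)
        m = m_next                    # S111: initialize iteration k+1
        if np.all(s < tol):           # S112: stopping condition
            break
    return m                          # S113: would become ELM weights/biases
```

On a simple convex test objective such as the sphere function, the returned mean settles near the minimizer, mirroring how the claimed method converges mk to the ELM input weights and biases.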
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110010643.4A CN112329352A (en) | 2021-01-06 | 2021-01-06 | Supercapacitor service life prediction method and device applied to power system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112329352A true CN112329352A (en) | 2021-02-05 |
Family
ID=74302495
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110010643.4A Pending CN112329352A (en) | 2021-01-06 | 2021-01-06 | Supercapacitor service life prediction method and device applied to power system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112329352A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113655314A (en) * | 2021-08-12 | 2021-11-16 | 华南理工大学 | Super capacitor cycle life prediction method, system, device and medium |
CN113945818A (en) * | 2021-10-26 | 2022-01-18 | 电子科技大学 | MOSFET service life prediction method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108761346A (en) * | 2018-06-20 | 2018-11-06 | 首都师范大学 | A kind of vehicle lithium battery method for predicting residual useful life |
CN110441669A (en) * | 2019-06-27 | 2019-11-12 | 合肥工业大学 | The gradual failure diagnosis of uncertain sophisticated circuitry system and life-span prediction method |
CN111060834A (en) * | 2019-12-19 | 2020-04-24 | 中国汽车技术研究中心有限公司 | Power battery state of health estimation method |
WO2020191980A1 (en) * | 2019-03-22 | 2020-10-01 | 江南大学 | Blind calibration method for wireless sensor network data drift |
- 2021-01-06: application CN202110010643.4A filed in China; published as CN112329352A; legal status: Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108761346A (en) * | 2018-06-20 | 2018-11-06 | 首都师范大学 | A kind of vehicle lithium battery method for predicting residual useful life |
WO2020191980A1 (en) * | 2019-03-22 | 2020-10-01 | 江南大学 | Blind calibration method for wireless sensor network data drift |
CN110441669A (en) * | 2019-06-27 | 2019-11-12 | 合肥工业大学 | The gradual failure diagnosis of uncertain sophisticated circuitry system and life-span prediction method |
CN111060834A (en) * | 2019-12-19 | 2020-04-24 | 中国汽车技术研究中心有限公司 | Power battery state of health estimation method |
Non-Patent Citations (1)
Title |
---|
ROSARIO TOSCANO AND PATRICK LYONNET: ""Heuristic Kalman Algorithm for Solving Optimization Problems"", 《IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS》 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113655314A (en) * | 2021-08-12 | 2021-11-16 | 华南理工大学 | Super capacitor cycle life prediction method, system, device and medium |
CN113655314B (en) * | 2021-08-12 | 2022-07-26 | 华南理工大学 | Super capacitor cycle life prediction method, system, device and medium |
CN113945818A (en) * | 2021-10-26 | 2022-01-18 | 电子科技大学 | MOSFET service life prediction method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhang et al. | Synchronous estimation of state of health and remaining useful lifetime for lithium-ion battery using the incremental capacity and artificial neural networks | |
Xiang et al. | Probabilistic power flow with topology changes based on deep neural network | |
CN110334726A (en) | A kind of identification of the electric load abnormal data based on Density Clustering and LSTM and restorative procedure | |
CN112329352A (en) | Supercapacitor service life prediction method and device applied to power system | |
Tang et al. | Design of power lithium battery management system based on digital twin | |
CN108090615B (en) | Minimum frequency prediction method after power system fault based on cross entropy integrated learning | |
CN110826774A (en) | Bus load prediction method and device, computer equipment and storage medium | |
CN114004155B (en) | Transient stability evaluation method and device considering topological structure characteristics of power system | |
CN112731183B (en) | Improved ELM-based lithium ion battery life prediction method | |
Hasanpour et al. | Software defect prediction based on deep learning models: Performance study | |
CN114065649A (en) | Short-term prediction method and prediction system for top layer oil temperature of distribution transformer | |
Wanner et al. | Quality modelling in battery cell manufacturing using soft sensoring and sensor fusion-A review | |
CN113010504A (en) | Electric power data anomaly detection method and system based on LSTM and improved K-means algorithm | |
CN113988558B (en) | Power grid dynamic security assessment method based on blind area identification and electric coordinate system expansion | |
CN114779089A (en) | Method for calculating battery state of charge based on energy storage lithium battery equivalent circuit model | |
Gu et al. | A Fletcher‐Reeves conjugate gradient optimized multi‐reservoir echo state network for state of charge estimation in vehicle battery | |
CN113093014B (en) | Online collaborative estimation method and system for SOH and SOC based on impedance parameters | |
Kharlamova et al. | Evaluating machine-learning-based methods for modeling a digital twin of battery systems providing frequency regulation | |
CN117407795A (en) | Battery safety prediction method and device, electronic equipment and storage medium | |
CN113033898A (en) | Electrical load prediction method and system based on K-means clustering and BI-LSTM neural network | |
CN111061708A (en) | Electric energy prediction and restoration method based on LSTM neural network | |
CN114895190B (en) | Method and equipment for estimating charge quantity based on extreme learning and extended Kalman filtering | |
Qin et al. | Direct Data-Driven Methods for Risk Limiting Dispatch | |
Bodin et al. | Making Differentiable Architecture Search less local | |
Jihin et al. | Health state assessment and lifetime prediction based on unsupervised state estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20210205 |