CN112329352A - Supercapacitor service life prediction method and device applied to power system - Google Patents


Publication number
CN112329352A
Authority
CN
China
Prior art keywords
particle
data set
iteration
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110010643.4A
Other languages
Chinese (zh)
Inventor
戴吉勇
闵锐
孙建瑞
孟宾
王凯
李晓晨
赵坤
韩书婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Wide Area Technology Co ltd
Original Assignee
Shandong Wide Area Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Wide Area Technology Co., Ltd.
Priority to CN202110010643.4A
Publication of CN112329352A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/20: Design optimisation, verification or simulation
    • G06F 30/27: Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G06F 30/25: Design optimisation, verification or simulation using particle-based methods
    • G06F 2119/00: Details relating to the type or aim of the analysis or the optimisation
    • G06F 2119/04: Ageing analysis or optimisation against ageing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of this application disclose a supercapacitor service life prediction method and device applied to a power system. The method is data-based and combines the heuristic Kalman filter (HKF) algorithm with the extreme learning machine (ELM) to predict the remaining service life of a supercapacitor. The technical scheme of the embodiments avoids the high complexity and poor generalization of existing model-based methods: prediction accuracy is preserved while model complexity and run time are reduced, and the data-based approach improves the model's generalization ability. In addition, a traditional extreme learning machine is prone to producing a singular matrix when its input weights and biases are generated randomly; by combining the heuristic Kalman filter algorithm, the embodiments resolve this singularity and obtain a more accurate prediction of the supercapacitor's remaining service life.

Description

Supercapacitor service life prediction method and device applied to power system
Technical Field
The application relates to the technical field of supercapacitors, and in particular to a supercapacitor service life prediction method and device applied to a power system.
Background
A supercapacitor is a novel energy storage device positioned between the conventional capacitor and the rechargeable battery: it offers the rapid charge and discharge of a capacitor together with the energy storage capability of a battery, making it an efficient, practical, and environmentally friendly energy storage element. Owing to its small volume, low production cost, and simple internal structure, the supercapacitor is widely used in power systems.
During actual operation, the aging and end-of-life state of a supercapacitor can seriously affect the safety and reliability of the power system. To ensure a stable and safe power supply, it is important to predict the remaining service life of the supercapacitor accurately, so that it can be replaced or maintained before it reaches its end-of-life state.
At present, methods for predicting the remaining service life of a supercapacitor fall into two main categories: model-driven methods and data-driven methods. A model-driven method builds an equivalent circuit model from the supercapacitor's energy storage principle and internal structure, identifies the model parameters from charge-discharge experiments, and then predicts the remaining service life. Such prediction models include aging mechanism models, particle filter models, Arrhenius law models, Weibull failure statistics models, and RC equivalent circuit models. A data-driven method trains a model on a historical data set, for example by machine learning, and uses the trained model to predict the unknown part. Provided the historical data are accurate, a data-driven method can also achieve high prediction accuracy. Common data-driven methods include support vector machines, relevance vector machines, autoregressive models, and artificial neural networks.
Because the internal electrochemical structure of a supercapacitor is complex, model-based prediction of its remaining service life is complicated and difficult to operate. Compared with model-based prediction, data-based prediction has a simple structure and low complexity. Provided the training data set is accurate, a data-based method can achieve high accuracy without studying the supercapacitor's internal structure and operating mechanism.
Artificial neural networks have clear advantages in prediction thanks to their self-learning ability, but a traditional artificial neural network usually requires many training parameters and easily falls into local optima. A better method for predicting the remaining service life of the supercapacitor is therefore urgently needed.
Disclosure of Invention
The embodiments of this application provide a supercapacitor service life prediction method and device applied to a power system, which help to solve the above technical problems in the prior art.
In a first aspect, an embodiment of the present application provides a method for predicting the remaining service life of a supercapacitor, including:
step S101: initializing the parameters of the extreme learning machine (ELM) model;
step S102: assigning the specified radius ρ, the particle number N, the optimal particle number N_ζ, the deceleration coefficient α, and the iteration number k of the heuristic Kalman filter (HKF) algorithm, where the initial iteration number is set to k = 0 and a maximum iteration number is set;
step S103: generating a Gaussian generator N(m_k, S_k²), the Gaussian generator N(m_k, S_k²) being used to generate, in each iteration, a normally distributed particle swarm with mean m_k and standard deviation S_k;
step S104: taking a capacitance aging detection data set as the original data set, and dividing the original data set into a training data set and a test data set;
step S105: generating N groups of particles through the Gaussian generator N(m_k, S_k²);
step S106: taking the N groups of particles as the random parameters of the HKF algorithm, and feeding the training data set into the ELM model for data training;
step S107: taking the AE function as the loss function of the ELM model and the HKF algorithm, and performing model correction through the AE values generated by the training data set in the ELM model; taking the MSE function, the RMSE function, and the R² coefficient of determination as the cost functions of the ELM model;
step S108: selecting N_ζ groups of particles from the N groups of particles according to the loss function as the optimal candidate values;
step S109: calculating the optimal mean ζ_k of the N_ζ groups of particles and the observation noise v_k;
step S110: performing the Kalman update in the HKF algorithm to calculate m_{k+1} and S_{k+1};
step S111: letting m_k = m_{k+1} and S_k = S_{k+1} to complete the initialization of the (k+1)-th iteration;
step S112: judging whether m_k satisfies a preset condition; if the preset condition is satisfied or the iteration count reaches its maximum, ending the iteration and proceeding to step S113; if the preset condition is not satisfied, returning to step S105;
step S113: taking the value of m_k as the input weights and biases of the ELM model;
step S114: inputting the independent variables of the original data set into the ELM model to start prediction;
step S115: obtaining a capacitance aging prediction data set comprising the independent variables of the original data set and the predicted values.
Preferably, the step S105 specifically includes:
the Gaussian generator N(m_k, S_k²) generating N groups of particles satisfying the normal distribution, recorded as:

x_k = \{x_k(1), x_k(2), \ldots, x_k(N)\}, \quad x_k(i) \sim N(m_k, S_k^2)

wherein x_k(i) is the particle label, i = 1, 2, 3, … N, and N is the number of particles.
Preferably, the step S108 specifically includes:
calculating, through the loss function, the error between each particle x_k(i) and the mean m_k, sorting the errors in ascending order, and selecting the first N_ζ particles to form the optimal candidate particle swarm, recorded as:

\xi_k = \{x_k(1), x_k(2), \ldots, x_k(N_\zeta)\}

wherein N_ζ is the optimal particle number.
Preferably, the step S109 specifically includes:
calculating the optimal mean ζ_k of the N_ζ groups of particles through the formula

\zeta_k = \frac{1}{N_\zeta} \sum_{i=1}^{N_\zeta} x_k(i)

and calculating the observation noise v_k through the formula

v_k = \frac{1}{N_\zeta} \sum_{i=1}^{N_\zeta} \left(\zeta_k - x_k(i)\right)\left(\zeta_k - x_k(i)\right)^{T}
Preferably, the step S110 specifically includes:
in the HKF algorithm, the Kalman estimate is expressed as

\hat{x}_k = A_k m_k + L_k \zeta_k

wherein \hat{x}_k is the posterior estimate of the particles, ζ_k is the optimal value of the prior particle estimate, and A_k and L_k are state transition matrices.
Preferably, the step S110 further includes:
the state transition matrices being chosen to minimize the particle error in the Kalman estimation, the error being expressed as

e_k = x_k^{opt} - \hat{x}_k

wherein e_k is the posterior estimation error between the observed value x_k^{opt} and the posterior particle estimate \hat{x}_k, and the observed value x_k^{opt} satisfies

x_k^{opt} = \zeta_k + v_k

wherein v_k is the system observation noise.
Preferably, the step S110 further includes:
when the posterior estimation error e_k is minimal, its expectation being

E[e_k] = 0

defining the prior estimation error as

e_k^- = x_k^{opt} - m_k

wherein ζ_k is the optimal value of the prior particle estimate, and the prior estimation error satisfies

E[e_k^-] = 0

the state transition matrices A_k and L_k then satisfying

A_k + L_k = I

wherein I is the identity matrix.
Preferably, the step S110 further includes:

\hat{x}_k = m_k + L_k\left(\zeta_k - m_k\right)

L_k = P_k^-\left(P_k^- + V_k\right)^{-1}

P_k = \left(I - L_k\right)P_k^-

wherein P_k^- = S_k^2 is the prior estimation error covariance, V_k = \mathrm{diag}(v_k), and P_k is the posterior estimation error covariance;
taking the posterior estimate \hat{x}_k obtained in the k-th iteration as the mean m_{k+1} of the particle swarm x_{k+1} in the (k+1)-th iteration:

m_{k+1} = \hat{x}_k

substituting the diagonal matrix W_k of the posterior estimation error covariance P_k for the variance S_{k+1}^2 of x_{k+1}:

S_{k+1}^2 = W_k

and outputting the mean m_{k+1} and the variance S_{k+1}^2 for cycle k + 1.
Preferably, the calculation of S_{k+1} specifically includes:

S_{k+1} = S_k + a_k\left(W_k^{1/2} - S_k\right)

wherein a_k is the deceleration factor,

a_k = \alpha\,\frac{\bar{v}_k}{\bar{v}_k + \max_i W_k(i)}, \qquad \bar{v}_k = \min\!\left(1, \left(\frac{1}{n}\sum_{i=1}^{n}\sqrt{v_k(i)}\right)^{2}\right)

wherein α is the deceleration coefficient and α ∈ (0, 1].
In a second aspect, an embodiment of the present application provides an apparatus for predicting remaining service life of a super capacitor, including:
a processor;
a memory for storing instructions for execution by the processor;
wherein the processor is configured to perform the method of any of the first aspect.
By adopting the technical scheme provided by the embodiments of this application, the high complexity, poor generalization, and other shortcomings of existing model-based methods are avoided: prediction accuracy is preserved while model complexity and run time are reduced, and the data-based approach improves the model's generalization ability. In addition, a traditional extreme learning machine is prone to producing a singular matrix when its input weights and biases are generated randomly; by combining the heuristic Kalman filter algorithm, the embodiments resolve this singularity and obtain a more accurate prediction of the supercapacitor's remaining service life.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below; those skilled in the art can derive further drawings from them without creative effort.
Fig. 1 is a schematic diagram of a capacitance aging prediction model based on an extreme learning machine ELM according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a capacitance aging prediction method based on an extreme learning machine ELM according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a heuristic kalman filtering HKF algorithm according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a heuristic kalman filtering HKF algorithm according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a capacitor aging prediction model based on HKF-ELM according to an embodiment of the present disclosure;
FIG. 6 is a schematic flowchart of a capacitor aging prediction method based on HKF-ELM according to an embodiment of the present disclosure;
Fig. 7A is a simulation diagram of predicting the remaining service life of a supercapacitor based on the ELM model according to an embodiment of the application;
Fig. 7B is a simulation diagram of predicting the remaining service life of a supercapacitor based on the HKF-ELM model according to an embodiment of the application;
Fig. 7C is a simulation diagram of predicting the remaining service life of a supercapacitor based on the PSO-ELM model according to an embodiment of the application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Among data-based prediction methods, the artificial neural network has clear advantages thanks to its self-learning ability, but a traditional artificial neural network usually requires many training parameters and easily falls into local optima. The extreme learning machine (ELM) is a simple, easy-to-use, and effective learning algorithm for single-hidden-layer feedforward neural networks. The ELM only requires the number of hidden-layer neurons to be set; during execution it does not adjust the network's input weights or the hidden-layer neuron biases, and it produces a unique optimal solution, which gives it fast learning speed and good generalization performance.
Fig. 1 is a schematic diagram of a capacitance aging prediction model based on the extreme learning machine ELM according to an embodiment of the application; Fig. 2 is a schematic flowchart of a capacitance aging prediction method based on the extreme learning machine ELM according to an embodiment of the application. Reference numerals S201-S208 in Fig. 1 correspond to the steps in Fig. 2 and indicate the data flow in the model. As shown in Fig. 1 in conjunction with Fig. 2, the method mainly comprises the following steps.
Step S201: taking the capacitance aging detection data set as the original data set, and dividing the original data set into a training data set and a test data set.
Step S202: determining the hidden-layer excitation function, the loss function, the cost functions, and the number of hidden-layer neurons.
Specifically, the Sigmoid function is determined as the hidden-layer excitation function f(x); the AE function is determined as the loss function; the MSE function, the RMSE function, and the R² coefficient of determination are determined as the ELM model cost functions. According to the dimensions of the training set, the number of hidden-layer neurons L is determined by the formula

L = \sqrt{m + n} + k

wherein m is the input-layer neuron dimension, equal to the dimension of the input matrix; n is the output-layer neuron dimension, equal to the dimension of the output matrix; and k is an additional constant term, k ∈ [0, 10], whose specific value is set by the operator.
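The neuron-count rule can be coded directly; the function name and example dimensions below are illustrative, and the rule L = √(m + n) + k is the common empirical form assumed here for the missing formula image:

```python
import math

def hidden_layer_size(m: int, n: int, k: int = 5) -> int:
    # Empirical rule from the text: L = sqrt(m + n) + k, with k in [0, 10]
    # chosen by the operator; the result is rounded up to an integer count.
    if not 0 <= k <= 10:
        raise ValueError("k must lie in [0, 10]")
    return math.ceil(math.sqrt(m + n)) + k

# e.g. a 4-feature input predicting a single capacitance value, with k = 5
L = hidden_layer_size(m=4, n=1, k=5)
```

For m = 4, n = 1, k = 5 this yields L = 8 hidden neurons.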
Step S203: randomly generating the input weights W_i and biases B_i.
Step S204: taking the independent variables of the training data set, the input weights W_i, and the biases B_i as the input matrix, and calculating the output matrix F through the excitation function.
Step S205: calculating, with the loss function, the error between the dependent variables of the output matrix F and the dependent variables of the training data set, and updating the output weight β according to that error to obtain a corrected model.
Specifically, the output weight β is updated according to the formula

\beta = F^{+} Y

wherein F⁺ is the Moore-Penrose generalized inverse of the output matrix F and Y is the dependent-variable matrix of the training data set. The corrected model is the corrected ELM model.
Step S206: taking the independent variables of the training data set and of the test data set as input values, and predicting through the corrected model to obtain a prediction data set.
Step S207: rendering and outputting a prediction image from the prediction data set.
Step S208: calculating and outputting the error between the prediction data set and the original data set through the cost functions.
The prediction is then complete.
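The S201-S208 flow can be sketched in a few lines of NumPy; this is a minimal illustrative ELM, not the application's implementation, and the synthetic aging-style data and all names are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elm_train(X, Y, L, rng):
    """Single-hidden-layer ELM: random W and B (S203), hidden output F (S204),
    output weights beta = F^+ Y via the Moore-Penrose pseudoinverse (S205)."""
    m = X.shape[1]
    W = rng.uniform(-1, 1, size=(m, L))   # random input weights
    B = rng.uniform(-1, 1, size=L)        # random hidden biases
    F = sigmoid(X @ W + B)                # hidden-layer output matrix
    beta = np.linalg.pinv(F) @ Y          # pinv handles near-collinear F
    return W, B, beta

def elm_predict(X, W, B, beta):
    return sigmoid(X @ W + B) @ beta

# Toy aging-like data: capacitance slowly decaying over normalized cycle count.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)[:, None]
y = 100.0 - 15.0 * t[:, 0] + 0.5 * np.sin(8.0 * t[:, 0])

W, B, beta = elm_train(t, y, L=20, rng=rng)
pred = elm_predict(t, W, B, beta)
mse = float(np.mean((pred - y) ** 2))     # S208: cost-function check
```

On this smooth toy curve the least-squares fit leaves only a small residual.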
Although the extreme learning machine ELM has clear advantages in predicting the remaining service life of a supercapacitor, the inventors found during their research that the conventional ELM model generates, with some probability, a multicollinear matrix while randomly producing the input weights and biases; the matrix FF⁺ then becomes singular when the model computes the generalized inverse F⁺, which causes random fluctuations when solving for the output weight β. Such fluctuations degrade ELM prediction accuracy. Analysis shows that, compared with other Kalman filter variants such as the classical KF, the extended Kalman filter (EKF), and the unscented Kalman filter (UKF), the heuristic Kalman filter (HKF) algorithm can resolve the matrix singularity that arises when the extreme learning machine randomly generates its input weights and biases, and it is well suited to non-convex optimization problems. Across iterations the HKF algorithm needs few initialization parameters and, compared with other heuristic algorithms, has lower time cost and implementation difficulty. The input weights and biases of the ELM model can therefore be optimized by the HKF algorithm, and this application constructs a heuristic-Kalman-filter-optimized extreme learning machine to predict the aging life of the supercapacitor.
The heuristic kalman filter HKF algorithm is first explained below.
Fig. 3 is a schematic structural diagram of a heuristic kalman filtering HKF algorithm provided in the embodiment of the present application, and fig. 4 is a schematic flow diagram of a heuristic kalman filtering HKF algorithm provided in the embodiment of the present application. As shown in fig. 3 in conjunction with fig. 4, it mainly includes the following steps.
Step S401: in the k-th iteration, initializing the HKF algorithm parameters and defining the number of particles N generated by the Gaussian generator, the optimal particle number N_ζ, and the deceleration coefficient α.
Step S402: generating, through the Gaussian generator, N groups of normally distributed particles whose probability density function (PDF) has mean m_k and standard deviation S_k.
Specifically, the Gaussian generator N(m_k, S_k²) generates N groups of particles satisfying the normal distribution, recorded as:

x_k = \{x_k(1), x_k(2), \ldots, x_k(N)\}, \quad x_k(i) \sim N(m_k, S_k^2)

wherein x_k(i) is the particle label, i = 1, 2, 3, … N, and N is the number of particles.
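A minimal sketch of this sampling step, assuming NumPy and vector-valued m_k and S_k (one standard deviation per optimized parameter); names are illustrative:

```python
import numpy as np

def generate_particles(m_k, S_k, N, rng):
    """Draw N particles x_k(i) ~ N(m_k, S_k^2), one row per particle (step S402)."""
    m_k = np.asarray(m_k, dtype=float)
    S_k = np.asarray(S_k, dtype=float)
    return m_k + S_k * rng.standard_normal((N, m_k.size))

rng = np.random.default_rng(1)
particles = generate_particles(m_k=[0.0, 0.0], S_k=[1.0, 0.5], N=50, rng=rng)
```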
Step S403: selecting a loss function and calculating, through it, the error between each particle x_k(i) and the observed value x_k^{opt}.
Step S404: selecting N_ζ groups of particles from the normally distributed particle swarm in ascending order of error to form the optimal candidate particle swarm.
Specifically, the loss function computes the error between each particle x_k(i) and the mean m_k; the first N_ζ particles in ascending order of error form the optimal candidate particle swarm, recorded as:

\xi_k = \{x_k(1), x_k(2), \ldots, x_k(N_\zeta)\}

wherein N_ζ is the optimal particle number.
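Step S404 is a plain sort-and-slice; a sketch, with an illustrative loss measuring distance to a known optimum:

```python
import numpy as np

def select_best(particles, loss_fn, n_best):
    """Rank particles by loss (ascending) and keep the n_best of them (step S404)."""
    losses = np.array([loss_fn(p) for p in particles])
    order = np.argsort(losses)[:n_best]
    return particles[order]

# Toy loss: absolute error against a known optimum at 3.0.
parts = np.array([[0.0], [2.9], [5.0], [3.1], [-1.0]])
best = select_best(parts, lambda p: abs(float(p[0]) - 3.0), n_best=2)
```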
Step S405: computing an optimal mean of the best candidate population of particlesζkAnd a disturbance matrix Vk
Specifically, the optimum mean value ζkThe expression of (a) is:
Figure 924456DEST_PATH_IMAGE048
disturbance matrix VkThe expression of (a) is:
Figure 489430DEST_PATH_IMAGE049
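Computing ζ_k and V_k from the best candidates can be sketched as follows; keeping only the componentwise variance (a diagonal V_k) is an assumption made here where the formula images are missing:

```python
import numpy as np

def best_mean_and_noise(best_particles):
    """zeta_k: mean of the best candidates (step S405); V_k: their spread
    around zeta_k, used as a diagonal measurement-noise variance later."""
    zeta = best_particles.mean(axis=0)
    V = np.mean((best_particles - zeta) ** 2, axis=0)
    return zeta, V

best = np.array([[1.0, 2.0], [3.0, 4.0]])
zeta, V = best_mean_and_noise(best)
```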
step S406: and performing Kalman estimation on the optimal candidate particle swarm, and outputting the covariance of the posterior estimation and the posterior error.
Wherein the posterior estimate is noted
Figure 201034DEST_PATH_IMAGE050
The covariance of the a posteriori error is noted as
Figure 128539DEST_PATH_IMAGE051
Specifically, in the HKF algorithm, i.e., the kalman estimation module, the expression of kalman estimation is:
Figure 267396DEST_PATH_IMAGE052
wherein the content of the first and second substances,
Figure 677255DEST_PATH_IMAGE053
is a posterior estimate of the particle;
Figure 192550DEST_PATH_IMAGE054
the optimal value of the prior estimated value of the particles is obtained; a. thekAnd LkIs a state transition matrix.
The state transition matrices are chosen to minimize the particle error in the Kalman estimation; the error is expressed as:

e_k = x_k^{opt} - \hat{x}_k

wherein e_k is the posterior estimation error between the observed value x_k^{opt} and the posterior particle estimate \hat{x}_k, and the observed value x_k^{opt} satisfies:

x_k^{opt} = \zeta_k + v_k

wherein v_k is the system observation noise, which can be estimated from the system's actual operating conditions.
When the posterior estimation error e_k is minimal, its expectation is:

E[e_k] = 0

The prior estimation error is defined as:

e_k^- = x_k^{opt} - m_k

wherein ζ_k is the optimal value of the prior particle estimate, and the prior estimation error satisfies:

E[e_k^-] = 0

To guarantee E[e_k] = 0, the state transition matrices A_k and L_k must satisfy:

A_k + L_k = I

wherein I is the identity matrix.
Substituting this into the Kalman estimation expression and rearranging gives:

\hat{x}_k = m_k + L_k\left(\zeta_k - m_k\right)

L_k is determined by the following formula, which minimizes the posterior error covariance:

L_k = P_k^-\left(P_k^- + V_k\right)^{-1}

P_k = \left(I - L_k\right)P_k^-

wherein P_k^- = S_k^2 is the prior estimation error covariance and P_k is the posterior estimation error covariance.
Step S407: setting m_{k+1} and S_{k+1} for the (k+1)-th iteration, i.e., initializing the parameters of the (k+1)-th iteration.
Specifically, the posterior estimate \hat{x}_k obtained in the k-th iteration is taken as the mean m_{k+1} of the particle swarm x_{k+1} in the (k+1)-th iteration:

m_{k+1} = \hat{x}_k

and the diagonal matrix W_k of the posterior estimation error covariance P_k is substituted for the variance S_{k+1}^2 of x_{k+1}:

S_{k+1}^2 = W_k

This ends the k-th Kalman estimation, and the mean m_{k+1} and variance S_{k+1}^2 for cycle k + 1 are output.
Continuous iteration is prone to premature convergence. To address this, a deceleration factor a_k is introduced:

a_k = \alpha\,\frac{\bar{v}_k}{\bar{v}_k + \max_i W_k(i)}, \qquad \bar{v}_k = \min\!\left(1, \left(\frac{1}{n}\sum_{i=1}^{n}\sqrt{v_k(i)}\right)^{2}\right)

wherein α is the deceleration coefficient, α ∈ (0, 1], which can be set independently, and W_k(i) is the i-th diagonal element of W_k. With the deceleration factor, the standard-deviation update is rewritten as:

S_{k+1} = S_k + a_k\left(W_k^{1/2} - S_k\right)
step S408: judging whether the particles in the optimal candidate particle swarm meet or not after the k iterationClosing a preset condition, if the preset condition is met, terminating the iteration and outputting mk+1And Sk+1(ii) a If not, return to step S402.
Wherein, the preset condition may be formula:
Figure 255928DEST_PATH_IMAGE087
. ρ is a specified radius, set before the HKF algorithm runs. In an iterative process, the HKF algorithm performs an approximate estimation by repeatedly selecting predicted values within the range of the observed value ρ (ρ is usually set to a smaller value), and by appropriately adjusting the parameters, the HKF algorithm can converge to an optimal approach scheme with low variance.
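The preset condition of step S408 then reduces to a radius test; a sketch, with illustrative names:

```python
import numpy as np

def converged(m_k, zeta_k, rho):
    """Stop when the current mean lies within radius rho of the
    best-candidate mean (the preset condition of step S408)."""
    return float(np.linalg.norm(np.asarray(m_k) - np.asarray(zeta_k))) <= rho

done = converged([1.0, 1.0], [1.0, 1.05], rho=0.1)
```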
The embodiment of the application combines the HKF algorithm with the ELM model to provide a heuristic Kalman filtering HKF-extreme learning machine ELM model.
Fig. 5 is a schematic diagram of a capacitor aging prediction model based on HKF-ELM according to an embodiment of the application; Fig. 6 is a schematic flowchart of a capacitor aging prediction method based on HKF-ELM according to an embodiment of the application. Reference numerals S101-S115 in Fig. 5 correspond to the steps in Fig. 6 and indicate the data flow in the model. As shown in Fig. 5 in combination with Fig. 6, the method mainly includes the following steps.
Step S101: initializing the parameters of the extreme learning machine (ELM) model;
step S102: assigning a designated radius ρ, a particle number N, an optimal particle number N_ζ, and a deceleration coefficient α for the heuristic Kalman filtering (HKF) algorithm, wherein the initial iteration number is set to k = 0 and a maximum iteration number is set;
step S103: generating a Gaussian generator N(m_k, S_k²), the Gaussian generator N(m_k, S_k²) being used to generate, in an iteration, a particle swarm satisfying a normal distribution with mean m_k and standard deviation S_k;
step S104: taking a capacitance aging detection data set as an original data set, and dividing the original data set into a training data set and a test data set;
step S105: generating N groups of particles by the Gaussian generator N(m_k, S_k²);
step S106: taking the N groups of particles as random parameters of the HKF algorithm, and introducing the training data set into the ELM model for data training;
step S107: taking an absolute error (AE) function as the loss function of the ELM model and the HKF algorithm, and performing function correction through the AE values generated by the training data set in the ELM model; taking the MSE function, the RMSE function, and the R² coefficient of determination as cost functions of the ELM model;
step S108: selecting N_ζ groups of particles from the N groups of particles according to the loss function as optimal candidate values;
step S109: calculating the optimal mean ζ_k and the observation noise v_k of the N_ζ groups of particles;
step S110: performing a Kalman update in the HKF algorithm to calculate m_{k+1} and S_{k+1};
step S111: letting m_k = m_{k+1} and S_k = S_{k+1}, completing the initialization of the (k+1)-th iteration;
step S112: judging whether m_k meets a preset condition; if the preset condition is met or the iteration number reaches the maximum value, ending the iteration and proceeding to step S113; if the preset condition is not met, returning to step S105;
step S113: taking the m_k values as the input weights and biases of the ELM model;
step S114: inputting the independent variables of the original data set into the ELM model to start prediction;
step S115: obtaining a capacitance aging prediction data set comprising the independent variables and predicted values of the original data set.
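The loop in steps S103–S112 can be sketched as follows (an illustrative reading of the HKF search with assumed function names, parameter values, and a generic cost function standing in for the ELM training loss; the application's exact update formulas are given in its HKF algorithm section):

```python
import numpy as np

def hkf_optimize(cost, dim, n_particles=50, n_best=10, alpha=0.4,
                 max_iter=200, tol=1e-8, rng=None):
    """Heuristic-Kalman search: Gaussian sampling plus a Kalman-style
    update of the generator's mean and standard deviation."""
    rng = rng or np.random.default_rng(0)
    m = np.zeros(dim)        # mean of the Gaussian generator (step S103)
    s = np.ones(dim)         # standard deviation of the generator
    for k in range(max_iter):
        X = rng.normal(m, s, size=(n_particles, dim))   # step S105: N particles
        losses = np.array([cost(x) for x in X])         # steps S106-S107: evaluate loss
        best = X[np.argsort(losses)[:n_best]]           # step S108: N_zeta best candidates
        zeta = best.mean(axis=0)                        # step S109: optimal mean
        v = best.var(axis=0)                            # observation-noise variance
        L = s**2 / (s**2 + v + 1e-12)                   # step S110: per-coordinate gain
        m_new = m + L * (zeta - m)                      # posterior mean
        p = (1.0 - L) * s**2                            # posterior variance
        s = s + alpha * (np.sqrt(p) - s)                # slowed-down std update
        converged = np.linalg.norm(m_new - m) < tol     # step S112: stopping condition
        m = m_new                                       # step S111: re-initialize
        if converged:
            break
    return m

# toy usage: search the minimum of a 2-D quadratic bowl centered at (1, -2)
m_opt = hkf_optimize(lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2, dim=2)
```

On a simple quadratic the search settles near the optimum within a few dozen iterations; in the application the cost would be the AE of the ELM trained with the candidate input weights and biases.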
Specifically, after step S115, the error between the dependent variable of the original data set and that of the prediction data set may also be calculated through a cost function to obtain a prediction error and further verify the prediction effect.
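The loss and cost functions named in step S107 have standard definitions, which can be sketched as follows (an assumption; the application does not spell the formulas out in this section):

```python
import numpy as np

def ae(y_true, y_pred):
    """Absolute error (the loss function in step S107)."""
    return np.abs(y_true - y_pred)

def mse(y_true, y_pred):
    """Mean squared error."""
    return np.mean((y_true - y_pred) ** 2)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return np.sqrt(mse(y_true, y_pred))

def r2(y_true, y_pred):
    """R-squared coefficient of determination."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# toy usage on a small actual-vs-predicted pair
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
```

Here AE serves as the per-sample loss driving the HKF candidate ranking, while MSE, RMSE, and R² summarize the overall fit between actual and predicted capacitance values.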
It should be noted that, for brevity, the specific algorithm formulas are omitted in this section on the HKF-ELM capacitance aging prediction method; details can be found in the descriptions of the HKF algorithm and the ELM model.
By adopting the technical scheme provided by the embodiments of the application, the drawbacks of existing model-based methods, such as high complexity and poor generalization, are overcome: the data-based method reduces model complexity and running time while maintaining prediction accuracy, and improves the generalization ability of the model. In addition, because the traditional extreme-learning-machine method is prone to matrix singularity when the input weights and biases are generated randomly, the embodiments of the application combine the heuristic Kalman filtering algorithm to resolve this singularity and obtain a more accurate prediction of the remaining service life of the supercapacitor.
Fig. 7A is a simulation diagram for predicting the remaining service life of a supercapacitor based on the ELM model according to an embodiment of the present disclosure; FIG. 7B is a simulation diagram for predicting the remaining service life of a supercapacitor based on the HKF-ELM model according to an embodiment of the application. Comparing fig. 7A and fig. 7B, the coincidence between the actual and predicted capacitance values is higher in fig. 7B, showing that the HKF-ELM-based scheme for predicting the remaining service life of the supercapacitor achieves a better prediction effect.
Fig. 7C is a simulation diagram for predicting the remaining service life of a supercapacitor based on the PSO-ELM according to an embodiment of the present disclosure.
Here, PSO is the particle swarm optimization algorithm. In the search for the optimal solution, each particle starts at a random initial position with a random search direction. Over time, the initially random particles self-organize into a swarm through experience accumulation, information sharing, and mutual learning. Through the sharing of local optima among individuals, the swarm's search center gradually moves from local optima toward the global optimum, finally obtaining the globally optimal solution. The PSO algorithm completes the search using particle velocities and positions; only the best particles need to be tracked during iteration, and crossover and mutation operations are avoided, so the algorithm has a simple structure, low complexity, and few tuning parameters. Optimizing the ELM with the PSO algorithm therefore offers fast search, easy implementation, and fast convergence. However, PSO lacks dynamic adjustment of particle velocity and depends strongly on its parameters, so its local search capability and accuracy are low, and particles may overshoot the global optimal solution during convergence.
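For reference, the basic PSO velocity/position update described above can be sketched as follows (illustrative inertia and learning coefficients, not the parameters used in the simulation of fig. 7C):

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, rng=None):
    """Basic particle swarm: velocity/position updates with personal and global bests."""
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(-5, 5, size=(n_particles, dim))   # random initial positions
    v = np.zeros((n_particles, dim))                  # initial velocities
    pbest = x.copy()                                  # personal best positions
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()       # global best position
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        # velocity update: inertia + cognitive (pbest) + social (gbest) terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

# toy usage: minimize the same 2-D quadratic bowl centered at (1, -2)
g = pso_minimize(lambda p: (p[0] - 1.0)**2 + (p[1] + 2.0)**2, dim=2)
```

Note the structural contrast with the HKF search: PSO carries per-particle state (velocity and personal best) with fixed coefficients, whereas HKF re-samples all particles each iteration from a single Gaussian whose spread is adapted by the Kalman update.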
Comparing fig. 7B and fig. 7C, the coincidence between the actual and predicted capacitance values is again higher in fig. 7B, showing that the HKF-ELM-based scheme for predicting the remaining service life of the supercapacitor achieves the better prediction effect.
Corresponding to the method embodiment, the application also provides a device for predicting the remaining service life of the supercapacitor. The device may include: a processor, a memory, and a communication unit. These components communicate via one or more buses. Those skilled in the art will appreciate that the server architecture shown in the figures does not limit the application: it may be a bus or star architecture, include more or fewer components than shown, combine certain components, or arrange the components differently.
The communication unit is used to establish a communication channel so that the storage device can communicate with other devices, receiving data sent by other devices or sending data to them.
The processor, which is a control center of the storage device, connects various parts of the entire electronic device using various interfaces and lines, and executes various functions of the electronic device and/or processes data by operating or executing software programs and/or modules stored in the memory and calling data stored in the memory. The processor may be composed of an Integrated Circuit (IC), for example, a single packaged IC, or a plurality of packaged ICs connected with the same or different functions. For example, a processor may include only a Central Processing Unit (CPU). In the embodiments of the present application, the CPU may be a single arithmetic core or may include multiple arithmetic cores.
The memory, which is used to store instructions for execution by the processor, may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The execution instructions in the memory, when executed by the processor, enable the apparatus to perform some or all of the steps in the above-described method embodiments.
In specific implementation, the present application further provides a computer storage medium, where the computer storage medium may store a program, and the program may include some or all of the steps in the embodiments provided in the present application when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a Random Access Memory (RAM).
Those skilled in the art will clearly understand that the techniques in the embodiments of the present application may be implemented by way of software plus a required general hardware platform. Based on such understanding, the technical solutions in the embodiments of the present application may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments of the present application.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The same and similar parts in the various embodiments in this specification may be referred to each other. In particular, as for the device and electronic apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the description in the method embodiments.
The above-described embodiments of the present application do not limit the scope of the present application.

Claims (10)

1. A supercapacitor life prediction method applied to a power system, characterized by comprising the following steps:
step S101: initializing the parameters of the extreme learning machine (ELM) model;
step S102: assigning a designated radius ρ, a particle number N, an optimal particle number N_ζ, and a deceleration coefficient α for the heuristic Kalman filtering (HKF) algorithm, wherein the initial iteration number is set to k = 0 and a maximum iteration number is set;
step S103: generating a Gaussian generator N(m_k, S_k²), the Gaussian generator N(m_k, S_k²) being used to generate, in an iteration, a particle swarm satisfying a normal distribution with mean m_k and standard deviation S_k;
step S104: taking a capacitance aging detection data set as an original data set, and dividing the original data set into a training data set and a test data set;
step S105: generating N groups of particles by the Gaussian generator N(m_k, S_k²);
step S106: taking the N groups of particles as random parameters of the HKF algorithm, and introducing the training data set into the ELM model for data training;
step S107: taking an absolute error (AE) function as the loss function of the ELM model and the HKF algorithm, and performing function correction through the AE values generated by the training data set in the ELM model; taking the MSE function, the RMSE function, and the R² coefficient of determination as cost functions of the ELM model;
step S108: selecting N_ζ groups of particles from the N groups of particles according to the loss function as optimal candidate values;
step S109: calculating the optimal mean ζ_k and the observation noise v_k of the N_ζ groups of particles;
step S110: performing a Kalman update in the HKF algorithm to calculate m_{k+1} and S_{k+1};
step S111: letting m_k = m_{k+1} and S_k = S_{k+1}, completing the initialization of the (k+1)-th iteration;
step S112: judging whether m_k meets a preset condition; if the preset condition is met or the iteration number reaches the maximum value, ending the iteration and proceeding to step S113; if the preset condition is not met, returning to step S105;
step S113: taking the m_k values as the input weights and biases of the ELM model;
step S114: inputting the independent variables of the original data set into the ELM model to start prediction;
step S115: obtaining a capacitance aging prediction data set comprising the independent variables and predicted values of the original data set.
2. The method according to claim 1, wherein the step S105 specifically includes:
the Gaussian generator N(m_k, S_k²) generating N groups of particles satisfying a normal distribution, recorded as:
x_k = {x_k(1), x_k(2), x_k(3), …, x_k(N)}
wherein x_k(i) is the particle label, i = 1, 2, 3, … N, and N is the number of particles.
3. The method according to claim 2, wherein the step S108 specifically includes:
calculating, by the loss function, the error of each particle x_k(i) with respect to the mean m_k, and selecting the first N_ζ particles in ascending order of error to form the optimal candidate particle group, recorded as:
x_k^ζ = {x_k(1), x_k(2), …, x_k(N_ζ)}
wherein N_ζ is the optimal particle number.
4. The method according to claim 3, wherein the step S109 specifically includes:
calculating the optimal mean ζ_k of the N_ζ groups of particles by the formula
ζ_k = (1/N_ζ) Σ_{i=1}^{N_ζ} x_k(i)
and calculating the observation noise v_k by the formula
v_k = (1/N_ζ) Σ_{i=1}^{N_ζ} (x_k(i) − ζ_k)².
5. The method according to claim 4, wherein the step S110 specifically includes:
in the HKF algorithm, the expression of the Kalman estimate is:
x̂_k = A_k m_k + L_k ζ_k
wherein x̂_k is the posterior estimate of the particles, m_k is the optimal value of the prior estimate of the particles, and A_k and L_k are state transition matrices.
6. The method according to claim 5, wherein the step S110 further comprises:
the state transition matrices are used to ensure the minimum particle error in the Kalman estimation, the error criterion being:
min E[e_k e_kᵀ]
wherein e_k is the posterior estimation error between the observed value ζ_k and the posterior estimate x̂_k of the particles, with the expression:
e_k = ζ_k − x̂_k
wherein the observed value ζ_k satisfies:
ζ_k = x_k + v_k
wherein v_k is the system observation noise.
7. The method according to claim 6, wherein the step S110 further comprises:
when the posterior estimation error e_k is at a minimum, the expectation of the posterior estimation error is:
P_k = E[e_k e_kᵀ]
the prior estimation error is defined as:
e_k⁻ = ζ_k − m_k
wherein m_k is the optimal value of the prior estimate of the particles, and the prior estimation error covariance P_k⁻ satisfies:
P_k⁻ = E[e_k⁻ (e_k⁻)ᵀ]
the state transition matrices A_k and L_k satisfy:
A_k + L_k = I
wherein I is an identity matrix.
8. The method according to claim 7, wherein the step S110 further comprises:
L_k = P_k⁻ (P_k⁻ + V_k)⁻¹
A_k = I − L_k
P_k = (I − L_k) P_k⁻
wherein P_k⁻ is the prior estimation error covariance and V_k is the covariance of the observation noise v_k; the posterior estimate x̂_k obtained in the k-th iteration is taken as the mean m_{k+1} of the particle swarm x_{k+1} in the (k+1)-th iteration:
m_{k+1} = x̂_k
and the diagonal matrix diag(P_k) of the posterior estimation error covariance P_k is substituted as the variance S_{k+1}² of the Gaussian generator, outputting the mean m_{k+1} and variance S_{k+1}² for cycle k+1.
9. The method of claim 8, wherein the variance S_{k+1}² is specifically obtained as:
S_{k+1} = S_k + a_k (√(diag(P_k)) − S_k)
wherein a_k is a deceleration factor;
a_k is computed from the deceleration coefficient α, wherein α ∈ (0, 1].
10. A supercapacitor life prediction device applied to a power system, characterized by comprising:
a processor;
a memory for storing instructions for execution by the processor;
wherein the processor is configured to perform the method of any one of claims 1-9.
CN202110010643.4A 2021-01-06 2021-01-06 Supercapacitor service life prediction method and device applied to power system Pending CN112329352A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110010643.4A CN112329352A (en) 2021-01-06 2021-01-06 Supercapacitor service life prediction method and device applied to power system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110010643.4A CN112329352A (en) 2021-01-06 2021-01-06 Supercapacitor service life prediction method and device applied to power system

Publications (1)

Publication Number Publication Date
CN112329352A true CN112329352A (en) 2021-02-05

Family

ID=74302495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110010643.4A Pending CN112329352A (en) 2021-01-06 2021-01-06 Supercapacitor service life prediction method and device applied to power system

Country Status (1)

Country Link
CN (1) CN112329352A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113655314A (en) * 2021-08-12 2021-11-16 华南理工大学 Super capacitor cycle life prediction method, system, device and medium
CN113945818A (en) * 2021-10-26 2022-01-18 电子科技大学 MOSFET service life prediction method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108761346A (en) * 2018-06-20 2018-11-06 首都师范大学 A kind of vehicle lithium battery method for predicting residual useful life
CN110441669A (en) * 2019-06-27 2019-11-12 合肥工业大学 The gradual failure diagnosis of uncertain sophisticated circuitry system and life-span prediction method
CN111060834A (en) * 2019-12-19 2020-04-24 中国汽车技术研究中心有限公司 Power battery state of health estimation method
WO2020191980A1 (en) * 2019-03-22 2020-10-01 江南大学 Blind calibration method for wireless sensor network data drift

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108761346A (en) * 2018-06-20 2018-11-06 首都师范大学 A kind of vehicle lithium battery method for predicting residual useful life
WO2020191980A1 (en) * 2019-03-22 2020-10-01 江南大学 Blind calibration method for wireless sensor network data drift
CN110441669A (en) * 2019-06-27 2019-11-12 合肥工业大学 The gradual failure diagnosis of uncertain sophisticated circuitry system and life-span prediction method
CN111060834A (en) * 2019-12-19 2020-04-24 中国汽车技术研究中心有限公司 Power battery state of health estimation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ROSARIO TOSCANO AND PATRICK LYONNET: ""Heuristic Kalman Algorithm for Solving Optimization Problems"", 《IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113655314A (en) * 2021-08-12 2021-11-16 华南理工大学 Super capacitor cycle life prediction method, system, device and medium
CN113655314B (en) * 2021-08-12 2022-07-26 华南理工大学 Super capacitor cycle life prediction method, system, device and medium
CN113945818A (en) * 2021-10-26 2022-01-18 电子科技大学 MOSFET service life prediction method

Similar Documents

Publication Publication Date Title
Zhang et al. Synchronous estimation of state of health and remaining useful lifetime for lithium-ion battery using the incremental capacity and artificial neural networks
Xiang et al. Probabilistic power flow with topology changes based on deep neural network
CN110334726A (en) A kind of identification of the electric load abnormal data based on Density Clustering and LSTM and restorative procedure
CN112329352A (en) Supercapacitor service life prediction method and device applied to power system
Tang et al. Design of power lithium battery management system based on digital twin
CN108090615B (en) Minimum frequency prediction method after power system fault based on cross entropy integrated learning
CN110826774A (en) Bus load prediction method and device, computer equipment and storage medium
CN114004155B (en) Transient stability evaluation method and device considering topological structure characteristics of power system
CN112731183B (en) Improved ELM-based lithium ion battery life prediction method
Hasanpour et al. Software defect prediction based on deep learning models: Performance study
CN114065649A (en) Short-term prediction method and prediction system for top layer oil temperature of distribution transformer
Wanner et al. Quality modelling in battery cell manufacturing using soft sensoring and sensor fusion-A review
CN113010504A (en) Electric power data anomaly detection method and system based on LSTM and improved K-means algorithm
CN113988558B (en) Power grid dynamic security assessment method based on blind area identification and electric coordinate system expansion
CN114779089A (en) Method for calculating battery state of charge based on energy storage lithium battery equivalent circuit model
Gu et al. A Fletcher‐Reeves conjugate gradient optimized multi‐reservoir echo state network for state of charge estimation in vehicle battery
CN113093014B (en) Online collaborative estimation method and system for SOH and SOC based on impedance parameters
Kharlamova et al. Evaluating machine-learning-based methods for modeling a digital twin of battery systems providing frequency regulation
CN117407795A (en) Battery safety prediction method and device, electronic equipment and storage medium
CN113033898A (en) Electrical load prediction method and system based on K-means clustering and BI-LSTM neural network
CN111061708A (en) Electric energy prediction and restoration method based on LSTM neural network
CN114895190B (en) Method and equipment for estimating charge quantity based on extreme learning and extended Kalman filtering
Qin et al. Direct Data-Driven Methods for Risk Limiting Dispatch
Bodin et al. Making Differentiable Architecture Search less local
Jihin et al. Health state assessment and lifetime prediction based on unsupervised state estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210205