CN114943372A - Method and device for predicting life of proton exchange membrane based on Bayesian recurrent neural network - Google Patents

Method and device for predicting life of proton exchange membrane based on Bayesian recurrent neural network

Info

Publication number
CN114943372A
CN114943372A
Authority
CN
China
Prior art keywords
neural network
data
fuel cell
bayesian
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210500365.5A
Other languages
Chinese (zh)
Inventor
叶夏明
王激华
秦如意
应芳义
马丽军
俞佳捷
姚剑琪
杨跃平
李琪
董平
张�杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
Ningbo Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Power Supply Co of State Grid Zhejiang Electric Power Co Ltd filed Critical Ningbo Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority to CN202210500365.5A priority Critical patent/CN114943372A/en
Publication of CN114943372A publication Critical patent/CN114943372A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/06Electricity, gas or water supply
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02EREDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E60/00Enabling technologies; Technologies with a potential or indirect contribution to GHG emissions mitigation
    • Y02E60/30Hydrogen technology
    • Y02E60/50Fuel cells

Abstract

The invention discloses a method and a device for predicting the service life of a proton exchange membrane based on a Bayesian recurrent neural network, which can quantify the uncertainty of a fuel cell during operation. The method comprises the following steps: preprocessing the original data with principal component analysis; establishing different recurrent neural networks, predicting the different characteristic data, and selecting a suitable network according to the prediction results; replacing the fixed parameters of the deep learning model with random variables and quantifying the uncertainty of each characteristic through probability density distributions to establish a Bayesian neural network; training the network parameters of the Bayesian neural network with the training set data; and inputting the previous operation data of the fuel cell into the recurrent neural network to obtain predictions of the characteristic data. The invention can perform both interval estimation and point estimation of the fuel cell voltage, effectively improving the fault tolerance and accuracy of fuel cell life prediction.

Description

Method and device for predicting life of proton exchange membrane based on Bayesian recurrent neural network
Technical Field
The invention relates to the technical field of power generation with proton exchange membrane fuel cells, in particular to a method and a device for predicting the service life of a proton exchange membrane based on a Bayesian recurrent neural network.
Background
The fuel cell is a new type of electrochemical energy conversion device that makes effective use of hydrogen energy and converts chemical energy into electric energy efficiently and cleanly through electrochemical reactions. Among the various types of fuel cell, the proton exchange membrane fuel cell has the characteristics of low noise, zero pollution, no corrosion and long service life, as well as the advantages of large output current, low working temperature, high energy efficiency, quick start-up and compact structure, and is widely applied in portable power supplies, motor vehicle power supplies and small and medium power generation systems. Meanwhile, a power station built on proton exchange membrane fuel cells can be connected to the grid dispatching system during peak load periods, helping to balance the load on the power grid.
However, the short lifetime, fast performance decay, high maintenance cost and expensive hydrogen fuel have significantly hindered the deployment and commercialization of PEMFCs. In order to predict the remaining useful life (RUL) of the proton exchange membrane fuel cell before a failure occurs and to schedule maintenance of the fuel cell system in time to prolong its service life, RUL prediction of the PEMFC has become an urgent problem to be solved. Research on methods for predicting the remaining useful life of the proton exchange membrane fuel cell is therefore of great significance for improving the durability and actual remaining life of the PEMFC.
At present, various methods exist for predicting the remaining useful life of proton exchange membrane fuel cells. Patent CN202110385900.2 relates to a fuel cell remaining useful life prediction method based on deep learning, which fits the fuel cell operation data after noise reduction, establishes a nonlinear mapping relation between the current input data and the target data, builds a loss function, optimizes the neural network model parameters through a feedback derivation (back-propagation) method to obtain an optimal neural network model, and realizes fuel cell RUL prediction through that optimal model. Patent CN201910260003.1 discloses a fuel cell life prediction method based on a deep convolutional neural network, which selects characteristic variables from the original data of an experimental test data set as model inputs; normalizes the original data; extracts sample data and divides it into multiple batches; sets the parameters of the fuel cell life prediction model and then performs the convolution layer calculations; after each convolution layer and before the next, applies max pooling to the obtained feature mapping matrix; after the multi-layer convolution, applies global average pooling and a fully connected operation to the multi-layer feature mapping matrices of each single feature; and outputs the prediction result after multiple rounds of training over the batches.
Although many scholars have already done a great deal of work on predicting the remaining useful life of proton exchange membrane fuel cells, for example using various neural networks such as DNNs, RNNs and CNNs, most prediction methods provide only point estimates and essentially do not consider the uncertainty factors in the fuel cell operation data. Therefore, in order to quantify the uncertainty factors in the operation data, the invention discloses a fuel cell service life prediction method and device based on a Bayesian recurrent neural network.
Disclosure of Invention
The invention discloses a method and a device for predicting the life of a proton exchange membrane based on a Bayesian recurrent neural network, which make up for the deficiency of existing fuel cell life prediction in interval estimation.
In order to achieve the purpose, the invention designs a life prediction method of a proton exchange membrane based on a Bayesian recurrent neural network, which is characterized by comprising the following steps:
1) performing data preprocessing on a fuel cell operation sample data set, extracting main characteristic data in the data set by adopting a principal component analysis method to realize data dimension reduction, and dividing the processed data set into a training set and a test set;
2) establishing three different recurrent neural networks to respectively predict the characteristic data selected in step 1), and selecting the optimal recurrent neural network according to the prediction results;
3) replacing fixed parameters in the classical deep learning neural network with random variables, quantifying uncertainty factors contained in various types of characteristic data through probability density distribution, and establishing a Bayesian neural network;
4) taking the data in the training set as the input of a Bayesian neural network, training network parameters until a global optimal solution which enables a loss function to reach the minimum is found;
5) preprocessing the operation data of the fuel cell with principal component analysis, predicting the selected characteristic data with the optimal recurrent neural network selected in step 2), and taking the predicted result as the input of the trained Bayesian neural network to obtain the distribution and interval of the predicted voltage of the fuel cell at each moment.
Preferably, the step 1) of extracting data features includes:
11) according to the fuel cell operation sample data, record the original sample-indicator data matrix as

X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1p} \\ x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{np} \end{pmatrix}

where x_1, x_2, \ldots, x_p are the operating indicator data of the fuel cell, i.e. x_p = [x_{1p}, x_{2p}, \ldots, x_{np}]^T, n is the number of samples, and p is the number of indicators recorded during operation;

12) standardise the fuel cell operation sample data, where \bar{x}_j denotes the sample mean and Var(x_j) the sample variance of the j-th indicator:

\tilde{x}_{ij} = \frac{x_{ij} - \bar{x}_j}{\sqrt{Var(x_j)}}, \qquad \bar{x}_j = \frac{1}{n}\sum_{i=1}^{n} x_{ij}, \qquad Var(x_j) = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_{ij} - \bar{x}_j\right)^2;

13) calculate the correlation coefficient matrix of the sample data:

R = \left(r_{ij}\right)_{p \times p}, \qquad r_{ij} = \frac{1}{n-1}\sum_{k=1}^{n} \tilde{x}_{ki}\,\tilde{x}_{kj}

where the closer |r_{ij}| is to 1, the more strongly correlated the variables x_i and x_j are, and the closer it is to 0, the weaker their correlation;

14) calculate the eigenvalues and corresponding eigenvectors of the correlation coefficient matrix R:

eigenvalues: \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_p

eigenvectors: a_1, a_2, \ldots, a_p

15) calculate the contribution rate of each principal component:

b_i = \frac{\lambda_i}{\sum_{k=1}^{p} \lambda_k}

16) extract the data features of the original data set by principal component analysis and select the features whose cumulative contribution rate reaches 90% as the input features of the subsequent Bayesian recurrent neural network: arrange the contribution rates in descending order and take the first k principal components such that

\frac{\sum_{i=1}^{k} \lambda_i}{\sum_{i=1}^{p} \lambda_i} \geq T_0, \qquad T_0 = 90\%

and split the data set after feature extraction into a training set and a test set at a ratio of 8:2 (an illustrative sketch of this preprocessing is given below).
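As an illustration only (not part of the original disclosure), the preprocessing of steps 11) to 16) could be sketched in Python with NumPy roughly as follows; the function name pca_select_features and all variable names are assumptions:

```python
# Illustrative sketch: PCA on a fuel-cell operating-data matrix X of shape (n samples, p indicators).
import numpy as np

def pca_select_features(X, cumulative_threshold=0.90, train_ratio=0.8):
    # 1) standardise each indicator to zero mean and unit variance
    X_std = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    # 2) correlation coefficient matrix of the standardised data
    R = np.corrcoef(X_std, rowvar=False)
    # 3) eigenvalues / eigenvectors of R, sorted by descending eigenvalue
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # 4) keep the first k components whose cumulative contribution rate reaches 90 %
    contribution = eigvals / eigvals.sum()
    k = int(np.searchsorted(np.cumsum(contribution), cumulative_threshold) + 1)
    scores = X_std @ eigvecs[:, :k]            # reduced feature data
    # 5) split the reduced data 8:2 into training and test sets (in time order)
    n_train = int(train_ratio * len(scores))
    return scores[:n_train], scores[n_train:], contribution[:k]

# Example with random stand-in data (1000 samples, 5 operating indicators):
train_set, test_set, rates = pca_select_features(np.random.rand(1000, 5))
```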
Preferably, the specific step of selecting the optimal recurrent neural network in the step 2) includes:
21) select three mainstream recurrent neural network architectures: the RNN, the LSTM and the GRU;
22) use the three recurrent neural networks to respectively predict the characteristic data of the fuel cell during operation, and compare the prediction results of the three networks;
23) compare and analyse the prediction results of the three networks using the mean square error, the root mean square error and the mean absolute error as indexes, and select the optimal recurrent neural network according to the comparison result (a comparison sketch is given below).
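For illustration, a minimal sketch of the comparison in steps 21) to 23) might look as follows in PyTorch; the window length, hidden size and training schedule are arbitrary assumptions, and the toy degradation series stands in for real fuel cell characteristic data:

```python
import numpy as np
import torch
import torch.nn as nn

class SeqPredictor(nn.Module):
    """One-step-ahead predictor built on a given recurrent cell class."""
    def __init__(self, cell):
        super().__init__()
        self.rnn = cell(input_size=1, hidden_size=16, batch_first=True)
        self.head = nn.Linear(16, 1)

    def forward(self, x):                       # x: (batch, seq_len, 1)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])         # predict the next value

def errors(y_true, y_pred):
    """Mean square error, root mean square error, mean absolute error."""
    mse = float(np.mean((y_true - y_pred) ** 2))
    return mse, float(np.sqrt(mse)), float(np.mean(np.abs(y_true - y_pred)))

# toy degradation-like series standing in for one fuel cell characteristic;
# windows of 20 past points are used to predict the next point
series = np.linspace(3.3, 3.1, 500) + 0.01 * np.random.randn(500)
windows = np.stack([series[i:i + 20] for i in range(480)])
X = torch.tensor(windows, dtype=torch.float32).unsqueeze(-1)
y = torch.tensor(series[20:500], dtype=torch.float32).unsqueeze(-1)

for name, cell in [("RNN", nn.RNN), ("LSTM", nn.LSTM), ("GRU", nn.GRU)]:
    model = SeqPredictor(cell)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(100):                        # short training loop
        opt.zero_grad()
        nn.functional.mse_loss(model(X), y).backward()
        opt.step()
    mse, rmse, mae = errors(y.numpy(), model(X).detach().numpy())
    print(f"{name}: MSE={mse:.2e}, RMSE={rmse:.2e}, MAE={mae:.2e}")
```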
Preferably, the step 3) of establishing the bayesian neural network specifically comprises the following steps:
31) assume that a network parameter of the neural network is ω and that p(ω) is the prior distribution of the parameter; denote a given training sample set as D = (X, Y), where X is the input data and Y is the label data; the value of the network parameter ω then depends on the training samples, and ω obeys a distribution denoted p(ω|D); the predicted value is expressed as

p\left(y^{*} \mid x^{*}, D\right) = \int p\left(y^{*} \mid x^{*}, \omega\right)\, p(\omega \mid D)\, d\omega

where p(ω|D) is the posterior probability, p(D|ω) is the likelihood function, and p(D) is the marginal (evidence) term; because ω is a random variable, the predicted value is also a random variable, which characterises the uncertainty of the prediction; y^{*} is the predicted value; dω denotes the differential of the network parameter ω;

32) since the posterior probability p(ω|D) of the Bayesian neural network cannot be solved directly, it is approximated by variational inference and the required loss function is constructed;

using variational inference, the true posterior probability p(ω|D) is approximated by a distribution q(ω|θ) controlled by a set of parameters θ; a Gaussian approximation is used, i.e. θ = (μ, σ), so that each network parameter ω_i obeys a Gaussian distribution with parameters (μ_i, σ_i); the difference between q(ω|θ) and p(ω|D) is measured by the KL divergence:

\theta^{*} = \arg\min_{\theta}\; KL\left[\, q(\omega \mid \theta) \,\|\, p(\omega \mid D) \,\right]

where θ^{*} is the value of the parameter θ that minimises the difference between q(ω|θ) and p(ω|D);

according to the formula of the KL divergence, it can further be deduced that

KL\left[q(\omega \mid \theta) \,\|\, p(\omega \mid D)\right] = KL\left[q(\omega \mid \theta) \,\|\, p(\omega)\right] - E_{q(\omega \mid \theta)}\left[\log p(D \mid \omega)\right] + \log p(D)

since the marginal term p(D) is independent of θ, the loss function is written as

L(D, \theta) = KL\left[q(\omega \mid \theta) \,\|\, p(\omega)\right] - E_{q(\omega \mid \theta)}\left[\log p(D \mid \omega)\right]

where E_{q(ω|θ)}[·] denotes the mathematical expectation under the distribution q(ω|θ), and L(D, θ) is the loss function, whose value varies with the parameter θ;

33) approximate the loss function by Monte Carlo simulation, giving

L(D, \theta) \approx \sum_{i}\left[\log q\left(\omega_i \mid \theta_i\right) - \log p\left(\omega_i\right)\right] - \sum_{j}\log p\left(Y_j \mid \omega, X_j\right)

where θ_i denotes the parameters of the Gaussian distribution obeyed by ω_i, X_j is the j-th input data, and Y_j is the label data corresponding to X_j;

34) scale the model: suppose the entire data set is divided into M mini-batches and the loss is averaged over the mini-batches, giving

L(D, \theta) = \sum_{i=1}^{M} L_i^{EQ}\left(D_i, \theta\right)

where L_i^{EQ}(D_i, θ) denotes the loss function of each mini-batch of data, i.e.

L_i^{EQ}\left(D_i, \theta\right) = \frac{1}{M}\, KL\left[q(\omega \mid \theta) \,\|\, p(\omega)\right] - E_{q(\omega \mid \theta)}\left[\log p\left(D_i \mid \omega\right)\right];

35) update the network parameters by gradient descent, i.e.

\theta' = \theta - \alpha \nabla_{\theta} L(D, \theta)

where θ' is the updated parameter θ, ∇_θ denotes the gradient with respect to the parameter θ, α is the learning rate, and L is the loss function L(D, θ) (an illustrative sketch of such a variational layer and loss is given below).
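A minimal sketch of one possible Bayesian (variational) layer and the loss L(D, θ) = KL[q(ω|θ)‖p(ω)] − E_q[log p(D|ω)] described in steps 31) to 35) is given below; the standard normal prior, the softplus parameterisation of σ and the Gaussian observation noise are assumptions made for illustration, not choices prescribed by the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    """Linear layer whose weights are random variables ω ~ N(μ, σ²)."""
    def __init__(self, n_in, n_out, prior_sigma=1.0):
        super().__init__()
        # variational parameters θ = (μ, ρ), with σ = softplus(ρ) > 0
        self.w_mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.w_rho = nn.Parameter(torch.full((n_out, n_in), -3.0))
        self.b_mu = nn.Parameter(torch.zeros(n_out))
        self.b_rho = nn.Parameter(torch.full((n_out,), -3.0))
        self.prior_sigma = prior_sigma
        self.kl = torch.tensor(0.0)

    def forward(self, x):
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        # reparameterised sample ω = μ + σ·ε with ε ~ N(0, 1)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
        # analytic KL[q(ω|θ) || p(ω)] against the N(0, prior_sigma²) prior
        self.kl = self._kl(self.w_mu, w_sigma) + self._kl(self.b_mu, b_sigma)
        return F.linear(x, w, b)

    def _kl(self, mu, sigma):
        return (torch.log(self.prior_sigma / sigma)
                + (sigma ** 2 + mu ** 2) / (2 * self.prior_sigma ** 2) - 0.5).sum()

def variational_loss(layer, x, y, noise_sigma=0.1):
    """One-sample Monte Carlo estimate of L(D, θ) = KL[q||p] − E_q[log p(D|ω)]."""
    y_pred = layer(x)
    neg_log_lik = 0.5 * ((y - y_pred) ** 2).sum() / noise_sigma ** 2
    return layer.kl + neg_log_lik
```

In such a sketch, layers of this kind would stand in for the fixed-parameter layers of the selected recurrent network, and the KL terms of all layers would be summed into the loss.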
Preferably, the specific steps of step 4) include:
41) inputting the data processed by the principal component analysis algorithm in the step 1) into the Bayesian neural network;
42) sample from the Gaussian distribution with parameters (μ_i, σ_i) to obtain the network parameter ω_i;
43) separately calculate log q(ω_i|θ_i), log p(ω_i) and log p(Y_j|ω, X_j), thereby obtaining the loss function L(D, θ);
44) repeatedly update the parameter θ' = θ − α∇_θ L until the loss function takes its minimum value (a toy sketch of this loop is given below).
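The training loop of steps 41) to 44) can be illustrated with a toy model containing a single scalar weight; all numerical values, the data-generating rule and the variable names below are assumptions for illustration only:

```python
# Toy sketch: sample ω from N(μ, σ²), evaluate log q(ω|θ), log p(ω) and log p(Y|ω, X),
# and update θ = (μ, ρ) by gradient descent on the Monte Carlo loss.
import torch

torch.manual_seed(0)
X = torch.linspace(0, 1, 100)
Y = 2.0 * X + 0.05 * torch.randn(100)           # data generated by a true weight of 2

mu = torch.zeros((), requires_grad=True)         # variational mean
rho = torch.tensor(-2.0, requires_grad=True)     # σ = softplus(ρ)
opt = torch.optim.SGD([mu, rho], lr=0.05)        # gradient-descent update θ' = θ − α∇θL

Normal = torch.distributions.Normal
for step in range(500):
    opt.zero_grad()
    sigma = torch.nn.functional.softplus(rho)
    w = mu + sigma * torch.randn(())             # 42) sample ω from N(μ, σ²)
    log_q = Normal(mu, sigma).log_prob(w)        # 43) log q(ω|θ)
    log_prior = Normal(0.0, 1.0).log_prob(w)     #     log p(ω)
    log_lik = Normal(w * X, 0.1).log_prob(Y).sum()   # log p(Y|ω, X)
    loss = log_q - log_prior - log_lik           # Monte Carlo estimate of L(D, θ)
    loss.backward()
    opt.step()                                   # 44) repeat until the loss settles
```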
Preferably, the specific steps of step 5) include:
51) preprocessing the operation data of the fuel cell by using a principal component analysis method, and selecting the characteristic that the accumulated contribution rate reaches 90% to realize data dimension reduction;
52) predicting the selected characteristic data by using the optimal recurrent neural network selected in the step 2);
53) because the uncertainty factors of the fuel cell operation data are quantified by a probability density distribution, the predicted voltage follows that distribution, and a confidence interval is selected manually;
54) take the prediction result of the recurrent neural network as the input of the trained Bayesian neural network to obtain the distribution and interval of the predicted voltage of the fuel cell at each moment (a sketch of this interval extraction is given below).
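Steps 53) and 54) can be illustrated as follows: repeated stochastic forward passes yield a voltage distribution at each moment, from which a manually chosen confidence interval is read off. The stand-in predictor below only mimics a Bayesian network for illustration; the prediction horizon and voltage values are assumptions:

```python
import numpy as np

def bayesian_forward(time_hours, rng):
    # stand-in for one forward pass with weights sampled from their posteriors;
    # here the "posterior" is faked as Gaussian noise around a nominal decay curve
    w = rng.normal(loc=1.0, scale=0.02)
    return 3.3 - 0.0003 * time_hours * w

rng = np.random.default_rng(0)
time_hours = np.arange(400, 701, 30)             # prediction horizon, e.g. 400 h to 700 h
samples = np.stack([bayesian_forward(time_hours, rng) for _ in range(1000)])

mean_voltage = samples.mean(axis=0)              # point estimate at each moment
lower, upper = np.percentile(samples, [2.5, 97.5], axis=0)   # manually chosen 95 % interval
for t, m, lo, hi in zip(time_hours, mean_voltage, lower, upper):
    print(f"t = {t} h: V ~ {m:.3f} V, 95 % interval [{lo:.3f}, {hi:.3f}] V")
```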
The invention also provides a device for predicting the service life of the proton exchange membrane fuel cell based on the Bayesian recurrent neural network, which is characterized by comprising a data set preprocessing unit, a recurrent neural network selecting unit, a Bayesian neural network establishing unit, a model training unit and a fuel cell service life predicting unit:
the data set preprocessing unit: preprocesses a given data set, extracts the main data features in the data set with principal component analysis to realise data dimension reduction, and splits the processed data into a training set and a test set at a ratio of 8:2;
the recurrent neural network selection unit: predicts the characteristic data of the fuel cell during operation with RNN, LSTM and GRU networks respectively, compares and analyses the prediction results of the three networks using the mean square error, the root mean square error and the mean absolute error as indexes, and selects the optimal recurrent neural network according to the comparison result;
the Bayesian neural network establishing unit: replacing fixed parameters in the deep learning neural network with random variables, and quantifying uncertainty through probability density distribution to establish a Bayesian neural network;
the model training unit: taking the data in the training set as the input of a Bayesian neural network, training network parameters until a global optimal solution which enables a loss function to reach the minimum is found;
the fuel cell life prediction unit: takes the fuel cell characteristic data predicted by the optimal recurrent neural network as the input of the Bayesian neural network to obtain the predicted voltage interval of the fuel cell at each moment.
Further, in the bayesian neural network establishing unit, the bayesian neural network is established according to the following steps:
31) assume that a network parameter of the neural network is ω and that p(ω) is the prior distribution of the parameter; denote a given training sample set as D = (X, Y), where X is the input data and Y is the label data; the value of the network parameter ω then depends on the training samples, and ω obeys a distribution denoted p(ω|D); the predicted value is expressed as

p\left(y^{*} \mid x^{*}, D\right) = \int p\left(y^{*} \mid x^{*}, \omega\right)\, p(\omega \mid D)\, d\omega

where p(ω|D) is the posterior probability, p(D|ω) is the likelihood function, and p(D) is the marginal (evidence) term; because ω is a random variable, the predicted value is also a random variable, which characterises the uncertainty of the prediction; y^{*} is the predicted value; dω denotes the differential of the network parameter ω;

32) since the posterior probability p(ω|D) of the Bayesian neural network cannot be solved directly, it is approximated by variational inference and the required loss function is constructed;

using variational inference, the true posterior probability p(ω|D) is approximated by a distribution q(ω|θ) controlled by a set of parameters θ; a Gaussian approximation is used, i.e. θ = (μ, σ), so that each network parameter ω_i obeys a Gaussian distribution with parameters (μ_i, σ_i); the difference between q(ω|θ) and p(ω|D) is measured by the KL divergence:

\theta^{*} = \arg\min_{\theta}\; KL\left[\, q(\omega \mid \theta) \,\|\, p(\omega \mid D) \,\right]

where θ^{*} is the value of the parameter θ that minimises the difference between q(ω|θ) and p(ω|D);

according to the formula of the KL divergence, it can further be deduced that

KL\left[q(\omega \mid \theta) \,\|\, p(\omega \mid D)\right] = KL\left[q(\omega \mid \theta) \,\|\, p(\omega)\right] - E_{q(\omega \mid \theta)}\left[\log p(D \mid \omega)\right] + \log p(D)

since the marginal term p(D) is independent of θ, the loss function is written as

L(D, \theta) = KL\left[q(\omega \mid \theta) \,\|\, p(\omega)\right] - E_{q(\omega \mid \theta)}\left[\log p(D \mid \omega)\right]

where E_{q(ω|θ)}[·] denotes the mathematical expectation under the distribution q(ω|θ), and L(D, θ) is the loss function, whose value varies with the parameter θ;

33) approximate the loss function by Monte Carlo simulation, giving

L(D, \theta) \approx \sum_{i}\left[\log q\left(\omega_i \mid \theta_i\right) - \log p\left(\omega_i\right)\right] - \sum_{j}\log p\left(Y_j \mid \omega, X_j\right)

where θ_i denotes the parameters of the Gaussian distribution obeyed by ω_i, X_j is the j-th input data, and Y_j is the label data corresponding to X_j;

34) scale the model: suppose the entire data set is divided into M mini-batches and the loss is averaged over the mini-batches, giving

L(D, \theta) = \sum_{i=1}^{M} L_i^{EQ}\left(D_i, \theta\right)

where L_i^{EQ}(D_i, θ) denotes the loss function of each mini-batch of data, i.e.

L_i^{EQ}\left(D_i, \theta\right) = \frac{1}{M}\, KL\left[q(\omega \mid \theta) \,\|\, p(\omega)\right] - E_{q(\omega \mid \theta)}\left[\log p\left(D_i \mid \omega\right)\right];

35) update the network parameters by gradient descent, i.e.

\theta' = \theta - \alpha \nabla_{\theta} L(D, \theta)

where θ' is the updated parameter θ, ∇_θ denotes the gradient with respect to the parameter θ, α is the learning rate, and L is the loss function L(D, θ).
Further, in the model training unit, a bayesian neural network is trained as follows:
41) inputting the data processed by the principal component analysis algorithm into the Bayesian neural network;
42) sample from the Gaussian distribution with parameters (μ_i, σ_i) to obtain the network parameter ω_i;
43) separately calculate log q(ω_i|θ_i), log p(ω_i) and log p(Y_j|ω, X_j), thereby obtaining the loss function L(D, θ);
44) repeatedly update the parameter θ' = θ − α∇_θ L until the loss function takes its minimum value.
The present invention further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method for predicting lifetime of a proton exchange membrane fuel cell based on a bayesian recurrent neural network described above.
Compared with the prior art, the invention has the following beneficial effects: firstly, the method uses the optimal recurrent neural network to predict the operation data of the fuel cell, so that a large amount of measured data is not required; secondly, the uncertainty factors of the prediction data are quantified by probability density distributions and introduced into the prediction of the remaining service life of the fuel cell, and a proton exchange membrane fuel cell life prediction model based on the Bayesian recurrent neural network is established, so that both point estimation and interval estimation of the predicted voltage at each moment can be performed, improving the fault tolerance and accuracy of fuel cell life prediction.
In addition, the data set is subjected to data preprocessing by adopting a principal component analysis method, and main characteristic data is selected as the input of the Bayesian recurrent neural network, so that the complexity of the data set is reduced, and the model solving speed is higher.
Drawings
FIG. 1 is a flow chart of a life prediction method of a proton exchange membrane fuel cell based on a Bayesian recurrent neural network of the present invention;
FIG. 2 is a flow chart of the optimal recurrent neural network selection of the present invention;
fig. 3 is a flowchart of the bayesian neural network training process in the present invention.
Fig. 4 is a distribution diagram of the predicted voltage at some time points in the embodiment of the present invention, which is a distribution diagram of the predicted voltage at the time points of 400h, 430h, 460h, 490h, 520h, 550h, 580h, 610h, 640h, 670h, and 700h from left to right in sequence.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments.
The following detailed description of the embodiments of the present invention will be made with reference to the drawings and examples, which are provided for illustrating the present invention and are not intended to limit the scope of the present invention.
The invention discloses a method for predicting the life of a proton exchange membrane based on a Bayesian recurrent neural network, which comprises the following steps: preprocessing the original data with principal component analysis (PCA), selecting the characteristic data whose cumulative contribution rate exceeds 90%, and splitting the data into a training set and a test set at a ratio of 8:2; establishing different recurrent neural networks, predicting the different characteristic data respectively, and selecting a suitable network according to the prediction results; replacing the fixed parameters of the deep learning model with random variables and quantifying the uncertainty of each characteristic through probability density distributions to establish a Bayesian neural network; training the network parameters of the Bayesian neural network with the training set data; after training, inputting the previous operation data of the fuel cell into the recurrent neural network to obtain predictions of the characteristic data, and inputting these characteristic data into the Bayesian neural network to finally obtain the interval of the predicted voltage of the fuel cell.
Examples
The embodiment provides a method for predicting the lifetime of a proton exchange membrane based on a Bayesian recurrent neural network, which comprises five steps: preprocessing of the sample data set (i.e. selection of the characteristic data), selection of the optimal recurrent neural network, construction of the Bayesian neural network, model training, and life prediction of the fuel cell, specifically as follows:
1) and carrying out data preprocessing on a given sample data set, extracting main characteristic data in the data set by adopting a principal component analysis method, realizing data dimension reduction, and dividing the processed data set into a training set and a testing set.
2) Establishing three different recurrent neural networks, namely RNN, LSTM and GRU, predicting the characteristic data selected in step 1) respectively, and selecting the optimal recurrent neural network according to the prediction results.
3) Fixed parameters in the classical deep learning neural network are replaced by random variables, uncertainty factors contained in various types of feature data are quantified through probability density distribution, and the Bayesian neural network is established.
4) And taking the data in the training set as the input of the Bayesian neural network, and training network parameters until a global optimal solution which enables the loss function to be minimum is found.
5) Predicting the characteristic data of the fuel cell with the optimal recurrent neural network, taking the predicted results as the input of the trained Bayesian neural network, and giving the interval of the predicted voltage at each moment.
Step 1): preprocessing sample data by using a principal component analysis method, and extracting characteristic data, wherein the specific method comprises the following steps:
step 11) record the original sample-indicator data matrix according to the fuel cell operation data as

X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1p} \\ x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{np} \end{pmatrix}

where x_1, x_2, \ldots, x_p are the operating indicator data of the fuel cell, i.e. x_p = [x_{1p}, x_{2p}, \ldots, x_{np}]^T, n is the number of samples, and p is the number of indicators recorded during operation;

step 12) standardise the fuel cell operation sample data, where \bar{x}_j denotes the sample mean and Var(x_j) the sample variance of the j-th indicator:

\tilde{x}_{ij} = \frac{x_{ij} - \bar{x}_j}{\sqrt{Var(x_j)}}, \qquad \bar{x}_j = \frac{1}{n}\sum_{i=1}^{n} x_{ij}, \qquad Var(x_j) = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_{ij} - \bar{x}_j\right)^2;

step 13) calculate the sample correlation coefficient matrix:

R = \left(r_{ij}\right)_{p \times p}, \qquad r_{ij} = \frac{1}{n-1}\sum_{k=1}^{n} \tilde{x}_{ki}\,\tilde{x}_{kj}

where the closer |r_{ij}| is to 1, the more strongly correlated the variables x_i and x_j are, and the closer it is to 0, the weaker their correlation.

Step 14) calculate the eigenvalues and corresponding eigenvectors of the correlation coefficient matrix R:

eigenvalues: \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_p

eigenvectors: a_1, a_2, \ldots, a_p

Step 15) calculate the contribution rate:

b_i = \frac{\lambda_i}{\sum_{k=1}^{p} \lambda_k}

Step 16) select the important principal components: arrange the contribution rates in descending order and take the first k principal components such that

\frac{\sum_{i=1}^{k} \lambda_i}{\sum_{i=1}^{p} \lambda_i} \geq T_0, \qquad T_0 = 90\%

Extract the data features of the original data set by principal component analysis, select the features whose cumulative contribution rate reaches 90% as the input features of the subsequent Bayesian recurrent neural network, and split the data into a training set and a test set at a ratio of 8:2.
In this embodiment, the selected principal components and their contribution rates are listed in a table (not reproduced here).
step 2) use the three recurrent neural networks to respectively predict the characteristics selected in step 1) and compare the prediction effects, with the following specific steps:
step 21) select three mainstream recurrent neural networks, namely the RNN, LSTM and GRU;
step 22) use the three recurrent neural networks to respectively predict the characteristic data of the fuel cell during operation and compare the prediction results of the three networks;
step 23) compare and analyse the prediction results of the three networks using the mean square error, the root mean square error and the mean absolute error as indexes, and select the optimal recurrent neural network according to the comparison result.
Through comparison of the prediction effects and error analysis, the optimal recurrent neural network selected is the GRU; its prediction errors on the selected characteristics of the fuel cell are listed in a table (not reproduced here).
step 3) establishing a Bayesian neural network according to the following steps, as shown in FIG. 2:
step 31) assume that a network parameter of the neural network is ω and that p(ω) is the prior distribution of the parameter; denote a given training sample set as D = (X, Y), where X is the input data and Y is the label data; the value of the network parameter ω then depends on the training samples, and ω obeys a distribution denoted p(ω|D); the predicted value is expressed as

p\left(y^{*} \mid x^{*}, D\right) = \int p\left(y^{*} \mid x^{*}, \omega\right)\, p(\omega \mid D)\, d\omega

where p(ω|D) is the posterior probability, p(D|ω) is the likelihood function, and p(D) is the marginal (evidence) term; because ω is a random variable, the predicted value is also a random variable, which characterises the uncertainty of the prediction; y^{*} is the predicted value; dω denotes the differential of the network parameter ω;

step 32) since the posterior probability p(ω|D) of the Bayesian neural network cannot be solved directly, it is approximated by variational inference and the required loss function is constructed;

using variational inference, the true posterior probability p(ω|D) is approximated by a distribution q(ω|θ) controlled by a set of parameters θ; a Gaussian approximation is used, i.e. θ = (μ, σ), so that each network parameter ω_i obeys a Gaussian distribution with parameters (μ_i, σ_i); the difference between q(ω|θ) and p(ω|D) is measured by the KL divergence, i.e. the optimisation is

\theta^{*} = \arg\min_{\theta}\; KL\left[\, q(\omega \mid \theta) \,\|\, p(\omega \mid D) \,\right]

where θ^{*} is the value of the parameter θ that minimises the difference between q(ω|θ) and p(ω|D);

according to the formula of the KL divergence, it can further be deduced that

KL\left[q(\omega \mid \theta) \,\|\, p(\omega \mid D)\right] = KL\left[q(\omega \mid \theta) \,\|\, p(\omega)\right] - E_{q(\omega \mid \theta)}\left[\log p(D \mid \omega)\right] + \log p(D)

since the marginal term p(D) is independent of θ, the loss function is written as

L(D, \theta) = KL\left[q(\omega \mid \theta) \,\|\, p(\omega)\right] - E_{q(\omega \mid \theta)}\left[\log p(D \mid \omega)\right]

where E_{q(ω|θ)}[·] denotes the mathematical expectation under the distribution q(ω|θ), and L(D, θ) is the loss function, whose value varies with the parameter θ;

step 33) approximate the loss function by Monte Carlo simulation, giving

L(D, \theta) \approx \sum_{i}\left[\log q\left(\omega_i \mid \theta_i\right) - \log p\left(\omega_i\right)\right] - \sum_{j}\log p\left(Y_j \mid \omega, X_j\right)

where θ_i denotes the parameters of the Gaussian distribution obeyed by ω_i, X_j is the j-th input data, and Y_j is the label data corresponding to X_j;

step 34) in order to reduce the complexity of the model, the model needs to be scaled to a certain extent; suppose the entire data set is divided into M mini-batches and the loss is averaged over the mini-batches, giving

L(D, \theta) = \sum_{i=1}^{M} L_i^{EQ}\left(D_i, \theta\right)

where L_i^{EQ}(D_i, θ) denotes the loss function of each mini-batch of data, i.e.

L_i^{EQ}\left(D_i, \theta\right) = \frac{1}{M}\, KL\left[q(\omega \mid \theta) \,\|\, p(\omega)\right] - E_{q(\omega \mid \theta)}\left[\log p\left(D_i \mid \omega\right)\right];

step 35) update the network parameters by gradient descent, i.e.

\theta' = \theta - \alpha \nabla_{\theta} L(D, \theta)

where θ' is the updated parameter θ, ∇_θ denotes the gradient with respect to the parameter θ, α is the learning rate, and L is the loss function L(D, θ).
Step 4) training a Bayesian neural network according to the following steps, as shown in FIG. 3:
step 41) inputting the data processed by the principal component analysis algorithm into a network;
step 42) sample from the Gaussian distribution with parameters (μ_i, σ_i) to obtain the network parameter ω_i;
step 43) separately calculate log q(ω_i|θ_i), log p(ω_i) and log p(Y_j|ω, X_j), thereby obtaining the loss function L(D, θ);
step 44) repeatedly update the parameter θ' = θ − α∇_θ L until the loss function takes its minimum value.
Step 5) input the prediction result of the optimal recurrent neural network into the trained Bayesian neural network to obtain the predicted voltage distribution and interval at each moment, with the following specific steps:
step 51) preprocessing the operation data of the fuel cell by using a principal component analysis method, and selecting the characteristic that the accumulated contribution rate reaches 90 percent to realize data dimension reduction;
step 52) using the optimal recurrent neural network to predict the selected characteristic data;
step 53) because the uncertainty factors of the fuel cell operation data are quantified by a probability density distribution and the predicted voltage follows a certain distribution, a certain confidence interval needs to be selected manually;
step 54) take the prediction result of the recurrent neural network as the input of the trained Bayesian neural network to obtain the distribution and interval of the predicted voltage of the fuel cell at each moment.
Based on the above method, the invention provides a device for predicting the life of a proton exchange membrane fuel cell based on a Bayesian recurrent neural network, which comprises a data set preprocessing unit, a recurrent neural network selection unit, a Bayesian neural network establishing unit, a model training unit and a fuel cell life prediction unit; wherein,
a data set preprocessing unit: preprocesses a given data set, extracts the main data features in the data set with principal component analysis to realise data dimension reduction, splits the processed data into a training set and a test set at a ratio of 8:2, and executes the process of selecting the characteristic data in step 1);
a recurrent neural network selection unit: predicts the characteristic data of the fuel cell during operation with RNN, LSTM and GRU networks respectively, compares and analyses the prediction results of the three networks using the mean square error, the root mean square error and the mean absolute error as indexes, selects the optimal recurrent neural network according to the comparison result, and executes the selection of the optimal recurrent neural network in step 2);
a Bayesian neural network establishing unit: replacing fixed parameters in the deep learning neural network with random variables, carrying out uncertainty quantification through probability density distribution, establishing a Bayesian neural network, and executing the construction process of the Bayesian neural network in the step 3);
a model training unit: taking the data in the training set as the input of a Bayesian neural network, training network parameters until a global optimal solution which enables a loss function to be minimum is found, and executing the model training process in the step 4);
a fuel cell life prediction unit: takes the fuel cell characteristic data predicted by the optimal recurrent neural network as the input of the Bayesian neural network to obtain the predicted voltage interval of the fuel cell at each moment, and executes the fuel cell life prediction process of step 5).
The invention further provides a computer readable storage medium, which stores a computer program, and the computer program is executed by a processor to realize the life prediction method of the proton exchange membrane based on the bayesian recurrent neural network.
Finally, it should be noted that the above-mentioned embodiments are only for illustrating the patent solution and not for limiting, and those skilled in the art should understand that the technical solution of the patent can be modified or substituted with equivalent without departing from the spirit and scope of the patent solution, which shall be covered by the claims of the patent.

Claims (10)

1. A life prediction method of a proton exchange membrane based on a Bayesian recurrent neural network is characterized by comprising the following steps:
1) performing data preprocessing on a fuel cell operation sample data set, extracting main characteristic data in the data set by adopting a principal component analysis method, realizing data dimension reduction, and dividing the processed data set into a training set and a testing set;
2) establishing three different recurrent neural networks to respectively predict the characteristic data selected in step 1), and selecting the optimal recurrent neural network according to the prediction results;
3) replacing fixed parameters in the classical deep learning neural network with random variables, quantifying uncertainty factors contained in various types of characteristic data through probability density distribution, and establishing a Bayesian neural network;
4) taking the data in the training set as the input of a Bayesian neural network, training network parameters until a global optimal solution which enables a loss function to reach the minimum is found;
5) preprocessing the operation data of the fuel cell with principal component analysis, predicting the selected characteristic data with the optimal recurrent neural network selected in step 2), and taking the predicted result as the input of the trained Bayesian neural network to obtain the distribution and interval of the predicted voltage of the fuel cell at each moment.
2. The proton exchange membrane life prediction method based on the Bayesian recurrent neural network as recited in claim 1, wherein: step 1) the specific steps of data feature extraction include:
11) according to the fuel cell operation sample data, record the original sample-indicator data matrix as

X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1p} \\ x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{np} \end{pmatrix}

where x_1, x_2, \ldots, x_p are the operating indicator data of the fuel cell, i.e. x_p = [x_{1p}, x_{2p}, \ldots, x_{np}]^T, n is the number of samples, and p is the number of indicators recorded during operation;

12) standardise the fuel cell operation sample data, where \bar{x}_j denotes the sample mean and Var(x_j) the sample variance of the j-th indicator:

\tilde{x}_{ij} = \frac{x_{ij} - \bar{x}_j}{\sqrt{Var(x_j)}}, \qquad \bar{x}_j = \frac{1}{n}\sum_{i=1}^{n} x_{ij}, \qquad Var(x_j) = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_{ij} - \bar{x}_j\right)^2;

13) calculate the correlation coefficient matrix of the sample data:

R = \left(r_{ij}\right)_{p \times p}, \qquad r_{ij} = \frac{1}{n-1}\sum_{k=1}^{n} \tilde{x}_{ki}\,\tilde{x}_{kj}

where the closer |r_{ij}| is to 1, the more strongly correlated the variables x_i and x_j are, and the closer it is to 0, the weaker their correlation;

14) calculate the eigenvalues and corresponding eigenvectors of the correlation coefficient matrix R:

eigenvalues: \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_p

eigenvectors: a_1, a_2, \ldots, a_p

15) calculate the contribution rate of each principal component:

b_i = \frac{\lambda_i}{\sum_{k=1}^{p} \lambda_k}

16) extract the data features of the original data set by principal component analysis and select the features whose cumulative contribution rate reaches 90% as the input features of the subsequent Bayesian recurrent neural network: arrange the contribution rates in descending order and take the first k principal components such that

\frac{\sum_{i=1}^{k} \lambda_i}{\sum_{i=1}^{p} \lambda_i} \geq T_0, \qquad T_0 = 90\%

and split the data set after feature extraction into a training set and a test set at a ratio of 8:2.
3. The Bayesian recurrent neural network-based proton exchange membrane life prediction method as recited in claim 1, wherein: step 2) the concrete steps of selecting the optimal recurrent neural network comprise:
21) select three mainstream recurrent neural network architectures: the RNN, the LSTM and the GRU;
22) use the three recurrent neural networks to respectively predict the characteristic data of the fuel cell during operation, and compare the prediction results of the three networks;
23) compare and analyse the prediction results of the three networks using the mean square error, the root mean square error and the mean absolute error as indexes, and select the optimal recurrent neural network according to the comparison result.
4. The proton exchange membrane life prediction method based on the Bayesian recurrent neural network as recited in claim 1, wherein: step 3) the specific steps of establishing the Bayesian neural network comprise:
31) assume that a network parameter of the neural network is ω and that p(ω) is the prior distribution of the parameter; denote a given training sample set as D = (X, Y), where X is the input data and Y is the label data; the value of the network parameter ω then depends on the training samples, and ω obeys a distribution denoted p(ω|D); the predicted value is expressed as

p\left(y^{*} \mid x^{*}, D\right) = \int p\left(y^{*} \mid x^{*}, \omega\right)\, p(\omega \mid D)\, d\omega

where p(ω|D) is the posterior probability, p(D|ω) is the likelihood function, and p(D) is the marginal (evidence) term; because ω is a random variable, the predicted value is also a random variable, which characterises the uncertainty of the prediction; y^{*} is the predicted value; dω denotes the differential of the network parameter ω;

32) since the posterior probability p(ω|D) of the Bayesian neural network cannot be solved directly, it is approximated by variational inference and the required loss function is constructed;

using variational inference, the true posterior probability p(ω|D) is approximated by a distribution q(ω|θ) controlled by a set of parameters θ; a Gaussian approximation is used, i.e. θ = (μ, σ), so that each network parameter ω_i obeys a Gaussian distribution with parameters (μ_i, σ_i); the difference between q(ω|θ) and p(ω|D) is measured by the KL divergence:

\theta^{*} = \arg\min_{\theta}\; KL\left[\, q(\omega \mid \theta) \,\|\, p(\omega \mid D) \,\right]

where θ^{*} is the value of the parameter θ that minimises the difference between q(ω|θ) and p(ω|D);

according to the formula of the KL divergence, it can further be deduced that

KL\left[q(\omega \mid \theta) \,\|\, p(\omega \mid D)\right] = KL\left[q(\omega \mid \theta) \,\|\, p(\omega)\right] - E_{q(\omega \mid \theta)}\left[\log p(D \mid \omega)\right] + \log p(D)

since the marginal term p(D) is independent of θ, the loss function is written as

L(D, \theta) = KL\left[q(\omega \mid \theta) \,\|\, p(\omega)\right] - E_{q(\omega \mid \theta)}\left[\log p(D \mid \omega)\right]

where E_{q(ω|θ)}[·] denotes the mathematical expectation under the distribution q(ω|θ), and L(D, θ) is the loss function, whose value varies with the parameter θ;

33) approximate the loss function by Monte Carlo simulation, giving

L(D, \theta) \approx \sum_{i}\left[\log q\left(\omega_i \mid \theta_i\right) - \log p\left(\omega_i\right)\right] - \sum_{j}\log p\left(Y_j \mid \omega, X_j\right)

where θ_i denotes the parameters of the Gaussian distribution obeyed by ω_i, X_j is the j-th input data, and Y_j is the label data corresponding to X_j;

34) scale the model: suppose the entire data set is divided into M mini-batches and the loss is averaged over the mini-batches, giving

L(D, \theta) = \sum_{i=1}^{M} L_i^{EQ}\left(D_i, \theta\right)

where L_i^{EQ}(D_i, θ) denotes the loss function of each mini-batch of data, i.e.

L_i^{EQ}\left(D_i, \theta\right) = \frac{1}{M}\, KL\left[q(\omega \mid \theta) \,\|\, p(\omega)\right] - E_{q(\omega \mid \theta)}\left[\log p\left(D_i \mid \omega\right)\right];

35) update the network parameters by gradient descent, i.e.

\theta' = \theta - \alpha \nabla_{\theta} L(D, \theta)

where θ' is the updated parameter θ, ∇_θ denotes the gradient with respect to the parameter θ, α is the learning rate, and L is the loss function L(D, θ).
5. The Bayesian recurrent neural network-based proton exchange membrane life prediction method as recited in claim 3, wherein: the specific steps of step 4) include:
41) inputting the data processed by the principal component analysis algorithm in the step 1) into the Bayesian neural network;
42) sample from the Gaussian distribution with parameters (μ_i, σ_i) to obtain the network parameter ω_i;
43) separately calculate log q(ω_i|θ_i), log p(ω_i) and log p(Y_j|ω, X_j), thereby obtaining the loss function L(D, θ);
44) repeatedly update the parameter θ' = θ − α∇_θ L until the loss function takes its minimum value.
6. The method for predicting the life of the proton exchange membrane based on the Bayesian recurrent neural network as recited in claim 1, wherein: the specific steps of step 5) include:
51) preprocessing the operation data of the fuel cell by using a principal component analysis method, and selecting the characteristic that the accumulated contribution rate reaches 90% to realize data dimension reduction;
52) predicting the selected characteristic data by using the optimal recurrent neural network selected in the step 2);
53) because the uncertainty factors of the fuel cell operation data are quantified by a probability density distribution, the predicted voltage follows that distribution, and a confidence interval is selected manually;
54) taking the prediction result of the recurrent neural network as the input of the trained Bayesian neural network to obtain the distribution and interval of the predicted voltage of the fuel cell at each moment.
7. A proton exchange membrane fuel cell life prediction device based on Bayesian recurrent neural network is characterized in that: the device comprises a data set preprocessing unit, a selection unit of a cyclic neural network, a Bayesian neural network establishing unit, a model training unit and a fuel cell service life predicting unit:
the data set preprocessing unit: preprocesses a given data set, extracts the main data features in the data set with principal component analysis to realise data dimension reduction, and splits the processed data into a training set and a test set at a ratio of 8:2;
the recurrent neural network selection unit: predicts the characteristic data of the fuel cell during operation with RNN, LSTM and GRU networks respectively, compares and analyses the prediction results of the three networks using the mean square error, the root mean square error and the mean absolute error as indexes, and selects the optimal recurrent neural network according to the comparison result;
the Bayesian neural network establishing unit: replacing fixed parameters in the deep learning neural network with random variables, and quantifying uncertainty through probability density distribution to establish a Bayesian neural network;
the model training unit: taking the data in the training set as the input of a Bayesian neural network, training network parameters until a global optimal solution which enables a loss function to reach the minimum is found;
the fuel cell life prediction unit: takes the fuel cell characteristic data predicted by the optimal recurrent neural network as the input of the Bayesian neural network to obtain the predicted voltage interval of the fuel cell at each moment.
8. The device for predicting the life of the proton exchange membrane fuel cell based on the Bayesian recurrent neural network as recited in claim 7, wherein: in the Bayesian neural network establishing unit, establishing a Bayesian neural network according to the following steps:
31) assume that a network parameter of the neural network is ω and that p(ω) is the prior distribution of the parameter; denote a given training sample set as D = (X, Y), where X is the input data and Y is the label data; the value of the network parameter ω then depends on the training samples, and ω obeys a distribution denoted p(ω|D); the predicted value is expressed as

p\left(y^{*} \mid x^{*}, D\right) = \int p\left(y^{*} \mid x^{*}, \omega\right)\, p(\omega \mid D)\, d\omega

where p(ω|D) is the posterior probability, p(D|ω) is the likelihood function, and p(D) is the marginal (evidence) term; because ω is a random variable, the predicted value is also a random variable, which characterises the uncertainty of the prediction; y^{*} is the predicted value; dω denotes the differential of the network parameter ω;

32) since the posterior probability p(ω|D) of the Bayesian neural network cannot be solved directly, it is approximated by variational inference and the required loss function is constructed;

using variational inference, the true posterior probability p(ω|D) is approximated by a distribution q(ω|θ) controlled by a set of parameters θ; a Gaussian approximation is used, i.e. θ = (μ, σ), so that each network parameter ω_i obeys a Gaussian distribution with parameters (μ_i, σ_i); the difference between q(ω|θ) and p(ω|D) is measured by the KL divergence:

\theta^{*} = \arg\min_{\theta}\; KL\left[\, q(\omega \mid \theta) \,\|\, p(\omega \mid D) \,\right]

where θ^{*} is the value of the parameter θ that minimises the difference between q(ω|θ) and p(ω|D);

according to the formula of the KL divergence, it can further be deduced that

KL\left[q(\omega \mid \theta) \,\|\, p(\omega \mid D)\right] = KL\left[q(\omega \mid \theta) \,\|\, p(\omega)\right] - E_{q(\omega \mid \theta)}\left[\log p(D \mid \omega)\right] + \log p(D)

since the marginal term p(D) is independent of θ, the loss function is written as

L(D, \theta) = KL\left[q(\omega \mid \theta) \,\|\, p(\omega)\right] - E_{q(\omega \mid \theta)}\left[\log p(D \mid \omega)\right]

where E_{q(ω|θ)}[·] denotes the mathematical expectation under the distribution q(ω|θ), and L(D, θ) is the loss function, whose value varies with the parameter θ;

33) approximate the loss function by Monte Carlo simulation, giving

L(D, \theta) \approx \sum_{i}\left[\log q\left(\omega_i \mid \theta_i\right) - \log p\left(\omega_i\right)\right] - \sum_{j}\log p\left(Y_j \mid \omega, X_j\right)

where θ_i denotes the parameters of the Gaussian distribution obeyed by ω_i, X_j is the j-th input data, and Y_j is the label data corresponding to X_j;

34) scale the model: suppose the entire data set is divided into M mini-batches and the loss is averaged over the mini-batches, giving

L(D, \theta) = \sum_{i=1}^{M} L_i^{EQ}\left(D_i, \theta\right)

where L_i^{EQ}(D_i, θ) denotes the loss function of each mini-batch of data, i.e.

L_i^{EQ}\left(D_i, \theta\right) = \frac{1}{M}\, KL\left[q(\omega \mid \theta) \,\|\, p(\omega)\right] - E_{q(\omega \mid \theta)}\left[\log p\left(D_i \mid \omega\right)\right];

35) update the network parameters by gradient descent, i.e.

\theta' = \theta - \alpha \nabla_{\theta} L(D, \theta)

where θ' is the updated parameter θ, ∇_θ denotes the gradient with respect to the parameter θ, α is the learning rate, and L is the loss function L(D, θ).
9. The device for predicting the life of a proton exchange membrane fuel cell based on the Bayesian recurrent neural network as recited in claim 8, wherein: in the model training unit, training a Bayesian neural network according to the following steps:
41) inputting the data processed by the principal component analysis algorithm into the Bayesian neural network;
42) sample from the Gaussian distribution with parameters (μ_i, σ_i) to obtain the network parameter ω_i;
43) separately calculate log q(ω_i|θ_i), log p(ω_i) and log p(Y_j|ω, X_j), thereby obtaining the loss function L(D, θ);
44) repeatedly update the parameter θ' = θ − α∇_θ L until the loss function takes its minimum value.
10. A computer-readable storage medium, storing a computer program, characterized in that the computer program, when being executed by a processor, carries out the method of any one of claims 1 to 6.
CN202210500365.5A 2022-05-09 2022-05-09 Method and device for predicting life of proton exchange membrane based on Bayesian recurrent neural network Pending CN114943372A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210500365.5A CN114943372A (en) 2022-05-09 2022-05-09 Method and device for predicting life of proton exchange membrane based on Bayesian recurrent neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210500365.5A CN114943372A (en) 2022-05-09 2022-05-09 Method and device for predicting life of proton exchange membrane based on Bayesian recurrent neural network

Publications (1)

Publication Number Publication Date
CN114943372A true CN114943372A (en) 2022-08-26

Family

ID=82907330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210500365.5A Pending CN114943372A (en) 2022-05-09 2022-05-09 Method and device for predicting life of proton exchange membrane based on Bayesian recurrent neural network

Country Status (1)

Country Link
CN (1) CN114943372A (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115713099A (en) * 2023-01-03 2023-02-24 税友信息技术有限公司 Model design method, device, equipment and storage medium
CN115713099B (en) * 2023-01-03 2023-05-09 税友信息技术有限公司 Model design method, device, equipment and storage medium
CN115856645A (en) * 2023-03-01 2023-03-28 四川能投氢能产业投资有限公司 Method, device and equipment for analyzing durability of proton exchange membrane fuel cell
CN115856645B (en) * 2023-03-01 2023-04-28 四川能投氢能产业投资有限公司 Method, device and equipment for analyzing durability of proton exchange membrane fuel cell
CN116502530A (en) * 2023-04-27 2023-07-28 重庆大学 Membrane pollution early warning method and device based on machine learning
CN116502530B (en) * 2023-04-27 2023-11-07 重庆大学 Membrane pollution early warning method and device based on machine learning
CN116840722A (en) * 2023-06-09 2023-10-03 淮阴工学院 Performance degradation evaluation and life prediction method for proton exchange membrane fuel cell
CN116840722B (en) * 2023-06-09 2024-02-23 淮阴工学院 Performance degradation evaluation and life prediction method for proton exchange membrane fuel cell
CN116629451A (en) * 2023-06-25 2023-08-22 湖北工业大学 Fuel cell residual life prediction method, system, medium and terminal

Similar Documents

Publication Publication Date Title
CN114943372A (en) Method and device for predicting life of proton exchange membrane based on Bayesian recurrent neural network
CN109324291B (en) Prediction method for predicting service life of proton exchange membrane fuel cell
Liu et al. Prognostics methods and degradation indexes of proton exchange membrane fuel cells: A review
CN112990556A (en) User power consumption prediction method based on Prophet-LSTM model
CN111860982A (en) Wind power plant short-term wind power prediction method based on VMD-FCM-GRU
CN111310387B (en) Fuel cell life prediction method
CN112068003B (en) Method and device for predicting service life of cadmium-nickel storage battery based on linear wiener process
CN110082682B (en) Lithium battery state of charge estimation method
Zheng et al. Performance prediction of fuel cells using long short‐term memory recurrent neural network
CN114707712A (en) Method for predicting requirement of generator set spare parts
CN114814589A (en) Method and device for predicting remaining service life of PEMFC
CN116609668A (en) Lithium ion battery health state and residual life prediction method
CN112381673A (en) Park electricity utilization information analysis method and device based on digital twin
CN113791351B (en) Lithium battery life prediction method based on transfer learning and difference probability distribution
Li et al. Degradation prediction of proton exchange membrane fuel cell based on the multi-inputs Bi-directional long short-term memory
CN110956304A (en) Distributed photovoltaic power generation capacity short-term prediction method based on GA-RBM
CN112836876B (en) Power distribution network line load prediction method based on deep learning
CN111612648B (en) Training method and device for photovoltaic power generation prediction model and computer equipment
CN110991741B (en) Section constraint probability early warning method and system based on deep learning
CN113033898A (en) Electrical load prediction method and system based on K-means clustering and BI-LSTM neural network
Sahajpal et al. Accurate long-term prognostics of proton exchange membrane fuel cells using recurrent and convolutional neural networks
CN110059342B (en) Parameter estimation method for P2D model of lithium ion battery
Wang et al. Proton exchange membrane fuel cells prognostic strategy based on navigation sequence driven long short-term memory networks
CN116338502A (en) Fuel cell life prediction method based on random noise enhancement and cyclic neural network
CN114924202A (en) Method and device for detecting service life of fuel cell

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination