CN113821974A - Engine residual life prediction method based on multiple failure modes - Google Patents
Engine residual life prediction method based on multiple failure modes
- Publication number
- CN113821974A (application CN202111042036.2A)
- Authority
- CN
- China
- Prior art keywords
- residual life
- life prediction
- gate
- network
- function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 25
- 230000006870 function Effects 0.000 claims abstract description 34
- 238000013528 artificial neural network Methods 0.000 claims abstract description 11
- 210000004027 cell Anatomy 0.000 claims description 18
- 238000006731 degradation reaction Methods 0.000 claims description 9
- 230000004913 activation Effects 0.000 claims description 7
- 230000015556 catabolic process Effects 0.000 claims description 5
- 210000002569 neuron Anatomy 0.000 claims description 4
- 238000012545 processing Methods 0.000 claims description 3
- 230000007246 mechanism Effects 0.000 description 5
- 238000012544 monitoring process Methods 0.000 description 4
- 230000008569 process Effects 0.000 description 4
- 238000003745 diagnosis Methods 0.000 description 3
- 230000007774 longterm Effects 0.000 description 3
- 238000012986 modification Methods 0.000 description 3
- 230000004048 modification Effects 0.000 description 3
- 238000005457 optimization Methods 0.000 description 3
- 238000012549 training Methods 0.000 description 3
- 238000010586 diagram Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000004519 manufacturing process Methods 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 238000004880 explosion Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 230000008713 feedback mechanism Effects 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 230000000873 masking effect Effects 0.000 description 1
- 238000013508 migration Methods 0.000 description 1
- 230000005012 migration Effects 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 230000000306 recurrent effect Effects 0.000 description 1
- 238000003860 storage Methods 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2119/00—Details relating to the type or aim of the analysis or the optimisation
- G06F2119/04—Ageing analysis or optimisation against ageing
Abstract
The invention provides an engine residual life prediction method based on multiple failure modes. The method first extracts features from multi-channel sensing data using a long short-term memory (LSTM) network, then constructs a sequential neural network using a functional structure that accounts for the logical relationship between the failure mode discrimination model and the residual life prediction regression model, and finally outputs the residual life prediction. Compared with traditional methods, the method is suited to residual life prediction under multiple failure modes, improves the accuracy of the estimation result, and provides a more precise prediction.
Description
Technical Field
The invention belongs to the technical field of engine service life prediction, and particularly relates to a method for fusing multi-source sensor signals, constructing a sequential multi-task learning model, and predicting the residual life of an engine.
Background
Nowadays, with the rapid development of sensors and information technology, multiple sensors are commonly embedded in a complex machine system to form a sensor network for machine condition monitoring and residual service life prediction. Therefore, it is essential to develop appropriate data fusion and feature extraction techniques based on high-dimensional sensor data. However, most existing engine life prediction methods only address one failure mode, ignoring the differences between the multiple potential failure modes. In fact, as manufacturing processes evolve, complex systems are susceptible to a variety of failure modes, such as communication systems and large rotating machinery. Under different failure modes, the degradation process may exhibit significantly different degradation paths, which makes condition monitoring and remaining useful life prediction more challenging. Since different failure modes have significant impact on the degradation trajectory, using only one unified predictive model for remaining life prediction may result in low generalization performance across different failure modes or high model complexity due to approximating piecewise functions. Therefore, residual life prediction, which takes into account failure mode identification, is an essential step for achieving accurate and robust life analysis.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a method for predicting the remaining life of an engine under multiple failure modes. A sequential neural network is constructed with a functional structure that simultaneously accounts for the logical relationship between the failure mode discrimination model and the remaining life prediction regression model. The method comprises the following steps:
(1) Process the multi-sensor data into time windows of fixed length L with S dimensions, where Q_n = max(T_n - L + 1, 1); wherein:
let N denote the total number of machines and T = (T_1, ..., T_N) the total life cycles of the N engines; the corresponding multi-sensor data are denoted X = {X_n^k}, where X_n^k is the multi-sensor data for machine n with failure mode k, of size T_n × S, given as X_n^k = [x_n^{k,1}, ..., x_n^{k,S}];
x_n^{k,s} is the s-th sensor time series for machine n with failure mode k, and x_{n,t}^{k,s} is the observation of the s-th sensor of engine n during observation period t;
(2) A long short-term memory network layer composed of multiple LSTM layers extracts temporal features and characterizes degradation patterns from the time series data. The classification subtask for each failure mode is designed as a fully-connected neural network; the fully-connected layers combine all neurons and learn the latent nonlinear function between their input and output features, wherein:
A long short-term memory (LSTM) unit consists of the cell state c_t, forget gate Γ_f, update gate Γ_u, candidate state c̃_t, output gate Γ_o, and final output value h_t. Let W and b denote the weights and biases in the neural network, with subscripts f, u, c, and o denoting the forget gate, update gate, candidate cell, and output gate, respectively:

Γ_f = σ(W_f[h_{t-1}, x_t] + b_f) (formula 2)
Γ_u = σ(W_u[h_{t-1}, x_t] + b_u) (formula 3)
c̃_t = tanh(W_c[h_{t-1}, x_t] + b_c) (formula 4)
Γ_o = σ(W_o[h_{t-1}, x_t] + b_o) (formula 5)

The new cell state c_t can then be updated based on the control gates (forget gate Γ_f and update gate Γ_u) as shown below:

c_t = Γ_f ⊙ c_{t-1} + Γ_u ⊙ c̃_t (formula 6)

The final output h_t is based on the cell state c_t, filtered through the Sigmoid (σ) output gate Γ_o:

h_t = Γ_o ⊙ c_t (formula 7)
(3) The fully-connected layers use the ReLU function as the activation function for the intermediate layers, while a Softmax operation is applied to the output z_{n,t} of the last fully-connected layer. The Softmax activation function is:

p_{n,t}^k = e^{z_{n,t}^k} / Σ_{j=1}^K e^{z_{n,t}^j}

which yields the probability p_{n,t}^k that instance n at time t belongs to failure mode k, where e is the natural constant;
(4) branching fully-connected layers into multiple regression sub-network models based on different failure modes, generating a residual life prediction estimate for each regression auto-networkTargeting each regression subnetwork to a different failure modeGenerated byBy integrating the probability outputs to improve the predictionAccuracy, final output residual life prediction as
Compared with the prior art, the beneficial effects of the invention are: (1) branch sub-networks are established for residual life prediction under multiple failure modes, so that a complex function is approximated by sub-network models of low complexity; (2) the invention enables data augmentation and knowledge transfer between the failure mode diagnosis task and the residual life prediction task, as well as across residual life prediction tasks under different failure modes.
Drawings
FIG. 1 is a schematic diagram of the specific mechanism of the LSTM of the present invention;
FIG. 2 is a schematic diagram of the LSTM network architecture of the present invention;
FIG. 3 is a schematic flow chart of the present invention.
Detailed Description
The invention will be further described by way of examples, without in any way limiting the scope of the invention, with reference to the accompanying drawings.
Let N denote the total number of machines and T = (T_1, ..., T_N) represent the total life cycles of the N engines. The corresponding multi-sensor data are denoted X = {X_n^k}, where X_n^k is the multi-sensor data for machine n with failure mode k, of size T_n × S, given as X_n^k = [x_n^{k,1}, ..., x_n^{k,S}].
Here x_n^{k,s} is the s-th sensor time series for machine n with failure mode k, and x_{n,t}^{k,s} is the observation collected by the s-th sensor of engine n during observation period t. The key point of the invention is to identify potential failure modes from the condition monitoring signals and further predict the remaining useful life. Thus, to describe the sequential degradation process and capture the long-term dependence of the sensor signals, the invention adopts an RNN-based mechanism to process the temporal data. The basic idea of a recurrent neural network (RNN) is to establish connections between historical cells through a directed loop. In this process, the key to memorizing valuable information is the transfer function H, whose conversion formula is

h_t = H(x_t, h_{t-1})
Where H accepts the input vector x_t at the current time and the hidden output vector h_{t-1}; the latter is the internal state memorizing previous inputs and is used to update the current hidden output h_t. Through this feedback mechanism, the historical state interacts with the current input and helps retain important information. However, an RNN is fundamentally a very deep feed-forward network in which all layers share the same weights, which makes it difficult to preserve long-term information. To address the gradient vanishing or explosion issues encountered when training conventional RNNs, the long short-term memory (LSTM) network was developed as an important variant of the RNN architecture. Since the different temporal patterns captured by the LSTM are crucial for task learning, it is well suited to classifying and predicting long sequence data. The specific LSTM mechanism is depicted in FIG. 1.
Since it derives from a standard recurrent neural network, the LSTM also has a feedback connection. Under this mechanism, the output of the LSTM unit at the previous time point t-1 is combined with the current input x_t and fed into the next cell. These sequential units form an LSTM network that processes the temporal data of multiple sensors.
Specifically, the LSTM cell consists of the cell state c_t, forget gate Γ_f, update gate Γ_u, candidate state c̃_t, output gate Γ_o, and final output value h_t. The unit remembers values over arbitrary time intervals, and the gates select the valuable information to pass through the LSTM. FIG. 2 depicts the structure of a single-unit LSTM network.
In particular, the gradual information transition in the LSTM can be explained as follows. Let W and b denote the weights and biases in the neural network, with subscripts f, u, c, and o denoting the forget gate, update gate, candidate cell, and output gate, respectively. The first step of the LSTM is to determine which information in the cell state should be discarded, which is accomplished by the "forget gate". The forget gate Γ_f consists of a sigmoid neuron layer and a pointwise multiplication. The sigmoid function acts on the previous output h_{t-1} and the current input x_t to generate a vector. Since the sigmoid function (denoted σ) has an output range between 0 and 1, the "forget gate" regulates the flow of information that should be removed from the cell according to the following operation,
Γ_f = σ(W_f[h_{t-1}, x_t] + b_f), (3)
It must then be determined which information should be stored in the cell state, which involves the following two parts. First, the sigmoid function in Γ_u determines the values to be updated, and then a tanh layer creates the candidate vector c̃_t. These operations can be expressed by the following formulas:

Γ_u = σ(W_u[h_{t-1}, x_t] + b_u), (4)
c̃_t = tanh(W_c[h_{t-1}, x_t] + b_c), (5)
the new cell state c may then be updated according to the control gatetAs shown below
The final output h_t is based on the cell state c_t but is filtered by the Sigmoid (σ) function of the output gate Γ_o, which determines which part of the cell state to output in the following equation:

h_t = Γ_o ⊙ c_t (7)
With the aforementioned unit storage mechanism, the LSTM unit can deliberately forget its previously stored memory and add new information during information transmission.
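The gate equations above can be illustrated with a minimal scalar LSTM step in plain Python. This is a hedged sketch: real implementations operate on vectors with learned weight matrices, and the names (`lstm_step`, the `W`/`b` dict layout) are illustrative assumptions, not from the patent.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step over scalar input/state, following the gate equations above.
    W[g] = (weight on h_prev, weight on x_t) and b[g] for g in f, u, c, o.
    h_t = gamma_o * c_t mirrors formula (7) as the patent states it."""
    z = {g: W[g][0] * h_prev + W[g][1] * x_t + b[g] for g in "fuco"}
    gamma_f = sigmoid(z["f"])     # forget gate
    gamma_u = sigmoid(z["u"])     # update gate
    c_cand = math.tanh(z["c"])    # candidate state
    gamma_o = sigmoid(z["o"])     # output gate
    c_t = gamma_f * c_prev + gamma_u * c_cand  # cell state update
    h_t = gamma_o * c_t                        # formula (7)
    return h_t, c_t

# With all-zero weights every gate sigmoid is 0.5 and the candidate is 0,
# so the cell state simply halves at each step.
W0 = {g: (0.0, 0.0) for g in "fuco"}
b0 = {g: 0.0 for g in "fuco"}
h, c = lstm_step(1.0, 0.0, 2.0, W0, b0)  # c = 0.5*2.0 = 1.0, h = 0.5*1.0 = 0.5
```

The zero-weight case makes the gating behavior visible: the forget gate attenuates the previous cell state while the candidate contributes nothing.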
The present invention provides a method for predicting the remaining life of an engine, as shown in FIG. 3, which mainly comprises four parts: padding, shared representation, classification, and regression. Specifically,
the first partial fill is to perform the necessary data preprocessing on the raw multi-channel monitoring data, and typically the model built should be able to predict engines with various data lengths, however, the LSTM model defines a time series with inputs of fixed length. Therefore, in addition to basic preprocessing operations such as normalization, a padding layer is required to process sequence batches having a duration less than a predetermined sequence length. In this layer, the sequence is batch-padded to a fixed length with a mask value that is preset to be completely different from the actual sensor signal possible value. Thus, the filled batch sequence can be directly distinguished by the learned network model through masking techniques, i.e., processing the multi-sensor data into a fixed length L having S dimensionsA time window of which Qn=max(Tn-L+1,1)。
The second part is composed of a shared long short-term memory network layer consisting of multiple LSTM layers and fully-connected layers using ReLU (f(x) = max(0, x)) as the activation function. The long short-term memory network layer extracts temporal features and characterizes degradation patterns from the time series data, and the fully-connected layers combine all neurons and learn the latent nonlinear function between their input and output features.
The third part, classification, is failure mode discrimination. The classification subtask for each failure mode is designed as a fully-connected neural network; the ReLU function is used as the activation function of the intermediate layers, and a Softmax operation is applied to the output z_{n,t} of the last fully-connected layer. The Softmax activation function is:

p_{n,t}^k = e^{z_{n,t}^k} / Σ_{j=1}^K e^{z_{n,t}^j}

so the probability that instance n at time t belongs to failure mode k can be calculated by this formula.
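A minimal sketch of the Softmax operation over the K failure-mode logits (the function name is illustrative; the max-shift is a standard numerical-stability device not spelled out in the patent):

```python
import math

def softmax(logits):
    """Numerically stable softmax: subtracting the max leaves the result
    unchanged but avoids overflow in exp for large logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # probabilities over three failure modes
```

The outputs sum to one and preserve the ordering of the logits, so the largest logit maps to the most probable failure mode.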
The fourth part is regression. For the prediction task, since different failure modes lead to different degradation processes, a prerequisite for accurate prediction is to group the data into the corresponding failure classes and then model the degradation processes separately. Therefore, the shared representation layer before the classification layer is branched into multiple regression sub-network models based on the different failure modes, and each regression sub-network generates a residual life estimate ŷ_{n,t}^k. Finally, the estimates ŷ_{n,t}^k generated by the sub-networks for the different failure modes are integrated with the failure mode probabilities to improve prediction accuracy, and the final residual life prediction is output as ŷ_{n,t} = Σ_{k=1}^K p_{n,t}^k · ŷ_{n,t}^k.
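The probability-weighted integration of the per-failure-mode sub-network outputs can be sketched as follows (function and variable names are illustrative):

```python
def integrate_rul(probs, rul_estimates):
    """Fuse per-failure-mode RUL estimates, weighting each sub-network's
    output by the classifier's probability for that failure mode."""
    return sum(p * r for p, r in zip(probs, rul_estimates))

# Two failure modes: the classifier is 70% confident in mode 1, whose
# sub-network predicts 100 cycles; mode 2's sub-network predicts 40.
rul = integrate_rul([0.7, 0.3], [100.0, 40.0])  # 0.7*100 + 0.3*40 = 82.0
```

This soft fusion degrades gracefully when the classifier is uncertain, instead of hard-switching between sub-network outputs.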
The present invention uses joint learning (joint optimization) to train the model parameters. Let θ = [θ_c, θ_r] represent the model parameters, where θ_c denotes all parameters of the shared representation layer and the classification layer, and θ_r all parameters of the regression layers. For the remaining life prediction task, the root mean square error (RMSE) between the true remaining life τ_{n,t} and the estimate τ̂_{n,t} defines the loss of the prediction task, given by

L_r(θ) = sqrt( (1/M) Σ_{n,t} (τ_{n,t} − τ̂_{n,t})² )

where M is the number of training samples.
Since the failure mode diagnosis task is essentially a multi-class classification problem, the goal of this task is to approximate the category labels y_{n,t}^k with the estimated failure mode distribution p_{n,t}^k. A cross-entropy function is set as the optimization objective of the classification subtask, specifically

L_c(θ_c) = − Σ_{n,t} Σ_{k=1}^K y_{n,t}^k log p_{n,t}^k
Then the joint loss function of the two interrelated tasks is computed; the overall objective is to minimize the joint loss function

L(θ) = L_r(θ) + λ L_c(θ_c),

where λ is the weight used to balance the two losses.
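A hedged sketch of the joint loss on a toy batch, combining the RMSE of the RUL estimates with a λ-weighted cross entropy. The value λ = 0.1 is an assumed illustration; the patent leaves λ as a tunable weight.

```python
import math

def joint_loss(rul_true, rul_pred, labels_onehot, probs, lam=0.1):
    """Joint loss = RMSE(RUL) + lam * cross-entropy, for one sample's
    one-hot failure-mode label and predicted probabilities (illustrative)."""
    n = len(rul_true)
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(rul_true, rul_pred)) / n)
    # only the true class (y > 0) contributes to the cross entropy
    ce = -sum(y * math.log(p) for y, p in zip(labels_onehot, probs) if y > 0)
    return rmse + lam * ce

# Perfect RUL prediction and a confident, correct classifier give zero loss.
loss = joint_loss([50.0], [50.0], [1, 0], [1.0, 0.0])
```

Coupling the two losses in one objective is what lets the classification signal shape the shared representation used by the regression sub-networks.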
The invention employs a first-order gradient-based optimization algorithm for the objective function (the hyperparameter set below matches the Adam convention). This step calculates the partial derivative of the loss function

g_t = ∇_θ L(θ)

and maintains the moving averages

m_t = β₁ m_{t-1} + (1 − β₁) g_t
v_t = β₂ v_{t-1} + (1 − β₂) g_t ⊙ g_t

where g_t ⊙ g_t represents the elementwise square. The invention sets the hyperparameters during training as α = 0.001, β₁ = 0.9, β₂ = 0.999, and ε = 10⁻⁸.
From equations (15) - (16), it can be found that the training of the two tasks is jointly optimized. Unlike traditional multi-task learning (the loss function simply accumulates the loss for each task), the present invention directly couples the fault mode diagnosis and the remaining life prediction. Therefore, by optimizing the joint loss function, the accuracy of the failure mode diagnosis and the remaining life prediction can be further enhanced.
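The first-order gradient step can be sketched as a scalar Adam-style update. This is an assumption on the reconstruction: the patent does not name the optimizer, but the stated α, β₁, β₂, and ε follow the Adam convention.

```python
import math

def adam_step(theta, grad, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam-style update for a scalar parameter, using the hyperparameter
    values stated in the training description (illustrative sketch)."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment moving average
    v = beta2 * v + (1 - beta2) * grad * grad    # second moment: squared gradient
    m_hat = m / (1 - beta1 ** t)                 # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    theta = theta - alpha * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# On the very first step with gradient 1.0, the bias-corrected update moves
# the parameter by almost exactly -alpha.
theta, m, v = adam_step(0.0, 1.0, 0.0, 0.0, t=1)
```

The bias correction is what makes the very first steps well-scaled even though the moving averages start at zero.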
Although the present invention has been described with reference to the preferred embodiments, it is not intended to be limited thereto. Those skilled in the art can make numerous possible variations and modifications to the present invention, or modify equivalent embodiments, using the methods and techniques disclosed above, without departing from the scope of the present invention. Therefore, any simple modification, equivalent change and modification made to the above embodiments according to the technical essence of the present invention are still within the scope of the protection of the technical solution of the present invention, unless the contents of the technical solution of the present invention are departed.
Claims (5)
1. An engine residual life prediction method comprises the following steps:
(1) processing the multi-sensor data into time windows of fixed length L with S dimensions, where Q_n = max(T_n - L + 1, 1); wherein:
let N denote the total number of machines and T = (T_1, ..., T_N) the total life cycles of the N engines; the corresponding multi-sensor data are denoted X = {X_n^k}, where X_n^k is the multi-sensor data for machine n with failure mode k, of size T_n × S, given as X_n^k = [x_n^{k,1}, ..., x_n^{k,S}];
x_n^{k,s} is the s-th sensor time series for machine n with failure mode k, and x_{n,t}^{k,s} is the observation of the s-th sensor of engine n during observation period t;
(2) extracting temporal features of the multi-sensor data with a long short-term memory network layer and characterizing the degradation pattern from the time series data; designing the classification subtask for each failure mode as a fully-connected neural network, the fully-connected layers combining all neurons and learning the latent nonlinear function between their input and output features;
(3) the fully-connected layers use the ReLU function as the activation function of the intermediate layers, while a Softmax operation is applied to the output z_{n,t} of the last fully-connected layer; the Softmax activation function is:

p_{n,t}^k = e^{z_{n,t}^k} / Σ_{j=1}^K e^{z_{n,t}^j}

and the probability that instance n at time t belongs to failure mode k is calculated by this formula;
(4) branching the fully-connected layers into multiple regression sub-network models based on the different failure modes, each regression sub-network generating a residual life estimate ŷ_{n,t}^k for its target failure mode; the estimates are integrated with the probability outputs to improve the accuracy of the prediction result, and the final residual life prediction is output as:

ŷ_{n,t} = Σ_{k=1}^K p_{n,t}^k · ŷ_{n,t}^k
2. The method of predicting remaining engine life as set forth in claim 1, wherein said long short-term memory network comprises a cell state c_t, forget gate Γ_f, update gate Γ_u, and candidate state c̃_t; let W and b denote the weights and biases in the neural network, with subscripts f, u, c, and o denoting the forget gate, update gate, candidate cell, and output gate, respectively;
wherein

Γ_f = σ(W_f[h_{t-1}, x_t] + b_f) (formula 4)
Γ_u = σ(W_u[h_{t-1}, x_t] + b_u) (formula 5)
c̃_t = tanh(W_c[h_{t-1}, x_t] + b_c) (formula 6)

the cell state c_t is then updated according to the control gates, as follows:

c_t = Γ_f ⊙ c_{t-1} + Γ_u ⊙ c̃_t (formula 7)
3. The method of predicting remaining engine life according to claim 2, wherein the final output value h_t of the long short-term memory network is based on the cell state c_t, filtered through the Sigmoid (σ) function of the output gate Γ_o:

h_t = Γ_o ⊙ c_t (formula 8).
4. The engine residual life prediction method of claim 1, characterized in that the root mean square error (RMSE) between the true residual life τ_{n,t} and the estimate τ̂_{n,t} is used to define the loss of the prediction task, given by:

L_r(θ) = sqrt( (1/M) Σ_{n,t} (τ_{n,t} − τ̂_{n,t})² )
category labelProbability of approaching failure mode kSetting a cross entropy function aiming at the classification subtasks, specifically:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111042036.2A CN113821974B (en) | 2021-09-07 | 2021-09-07 | Engine residual life prediction method based on multiple fault modes |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113821974A true CN113821974A (en) | 2021-12-21 |
CN113821974B CN113821974B (en) | 2023-11-24 |
Family
ID=78921923
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111042036.2A Active CN113821974B (en) | 2021-09-07 | 2021-09-07 | Engine residual life prediction method based on multiple fault modes |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113821974B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109343505A (en) * | 2018-09-19 | 2019-02-15 | 太原科技大学 | Gear method for predicting residual useful life based on shot and long term memory network |
US20200184131A1 (en) * | 2018-06-27 | 2020-06-11 | Dalian University Of Technology | A method for prediction of key performance parameter of an aero-engine transition state acceleration process based on space reconstruction |
CN111476212A (en) * | 2020-05-18 | 2020-07-31 | 哈尔滨理工大学 | Motor fault detection system based on long-time and short-time memory method |
AU2020104133A4 (en) * | 2020-12-16 | 2021-03-04 | Anjanamma, Chappidi MRS | Expected conditional clustered regressive deep multilayer precepted neural learning for iot based cellular network traffic prediction with big data |
CN112580263A (en) * | 2020-12-24 | 2021-03-30 | 湖南工业大学 | Turbofan engine residual service life prediction method based on space-time feature fusion |
CN113158348A (en) * | 2021-05-21 | 2021-07-23 | 上海交通大学 | Aircraft engine residual life prediction method based on deep learning coupling modeling |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Lv et al. | Generative adversarial networks for parallel transportation systems | |
CN110415215B (en) | Intelligent detection method based on graph neural network | |
CN109255505B (en) | Short-term load prediction method of multi-model fusion neural network | |
Alaloul et al. | Data processing using artificial neural networks | |
CN111899510A (en) | Intelligent traffic system flow short-term prediction method and system based on divergent convolution and GAT | |
CN108596327B (en) | Seismic velocity spectrum artificial intelligence picking method based on deep learning | |
CN110678816B (en) | Method and control device for controlling a technical system | |
CN112763967B (en) | BiGRU-based intelligent electric meter metering module fault prediction and diagnosis method | |
Miao et al. | A novel real-time fault diagnosis method for planetary gearbox using transferable hidden layer | |
CN110837888A (en) | Traffic missing data completion method based on bidirectional cyclic neural network | |
CN114282443B (en) | Residual service life prediction method based on MLP-LSTM supervised joint model | |
CN113869563A (en) | Method for predicting remaining life of aviation turbofan engine based on fault feature migration | |
CN115758290A (en) | Fan gearbox high-speed shaft temperature trend early warning method based on LSTM | |
CN115712873A (en) | Photovoltaic grid-connected operation abnormity detection system and method based on data analysis and infrared image information fusion | |
CN111695607A (en) | Electronic equipment fault prediction method based on LSTM enhanced model | |
CN113705915A (en) | CNN-LSTM-ARIMA-based combined short-term power load prediction method | |
CN112734002A (en) | Service life prediction method based on data layer and model layer joint transfer learning | |
CN114548591A (en) | Time sequence data prediction method and system based on hybrid deep learning model and Stacking | |
CN112651519A (en) | Secondary equipment fault positioning method and system based on deep learning theory | |
Mazzi et al. | Lithium-ion battery state of health estimation using a hybrid model based on a convolutional neural network and bidirectional gated recurrent unit | |
CN113821974B (en) | Engine residual life prediction method based on multiple fault modes | |
CN116542701A (en) | Carbon price prediction method and system based on CNN-LSTM combination model | |
Song et al. | A novel framework for machine remaining useful life prediction based on time series analysis | |
CN112598186B (en) | Improved LSTM-MLP-based small generator fault prediction method | |
CN113988210A (en) | Method and device for restoring distorted data of structure monitoring sensor network and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |