CN113673774A - Aero-engine remaining life prediction method based on self-encoder and time sequence convolution network - Google Patents


Info

Publication number
CN113673774A
CN113673774A (application CN202110979276.9A)
Authority
CN
China
Prior art keywords
tcn
time sequence
training
encoder
data
Prior art date
Legal status
Pending
Application number
CN202110979276.9A
Other languages
Chinese (zh)
Inventor
刘晓锋
张一鸣
杨曼鑫
罗晨爽
高旭宏
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN202110979276.9A
Publication of CN113673774A
Status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q 10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06N 3/045: Combinations of networks
    • G06N 3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/08: Learning methods

Abstract

The invention belongs to the field of remaining-useful-life prediction for aero-engines, and specifically relates to a method for predicting the remaining useful life of an aero-engine based on an autoencoder and a temporal convolutional network. The method uses an autoencoder network for data dimensionality reduction, which improves the generalization ability of the model and reduces the risk of overfitting while maintaining prediction accuracy, thereby helping ensure stable aircraft operation, reduce maintenance costs, and protect life and property.

Description

Aero-engine remaining life prediction method based on self-encoder and time sequence convolution network
Technical Field
The invention belongs to the field of remaining-useful-life prediction for aircraft engines, and specifically relates to a method for predicting the remaining useful life of an aircraft engine based on a sparse autoencoder (SAE) and a temporal convolutional network (TCN).
Background
The turbofan engine, often called the heart of the aircraft, is one of its most important components, and the health state of the engine determines the operating state of the aircraft. Engine monitoring and maintenance are therefore essential to healthy, stable aircraft operation, and remaining useful life (RUL) prediction of the engine is an important part of that task. By predicting the remaining useful life of a turbofan engine, its health can be managed and a reasonable maintenance schedule adopted, reducing engine maintenance costs while ensuring reliable flight. Aero-engine life prediction methods fall into three broad classes: physics-based modeling, data-driven methods, and combinations of the two. Physics-based modeling relies mainly on expert knowledge of the turbofan engine; because the engine is structurally complex, accurate physical modeling is challenging, so data-driven methods have been widely studied. However, engine monitoring data is high-dimensional, which makes data-driven modeling difficult: the curse of dimensionality imposes high demands on computing performance, leads to long run times, and can cause overfitting.
Initially, researchers used continuous-time hidden Markov models (HMMs), based on uptime, time to onset of degradation, and operational failure, to predict the RUL of turbofan engines; however, the HMM is memoryless, so it cannot exploit longer-term historical information and performs poorly. Researchers have also extracted information from time series using recurrent neural network (RNN) models. The RNN has a recurrent structure in which the output at the current time step depends on the state at the previous one, but it likewise fails to use all historical information, because information from the distant past becomes increasingly blurred over the iterations. The long short-term memory (LSTM) model was then studied: it adds control gates to the RNN and transmits useful long-term information through the cooperation of its internal input, output, and forget gates. The LSTM can in theory realize long-term memory, alleviating the underuse of long-term history inherent to the basic RNN and HMM. However, as a variant of the RNN, the LSTM still suffers from vanishing gradients and slow training, so training an LSTM on long-span time-series data with a large data volume is inefficient.
Disclosure of Invention
In view of these problems, the invention provides an aero-engine remaining-life prediction model based on a sparse autoencoder and a temporal convolutional network, which improves the generalization ability of the model and reduces the risk of overfitting while maintaining prediction accuracy, thereby helping ensure stable aircraft operation, reduce maintenance costs, and protect life and property.
The technical scheme of the invention is as follows:
an aeroengine remaining life prediction method based on a sparse autoencoder and a time sequence convolution network comprises the following steps:
s1, acquiring sensor monitoring data of each part of the aircraft engine, acquiring an original data set, dividing the original data set into a training set and a testing set, adding a residual life label to each piece of data of the training set, and establishing an initial data set;
s2, carrying out standardization preprocessing on the initial data set;
s3, designing a stack type sparse self-encoder SAE, taking the data set preprocessed in the step S2 as input, and performing feature extraction to obtain a low-dimensional data set;
s4, constructing a time sequence convolution network TCN, wherein the time sequence convolution network TCN comprises an expansion convolution layer, a cause-and-effect convolution layer, a ReLU function activation layer, a Dropout layer, a plurality of stacked TCN residual blocks and a full connection layer; using the training set in the low-dimensional data set obtained in the step S3 as an input, and comparing the output residual life prediction value with the residual life label of the training set to finish the training of the time sequence convolution network TCN;
and S5, inputting the test set in the low-dimensional data set obtained in step S3 into the trained TCN, outputting the predicted remaining life, performing inverse normalization on the prediction, comparing it with the true values in the test set, and introducing evaluation indexes to evaluate the training effect of the TCN.
Preferably, in step S1, the raw data set includes a degradation simulation data set of each component of the engine, wherein the training set includes complete life cycle data of each component, and the test set includes life cycle data before each component randomly stops operating and remaining life labels when each component stops operating.
Preferably, the remaining-life label is a piecewise linear function: the remaining life is set to a constant in the initial operating stage and, in the later stage, decreases monotonically to 0 as the operating cycle increases, at which point the engine life cycle is considered ended.
Preferably, in step S3, the stacked sparse self-encoder SAE includes a multi-layer sparse self-encoder, and the output of the self-encoder of the previous layer is the input of the self-encoder of the next layer.
Preferably, in step S4, grid search and cross validation are used to optimize the TCN parameters; the mean squared error (MSE) is used as the loss function to measure the error between the remaining-life labels of the training set and the predictions output by the TCN; an Adam module is imported as the optimizer to train and tune the TCN; and dropout and early-stopping strategies are employed to improve the generalization ability of the TCN.
Preferably, in step S5, a root mean square error and a score function are introduced as the evaluation index.
Compared with the prior art, the invention has the following beneficial effects:
1) the invention uses the autoencoder network to reduce the dimensionality of the data, which improves the generalization ability of the life-prediction model;
2) the turbofan-engine remaining-life prediction SAE-TCN model of the invention combines the advantages of the sparse autoencoder SAE and the temporal convolutional network TCN (the TCN is a neural network for sequence-modeling tasks that has shown strong performance across different tasks and can capture sequence information from local to global), so the prediction results are more accurate and effective.
Drawings
FIG. 1 is a flowchart of a method for predicting the residual life of an aircraft engine based on a sparse self-encoder and a time convolution network according to an embodiment of the invention;
FIG. 2 is the detailed indexes and units of the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) data set according to an embodiment of the present invention;
FIG. 3 is a schematic network structure of a stacked sparse self-encoder SAE according to an embodiment of the present invention;
FIG. 4 is a diagram of a causal convolution basic structure of a time series convolution network TCN according to an embodiment of the present invention;
FIG. 5 is a diagram of the dilated convolution basic structure of the time series convolution network TCN according to an embodiment of the present invention;
FIG. 6 is a diagram of a residual linking principle of an embodiment of the present invention;
FIG. 7 is a schematic diagram of a residual life prediction SAE-TCN model structure according to an embodiment of the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings and examples, it being understood that the examples described below are intended to facilitate the understanding of the invention, and are not intended to limit it in any way.
As shown in fig. 1, the method for predicting the remaining life of an aircraft engine based on a sparse self-encoder SAE and a time-series convolutional network TCN provided in this embodiment includes the following steps:
s1, acquiring sensor monitoring data of the whole life cycle of each component of the aircraft engine from normal operation to degradation, obtaining an original data set, and dividing the original data set into a training set and a testing set, wherein the training set is complete life cycle data, and the testing set is life cycle data before the aircraft engine stops operating randomly and a residual life label when the aircraft engine stops operating; and then adding training set data labels, and marking each data of the training set with a residual life label.
The data set used in this embodiment is the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) data set, NASA's turbofan-engine degradation simulation data set. C-MAPSS simulates the degradation process of the aero-engine under different operating settings and outputs condition-monitoring data comprising the readings of each sensor and the operating-condition indexes over the whole degradation process. C-MAPSS contains four sub-datasets under different simulation conditions and failure modes; this embodiment adopts the two single-condition data sets FD001 and FD003 to test the engine remaining-life prediction model established below. Data set FD001 records data from normal operation to complete failure for 100 engines: its training set includes condition-monitoring information for 20630 operating cycles, and its test set includes 13095 records of characteristic parameters together with remaining-life labels for the 100 engines. The training set of FD003 includes 24720 records, and its test set includes 13095 records of characteristic parameters with remaining-life labels for 100 engines. The data sets selected in this embodiment all include 21 sensor readings and 3 operating-condition indexes for RUL prediction; detailed information on the data sets is shown in fig. 2.
In the normal operating phase, the failure rate of the turbofan engine is low and degradation is negligible; in the later phase, the failure rate increases rapidly over time. Therefore, the remaining-life label added to the training set is set as a piecewise linear function: when the remaining life of an engine is greater than or equal to the RUL threshold, the label is set to a constant; when it is smaller than the threshold, the label decreases monotonically as the operating cycle increases, and when it reaches 0 the engine life cycle is considered ended. In this embodiment, the RUL threshold is set to 120, producing the initial data set D.
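The piecewise-linear labeling described above can be sketched as follows (a minimal illustration; the function name and the example run length of 200 cycles are hypothetical, while the threshold of 120 matches this embodiment):

```python
def piecewise_rul(total_cycles, threshold=120):
    """Piecewise-linear RUL labels for one engine run.

    At cycle t the true remaining life is total_cycles - t; the label is
    clipped at `threshold`, reflecting negligible early-life degradation.
    """
    return [min(total_cycles - t, threshold) for t in range(1, total_cycles + 1)]

# Hypothetical engine that fails after 200 cycles.
labels = piecewise_rul(200, threshold=120)
```

The label stays constant at 120 until 120 cycles remain, then falls linearly to 0 at failure.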
S2, the initial data set D is standardized to obtain a new data set D'.
Because the features of the data set differ in magnitude and have large variances, both the accuracy and the convergence speed of subsequent models would be adversely affected; the data therefore need to be standardized. The standardization formula of this embodiment is as follows:
x' = (x - mean(x)) / σ
wherein x' is the standardized data, x is the raw data, mean(x) is the mean, and σ is the standard deviation; the resulting data have mean 0 and standard deviation 1. This embodiment uses the StandardScaler operation in Scikit-learn to perform standardization, eliminating variance effects and obtaining a new data set D'.
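The standardization step can be reproduced without Scikit-learn; a minimal NumPy sketch of the same z-score formula (the guard for constant sensor channels is an added assumption, mirroring StandardScaler's handling of zero-variance columns):

```python
import numpy as np

def standardize(X):
    """Column-wise z-score: x' = (x - mean(x)) / sigma."""
    mean = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0  # constant channels (some C-MAPSS sensors) stay at 0
    return (X - mean) / sigma

# Two hypothetical sensor channels; the second is constant.
X = np.array([[1.0, 10.0], [2.0, 10.0], [3.0, 10.0]])
Xn = standardize(X)
```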
And S3, designing a stack type sparse self-encoder SAE, taking a training set in the new data set D' as input, and performing feature extraction to obtain a low-dimensional data set.
The C-MAPSS data set selected in this embodiment contains 24-dimensional input data; the dimensionality of the monitored data is high, and some values are constant. Before the data are input into the model, features are extracted by reducing the dimensionality of the data, and the data that clearly influence the remaining useful life are selected as input features; this process is called data dimensionality reduction. Common feature-extraction methods include principal component analysis (PCA) and LASSO; the invention designs a stacked sparse autoencoder SAE for feature extraction.
The autoencoder is a multilayer unsupervised neural network that can reduce original data to a low-dimensional output without losing information; it comprises an input layer, a hidden layer, and an output layer, with the layers trained in a fully connected manner, as shown in fig. 3. The autoencoder consists of an encoding network and a decoding network: the encoding network encodes the input data x' to obtain a low-dimensional feature z = f(wx' + b), where w is the weight parameter of the hidden layer and b its bias parameter. The low-dimensional feature z is then decoded into y' by the decoding network. Training between the layers of the autoencoder aims to minimize the difference between the output and input data to obtain a reconstruction of the data; the cost function during training is measured by the following formula:
J(y) = (1/n) Σ_{i=1..n} ||y'_i - x'_i||²

wherein J(y) is the cost function and n is the number of samples.
The self-encoding neural network is trained by back-propagation; the trained autoencoder can encode any input according to the parameters obtained during training to yield the hidden-layer output, i.e., the feature vector extracted from the original data. By converting the input into low-dimensional data, the autoencoder extracts feature vectors without losing key information and achieves data reconstruction. The error between the original and the reconstructed data can be measured with the mean squared error (MSE) as the loss function:

MSE = (1/m) Σ_{i=1..m} (y_i - ŷ_i)²

wherein m is the number of engine samples, y_i is the target value, and ŷ_i is the predicted value.
The sparse autoencoder adds a sparsity limit to the autoencoder, imposing an extra constraint on the loss function that restricts how the encoder reconstructs the input, so that the network learns the sparse features of the input data better. The sparsity constraint plays an important role in optimization: it keeps most neurons in a suppressed state, which is closer to how neurons in the human brain behave.
Here sigmoid is adopted as the activation function of the hidden layer: an output of 1 means the neuron is fully active, and an output of 0 means it is suppressed.
The KL divergence is introduced as a regularization term:

KL(ρ || ρ̂_j) = ρ log(ρ / ρ̂_j) + (1 - ρ) log((1 - ρ) / (1 - ρ̂_j))

wherein ρ is the sparsity parameter and ρ̂_j is the average activation of hidden neuron j:

ρ̂_j = (1/n) Σ_{i=1..n} a_j(x_i)

where a_j(x_i) is the activation of hidden node j for the i-th sample. The larger the difference between ρ and ρ̂_j, the larger the KL divergence.
The stacked sparse autoencoder SAE designed by the invention consists of multiple layers of sparse autoencoders, i.e., the output of each autoencoder layer is the input of the next, and more representative feature values are obtained by layer-wise feature extraction. In this embodiment, a stacked sparse autoencoder processes the 24-dimensional data in the data set: the encoding network first compresses 24 dimensions to 20 and then to 11 at the hidden layer; the decoding network then reconstructs the data, expanding from 11 dimensions back to 20 and finally to 24. For training the stacked sparse autoencoder SAE, an Adam (adaptive moment estimation) module is introduced as the optimizer, using momentum to adjust the speed of gradient descent.
In particular, with 11 neurons in the hidden layer, the stacked sparse autoencoder SAE achieves the best encoding effect and the smallest loss.
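The 24 -> 20 -> 11 -> 20 -> 24 shape flow of the stacked SAE can be sketched with untrained, randomly initialized layers (a structural illustration only; the weights here are not trained, and sigmoid is used throughout as in the hidden-layer description above):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer(in_dim, out_dim):
    # Random, untrained parameters; real training fits these by backprop.
    return rng.normal(scale=0.1, size=(in_dim, out_dim)), np.zeros(out_dim)

dims = [24, 20, 11, 20, 24]           # encoder 24->20->11, decoder 11->20->24
params = [layer(a, b) for a, b in zip(dims[:-1], dims[1:])]

def encode(x):
    for w, b in params[:2]:           # first two layers form the encoder
        x = sigmoid(x @ w + b)
    return x

def forward(x):
    for w, b in params:               # full pass reconstructs 24 dimensions
        x = sigmoid(x @ w + b)
    return x

x = rng.normal(size=(5, 24))          # five hypothetical 24-dim samples
z = encode(x)                         # 11-dim features fed to the TCN
y = forward(x)                        # reconstruction
```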
S4: and constructing a time sequence neural network (TCN) which comprises an expansion convolutional layer, a causal convolutional layer, a ReLU function activation layer, a Dropout layer, a plurality of TCN residual blocks and a full connection layer.
The time-series neural network TCN is a variant of a convolutional neural network, can be used for processing a sequence modeling task, and has the advantages that: the performance on different tasks is better than that of a general recursive structure, such as RNN, GRU, LSTM and the like; the convolution architecture is causal (i.e., causal convolution), and thus the information is passed in its entirety; a combination of an enhanced residual network and a dilated convolution is employed for capturing long-term correlation properties between features.
The principle of causal convolution is shown in fig. 4: causal convolution restricts the sliding window so that only information from previous time steps is used for prediction; it is a unidirectional rather than bidirectional structure, which ensures that future information does not interfere with the result while past information is fully preserved. The principle of dilated convolution is shown in fig. 5: a dilation factor is introduced on top of the convolution so that the receptive field grows without a corresponding growth in computational cost, while each hidden layer keeps the same length as the input sequence. The dilation rate d controls interval sampling: at the bottom layer d = 1, i.e., every input point is sampled; d = 2 means every second point is sampled as input to the next layer.
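A minimal sketch of causal dilated convolution on a 1-D sequence (a hypothetical helper, not the Keras layer used in this embodiment): left-padding keeps the output aligned with the input, and output t sees only x[t], x[t-d], x[t-2d], and so on.

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation=1):
    """1-D causal convolution; kernel[0] weights the most recent sample."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])   # left-pad: no future leakage
    return np.array([
        sum(kernel[j] * xp[pad + t - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

# Sum of x[t] and x[t-2]: kernel (1, 1) with dilation 2.
y = causal_dilated_conv(np.arange(6, dtype=float), np.array([1.0, 1.0]), dilation=2)
# y == [0, 1, 2, 4, 6, 8]
```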
As the number of network layers grows, gradients vanish easily and a deep network may degrade to perform worse than a shallow one. If a redundant network structure can realize an identity mapping, with input and output exactly equal, the deep network is guaranteed not to degrade and an optimized structure can be obtained. Identity mapping can be achieved through residual connections, as shown in fig. 6. A residual connection sums the input features x with their nonlinear transformation F(x). One residual block can be expressed as x_{l+1} = x_l + F(x_l, W_l), wherein F(x_l, W_l) is the residual term, x_{l+1} is the feature of layer l+1, x_l is the feature of layer l, and W_l the corresponding weights. When the residual term approaches 0, the network can be regarded as an identity mapping. Adding residual blocks therefore makes the deep network easier to optimize, reducing the overfitting it may suffer.
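The identity-mapping argument above can be checked directly with a toy residual block (a hedged sketch with a hypothetical two-layer F, not the TCN block itself): when the residual term F(x) is zero, the block output equals its input.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # x_{l+1} = x_l + F(x_l, W_l), with F a small two-layer transform here.
    return x + relu(x @ w1) @ w2

d = 4
x = np.ones((2, d))
# Zero weights make F(x) = 0, so the block is exactly the identity mapping.
y_identity = residual_block(x, np.zeros((d, d)), np.zeros((d, d)))
```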
In this embodiment, a TCN composed of 6 residual blocks is used as the processor: the six residual blocks are stacked, and the remaining-life prediction is finally obtained through a fully connected layer containing 64 neurons, which outputs one-dimensional data.
S5: and inputting the low-dimensional data set of the stacked sparse self-encoder into the constructed time sequence neural network TCN for training as a characteristic to obtain the weight and the bias parameter among the neurons. The specific process is as follows:
grid search and cross validation are used for time series neural network TCN parameter optimization. The super-parameters related to the TCN of the time sequence neural network constructed by the invention are almost not coupled, and the time step (time _ steps) is set to be 10, 20 and 30; a rate of branching _ rate of 0.15, 0.2, 0.3, 0.4, 0.5; the number of filters is 8, 16, 32, 64 and 128 layers; the convolution kernel size is 2, 3, 4, 5. Through training and tuning, the optimal parameter of the time-series neural network TCN finally obtained in this embodiment is time _ steps ═ 10, that is, 20 cycles of data are used to predict the remaining life at a future time, the rate of dropping is 0.4, the size of the convolution kernel is 2, and the number of filters is 64, at this time, the optimal time-series neural network TCN is obtained, and the detailed parameters and structure thereof are shown in table 1 below:
TABLE 1 Temporal convolutional network TCN architecture

Layer    Output shape
Input    (None, 20, 11)
Block1   (None, 20, 64)
Block2   (None, 20, 64)
Block3   (None, 20, 64)
Block4   (None, 20, 64)
Block5   (None, 20, 64)
Block6   (None, 20, 64)
Lambda   (None, 64)
Flatten  (None, 1)
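The grid search over the hyperparameter ranges listed above can be enumerated as follows (a sketch of candidate generation only; model fitting and cross-validation scoring are omitted):

```python
from itertools import product

# Search ranges as given in this embodiment.
grid = {
    "time_steps": [10, 20, 30],
    "dropout_rate": [0.15, 0.2, 0.3, 0.4, 0.5],
    "filters": [8, 16, 32, 64, 128],
    "kernel_size": [2, 3, 4, 5],
}

def grid_candidates(grid):
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

candidates = list(grid_candidates(grid))  # 3 * 5 * 5 * 4 = 300 combinations
```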
As before, the mean squared error MSE is used as the loss function to measure the error between the TCN's target and predicted values; an Adam module is imported as the optimizer to train and tune the TCN; dropout and early stopping are adopted to improve the TCN's generalization ability; and suitable batch_size and epochs are selected for training, with batch_size set to 20 and epochs to 100 in this embodiment.
This embodiment uses the Anaconda integrated development environment and the deep-learning frameworks Keras 2.3.1 and TensorFlow 2.0 to train the stacked sparse autoencoder SAE and the TCN. During training, the patience is set to 5 and the lower limit of the learning rate to 0.00001, i.e., when 5 epochs pass without improvement in model performance, the learning rate is reduced to obtain a better training effect.
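The schedule just described (reduce the learning rate when 5 epochs pass without improvement, floored at 0.00001) matches the behaviour of a plateau-based callback; its core logic can be sketched in plain Python (the reduction factor of 0.5 is an assumption, as the embodiment does not state it):

```python
def reduce_lr_on_plateau(losses, lr=0.001, factor=0.5, patience=5, min_lr=1e-5):
    """Reduce lr whenever `patience` epochs pass with no loss improvement."""
    best = float("inf")
    wait = 0
    for loss in losses:
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
    return lr

# One improvement, then ten stagnant epochs: exactly two reductions.
final_lr = reduce_lr_on_plateau([1.0] + [0.9] * 11, lr=0.001, patience=5)
```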
S6: inputting a test set in a low-dimensional data set output by a stacked sparse self-encoder into a trained time sequence neural network TCN for evaluation; the specific process is as follows:
inputting a test set in a low-dimensional data set output by a stacked sparse self-encoder into a trained time sequence neural network TCN, and using weights and bias parameters among neurons to obtain a predicted value y of the test set.
Then, the predictions after inverse normalization are compared with the true remaining-life values of the test set, and the root mean square error and a score function are introduced as evaluation indexes to evaluate the training effect of the TCN. The root mean square error is computed as:

RMSE = sqrt( (1/m) Σ_{i=1..m} (ŷ_i - y_i)² )
The Score function is an index specified for C-MAPSS: because overestimating the remaining engine life causes higher losses, overestimated RUL must be penalized more heavily in order to reduce airlines' economic losses and maintenance costs. With d_i = ŷ_i - y_i, the Score function is:

Score = Σ_{i=1..m} s_i,  where s_i = exp(-d_i/13) - 1 if d_i < 0, and s_i = exp(d_i/10) - 1 if d_i ≥ 0.
the lower the root mean square error RMSE and the score value of the asymmetric score function, the better the performance of the time series neural network TCN prediction method is represented.
To evaluate the performance of the model that performs data dimensionality reduction with the stacked sparse autoencoder and prediction with the TCN (the SAE-TCN model, shown in FIG. 7), its prediction indexes are compared with those of prediction models including the long short-term memory network (LSTM), the gated recurrent unit (GRU), the recurrent neural network (RNN), the convolutional neural network (CNN), and the multilayer perceptron (MLP).
First, with the multilayer perceptron MLP as the baseline model, grid search is used for parameter tuning; its network structure is set to (10, 5, 1), the number of units of the LSTM, GRU, and RNN networks is set to 50, and the dropout rate is set to 0.5. Each model is trained with the training set under these parameters and verified on the test set; the resulting indexes on the training and sample sets are given in Table 2:
TABLE 2 prediction index of each model on different datasets
(The values of Table 2 appear as an image in the original publication and are not reproduced here.)
Analyzing the results in Table 2, in terms of both RMSE and Score, the SAE-TCN model of the invention has the smallest prediction error and the best prediction performance compared with conventional deep-learning methods.
In conclusion, the SAE-TCN model, which performs data dimensionality reduction with the stacked sparse autoencoder and prediction with the TCN, further improves RUL prediction performance and shows good applicability, strong generalization ability, and high accuracy on long time-series data. The invention is therefore of real significance for the health management of turbofan engines: by adopting a reasonable maintenance schedule based on its life predictions, an airline can reduce engine maintenance costs while ensuring reliable flight, giving the method strong application value.
It will be apparent to those skilled in the art that various modifications and improvements can be made to the embodiments of the present invention without departing from the inventive concept thereof, and these modifications and improvements are intended to be within the scope of the invention.

Claims (6)

1. The method for predicting the remaining life of the aero-engine based on the sparse autoencoder and the time sequence convolution network is characterized by comprising the following steps of:
s1, acquiring sensor monitoring data of each part of the aircraft engine, acquiring an original data set, dividing the original data set into a training set and a testing set, adding a residual life label to each piece of data of the training set, and establishing an initial data set;
s2, carrying out standardization preprocessing on the initial data set;
s3, designing a stack type sparse self-encoder SAE, taking the data set preprocessed in the step S2 as input, and performing feature extraction to obtain a low-dimensional data set;
s4, constructing a time sequence convolution network TCN, wherein the time sequence convolution network TCN comprises an expansion convolution layer, a cause-and-effect convolution layer, a ReLU function activation layer, a Dropout layer, a plurality of stacked TCN residual blocks and a full connection layer; using the training set in the low-dimensional data set obtained in the step S3 as an input, and comparing the output residual life prediction value with the residual life label of the training set to finish the training of the time sequence convolution network TCN;
and S5, inputting the test set in the low-dimensional data set obtained in the step S3 into the trained time sequence convolution network TCN, outputting the predicted value of the residual life, performing inverse normalization processing on the predicted value, comparing the predicted value with the real value in the test set, and simultaneously introducing an evaluation index to evaluate the training effect of the time sequence convolution network TCN.
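As a concrete illustration of the dilated causal convolution named in step S4, the following sketch (not part of the claim; the function name, shapes, and single-channel simplification are illustrative assumptions) computes a convolution in which each output depends only on the current and past inputs, spaced by the dilation factor:

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """Single-channel dilated causal convolution: output[t] depends only on
    x[t], x[t - dilation], x[t - 2*dilation], ... (never on future samples)."""
    k = len(w)
    pad = (k - 1) * dilation                      # left-pad with zeros only
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])
```

With `w = [1, 1]` and `dilation=2`, the output at time t is x[t] + x[t-2]; the earliest outputs see only zero padding, which is the causality property the TCN residual blocks rely on.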
2. The method according to claim 1, wherein in step S1 the original data set comprises simulation data of engine component degradation; the training set comprises complete life-cycle data of the components, and the test set comprises life-cycle data up to a random stopping point, together with the remaining life label of each component at that stopping point.
3. The method according to claim 1, wherein the remaining life label is a piecewise linear function: the remaining life is set to a constant value in the early stage of operation, and in the later stage decreases monotonically to 0 as the operating cycle increases, 0 being regarded as the end of the engine life cycle.
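The piecewise linear label of claim 3 can be sketched as follows; the cap value of 125 cycles is an illustrative assumption commonly used with C-MAPSS-style turbofan data, not a value stated in the claim:

```python
import numpy as np

def piecewise_rul_labels(total_cycles, early_rul=125):
    """Piecewise linear RUL labels: constant (capped) during early operation,
    then decreasing linearly to 0 at the end of the life cycle."""
    linear = np.arange(total_cycles - 1, -1, -1)   # total-1, ..., 1, 0
    return np.minimum(linear, early_rul)           # flat early, linear late
```

For an engine with 200 recorded cycles, the first labels are all 125 and the last label is 0.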
4. The method according to claim 1, wherein in step S3 the stacked sparse autoencoder SAE comprises multiple layers of sparse autoencoders, the output of each layer's autoencoder being the input of the next layer's autoencoder.
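The layer-chaining described in claim 4 amounts to the forward pass below; the sigmoid activation and the layer sizes are illustrative assumptions (the claim does not fix an activation function), and the weights are taken as already trained:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stacked_encode(x, layers):
    """Forward pass through a stack of trained encoder layers: the code
    produced by each layer becomes the input of the next layer."""
    h = np.asarray(x, dtype=float)
    for W, b in layers:          # W: (d_in, d_out), b: (d_out,)
        h = sigmoid(h @ W + b)   # each layer yields a lower-dimensional code
    return h
```

Chaining, say, a 4-to-3 layer and a 3-to-2 layer maps each 4-dimensional sample to a 2-dimensional code, which is the low-dimensional data set passed on to the TCN in step S4.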
5. The method according to claim 1, wherein in step S4, grid search and cross validation are used for parameter optimization of the time sequence convolution network TCN; the mean square error MSE is used as the loss function to measure the error between the remaining life labels of the training set and the prediction values output by the time sequence convolution network TCN; an Adam optimizer is introduced for training and parameter optimization of the time sequence convolution network TCN; and regularization and early stopping strategies are employed to improve the generalization capability of the TCN.
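The early-stopping strategy of claim 5 can be outlined as a generic training wrapper; the patience value and the callback names are illustrative assumptions, and the actual per-epoch work (Adam updates on the MSE loss) is abstracted behind `train_epoch`:

```python
def train_with_early_stopping(train_epoch, val_loss, max_epochs=100, patience=10):
    """Stop training once the validation loss has not improved for
    `patience` consecutive epochs; return the best loss and its epoch."""
    best_loss, best_epoch, wait = float("inf"), -1, 0
    for epoch in range(max_epochs):
        train_epoch(epoch)              # e.g. one pass of Adam updates on MSE
        loss = val_loss()
        if loss < best_loss:
            best_loss, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:        # no improvement for `patience` epochs
                break
    return best_loss, best_epoch
```

Grid search over TCN hyperparameters would call this wrapper once per candidate configuration and keep the configuration with the best validation loss.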
6. The method according to claim 1, wherein in step S5 a root mean square error and a score function are introduced as the evaluation indexes.
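Claim 6 does not give the exact form of the score function; the sketch below therefore assumes the asymmetric scoring function commonly used with the NASA C-MAPSS turbofan benchmark, which penalizes late predictions (overestimated remaining life) more heavily than early ones:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between true and predicted remaining life."""
    d = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

def score(y_true, y_pred):
    """Assumed C-MAPSS-style asymmetric score: with error d = pred - true,
    late predictions (d > 0) contribute exp(d/10) - 1, early ones (d < 0)
    contribute exp(-d/13) - 1, so lateness is penalized more."""
    d = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    return float(np.sum(np.where(d < 0, np.exp(-d / 13.0), np.exp(d / 10.0)) - 1.0))
```

A perfect prediction gives a score of 0, and the same absolute error costs more when the model predicts a longer remaining life than the engine actually has.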
CN202110979276.9A 2021-08-25 2021-08-25 Aero-engine remaining life prediction method based on self-encoder and time sequence convolution network Pending CN113673774A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110979276.9A CN113673774A (en) 2021-08-25 2021-08-25 Aero-engine remaining life prediction method based on self-encoder and time sequence convolution network


Publications (1)

Publication Number Publication Date
CN113673774A true CN113673774A (en) 2021-11-19

Family

ID=78545919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110979276.9A Pending CN113673774A (en) 2021-08-25 2021-08-25 Aero-engine remaining life prediction method based on self-encoder and time sequence convolution network

Country Status (1)

Country Link
CN (1) CN113673774A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200184131A1 (en) * 2018-06-27 2020-06-11 Dalian University Of Technology A method for prediction of key performance parameter of an aero-engine transition state acceleration process based on space reconstruction
CN111289250A (en) * 2020-02-24 2020-06-16 湖南大学 Method for predicting residual service life of rolling bearing of servo motor
CN111340282A (en) * 2020-02-21 2020-06-26 山东大学 DA-TCN-based method and system for estimating residual service life of equipment
CN112560252A (en) * 2020-12-07 2021-03-26 厦门大学 Prediction method for residual life of aircraft engine
CN113158445A (en) * 2021-04-06 2021-07-23 中国人民解放军战略支援部队航天工程大学 Prediction algorithm for residual service life of aero-engine with convolution memory residual self-attention mechanism


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YIMING ZHANG et al.: "Remaining Useful Life Prediction for Turbofan Engine using SAE-TCN Model", Proceedings of the 40th Chinese Control Conference (15), pages 8280-8285 *
WANG Yutao et al.: "Python Big Data Analysis and Machine Learning: Practical Business Cases", 30 June 2020, China Machine Press, pages 138-140 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114048685A (en) * 2021-11-25 2022-02-15 成都理工大学 Time convolution network electromagnetic response value prediction method based on grey correlation analysis
CN115034312A (en) * 2022-06-14 2022-09-09 燕山大学 Fault diagnosis method for dual neural network model satellite power supply system
CN115034312B (en) * 2022-06-14 2023-01-06 燕山大学 Fault diagnosis method for dual neural network model satellite power system

Similar Documents

Publication Publication Date Title
Li et al. A directed acyclic graph network combined with CNN and LSTM for remaining useful life prediction
CN110321603B (en) Depth calculation model for gas path fault diagnosis of aircraft engine
CN111340282B (en) DA-TCN-based method and system for estimating residual service life of equipment
CN110702418A (en) Aircraft engine fault prediction method
CN113722985B (en) Method and system for evaluating health state and predicting residual life of aero-engine
Ellefsen et al. Validation of data-driven labeling approaches using a novel deep network structure for remaining useful life predictions
CN116757534B (en) Intelligent refrigerator reliability analysis method based on neural training network
CN113673774A (en) Aero-engine remaining life prediction method based on self-encoder and time sequence convolution network
Xu et al. A novel dual-stream self-attention neural network for remaining useful life estimation of mechanical systems
CN114297918A (en) Aero-engine residual life prediction method based on full-attention depth network and dynamic ensemble learning
CN114266201B (en) Self-attention elevator trapping prediction method based on deep learning
CN114218872A (en) Method for predicting remaining service life based on DBN-LSTM semi-supervised joint model
Chen et al. Real-time bearing remaining useful life estimation based on the frozen convolutional and activated memory neural network
CN115017826A (en) Method for predicting residual service life of equipment
CN111881299A (en) Outlier event detection and identification method based on duplicate neural network
Xu et al. Global attention mechanism based deep learning for remaining useful life prediction of aero-engine
Deng et al. A remaining useful life prediction method with automatic feature extraction for aircraft engines
CN114169091A (en) Method for establishing prediction model of residual life of engineering mechanical part and prediction method
Li et al. Remaining useful life prediction of aero-engine based on PCA-LSTM
CN115048873B (en) Residual service life prediction system for aircraft engine
Lang et al. Data augmentation for fault prediction of aircraft engine with generative adversarial networks
CN115982988A (en) PCA-Transformer-based device remaining service life prediction method
Li et al. Gated recurrent unit networks for remaining useful life prediction
Wenqiang et al. Remaining useful life prediction for mechanical equipment based on temporal convolutional network
Zhang et al. Remaining Useful Life Prediction for Turbofan Engine using SAE-TCN Model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination