CN112613227B - Model for predicting remaining service life of aero-engine based on hybrid machine learning - Google Patents

Model for predicting remaining service life of aero-engine based on hybrid machine learning

Info

Publication number
CN112613227B
CN112613227B (application CN202011468528.3A)
Authority
CN
China
Prior art keywords
neuron
neurons
model
layer
weight
Prior art date
Legal status
Active
Application number
CN202011468528.3A
Other languages
Chinese (zh)
Other versions
CN112613227A (en)
Inventor
徐甜甜
韩光洁
林川
田晨
史国华
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN202011468528.3A priority Critical patent/CN112613227B/en
Publication of CN112613227A publication Critical patent/CN112613227A/en
Application granted granted Critical
Publication of CN112613227B publication Critical patent/CN112613227B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/18Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/04Ageing analysis or optimisation against ageing

Abstract

The invention belongs to the technical field of aero-engine fault prediction and health management, and discloses a model for predicting the remaining service life of an aero-engine based on hybrid machine learning, specifically a hybrid machine learning model, SGBRT, for timely prediction of the remaining useful life of an aero-engine. The model combines a self-organizing map network with a gradient boosting regression tree algorithm and predicts the remaining useful life of the aircraft engine in two steps: first, the model uses the self-organizing map network to cluster the original sample set into clusters; then a gradient boosting regression tree is constructed for each cluster to predict the remaining useful life of the aircraft engine. The method not only predicts the remaining useful life of the aircraft engine more accurately, but also reveals the intrinsic characteristics of the aero-engine degradation data.

Description

Model for predicting remaining service life of aero-engine based on hybrid machine learning
Technical Field
The invention belongs to the technical field of aero-engine fault prediction and health management, and particularly relates to a model for predicting the remaining service life of an aero-engine based on hybrid machine learning.
Background
Aircraft engines are among the most critical parts of an aircraft and are highly complex systems. They typically operate for long periods under severe conditions of high temperature, high pressure, high speed and high load, and are therefore prone to failure. Failure of an aircraft engine can have catastrophic consequences, so very high reliability and safety are required. Furthermore, the maintenance costs of an aircraft engine are very high. In managing aircraft engines, airlines face multiple pressures, including ensuring the safety and reliability of the engines, avoiding engine failures during operation and reducing maintenance costs. Engine fault Prediction and Health Management (PHM) is an effective solution. PHM technology for aircraft engines is an important means of improving maintenance practices and enhancing the safety, reliability and economic affordability of the engine. As a key core technology of PHM, prognostics aims to predict the remaining useful life (RUL) of a component or system and provide support for operation planning and maintenance decisions.
RUL prediction estimates, at a given time, how long a device will remain in an operational state before reaching a failure state. Existing RUL prediction methods can be divided into two categories: model-based methods and data-driven methods. Model-based approaches are often more accurate when the degradation of a complex system is modeled accurately, but physics-based models require a large amount of prior knowledge. Recently, data-driven approaches have received increasing attention. A data-driven method does not need to know the detailed operating mechanism of a mechanical system; it only needs to collect data from the system and can identify the system's condition using artificial intelligence and similar algorithms. Many methods and models are available for data-driven RUL prediction.
In 2015, Nieto et al. proposed a support-vector-machine-based RUL prediction model for aircraft engines in "Hybrid PSO-SVM-based method for forecasting of the remaining useful life for aircraft engines and evaluation of its reliability", in which the parameters of the support vector machine are tuned with a particle swarm optimization algorithm.
With the advent of large data sets with excellent explicit labels, artificial intelligence methods have been widely used in RUL prediction. Among them, neural networks are one of the most commonly used algorithms.
In 2018, Li et al. proposed a new data-driven prognostic method based on a deep convolutional neural network in "Remaining useful life estimation in prognostics using deep convolution neural networks". Experiments on the C-MAPSS data set demonstrated the effectiveness of the method. However, most deep learning approaches have no effective mechanism to adaptively weight input features when processing multi-feature data. In 2020, Liu et al. proposed a novel feature-attention-based end-to-end RUL prediction method in "Remaining useful life prediction using a novel feature-attention based end-to-end approach". The proposed feature attention mechanism is applied directly to the input data and can dynamically focus more attention on the more important features during training.
In RUL prediction, time series analysis is also a common prediction method and is relatively mature. In this predictive approach, the general idea is to predict engine performance and health parameters by single or multiple steps using sensor data as a time series until a set fault threshold is reached.
In 2019, Miao et al. proposed a dual-task long short-term memory network for degradation assessment and RUL prediction of an aircraft engine in "Joint learning of degradation assessment and RUL prediction for aeroengines via dual-task deep LSTM networks". This makes the assessment and prediction results more reliable and accurate, thereby improving operational reliability and safety and reducing maintenance costs. However, the conventional LSTM network only uses the features learned at the last time step for regression or classification, while the features learned at other time steps also contribute to some extent.
In 2020, Chen et al. proposed an attention-based deep learning framework in "Machine remaining useful life prediction via an attention-based deep learning approach". A long short-term memory network is used to learn features from raw sensory data, while the proposed attention mechanism learns the importance of features and time steps and assigns more weight to the more important features. For aircraft engines, however, the available data set is limited. It is therefore necessary to introduce an ensemble learning method.
The ensemble learning method is very general and can be applied in many settings. In 2019, Li et al. proposed an ensemble-learning-based prognostic method in "Degradation modeling and remaining useful life prediction of aircraft engines" to model the degradation process and predict the RUL of an aircraft engine. The ensemble learning algorithm combines multiple base learners to achieve better prediction performance, and the optimal weights of the base learners are assigned using a particle swarm optimization algorithm and a sequential quadratic optimization method. In 2017, Zhang et al. put forward a multiobjective deep belief networks ensemble in "Multiobjective deep belief networks ensemble for remaining useful life estimation in prognostics" for predicting the RUL of an aircraft engine. A multi-objective evolutionary algorithm is applied to train deep belief networks with two conflicting objectives (accuracy and diversity); the trained deep belief networks are then merged to form the integrated model used for the final RUL prediction.
The existing research methods are mainly based on supervised learning algorithms, whose prediction results depend excessively on historical data, making it difficult to guarantee the accuracy and efficiency of the prediction. Clustering (here, the self-organizing map network), an unsupervised learning algorithm, can provide insight into relationships that may be hidden in the data and identify more meaningful structure. Deep-learning-based approaches are further hampered by the scarcity of aircraft engine data, whereas ensemble learning has a clear advantage in small-sample learning; therefore, an ensemble learning method (the gradient boosting regression tree) is used instead of the deep learning methods that are currently widespread.
Disclosure of Invention
The invention provides a novel model, SGBRT, which combines a Self-Organizing Map (SOM) with a Gradient Boosting Regression Tree (GBRT) to predict the RUL of an aircraft engine. First, the original sample set is clustered into clusters using the SOM network. A regression model (GBRT) is then built separately for each cluster to predict the RUL.
The technical scheme of the invention is as follows:
An aircraft engine RUL prediction model based on hybrid machine learning comprises the following steps:
establishing a hybrid machine learning model combining a Self-Organizing Map (SOM) network and a Gradient Boosting Regression Tree (GBRT) to predict the Remaining Useful Life (RUL) of the aircraft engine; the SOM automatically discovers the intrinsic laws and basic attributes in the samples and changes the network parameters and structure in a self-organizing, self-adaptive manner; the hybrid machine learning model is regarded as a modified version of the SOM, in which the data points mapped to each neuron after the standard training process are retained to construct a GBRT, one GBRT being established for each neuron;
the hybrid machine learning model SGBRT is divided into four layers: an input layer, an SOM layer, a regression layer and an output layer; in the training stage, the training feature vectors are fed into the input layer, and the input raw data are clustered in the SOM layer; in the regression layer, a GBRT is constructed for each sub data set obtained by clustering; in the testing stage, the feature vectors of the test set are fed into the input layer, and the SOM layer determines which cluster each feature vector belongs to; next, in the regression layer, the feature vector is fed into the GBRT constructed for that cluster; through these steps, the predicted value is delivered to the output layer;
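As an illustration only (the invention does not prescribe any particular software implementation), this train/test flow can be sketched in Python as follows; the MiniSom library and scikit-learn's GradientBoostingRegressor are used here as stand-ins for the SOM layer and the regression layer, and the grid size, iteration counts and variable names are assumptions of the example:

    # Illustrative sketch of the SGBRT flow: the SOM layer clusters the training samples,
    # then one GBRT is fitted per occupied neuron (cluster) in the regression layer.
    import numpy as np
    from minisom import MiniSom                      # assumed third-party SOM implementation
    from sklearn.ensemble import GradientBoostingRegressor

    def train_sgbrt(X_train, y_train, grid=(3, 3), som_iters=5000):
        som = MiniSom(grid[0], grid[1], X_train.shape[1], sigma=1.0, learning_rate=0.5)
        som.train_random(X_train, som_iters)         # SOM layer: competitive clustering
        winners = [som.winner(x) for x in X_train]   # neuron each training sample maps to
        models = {}
        for node in set(winners):
            idx = [i for i, w in enumerate(winners) if w == node]
            gbrt = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1)
            gbrt.fit(X_train[idx], y_train[idx])     # regression layer: one GBRT per cluster
            models[node] = gbrt
        return som, models

    def predict_sgbrt(som, models, X_test):
        # The SOM layer decides which cluster a test vector belongs to, then that
        # cluster's GBRT produces the RUL prediction delivered to the output layer.
        preds = []
        for x in X_test:
            node = som.winner(x)                     # assumes this neuron was occupied in training
            preds.append(models[node].predict(x.reshape(1, -1))[0])
        return np.array(preds)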
(1) after the SOM layer receives the training data, each neuron calculates the distance between the training data and its own weight vector, and the neuron with the smallest distance wins the competition and is called the winning neuron; the weight vectors of the winning neuron and its neighboring neurons are then adjusted so that their distance from the current input sample decreases; this process is iterated until convergence; the procedure comprises four steps: initialization, competition, cooperation and weight adjustment;
(1.1) assume the input-layer feature vector x is written as x = {x_i : i = 1, …, K}, where K is the number of input features; the connection weight between input unit i and neuron j is written as m_{ji}, with {m_{ji} : j = 1, …, N; i = 1, …, K}, where N is the total number of neurons; the network weights are initialized with small random values;
(1.2) for each input feature vector, every neuron calculates its discriminant function value, and the winning neuron is the neuron with the smallest discriminant value; the discriminant function is defined as the squared Euclidean distance between the feature vector x and the weight vector m_{ji} of neuron j, i.e.:
d_j(x) = Σ_{i=1}^{K} (x_i − m_{ji})²
that is, the winning neuron is the neuron whose weight vector is closest to the feature vector;
(1.3) among the neurons of the SOM there is a topological neighborhood analogous to that found in neurobiology; let S_{j,I(x)} be the lateral distance on the neuron grid between neuron j and the winning neuron I(x); the topological neighborhood is then taken as:
T_{j,I(x)} = exp( −S²_{j,I(x)} / (2σ²) )
where I(x) is the index of the winning neuron for input x; the function is maximal at the winning neuron and symmetric about it, and it decays monotonically to zero as the lateral distance tends to infinity; σ is the effective width of the topological neighborhood, and the size of the neighborhood shrinks over time;
(1.4) the SOM must contain an adaptive or learning process through which the output nodes self-organize to form a feature map between inputs and outputs; not only does the winning neuron receive a weight update, its neighbors update their weights as well; the weight update rule is:
Δm_{ji} = η(t) · T_{j,I(x)}(t) · (x_i − m_{ji})
where η(t) is the learning rate and t is the iteration number; the update is applied to all training feature vectors x over multiple iterations; the effect of each weight update is to move the weight vectors m_{ji} of the winning neuron and its neighbors toward the feature vector x; iterating this process orders the topology of the network;
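Steps (1.1) to (1.4) can be expressed compactly in code; the following numpy sketch only illustrates the formulas above (the grid layout, the exponential decay of η(t) and σ, the random sampling of inputs and the iteration count are assumptions of the example, not requirements of the invention):

    import numpy as np

    def som_train(X, grid_h, grid_w, n_iter=10000, eta0=0.5, sigma0=1.0):
        # X: (num_samples, K) training feature vectors; returns the (N, K) weight matrix m.
        K = X.shape[1]
        rng = np.random.default_rng(0)
        m = rng.uniform(-0.1, 0.1, size=(grid_h * grid_w, K))   # (1.1) small random initial weights
        # grid coordinates of each neuron, used for lateral distances on the map
        coords = np.array([(r, c) for r in range(grid_h) for c in range(grid_w)], dtype=float)
        for t in range(n_iter):
            eta = eta0 * np.exp(-t / n_iter)        # learning rate eta(t) (assumed decay schedule)
            sigma = sigma0 * np.exp(-t / n_iter)    # neighborhood width shrinks over time
            x = X[rng.integers(len(X))]
            d = ((x - m) ** 2).sum(axis=1)          # (1.2) squared Euclidean discriminant d_j(x)
            win = int(d.argmin())                   # winning neuron I(x)
            S2 = ((coords - coords[win]) ** 2).sum(axis=1)      # (1.3) squared lateral distance
            T = np.exp(-S2 / (2 * sigma ** 2))                  # Gaussian topological neighborhood
            m += eta * T[:, None] * (x - m)         # (1.4) delta m_ji = eta(t) * T * (x_i - m_ji)
        return m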
(2) for each cluster obtained in the SOM layer, a GBRT is constructed in the regression layer to predict the RUL of the aircraft engine; GBRT is a Boosting-type ensemble learning algorithm;
(2.1) the Boosting framework trains multiple base models separately and linearly combines the outputs of all base models to obtain a more reliable prediction, as shown in the following equation:
F(x) = Σ_{d=1}^{D} h_d(x)
where h_d(x) denotes the d-th base model, d is the index of a base model and D is the total number of base models; the training objective of the overall model is to make the predicted value F(x) approach the true value y; following the idea of a greedy algorithm, each base model takes on part of the prediction task and focuses on the error left by the models before it:
F_d(x) = F_{d-1}(x) + h_d(x)
(2.2) an arbitrary differentiable loss function L is introduced, and each base model is fitted to the negative gradient:
r_h = −[ ∂L(y, F(x)) / ∂F(x) ]_{F(x)=F_{h−1}(x)},  h = 1, …, H
where h is the iteration number and H is the maximum number of iterations;
(2.3) GBRT is a continuously developed ensemble learning model whose basis functions are tree structures; the predicted output, accumulated over E tree functions, is:
ŷ = Σ_{e=1}^{E} f_e(x)
where e is the index of a tree, Γ is the number of leaves on a tree, and each f_e corresponds to an independent tree structure; the values of the leaf-node regions are estimated by line search so that the loss function is minimized, and the regression tree is then updated.
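The stage-wise construction of (2.1) to (2.3) can be illustrated by the following sketch, which uses the squared loss so that the negative gradient reduces to the residual; the choice of scikit-learn's DecisionTreeRegressor as base model, the tree depth and the learning rate are assumptions of the example:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def gbrt_fit(X, y, n_trees=100, lr=0.1, max_depth=3):
        # Stage-wise boosting: F_d(x) = F_{d-1}(x) + lr * h_d(x), with h_d fitted to the
        # negative gradient of the loss (for the squared loss this is simply the residual).
        F = np.full(len(y), y.mean())               # initial constant model
        trees = []
        for _ in range(n_trees):
            residual = y - F                        # negative gradient of L = (y - F)^2 / 2
            h = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
            F += lr * h.predict(X)                  # accumulate the new base model
            trees.append(h)
        return y.mean(), trees, lr

    def gbrt_predict(init, trees, lr, X):
        # Prediction is the accumulated output of the tree functions.
        return init + lr * sum(t.predict(X) for t in trees)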
The invention has the beneficial effects that: the present invention proposes a hybrid model, called SGBRT, which combines SOM and GBRT to predict the RUL of an aircraft engine. First, the method clusters the original sample set into clusters using the SOM. Then, a GBRT is constructed separately for each cluster to predict the RUL. Regression analysis based on data partitioning not only provides more accurate predictions, but also reveals intrinsic connections within the data and extracts meaningful information.
Drawings
FIG. 1 is a topology of a SOM in accordance with an embodiment of the present invention;
FIG. 2 is a diagram of a gradient boosting regression tree GBRT algorithm according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a hybrid machine learning model SGBRT algorithm according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following detailed description of the principles of the invention is provided in connection with the accompanying drawings.
The SOM automatically discovers the intrinsic rules and intrinsic attributes in the samples and changes the network parameters and structure in a self-organizing, self-adaptive manner. The hybrid model proposed here can be regarded as an improved version of the SOM that couples the GBRT regression model more deeply with the standard SOM training process. The difference from the traditional SOM method is that the data mapped to each neuron are preserved and used to build a GBRT regression model; one GBRT regression model is built for each neuron. The system architecture is shown in Fig. 3.
The proposed hybrid model SGBRT is divided into four layers: an input layer, an SOM layer, a regression layer and an output layer. In the training stage, the training feature vectors are fed into the input layer, and the input raw data are clustered in the SOM layer; in the regression layer, a GBRT is constructed for each sub data set obtained by clustering. In the testing stage, the feature vectors of the test set are fed into the input layer, and the SOM layer determines which cluster each feature vector belongs to; next, in the regression layer, the feature vector is fed into the GBRT constructed for that cluster; through these steps, the predicted value is delivered to the output layer.
(1) After the SOM layer receives the training data, each neuron calculates the distance between the training data and its own weight vector, and the neuron with the smallest distance wins the competition and is called the winning neuron; the weight vectors of the winning neuron and its neighboring neurons are then adjusted so that their distance from the current input sample decreases; this process is iterated until convergence. The topological structure of the SOM is shown in Fig. 1, and training comprises four steps: initialization, competition, cooperation and weight adjustment.
(1.1) assume the input-layer feature vector x is written as x = {x_i : i = 1, …, K}, where K is the number of input features; the connection weight between input unit i and neuron j is written as m_{ji}, with {m_{ji} : j = 1, …, N; i = 1, …, K}, where N is the total number of neurons; the network weights are initialized with small random values;
(1.2) for each input feature vector, every neuron calculates its discriminant function value, and the winning neuron is the neuron with the smallest discriminant value; the discriminant function is defined as the squared Euclidean distance between the feature vector x and the weight vector m_{ji} of neuron j, i.e.:
d_j(x) = Σ_{i=1}^{K} (x_i − m_{ji})²
that is, the winning neuron is the neuron whose weight vector is closest to the feature vector;
(1.3) among the neurons of the SOM there is a topological neighborhood analogous to that found in neurobiology; let S_{j,I(x)} be the lateral distance on the neuron grid between neuron j and the winning neuron I(x); the topological neighborhood is then taken as:
T_{j,I(x)} = exp( −S²_{j,I(x)} / (2σ²) )
where I(x) is the index of the winning neuron for input x; the function is maximal at the winning neuron and symmetric about it, and it decays monotonically to zero as the lateral distance tends to infinity; σ is the effective width of the topological neighborhood, and the size of the neighborhood shrinks over time;
(1.4) the SOM must contain an adaptive or learning process through which the output nodes self-organize to form a feature map between inputs and outputs; not only does the winning neuron receive a weight update, its neighbors update their weights as well; the weight update rule is:
Δm_{ji} = η(t) · T_{j,I(x)}(t) · (x_i − m_{ji})
where η(t) is the learning rate and t is the iteration number; the update is applied to all training feature vectors x over multiple iterations; the effect of each weight update is to move the weight vectors m_{ji} of the winning neuron and its neighbors toward the feature vector x; iterating this process orders the topology of the network;
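After training, each sample can be assigned to the neuron whose weight vector is closest to it, and the resulting per-neuron subsets are the sub data sets for which the GBRTs are built. The helpers below are a small illustration of this mapping (the function and variable names are the example's own; m is a weight matrix such as the one produced by the SOM sketch above):

    import numpy as np

    def best_matching_unit(m, x):
        # Index of the neuron whose weight vector is closest to x (smallest discriminant value).
        return int(((x - m) ** 2).sum(axis=1).argmin())

    def partition_by_neuron(m, X):
        # Group training samples by winning neuron; each group later gets its own GBRT.
        groups = {}
        for i, x in enumerate(X):
            groups.setdefault(best_matching_unit(m, x), []).append(i)
        return {node: np.array(idx) for node, idx in groups.items()}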
(2) For each cluster obtained in the SOM layer, a GBRT is constructed in the regression layer to predict the RUL of the aircraft engine; GBRT is a Boosting-type ensemble learning algorithm, and its training process is shown in Fig. 2;
(2.1) the Boosting framework trains multiple base models separately and linearly combines the outputs of all base models to obtain a more reliable prediction, as shown in the following equation:
F(x) = Σ_{d=1}^{D} h_d(x)
where h_d(x) denotes the d-th base model, d is the index of a base model and D is the total number of base models; the training objective of the overall model is to make the predicted value F(x) approach the true value y; following the idea of a greedy algorithm, each base model takes on part of the prediction task and focuses on the error left by the models before it:
F_d(x) = F_{d-1}(x) + h_d(x)
(2.2) an arbitrary differentiable loss function L is introduced, and each base model is fitted to the negative gradient:
r_h = −[ ∂L(y, F(x)) / ∂F(x) ]_{F(x)=F_{h−1}(x)},  h = 1, …, H
where h is the iteration number and H is the maximum number of iterations;
(2.3) GBRT is a continuously developed ensemble learning model whose basis functions are tree structures; the predicted output, accumulated over E tree functions, is:
ŷ = Σ_{e=1}^{E} f_e(x)
where e is the index of a tree, Γ is the number of leaves on a tree, and each f_e corresponds to an independent tree structure; the values of the leaf-node regions are estimated by line search so that the loss function is minimized, and the regression tree is then updated.
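In practice, a library implementation such as scikit-learn's GradientBoostingRegressor already performs the leaf-value estimation and loss minimization described in (2.3). The snippet below, with synthetic data standing in for one cluster's sub data set and illustrative parameters, only shows how the prediction accumulates over the E trees:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Synthetic stand-in for one SOM cluster's sub data set (features -> RUL values).
    rng = np.random.default_rng(0)
    X_cluster = rng.normal(size=(200, 5))
    y_cluster = X_cluster @ rng.normal(size=5) + rng.normal(scale=0.1, size=200)

    gbrt = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05, max_depth=3)
    gbrt.fit(X_cluster, y_cluster)

    # staged_predict yields the partial prediction after 1, 2, ..., E trees,
    # i.e. the accumulated sum of tree functions described above.
    for e, y_hat in enumerate(gbrt.staged_predict(X_cluster[:1]), start=1):
        if e % 50 == 0:
            print(e, "trees, prediction for the first sample:", y_hat[0])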
In summary:
In order to accurately predict the RUL of an aircraft engine, a hybrid ensemble-learning-based model, SGBRT, is proposed in the present invention, which combines SOM with GBRT. First, the method clusters the original sample set into clusters using the SOM network. A GBRT is then constructed separately for each cluster to predict the RUL of the aircraft engine. The proposed hybrid machine-learning-based model not only predicts the RUL of an aircraft engine more accurately, but also reveals the intrinsic characteristics of aircraft engine degradation data.
The foregoing shows and describes the general principles, main features and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are given in the specification and drawings only to illustrate the principles of the invention; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (1)

1. A model for predicting the remaining service life of an aircraft engine based on hybrid machine learning is characterized by comprising the following steps:
establishing a hybrid machine learning model combining a self-organizing map (SOM) network and a gradient boosting regression tree (GBRT) to predict the remaining service life of the aircraft engine; the SOM automatically discovers the intrinsic laws and basic properties in the samples and changes the network parameters and structure in a self-organizing, self-adaptive manner; the hybrid machine learning model is regarded as a modified version of the SOM, in which the data points mapped to each neuron after the standard training process are retained to construct a GBRT, one GBRT being established for each neuron;
the hybrid machine learning model SGBRT is divided into four layers: an input layer, an SOM layer, a regression layer and an output layer; in the training stage, the training feature vectors are fed into the input layer, and the input raw data are clustered in the SOM layer; in the regression layer, a GBRT is constructed for each sub data set obtained by clustering; in the testing stage, the feature vectors of the test set are fed into the input layer, and the SOM layer determines which cluster each feature vector belongs to; next, in the regression layer, the feature vector is fed into the GBRT constructed for that cluster; through these steps, the predicted value is delivered to the output layer;
(1) after the SOM layer receives the training data, each neuron calculates the distance between the training data and its own weight vector, and the neuron with the smallest distance wins the competition and is called the winning neuron; the weight vectors of the winning neuron and its neighboring neurons are then adjusted so that their distance from the current input sample decreases; this process is iterated until convergence; the procedure comprises four steps: initialization, competition, cooperation and weight adjustment;
(1.1) assume the input-layer feature vector x is written as x = {x_i : i = 1, …, K}, where K is the number of input features; the connection weight between input unit i and neuron j is written as m_{ji}, with {m_{ji} : j = 1, …, N; i = 1, …, K}, where N is the total number of neurons; the network weights are initialized with small random values;
(1.2) for each input feature vector, every neuron calculates its discriminant function value, and the winning neuron is the neuron with the smallest discriminant value; the discriminant function is defined as the squared Euclidean distance between the feature vector x and the weight vector m_{ji} of neuron j, i.e.:
d_j(x) = Σ_{i=1}^{K} (x_i − m_{ji})²
that is, the winning neuron is the neuron whose weight vector is closest to the feature vector;
(1.3) among the neurons of the SOM there is a topological neighborhood analogous to that found in neurobiology; let S_{j,I(x)} be the lateral distance on the neuron grid between neuron j and the winning neuron I(x); the topological neighborhood is then taken as:
T_{j,I(x)} = exp( −S²_{j,I(x)} / (2σ²) )
where I(x) is the index of the winning neuron for input x; the function is maximal at the winning neuron and symmetric about it, and it decays monotonically to zero as the lateral distance tends to infinity; σ is the effective width of the topological neighborhood, and the size of the neighborhood shrinks over time;
(1.4) the SOM must contain an adaptive or learning process through which the output nodes self-organize to form a feature map between inputs and outputs; not only does the winning neuron receive a weight update, its neighbors update their weights as well; the weight update rule is:
Δm_{ji} = η(t) · T_{j,I(x)}(t) · (x_i − m_{ji})
where η(t) is the learning rate and t is the iteration number; the update is applied to all training feature vectors x over multiple iterations; the effect of each weight update is to move the weight vectors m_{ji} of the winning neuron and its neighbors toward the feature vector x; iterating this process orders the topology of the network;
(2) for each cluster obtained in the SOM layer, a GBRT is constructed in the regression layer to predict the RUL of the aircraft engine; GBRT is a Boosting-type ensemble learning algorithm;
(2.1) the Boosting framework trains multiple base models separately and linearly combines the outputs of all base models to obtain a more reliable prediction, as shown in the following equation:
F(x) = Σ_{d=1}^{D} h_d(x)
where h_d(x) denotes the d-th base model, d is the index of a base model and D is the total number of base models; the training objective of the overall model is to make the predicted value F(x) approach the true value y; following the idea of a greedy algorithm, each base model takes on part of the prediction task and focuses on the error left by the models before it:
F_d(x) = F_{d-1}(x) + h_d(x)
(2.2) an arbitrary differentiable loss function L is introduced, and each base model is fitted to the negative gradient:
r_h = −[ ∂L(y, F(x)) / ∂F(x) ]_{F(x)=F_{h−1}(x)},  h = 1, …, H
where h is the iteration number and H is the maximum number of iterations;
(2.3) GBRT is a continuously developed ensemble learning model whose basis functions are tree structures; the predicted output, accumulated over E tree functions, is:
ŷ = Σ_{e=1}^{E} f_e(x)
where e is the index of a tree, Γ is the number of leaves on a tree, and each f_e corresponds to an independent tree structure; the values of the leaf-node regions are estimated by line search so that the loss function is minimized, and the regression tree is then updated.
CN202011468528.3A 2020-12-15 2020-12-15 Model for predicting remaining service life of aero-engine based on hybrid machine learning Active CN112613227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011468528.3A CN112613227B (en) 2020-12-15 2020-12-15 Model for predicting remaining service life of aero-engine based on hybrid machine learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011468528.3A CN112613227B (en) 2020-12-15 2020-12-15 Model for predicting remaining service life of aero-engine based on hybrid machine learning

Publications (2)

Publication Number Publication Date
CN112613227A CN112613227A (en) 2021-04-06
CN112613227B (en) 2022-09-30

Family

ID=75233784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011468528.3A Active CN112613227B (en) 2020-12-15 2020-12-15 Model for predicting remaining service life of aero-engine based on hybrid machine learning

Country Status (1)

Country Link
CN (1) CN112613227B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113722989B (en) * 2021-08-23 2023-04-28 南京航空航天大学 CPS-DP model-based aeroengine service life prediction method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110737948A (en) * 2019-10-15 2020-01-31 南京航空航天大学 method for predicting residual life of aero-engine based on deep FNN-LSTM hybrid network
CN110807257A (en) * 2019-11-04 2020-02-18 中国人民解放军国防科技大学 Method for predicting residual life of aircraft engine

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110737948A (en) * 2019-10-15 2020-01-31 南京航空航天大学 method for predicting residual life of aero-engine based on deep FNN-LSTM hybrid network
CN110807257A (en) * 2019-11-04 2020-02-18 中国人民解放军国防科技大学 Method for predicting residual life of aircraft engine

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Remaining life prediction of aero-engine based on CAE and LSTM; Wang Xu et al.; Journal of Beijing Information Science & Technology University (Natural Science Edition); 2020-08-15 (No. 04); full text *

Also Published As

Publication number Publication date
CN112613227A (en) 2021-04-06

Similar Documents

Publication Publication Date Title
CN111539515B (en) Complex equipment maintenance decision method based on fault prediction
CN106168799B (en) A method of batteries of electric automobile predictive maintenance is carried out based on big data machine learning
CN109141847B (en) Aircraft system fault diagnosis method based on MSCNN deep learning
CN115270956B (en) Continuous learning-based cross-equipment incremental bearing fault diagnosis method
CN110609524B (en) Industrial equipment residual life prediction model and construction method and application thereof
CN106912067B (en) WSN wireless communication module fault diagnosis method based on fuzzy neural network
CN111652461A (en) Aero-engine continuous health state evaluation method based on SAE-HMM
CN112734002B (en) Service life prediction method based on data layer and model layer joint transfer learning
Barzola-Monteses et al. Energy consumption of a building by using long short-term memory network: a forecasting study
CN114676742A (en) Power grid abnormal electricity utilization detection method based on attention mechanism and residual error network
CN111190349A (en) Method, system and medium for monitoring state and diagnosing fault of ship engine room equipment
CN111597760A (en) Method for obtaining gas path parameter deviation value under small sample condition
CN114265913A (en) Space-time prediction algorithm based on federal learning on industrial Internet of things edge equipment
CN116186633A (en) Power consumption abnormality diagnosis method and system based on small sample learning
CN112613227B (en) Model for predicting remaining service life of aero-engine based on hybrid machine learning
Xu et al. SGBRT: an edge-intelligence based remaining useful life prediction model for aero-engine monitoring system
CN116842459B (en) Electric energy metering fault diagnosis method and diagnosis terminal based on small sample learning
CN113033898A (en) Electrical load prediction method and system based on K-means clustering and BI-LSTM neural network
CN113884807B (en) Power distribution network fault prediction method based on random forest and multi-layer architecture clustering
CN115659258A (en) Power distribution network fault detection method based on multi-scale graph convolution twin network
Mehta et al. A Comprehensive study of Machine Learning Techniques used for estimating State of Charge for Li-ion Battery
CN112465253B (en) Method and device for predicting links in urban road network
CN114692729A (en) New energy station bad data identification and correction method based on deep learning
Liu et al. Aero-Engines Remaining Useful Life Prognostics Based on Multi-Hierarchical Gated Recurrent Graph Convolutional Network
Zhang et al. Research on transformer fault diagnosis method based on rough set optimization BP neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant