CN112818461A - Variable-cycle engine multiple fault diagnosis device based on self-association neural network - Google Patents

Variable-cycle engine multiple fault diagnosis device based on self-association neural network

Info

Publication number
CN112818461A
Authority
CN
China
Prior art keywords
network
auto
layer
neural network
engine
Prior art date
Legal status
Pending
Application number
CN202011600688.9A
Other languages
Chinese (zh)
Inventor
刘志丹
缑林峰
张猛
黄雪茹
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202011600688.9A
Publication of CN112818461A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/15Vehicle, aircraft or watercraft design
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/17Mechanical parametric or variational design
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/02Reliability analysis or reliability optimisation; Failure analysis, e.g. worst case scenario performance, failure mode and effects analysis [FMEA]

Abstract

The invention provides a variable cycle engine multiple fault diagnoser based on an auto-associative neural network, which comprises eight auto-associative neural networks. An n-dimensional vector m formed by the sensor measurement parameters of the engine, the corresponding fuel supply W_f and the opening msv of the mode selection valve MSV is input to the eight auto-associative neural networks, which respectively output m1', m2', ..., m8'; these outputs are compared with the input m to obtain the residuals r1, r2, ..., r8, and the fault condition of the engine is judged from the relation between the residuals and the set thresholds. The invention can effectively diagnose and isolate faults of the variable cycle engine, including single component faults and multiple sensor faults, so that economic loss caused by grounding of the aircraft and the like can be effectively avoided and unnecessary component replacement can be prevented; the engine is effectively ensured to have high stability and reliability, its safe operation is guaranteed, its performance is fully exploited, and the safety and performance of the aircraft are improved.

Description

Variable-cycle engine multiple fault diagnosis device based on self-association neural network
Technical Field
The invention relates to the technical field of variable cycle engine control, in particular to a variable cycle engine multiple fault diagnosis device based on an auto-associative neural network.
Background
Modern warfare requires advanced fighters to have both long-range subsonic cruise capability and rapid response capability in combat, and the variable cycle engine will continue to develop in three directions: long cruise range, high thrust-to-weight ratio and wide operating range. By studying conventional engine speed characteristics, researchers have found that the turbojet engine has a higher specific thrust and a lower specific fuel consumption in the supersonic regime, while the high-bypass-ratio turbofan engine has a lower specific fuel consumption in the subsonic regime. Considering the performance requirements of modern warfare on the fighter propulsion system, the turbofan engine is more suitable for subsonic flight and the turbojet engine is more suitable for supersonic flight. The more efficient variable cycle engine was therefore proposed. Under different engine operating states, technical means such as adjusting the geometry, physical position or size of characteristic components allow the performance advantages of the turbofan and turbojet types to be combined: the variable cycle engine works in a turbofan-like configuration in subsonic cruise to obtain better economy, and in a turbojet-like configuration in supersonic combat to obtain sustained, reliable, high specific thrust, so that the engine exhibits excellent performance over its whole operating envelope.
Because the working process of the variable cycle engine is complex and changeable and its operating conditions are severe, the control system is harder to design, and a modern control system requires higher precision and excellent reliability. The variable cycle engine, as the heart of the aircraft, plays an important role and its safety requirements are extremely high; fault diagnosis is essential to improving its reliability. Generally, fault diagnosis is a series of operations for determining whether a fault has occurred, locating the fault, and estimating its severity. The purpose of Fault Diagnosis and Isolation (FDI) is to increase the reliability, availability and safety of the system.
The variable cycle engine is an important part of the aircraft, and the development of its control system must meet reliability and stability requirements. Because all engine components operate for long periods under high-temperature, high-pressure conditions and are subject to external corrosion, the performance of the main components gradually degrades over the service life. To maintain high engine reliability, a key technique, in addition to improving component performance, is real-time fault diagnosis.
There are many problems with the fault diagnosis methods proposed so far. First, general fault diagnosis methods require the diagnosed object to be modeled, and modeling a variable cycle engine is very difficult because its mathematical model is quite complex and highly nonlinear; purely signal-based processing avoids the model entirely but ignores some practical characteristics contained in the engine model. A suitable diagnostic method is therefore needed that avoids cumbersome modeling while still reflecting the characteristics of the model. Second, a fault diagnosis system must validate the engine data measured by the sensors. The most mature algorithm in use today is Kalman filtering; however, the Kalman filter is complex in practice and often requires many iterations. Moreover, the engine measurements are numerous and fault diagnosis must be carried out in real time, so validating and denoising many signals simultaneously and quickly is key to improving diagnostic performance. Finally, engine failures take many forms, including component faults and sensor faults. Many fault diagnosis techniques can diagnose and isolate faults, but all require two or more steps; the requirement for rapid diagnosis demands a diagnosis system that can detect, isolate and recover from faults simultaneously.
Disclosure of Invention
To solve the problems in the prior art, the invention provides a variable cycle engine multiple fault diagnosis device based on an auto-associative neural network, which can quickly and accurately diagnose faults of the variable cycle engine in its various working modes, can diagnose sensor faults as well as component faults efficiently, effectively avoids economic loss caused by grounding of the aircraft and the like, and avoids unnecessary component replacement; the engine is effectively ensured to have high stability and reliability, its safe operation is guaranteed, its performance is fully exploited, and the safety and performance of the aircraft are improved.
The technical scheme of the invention is as follows:
the variable cycle engine multiple fault diagnostor based on the self-association neural network is characterized in that: comprises eight self-associative neural networks;
the self-association neural network is a network formed by connecting a large number of neurons, the topological structure of the self-association neural network is a layered structure, and the neurons of different layers have no linking relation and comprise 3 hidden layers. The first hidden layer is called the mapping layer, the second hidden layer is called the bottleneck layer, and the third hidden layer is called the demapping layer.
The eight auto-associative neural networks firstly utilize noiseless and fault-free data, the number of neurons in each layer of the network is gradually determined from a relatively small number of neurons, the structure of the network is continuously changed, the optimal auto-associative neural network structure is searched according to the standard which is closest to the optimal input and noise reduction performance, and then the structure is maintained to respectively adjust the weight of the network according to the simulation data of a normal model and a fault model and an improved back propagation algorithm.
The inputs of the eight auto-associative neural networks are all sensor measurement parameters and corresponding oil supply WfAnd an n-dimensional vector m consisting of the opening degree MSV of the mode selection valve MSV, wherein the output of the eight auto-associative neural networks is m1'、 m2'、…、m8', the inputs of eight auto-associative neural networks are respectively compared with the corresponding outputs to obtain a difference r1、r2、…、 r8
Further, the fault judgment basis of the variable-cycle engine multiple fault diagnoser based on the auto-associative neural network is as follows: and judging whether the size of the residual error is in a normal range or not by using a difference value, namely the residual error, of the input and output signals of the network, and giving a fault alarm if the size of the residual error exceeds a threshold value. Neither component nor sensor failures: when all network residuals are below a selected suitable threshold, the engine is considered normal with neither component nor sensor failures; only the component fails: the occurrence of a component fault can cause the residual error of a corresponding fault model (the self-association type neural network is trained under a specific fault condition) to be below a threshold value, and the residual errors of other self-association type neural networks are all above the threshold value; only sensor failure: when only a sensor fails, the residual error of the corresponding failed sensor exceeds a threshold value and occurs in the corresponding self-association neural network; both component and sensor failures: the residuals of the corresponding fault model, except for the residuals corresponding to the faulty sensor above the threshold, are all below the threshold, and the residuals of the other auto-associative neural networks are all above the threshold.
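The four cases above amount to a simple decision rule over the residuals of the eight networks. The following Python sketch illustrates one way such a rule could be coded; the residual layout, the single shared threshold and the function name diagnose are illustrative assumptions rather than part of the patent.

import numpy as np

def diagnose(residuals, threshold):
    """Sketch of the four-case decision rule described above.

    residuals : (8, n) array, residuals[k, i] = |m_i - m_i'| for network k+1.
    threshold : alarm threshold (assumed common to all channels for simplicity).
    Returns a short diagnosis string; network/channel indexing is illustrative.
    """
    exceed = np.asarray(residuals) > threshold          # boolean (8, n)
    clean_net = ~exceed.any(axis=1)                     # networks with all residuals below

    if clean_net.all():                                 # case 1: no faults at all
        return "normal: no component fault, no sensor fault"

    if clean_net.any():                                 # case 2: component fault only
        k = int(np.where(clean_net)[0][0]) + 1
        return f"component fault corresponding to fault model {k}"

    # Channels whose residual exceeds the threshold in every network.
    sensor_ch = np.where(exceed.all(axis=0))[0]
    if sensor_ch.size:
        rest = np.delete(exceed, sensor_ch, axis=1)     # ignore the faulty channels
        clean_rest = ~rest.any(axis=1)
        if clean_rest.all():                            # case 3: sensor fault only
            return f"sensor fault on channel(s) {sensor_ch.tolist()}"
        if clean_rest.any():                            # case 4: component + sensor fault
            k = int(np.where(clean_rest)[0][0]) + 1
            return (f"sensor fault on channel(s) {sensor_ch.tolist()} "
                    f"plus component fault of model {k}")
    return "fault pattern not isolated by this simple rule"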
Furthermore, the input layer, mapping layer and bottleneck layer of the auto-associative neural network are jointly expressed as a nonlinear function G: R^m → R^f, which reduces the dimension of the input vector to meet the design requirement. The mapping can be expressed as
T = G(Y)
where G is a nonlinear vector function composed of f independent nonlinear functions, G = [G1, G2, ..., Gf]^T; Ti denotes the output of the i-th node of the bottleneck layer, i.e. the i-th element of the vector T = [T1, T2, ..., Tf]^T, i = 1, 2, ..., f; and Y = [Y1, Y2, ..., Ym]^T denotes the input of the network. The mapping can therefore be written component-wise as Gi: R^m → R:
Ti = Gi(Y), i = 1, 2, ..., f
Further, the output of the bottleneck layer, the demapping layer and the output layer of the auto-associative neural network form the second sub-network, whose nonlinear function model is H: R^f → R^m; it reproduces, from the bottleneck-layer elements, a value approximating the input. This mapping can be expressed as
Ŷ = H(T)
where H is a nonlinear vector function consisting of m nonlinear functions; each output can be represented as Hj: R^f → R, i.e.
Ŷj = Hj(T), j = 1, 2, ..., m
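To make the two mappings concrete, the following numpy sketch implements the five-layer forward pass Y → mapping layer → bottleneck → demapping layer → Ŷ, i.e. Ŷ = H(G(Y)). The layer sizes, the tanh activation of the mapping/demapping layers, the linear bottleneck and output layers, and the random initial weights are assumptions for illustration only.

import numpy as np

class AANN:
    """Minimal auto-associative network sketch: input -> mapping -> bottleneck
    -> demapping -> output, i.e. Y_hat = H(G(Y)).  Sizes are illustrative."""

    def __init__(self, m=12, mapping=30, f=5, rng=None):
        rng = np.random.default_rng(rng)
        # G: R^m -> R^f  (input, mapping layer, bottleneck layer)
        self.W1 = rng.standard_normal((mapping, m)) * 0.1   # input -> mapping
        self.b1 = np.zeros(mapping)
        self.W2 = rng.standard_normal((f, mapping)) * 0.1    # mapping -> bottleneck
        self.b2 = np.zeros(f)
        # H: R^f -> R^m  (bottleneck, demapping layer, output layer)
        self.W3 = rng.standard_normal((mapping, f)) * 0.1    # bottleneck -> demapping
        self.b3 = np.zeros(mapping)
        self.W4 = rng.standard_normal((m, mapping)) * 0.1    # demapping -> output
        self.b4 = np.zeros(m)

    def G(self, Y):
        """Compression mapping T = G(Y): nonlinear mapping layer, linear bottleneck."""
        h = np.tanh(self.W1 @ Y + self.b1)
        return self.W2 @ h + self.b2

    def H(self, T):
        """Reconstruction mapping Y_hat = H(T): nonlinear demapping layer, linear output."""
        h = np.tanh(self.W3 @ T + self.b3)
        return self.W4 @ h + self.b4

    def forward(self, Y):
        return self.H(self.G(Y))

# Example: residual of one (untrained) network for an n-dimensional measurement vector m
net = AANN(m=12)
m_vec = np.random.uniform(-1.0, 1.0, size=12)     # normalized measurements
r = np.abs(m_vec - net.forward(m_vec))            # residual used for fault detection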
Further, the improved back propagation algorithm is based on the traditional back propagation algorithm: the weight vector is corrected along the negative gradient direction of the error function E(W) until E(W) reaches its minimum. The iterative formula for the weight vector is therefore:
W(k+1)=W(k)+ηG(k)
where η is a constant representing the learning step size, and G(k) is the negative gradient of E(W), i.e.:
G(k) = −∂E(W)/∂W, evaluated at W = W(k)
To accelerate the convergence of the back propagation algorithm, a momentum factor α is introduced, so that the iterative weight modification rule W(k+1) = W(k) + ηG(k) is improved to:
W(k+1)=W(k)+ηG(k)+α·ΔW(k)
in the above formula:
ΔW(k)=W(k)-W(k-1)
This term memorizes the direction in which the weight vector was modified at the previous step, so that the formula W(k+1) = W(k) + ηG(k) + α·ΔW(k) takes a form similar to the conjugate gradient algorithm. The momentum factor α takes values in 0 < α < 1, and its choice has an important regulating effect on the convergence rate of network learning.
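The momentum-augmented iteration above can be written as a short routine; in the sketch below the gradient of E(W) is supplied by a caller-provided function grad_E (a hypothetical placeholder), and η and α are the learning step and momentum factor of the formula.

import numpy as np

def train_with_momentum(W, grad_E, eta=0.01, alpha=0.8, steps=1000):
    """Iterate W(k+1) = W(k) + eta*G(k) + alpha*dW(k), with G(k) = -dE/dW at W(k).

    W       : initial weight vector (1-D numpy array, flattened network weights)
    grad_E  : callable returning dE/dW at a given W (assumed to be provided)
    eta     : learning step size
    alpha   : momentum factor, 0 < alpha < 1
    """
    W = np.asarray(W, dtype=float).copy()
    dW_prev = np.zeros_like(W)                 # dW(k) = W(k) - W(k-1)
    for _ in range(steps):
        G = -grad_E(W)                         # negative gradient of E(W)
        step = eta * G + alpha * dW_prev       # momentum term remembers the last correction
        W = W + step
        dW_prev = step
    return W

# Toy usage: minimize E(W) = 0.5*||W - 1||^2, whose gradient is (W - 1).
W_opt = train_with_momentum(np.zeros(4), grad_E=lambda W: W - 1.0)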
Further, the correction of the network weight of the self-associative neural network needs to start from a small number of neurons of the mapping layer, the bottleneck layer and the demapping layer of the network, and gradually increase the number of neurons. The weight value and the threshold value of the network are continuously updated through iteration, so that the mean square error of the whole network is minimum, and the output value is close to the input value as much as possible.
Further, the data used to train the networks come from the MATLAB simulation model, and all input and output data are normalized by dispersion (min-max) normalization, i.e. the training data are kept between −1 and 1.
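Dispersion (min-max) normalization to the interval [−1, 1] can be sketched as follows; treating each measured variable as a column of the data matrix is an assumption of the sketch.

import numpy as np

def dispersion_normalize(X):
    """Scale each column of X (one column per measured variable) to [-1, 1].

    Returns the scaled data plus the per-column min/max needed to apply the
    same transform to new data and to invert it on the network outputs.
    """
    X = np.asarray(X, dtype=float)
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    span = np.where(x_max > x_min, x_max - x_min, 1.0)   # guard constant columns
    X_scaled = 2.0 * (X - x_min) / span - 1.0
    return X_scaled, x_min, x_max

def denormalize(X_scaled, x_min, x_max):
    """Map network outputs in [-1, 1] back to physical units."""
    return (np.asarray(X_scaled) + 1.0) / 2.0 * (x_max - x_min) + x_min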
Further, after the network structure is determined, the auto-associative neural network needs to be retrained if its ability to correct deviations is insufficient. The network weights are corrected using data from the faulty sensor as input until the output is the corresponding correct value; that is, retraining makes the output an error-free and noise-free value even when the input contains deviations and offsets.
Furthermore, the faults of the engine comprise sensor faults and component faults. The sensors comprise the temperature and pressure sensors at the inlet duct outlet, the fan outlet, the compressor outlet, behind the high-pressure turbine and behind the low-pressure turbine, as well as the fan speed and compressor speed sensors. Component faults comprise changes in the following 8 health parameters: the efficiency and flow capacity of the low-pressure compressor, the efficiency and flow capacity of the high-pressure compressor, the efficiency and flow capacity of the low-pressure turbine, and the efficiency and flow capacity of the high-pressure turbine.
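For orientation, the eight fault models can be represented as a simple lookup table pairing each network with the health parameter it is trained on. The numbering below merely follows the order in which the parameters are listed above and is an illustrative assumption; the authoritative pairing is the one given in Fig. 11.

# Illustrative pairing of each auto-associative network with the degraded health
# parameter it is trained on (the actual assignment is the one shown in Fig. 11).
FAULT_MODELS = {
    1: "low-pressure compressor efficiency",
    2: "low-pressure compressor flow capacity",
    3: "high-pressure compressor efficiency",
    4: "high-pressure compressor flow capacity",
    5: "low-pressure turbine efficiency",
    6: "low-pressure turbine flow capacity",
    7: "high-pressure turbine efficiency",
    8: "high-pressure turbine flow capacity",
}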
Advantageous effects
Compared with the prior art, the variable cycle engine multiple fault diagnoser based on the auto-associative neural network disclosed by the invention uses auto-associative neural networks for fault diagnosis and isolation of the variable cycle engine. A basic network structure is designed according to the characteristics of the measured data, and the specific structure of each network is determined from the simulation data of the model and the back propagation algorithm, so that the auto-associative neural network markedly reduces the noise of the measured data; even with sensor misalignment and gradually increasing noise, the network retains its noise-reduction capability and reconstructs well both singular values in the measured data and faulty measurements. Eight different auto-associative network structures are designed for eight different component faults of the variable cycle engine, and the multiple fault diagnoser composed of them can effectively diagnose and isolate faults of the variable cycle engine in its multiple working modes, including single component faults and simultaneous faults of several sensors. The invention can effectively avoid economic loss caused by grounding of the aircraft and the like and can avoid unnecessary component replacement; the engine is effectively ensured to have high stability and reliability, its safe operation is guaranteed, its performance is fully exploited, and the safety and performance of the aircraft are improved.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a gas path analysis principle of the present invention for fault diagnosis and isolation;
FIG. 2 is a schematic structural view of a variable cycle engine of the present invention;
FIG. 3 is a schematic view of the variable cycle engine tuning parameters of the present invention;
FIG. 4 is a schematic diagram of the adjustable components of the variable cycle engine of the present invention;
FIG. 5 is a dual bypass mode flow distribution diagram for a variable cycle engine according to the present invention;
FIG. 6 is a single bypass mode flow distribution diagram for the variable cycle engine of the present invention;
FIG. 7 is a schematic diagram of an artificial neuron model in the self-associative neural network of the present invention;
FIG. 8 is a schematic diagram of the neuron transfer function of the artificial neuron model of the present invention;
FIG. 9 is a diagram of the architecture of the self-associating neural network of the present invention;
FIG. 10 is a schematic diagram of a sensor fault diagnostic method of the present invention;
FIG. 11 is a table of the correspondence between the self-associating neural network and the failure of the related components of the present invention;
FIG. 12 is a schematic structural diagram of a variable cycle engine multiple fault diagnosis device based on an auto-associative neural network.
Detailed Description
Engine fault diagnosis mainly uses the measured values of the sensors in the engine gas path to judge the performance of the system at that moment; owing to environmental factors and the like, this performance necessarily differs from the ideal state. Variables such as the engine rotational speed, total temperature, total pressure and fuel flow change accordingly, and the required information is obtained from them to identify whether the engine has a fault. A method that performs fault diagnosis using such characteristics is called gas path analysis in much of the literature.
The main objective of the gas path analysis method is to detect physical faults of the system. Physical faults cover many kinds of problems, including abnormalities caused by external damage to components, such as erosion and corrosion of the blades, seal wear, nozzle blockage and the like. These physical faults cause changes in the thermodynamic performance of the engine or of its components. The state of an engine component can be calculated mathematically using a series of independent performance parameters; the performance parameters most widely used at present are the efficiency and flow capacity of the components.
The multiple fault diagnoser of the variable cycle engine based on the auto-associative neural network is founded on the principle of the gas path analysis method. Fig. 1 shows the main idea of gas path analysis. In general, the basic idea behind this approach is that physical faults cause changes in the performance of the engine components, which are reflected in their efficiency and flow capacity and, most directly, in changes of some measured parameters of the engine, such as total temperature, total pressure, rotational speed, etc. If a change in the gas path parameters of the engine is observed, the next task is to analyze and detect which thermodynamic parameter or which component parameter has changed; this is the further step of isolating the fault.
1. Working principle of variable cycle engine
The invention takes a double-bypass variable-cycle engine with a core-driven fan stage (CDFS) as a main research object, the main structure of the engine is shown in figure 2, and the engine comprises main components of an air inlet channel, a fan, a core-driven fan stage, a high-pressure compressor, a combustion chamber, a high-pressure turbine, a low-pressure turbine, a mixing chamber, an afterburner and a tail nozzle. Compared with a common double-shaft turbofan engine, the double-shaft turbofan engine has the remarkable structural characteristics that the CDFS is additionally arranged between the fan and the high-pressure compressor, and the auxiliary bypass and the main bypass are respectively arranged behind the fan and the CDFS. Under different working states of the variable cycle engine, the air flow of the outer duct and the core engine of the engine can be greatly adjusted by changing the guide vane angle of the CDFS, so that the cycle parameters of the inner duct air flow, the outer duct air flow, the duct ratio, the supercharging ratio and the like of the engine are adjusted, and the thermodynamic cycle adjustment of the engine is more flexible.
Compared with a common double-shaft turbofan engine, the variable-cycle engine has more adjustable components. The variable cycle engine with the CDFS components has essentially 8 tunable components, as shown in particular in fig. 3, and a schematic diagram of the tunable components is shown in fig. 4.
Compared with the traditional engine, the performance advantage of the variable cycle engine lies in its added adjustable components: by changing the parameters of the adjustable components, the aero-thermodynamic cycle of the engine during operation is adjusted, the specific fuel consumption is markedly reduced while the thrust remains essentially unchanged, and the economic benefit of the engine is greatly improved. At the same time, the added adjustable components make the adjustment process of the control system more flexible and greatly increase the stability margin of components such as the fan and the compressor.
The variable cycle engine has two typical working modes, single bypass and double bypass, which are switched by variable valves such as the mode selection valve MSV, the FVABI and the RVABI. When the MSV is fully open, the airflow is divided into two parts after passing through the fan: one stream flows into the secondary bypass, mixes effectively with the main bypass flow at the main bypass outlet section and flows into the main bypass; the other stream flows into the CDFS, where part of it is directed into the total bypass through the RVABI and the rest flows into the core. Because of the rear duct and the RVABI, the total bypass flow can again be divided into two parts at the outlet: one stream flows directly into the exhaust nozzle through the rear duct, while the other enters the mixing chamber, mixes with the flow that has passed through the core, is burned in the afterburner and then flows into the exhaust nozzle; the specific airflow distribution is shown in Fig. 5. Since both the main and secondary bypasses carry airflow in this operating process, the mode is named the double bypass mode.
When the mode selection valve MSV is completely closed, the airflow flowing through the fan completely flows into the CDFS, the fan operates in the compressor mode, and no airflow passes through the secondary bypass, which is named as a single bypass operation mode, and the specific airflow distribution of the process is shown in fig. 6.
When the variable-cycle engine is switched under different working modes, the internal thermodynamic cycle state can be changed accordingly. In order to ensure that the engine can continuously keep stable and reliable work and stably realize the conversion of single and double bypass modes, the following basic conditions should be met in the mode switching process:
(1) the fan inlet flow rate remains substantially constant;
(2) the fan pressure ratio remains substantially unchanged;
(3) the pressure ratio of the core driving fan stage changes steadily along with the switching process;
(4) the bypass ratio changes smoothly with the change of MSV displacement;
(5) ensuring that the backflow margin is always larger than 0, namely, the backflow of the airflow around the CDFS does not exist;
(6) continuous over-temperature and over-rotation phenomena are avoided, and surging phenomena are avoided.
To satisfy the above conditions, the other adjustable component parameters are adjusted together with the MSV displacement, and the opening of the mode selection valve MSV can represent the working mode of the variable cycle engine. A mode switching adjustment strategy that has been shown to be feasible is: during switching from single bypass to double bypass mode, the MSV displacement is adjusted to enlarge the inlet cross-sectional area of the secondary bypass; to avoid a large drop in the fan pressure ratio, the CDFS inlet guide vane angle α_i must be reduced in coordination, while the adjustable turbine guide vane angle α_t is also reduced. The adjustment strategy for switching from double bypass to single bypass mode is the opposite. When the variable cycle engine works in different modes, in order to obtain the desired bypass ratio while ensuring that the airflow does not surge or enter other abnormal working states, the CDFS guide vane angle α_i must be adjusted to change the bypass airflow so that it matches the working state of the engine.
A nonlinear model of the variable cycle engine is established in MATLAB based on the component method:
ẋ = f(x, u, msv)
y = g(x, u, msv)
where u is the control input vector, x is the state vector, y is the output vector, and msv is the opening of the mode selection valve MSV; f(·) is an n-dimensional differentiable nonlinear vector function representing the system dynamics, and g(·) is an m-dimensional differentiable nonlinear vector function generating the system output.
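A full component-level model is beyond the scope of a sketch, but the way measurement data would be generated from the state-space form above can be illustrated as follows. The functions f_dyn and g_out are hypothetical stand-ins for the MATLAB component-method model, and the explicit Euler integration step is an assumption.

import numpy as np

def simulate(f_dyn, g_out, x0, u_seq, msv_seq, dt=0.02):
    """Generate measurement data from x_dot = f(x, u, msv), y = g(x, u, msv).

    f_dyn, g_out : placeholder callables standing in for the component-method model
    x0           : initial state vector
    u_seq        : sequence of control inputs (e.g. a fuel flow W_f profile)
    msv_seq      : sequence of mode-selection-valve openings
    dt           : integration step (explicit Euler, assumed for simplicity)
    """
    x = np.asarray(x0, dtype=float).copy()
    outputs = []
    for u, msv in zip(u_seq, msv_seq):
        outputs.append(g_out(x, u, msv))          # sensor measurements y
        x = x + dt * f_dyn(x, u, msv)             # advance the state
    return np.asarray(outputs)

# Toy stand-ins: a stable first-order lag driven by (u, msv); real training data
# would come from the variable cycle engine simulation model instead.
f_toy = lambda x, u, msv: -x + np.array([u, 0.5 * u + msv])
g_toy = lambda x, u, msv: np.concatenate([x, [u, msv]])
Y = simulate(f_toy, g_toy, x0=[0.0, 0.0],
             u_seq=np.linspace(0.4, 1.0, 7000),
             msv_seq=np.ones(7000))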
2. Self-association type neural network
The self-association type neural network is developed on the basis of a simple neural network, and different network functions are realized by changing the topological structure, the excitation function and the like of the network.
In the present invention, the complex functional relationship between the input and output signals of the network is established step by step, meaning that the output approaches and converges to the input. By selecting a reasonable internal network structure and training the network, the network acquires a complete and accurate mapping relation. Such a neural network can process large amounts of data to accomplish a variety of tasks. In addition, by constructing the error between the network input and output, the error signal can be used to diagnose and isolate sensor faults, and to reconstruct lost or faulty sensor data during the verification and analysis of the sensor data.
The basic unit of a neural network is called a "neuron," which is a simplification and simulation of biological neurons. To model a biological neuron, it can be simplified to the artificial neuron of fig. 7, where the subscript i of each variable denotes the ith neuron in the neural network.
A neuron is a multi-input (say n inputs), single-output nonlinear unit whose input-output relationship can be described as:
s_i = Σ_j w_ij·x_j − θ_i,    y_i = f(s_i)
where x_j (j = 1, 2, ..., n) are the input signals from other neurons; θ_i is the threshold of the neuron; w_ij is the connection weight from neuron j to neuron i; and s_i is the state of the neuron. f(·) is a nonlinear function that converts the state s_i of the neuron into its output y_i, and is therefore called the output function or transfer function of the neuron.
The transfer function f (-) in the neuron model used in the invention is of Sigmoid type, and has the following two forms:
f(s) = 1/(1 + e^(−s))
f(s) = (1 − e^(−s))/(1 + e^(−s))
the functional images are shown in (a) and (b) of fig. 8.
The auto-associative neural network is a network formed by connecting a large number of neurons; its topological structure is layered and neurons in the same layer are not connected to each other. The specific structure is shown in Fig. 9 and contains 3 hidden layers. The first hidden layer is called the mapping layer; its activation function may be a sigmoid function, a hyperbolic tangent, or another similar nonlinear function. The second hidden layer, called the bottleneck layer, uses a linear transfer function; the dimension of the bottleneck layer should be smaller than that of the other hidden layers. The third hidden layer, called the demapping layer, has the same activation function as the mapping layer, and the mapping and demapping layers have the same number of neurons.
The bottleneck layer compresses the data coming from the input layer. The operation of the auto-associative neural network is based on the concept of principal component analysis, a multivariate statistical method that can be used to analyze highly correlated measurement data containing noise. Principal component analysis is applicable to both linearly and nonlinearly dependent variables; the method projects high-dimensional information into a low-dimensional subspace while preserving the primary process information. The outputs of the bottleneck-layer nodes can thus be viewed as the uncorrelated variables obtained by compressing the correlated input data, playing the same role as the principal components.
As with principal component analysis, the goal of the bottleneck layer in self-associative neural networks is to compress the data into a series of uncorrelated variables that are stored in a new space and make the data less dimensional, making it simpler and more compact to process.
Unlike network structures that contain only one hidden layer, the main reason for using three hidden layers in the auto-associative network structure is the need to compress the data in order to filter out noise and bias. As shown in Fig. 10, the auto-associative neural network can be viewed as two serially connected single-hidden-layer neural networks. The input layer, mapping layer and bottleneck layer are jointly expressed as a nonlinear function G: R^m → R^f, which reduces the dimension of the input vector to meet the design requirement. This mapping can be expressed as
T = G(Y)
where G is a nonlinear vector function composed of f independent nonlinear functions, G = [G1, G2, ..., Gf]^T; Ti denotes the output of the i-th node of the bottleneck layer, i.e. the i-th element of the vector T = [T1, T2, ..., Tf]^T, i = 1, 2, ..., f; and Y = [Y1, Y2, ..., Ym]^T denotes the input of the network. The mapping can therefore be written component-wise as Gi: R^m → R:
Ti = Gi(Y), i = 1, 2, ..., f
The next sub-network performs the so-called inverse transformation, which restores the data to the original dimension: the output of the bottleneck layer, the demapping layer and the output layer form the second sub-network, whose nonlinear function model is H: R^f → R^m; it reproduces, from the bottleneck-layer elements, a value approximating the input. This mapping can be expressed as
Ŷ = H(T)
where H is a nonlinear vector function consisting of m nonlinear functions; each output can be represented as Hj: R^f → R, i.e.
Ŷj = Hj(T), j = 1, 2, ..., m
in order not to lose generality, the subfunctions G and H must have the ability to represent non-linear functions of arbitrary nature. This can be achieved by providing a single-layer perceptron with a large number of nodes for each sub-network. The mapping layer is a hidden layer of the sub-function G and likewise the demapping layer is a hidden layer of the sub-function H.
The auto-associative neural network requires supervised training, i.e. a specified expected output for each training sample. At output of unknown conditionsIt is not possible to train the network G alone. Similarly, at the desired output (the target output is
Figure BDA0002871270730000083
) Given, but corresponding output T positions, it is also not possible for the network H to be trained separately. Therefore, it is not possible to perform supervised training independently for each layer of the network. To avoid this problem, the connection between the two networks is continuous, so that the output of G can be passed directly to H, so that both the output and the desired input of the network are known. In particular, it is possible to use, for example,
Figure BDA0002871270730000084
is both the input of G and the desired output of H. For a G and H continuous network comprising 3 hidden layers, the bottleneck layer is shared, and the output of G is the input of H.
3. Improved error back propagation algorithm
The back propagation network is one of the most commonly used feed-forward networks. Its learning adopts the back propagation algorithm, i.e. the error back propagation algorithm. Suppose the back propagation network has M layers (not counting the input layer) and that layer l has n_l nodes. Let y_k^(l) denote the output of node k in layer l; it is determined by the following two equations:
y_k^(l) = f(s_k^(l))
s_k^(l) = Σ_j w_kj^(l)·y_j^(l−1)
where s_k^(l) is the state of neuron k in layer l, given by the sum above. Written in vector form, s_k^(l) = w_k^(l)·y^(l−1), where w_k^(l) is the row vector of coefficients consisting of the network weights of neuron k in layer l, and y^(l−1) is the output column vector of layer l−1. The input layer is treated as layer 0, so y^(0) = X is the input vector.
Given a sample pattern {X, Y}, the weights of the back propagation network are adjusted to minimize the following error objective function:
E(W) = (1/2)·Σ_(k=1..n_M) (Y_k − y_k^(M))^2
where y^(M) is the output of the network, W represents all the weights in the back propagation network, and n_M is the number of nodes in the last layer (the output layer).
According to the gradient descent optimization method, the weights are modified using the gradient of E(W). The weight vector w_i^(l) of the i-th neuron of layer l is corrected according to:
w_i^(l)(k+1) = w_i^(l)(k) − η·d_i^(l)·(y^(l−1))^T
For the output layer (layer M), the error term d_i^(M) in the above formula is:
d_i^(M) = (y_i^(M) − Y_i)·f′(s_i^(M))
and for a hidden layer:
d_i^(l) = f′(s_i^(l))·Σ_j w_ji^(l+1)·d_j^(l+1)
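The layer-by-layer delta rules can be coded compactly. The sketch below computes the gradients ∂E/∂W for an M-layer feed-forward network using the bipolar Sigmoid transfer function; storing the weights as a list of matrices and omitting the thresholds and the momentum term (added by the improved algorithm below) are simplifications assumed for brevity.

import numpy as np

def f(s):                      # bipolar Sigmoid transfer function (assumed)
    return (1.0 - np.exp(-s)) / (1.0 + np.exp(-s))

def f_prime(s):                # derivative of the bipolar Sigmoid
    return 0.5 * (1.0 - f(s) ** 2)

def backprop_gradients(weights, x, target):
    """Return dE/dW for each layer, with E = 0.5*||target - y(M)||^2.

    weights : list of weight matrices W[l]; layer state s(l) = W[l] @ y(l-1)
              (thresholds are omitted here for brevity).
    """
    # Forward pass: store states s(l) and outputs y(l); y(0) = x is the input.
    ys, ss = [np.asarray(x, dtype=float)], []
    for W in weights:
        ss.append(W @ ys[-1])
        ys.append(f(ss[-1]))

    # Output-layer delta: d(M) = (y(M) - target) * f'(s(M))
    delta = (ys[-1] - np.asarray(target)) * f_prime(ss[-1])
    grads = [None] * len(weights)
    for l in range(len(weights) - 1, -1, -1):
        grads[l] = np.outer(delta, ys[l])                     # dE/dW[l]
        if l > 0:                                             # hidden-layer delta
            delta = (weights[l].T @ delta) * f_prime(ss[l - 1])
    return grads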
the above is the back propagation algorithm. For a given input and output sample, the weight is repeatedly calculated in an iterative mode according to the process, and finally the output of the network is close to the expected output. The network calculates the output by the input and then adjusts the weight reversely by the output. The two processes are repeatedly alternated until convergence. In order to solve the problems that the back propagation algorithm is likely to have the minimum local trapping and the convergence speed is low, the improved back propagation algorithm is adopted. The weight vector is modified according to the negative gradient direction of the error function e (w) until e (w) reaches a minimum. Thus, the iterative formula for the weight vector is:
W(k+1)=W(k)+ηG(k)
wherein η is a constant and represents the step size of learning; g (k) is a negative gradient of E (W), i.e.:
Figure BDA0002871270730000098
in order to accelerate the convergence speed of the back propagation algorithm, a momentum factor α is introduced, so that the weight vector iterative modification rule of W (k +1) ═ W (k) + η g (k) is improved as follows:
W(k+1)=W(k)+ηG(k)+α·ΔW(k)
in the above formula:
ΔW(k)=W(k)-W(k-1)
it memorizes the modification direction of the weight vector at the previous moment, so that the form of the formula W (k +1) ═ W (k) + η g (k) + α · Δ W (k) is similar to the conjugate gradient algorithm. The value range of the momentum factor alpha is 0< alpha <1, and the selection of the momentum factor alpha has important regulation effect on the convergence rate of the learning of the network.
4. Training of self-associative neural networks
The performance of the fault diagnosis and isolation system, and its success rate in diagnosing and isolating component faults, depend largely on the validity and quality of the measurement data. The sensors of a gas turbine engine operate under severe conditions of high temperature, high pressure, etc., and the measured data often contain sensor noise, offsets, drift and other sensor faults. These sensor faults and anomalies cause the measurements to deviate from the true values to some extent, resulting in inaccurate fault diagnosis.
The factors necessary to construct a complete network include the number of layers in the network and the number of neurons in each layer. For the network to have performance that meets the requirements, the weights, thresholds and excitation functions must be refined, and in particular the number of neurons in the bottleneck layer must be tuned; this requires a large amount of sensor measurement data, which is what the training of the network uses.
In the process of network training, the weights are corrected according to the improved error back propagation algorithm so that the input and output become consistent. The training data and the number of bottleneck-layer neurons must be chosen appropriately, because the internal performance of the network is determined by the resulting weights and by the maximum amount of information that can be retained.
The numbers of neurons in the mapping, bottleneck and demapping layers of the network are increased gradually from small values. Note that the number of bottleneck-layer neurons is smaller than the number of mapping-layer and demapping-layer neurons, and that the mapping and demapping layers have the same number of neurons. For the auto-associative neural network to perform well in data verification, the key is to supply the network with an appropriate amount of data so that it has enough information to complete the weight correction and, more importantly, to give the bottleneck layer an appropriate number of neurons. A network with good noise-reduction performance and a small input-output error performs data verification better.
The weight value and the threshold value of the network are continuously updated through iteration, so that the mean square error of the whole network is minimum, and the output value is close to the input value as much as possible. The consistency of input and output also indicates that after the original data is compressed according to the dimension determined by the bottleneck layer in the network, the related information of the original data is still retained to the maximum extent.
In the present invention, the data used to train the networks come from the MATLAB simulation model; noisy real engine data could also be used for training. However, using noisy actual data as training data brings disadvantages such as slow training, large training errors and reduced noise-reduction performance. Knowing this, accurate values are used for training so that a suitable network structure can be found quickly; the structure is then kept and the actual data are used to adjust the network weights. This process allows the network to match the specific behaviour of the engine better and thus optimizes the network performance. The actual engine data, however, contain noise, so in the retraining stage training must be stopped before the error function is fully minimized, to prevent the network from over-fitting and losing its ability to generalize. In other words, training should last long enough to reduce the training error, but not so long that the network memorizes the errors.
First, to improve the trainability of the network, the sensor output data must be normalized so that the training data remain between −1 and 1. Second, the structure of the network is changed continuously using noise-free and fault-free data, and the optimal network structure is selected according to the criterion of closest input reproduction and best noise-reduction performance.
After the network structure is determined, a third step is required if the auto-associative neural network cannot correct deviations sufficiently: retraining, so that the output is an error-free and noise-free value even when the input contains deviations and offsets. To accomplish this, the input data must come from a faulty sensor while the output data must be the corresponding correct values. According to the functions of the two sub-networks of the auto-associative neural network, noise is filtered out in the first sub-network, which comprises the mapping layer and the bottleneck layer: this sub-network compresses the dimension of the input and, while doing so, also handles redundancy and the random variation caused by measurement noise. The second sub-network restores the compressed data to its original dimension.
According to this principle, the retraining process only corrects the weights of the second sub-network, while the weights of the first sub-network, which performs the noise filtering, are kept unchanged; that is, only the weights of the last two layers of the network need to be updated during retraining.
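A minimal sketch of this selective retraining is given below: the first sub-network G (mapping and bottleneck layers) is frozen and only the weights of the demapping and output layers are corrected, using faulty-sensor data as input and the known correct values as targets. The tanh demapping layer, linear output layer, parameter names and plain gradient steps are assumptions of the sketch.

import numpy as np

def retrain_second_subnetwork(G, params_H, faulty_inputs, correct_outputs,
                              eta=0.01, epochs=200):
    """Retraining sketch: G (mapping + bottleneck) is frozen; only the second
    sub-network H (demapping + output layers) is corrected.

    G          : frozen callable Y -> T (bottleneck output of the trained network)
    params_H   : dict with 'W3', 'b3', 'W4', 'b4' (demapping and output layers;
                 tanh demapping layer and linear output layer assumed)
    """
    W3, b3, W4, b4 = (params_H[k] for k in ("W3", "b3", "W4", "b4"))
    for _ in range(epochs):
        for Y_faulty, Y_true in zip(faulty_inputs, correct_outputs):
            T = G(Y_faulty)                      # frozen compression (noise removed here)
            h = np.tanh(W3 @ T + b3)
            e = (W4 @ h + b4) - Y_true           # reconstruction error vs. correct value
            d3 = (W4.T @ e) * (1.0 - h ** 2)     # delta of the demapping layer
            W4 -= eta * np.outer(e, h);  b4 -= eta * e
            W3 -= eta * np.outer(d3, T); b3 -= eta * d3
    return {"W3": W3, "b3": b3, "W4": W4, "b4": b4}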
The sensor measurement data are taken from the following sensors: the temperature and pressure sensors at the inlet duct outlet, the fan outlet, the compressor outlet, behind the high-pressure turbine and behind the low-pressure turbine, and the fan speed and compressor speed sensors. The other inputs of the network include the engine fuel flow W_f and the opening msv of the mode selection valve MSV; W_f is usually not measured directly but is an input to the gas path system, which can be determined from the throttle lever position (PLA) operated by the pilot.
In the data verification of the auto-associative network architecture, the training data come from the simulation model of the engine. Since each engine variable varies over a wide range, the difference between its maximum and minimum values is large, and the network trains more effectively when specific processing steps are applied between the input and the target. During training, the training procedure of the network is very sensitive to the data standardization method; after many experiments, the invention adopts dispersion (min-max) normalization to process the data, and all input and output data are normalized accordingly.
The number of neurons in each layer of the network must be determined during training; starting from a relatively small number of neurons, the size of each hidden layer is determined step by step until the optimal auto-associative network structure is found. In the network optimization process (i.e. finding the optimal number of neurons for each network), in order to determine the optimal structure of the auto-associative neural network, the number of bottleneck-layer neurons is varied from 3 to 7 and the number of mapping-layer and demapping-layer neurons from 10 to 60; the mapping and demapping layers are kept equal in size for all structures.
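The structure search described here can be expressed as a simple grid search over the stated ranges (bottleneck layer 3 to 7 neurons, mapping and demapping layers 10 to 60 neurons, kept equal in size). The helpers train_aann and evaluate are hypothetical placeholders for the training procedure and the objective function J defined later in this section, and the step of 10 through the mapping-layer range is an assumption.

import numpy as np

def search_structure(train_data, test_data, train_aann, evaluate):
    """Grid search over candidate auto-associative network structures.

    train_aann(m, mapping, bottleneck, data) -> trained network   (placeholder)
    evaluate(net, data) -> objective J, the input/output error     (placeholder)
    The ranges follow the text: bottleneck 3..7, mapping = demapping 10..60.
    """
    m = train_data.shape[1]                       # dimension of the measurement vector
    best = None
    for bottleneck in range(3, 8):
        for mapping in range(10, 61, 10):         # step of 10 assumed for brevity
            net = train_aann(m, mapping, bottleneck, train_data)
            J_train = evaluate(net, train_data)
            J_test = evaluate(net, test_data)
            if best is None or J_test < best[0]:
                best = (J_test, J_train, mapping, bottleneck, net)
    J_test, J_train, mapping, bottleneck, net = best
    print(f"selected structure: mapping/demapping={mapping}, bottleneck={bottleneck}, "
          f"J_Train={J_train:.4g}, J_Test={J_test:.4g}")
    return net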
With such a training method, each of the training inputs of the auto-associative neural network includes 7000 sets of data, and the number of training steps is between 30 and 60. Data is generated from changes in engine fuel flow. The external condition is set as a standard condition.
For all the training results, 9 different structures are listed, together with the training error J_Train and the test error J_Test of the objective function and the mean noise reduction of the outputs corresponding to all the engine inputs. The objective function is defined as:
J = (1/N)·Σ_(n=1..N) (y_d(n) − y_Net(n))^2
where y_d(n) denotes the desired output of the network and y_Net(n) the actual output of the network. The present invention takes the typical noise at cruise as the noise level.
A structure with strong noise-reduction capability and small training error J_Train and test error J_Test is selected.
5. Fault diagnosis based on self-association neural network
The structure and weights of the auto-associative neural network are set using a certain number of samples and the improved back propagation method, so that the network simulates the interaction between the variables of the engine gas path system and makes the input and output as consistent as possible. Thus, when fault-free data enter a trained network, the difference between the network input and output should be zero; when the data are contaminated (i.e. a sensor fails), the difference between the network input and output is no longer zero. On this principle, the auto-associative neural network can be used to diagnose whether a sensor has failed.
Fig. 10 shows a sensor supervision system for a set of n measurements. According to the mapping rule of the auto-associative neural network, each input m_i (i = 1, 2, ..., n) becomes the output m_i' (i = 1, 2, ..., n).
When the value m_i of the i-th sensor enters the network as a faulty input, the output m_i' generated by the network will be as close as possible to an estimate of the true value of m_i. The difference between m_i and m_i' can therefore be used as an indicator for fault diagnosis: if the magnitude of the difference exceeds the threshold, the sensor is considered faulty.
The auto-associative neural network also performs fault diagnosis when a gas path component fault exists in the aviation turbofan engine. Once a fault occurs, it causes a change in the health or performance parameters, which in turn changes the engine measurements and their interrelationships. A single auto-associative neural network trained only with healthy data therefore cannot fully estimate all the variables of the engine under both normal and fault conditions, and a series of auto-associative neural networks must be used instead. Each network is trained with healthy data and the corresponding fault data, so that each network acts as an estimator, simultaneously estimating the healthy state of the engine and its most likely fault.
The input and output variables include the measurements from the various sensors, the engine fuel flow W_f and the opening msv of the mode selection valve MSV. Component faults of the engine involve changes in the following 8 health parameters: the efficiency and flow capacity of the low-pressure compressor, the efficiency and flow capacity of the high-pressure compressor, the efficiency and flow capacity of the low-pressure turbine, and the efficiency and flow capacity of the high-pressure turbine. The 8 component faults listed in Fig. 11 are used to study engine fault diagnosis.
Therefore, 8 models, i.e., 8 self-associative neural networks, are required, each model representing and correlating two types of performance of the engine, one being a failure mode and the other being a normal mode. Fig. 11 shows an engine fault model for each network.
FIG. 12 is a schematic diagram of the variable cycle engine multiple fault diagnosis device based on the auto-associative neural network of the present invention. The inputs of the eight auto-associative neural networks are the n-dimensional vector m formed by the measured parameters; the outputs of the eight auto-associative neural networks are m1', m2', ..., m8', and the input of each network is compared with its corresponding output to obtain the differences r1, r2, ..., r8. Once the structure of the auto-associative neural networks is established, the residuals arise from the difference between the network outputs and the actual outputs of the engine. The following four cases represent the fault situations that commonly occur in the engine working state, and the concrete form of the residuals in each case is given.
Neither component nor sensor failures: when all network residuals are below a selected suitable threshold, the engine is considered normal with neither component nor sensor failures.
Only the component fails: the occurrence of a component failure may cause the residuals of the corresponding failure model (the auto-associative neural network trained for a particular failure condition) to be below a threshold, and the residuals of other auto-associative neural networks to be above the threshold.
Only sensor failure: when only a sensor fails, the residual error of the corresponding failed sensor exceeds the threshold value and occurs in the corresponding self-associative neural network.
Both component and sensor failures: the residuals of the corresponding fault model, except for the residuals corresponding to the faulty sensor above the threshold, are all below the threshold, and the residuals of the other auto-associative neural networks are all above the threshold.
Component faults can be isolated from sensor faults according to the characteristics generated by residual errors in the self-association neural network.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention.

Claims (9)

1. A variable cycle engine multiple fault diagnoser based on an auto-associative neural network, characterized in that it comprises eight auto-associative neural networks;
the auto-associative neural network is a network formed by connecting a large number of neurons; its topological structure is layered, neurons in the same layer are not connected to each other, and the network contains 3 hidden layers, wherein the first hidden layer is called the mapping layer, the second hidden layer is called the bottleneck layer, and the third hidden layer is called the demapping layer;
the eight auto-associative neural networks are first trained with noise-free, fault-free data: starting from a relatively small number of neurons, the number of neurons in each layer is determined step by step and the network structure is changed continuously, and the optimal auto-associative network structure is selected according to the criterion of closest input reproduction and best noise-reduction performance; that structure is then kept fixed while the weights of each network are adjusted with the simulation data of the normal model and of the corresponding fault model, using the improved back propagation algorithm;
the inputs of the eight auto-associative neural networks are all the n-dimensional vector m consisting of the sensor measurement parameters, the corresponding fuel supply W_f and the opening msv of the mode selection valve MSV; the outputs of the eight auto-associative neural networks are m1', m2', ..., m8', and the input of each network is compared with its corresponding output to obtain the differences r1, r2, ..., r8.
2. The variable cycle engine multiple fault diagnoser based on the auto-associative neural network of claim 1, characterized in that: the fault judgment is based on the difference between the input and output signals of a network, i.e. the residual, which is used to judge whether the residual lies in the normal range, and a fault alarm is raised if it exceeds the threshold. Neither component nor sensor fault: when the residuals of all networks are below the selected threshold, the engine is considered normal, with neither component nor sensor faults. Component fault only: a component fault causes the residuals of the corresponding fault model (the auto-associative neural network trained for that particular fault condition) to stay below the threshold, while the residuals of the other auto-associative neural networks all exceed the threshold. Sensor fault only: when only a sensor fails, the residual of the corresponding failed sensor exceeds the threshold in the corresponding auto-associative neural networks. Both component and sensor faults: in the corresponding fault model, all residuals are below the threshold except the residual of the faulty sensor, which is above the threshold, while the residuals of the other auto-associative neural networks are all above the threshold.
3. The variable-cycle engine multiple fault diagnoser based on the auto-associative neural network of claim 1, wherein: the input layer, mapping layer and bottleneck layer implement a nonlinear function G: R^m → R^f, which reduces the dimension of the input vector to meet the design requirement. The mapping may be expressed as
T=G(Y)
where G is a nonlinear vector function composed of f independent nonlinear functions, G = [G1, G2, ..., Gf]^T. Ti denotes the output of the i-th node of the bottleneck layer, i.e. the i-th element of the vector T = [T1, T2, ..., Tf]^T, i = 1, 2, ..., f. Y = [Y1, Y2, ..., Ym]^T denotes the input of the network. The mapping can therefore be written componentwise as Gi: R^m → R:
Ti=Gi(Y),i=1,2,...,f
4. The variable-cycle engine multiple fault diagnoser based on the auto-associative neural network of claim 1, wherein: the output of the bottleneck layer, the demapping layer and the output layer form a second sub-network, whose nonlinear function model is H: R^f → R^m; it reconstructs an approximation of the input from the bottleneck-layer values.
This mapping can be expressed by the following equation:
Y' = H(T)
where H is a nonlinear vector function consisting of m nonlinear functions; each output component can be represented as Hj: R^f → R, so that:
Yj' = Hj(T), j = 1, 2, ..., m
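For illustration only, a small NumPy sketch of the two sub-mappings defined in claims 3 and 4: G compresses the m-dimensional input to the f-dimensional bottleneck through the mapping layer, and H reconstructs the input from the bottleneck through the demapping layer. The layer sizes, tanh activations and random weights are assumptions, not values from the patent.

import numpy as np

rng = np.random.default_rng(0)

m_dim, map_dim, f_dim = 10, 14, 4      # assumed sizes: input m, mapping layer, bottleneck f

# weights of the four weight layers (input -> mapping -> bottleneck -> demapping -> output)
W1, b1 = 0.1 * rng.standard_normal((map_dim, m_dim)), np.zeros(map_dim)
W2, b2 = 0.1 * rng.standard_normal((f_dim, map_dim)), np.zeros(f_dim)
W3, b3 = 0.1 * rng.standard_normal((map_dim, f_dim)), np.zeros(map_dim)
W4, b4 = 0.1 * rng.standard_normal((m_dim, map_dim)), np.zeros(m_dim)

def G(Y):
    """G: R^m -> R^f, input and mapping layer to bottleneck (T = G(Y))."""
    return np.tanh(W2 @ np.tanh(W1 @ Y + b1) + b2)

def H(T):
    """H: R^f -> R^m, bottleneck through demapping layer to output (Y' = H(T))."""
    return W4 @ np.tanh(W3 @ T + b3) + b4   # linear output layer

Y = rng.standard_normal(m_dim)
Y_hat = H(G(Y))                             # auto-associative reconstruction of Y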
5. The variable-cycle engine multiple fault diagnoser based on the auto-associative neural network of claim 1, wherein: the improved back propagation algorithm is based on the conventional back propagation algorithm, in which the weight vector is corrected along the negative gradient direction of the error function E(W) until E(W) reaches its minimum. The iterative formula for the weight vector is therefore:
W(k+1)=W(k)+ηG(k)
where η is a constant representing the learning step size, and G(k) is the negative gradient of E(W), i.e.:
G(k) = -∂E(W)/∂W, evaluated at W = W(k)
To accelerate the convergence of the back propagation algorithm, a momentum factor α is introduced, so that the weight-vector update rule W(k+1) = W(k) + ηG(k) is improved to:
W(k+1)=W(k)+ηG(k)+α·ΔW(k)
in the above formula:
ΔW(k)=W(k)-W(k-1)
It memorizes the direction in which the weight vector was modified at the previous step, so that the update W(k+1) = W(k) + ηG(k) + α·ΔW(k) behaves similarly to a conjugate gradient algorithm. The momentum factor α lies in the range 0 < α < 1, and its choice has an important regulating effect on the convergence rate of the network's learning.
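An illustrative sketch of the momentum-augmented update rule above; grad_E is an assumed function returning ∂E/∂W for the current weight vector, and eta and alpha are the learning step and momentum factor.

import numpy as np

def momentum_step(W, W_prev, grad_E, eta=0.01, alpha=0.9):
    """One iteration of W(k+1) = W(k) + eta*G(k) + alpha*DeltaW(k)."""
    G_k = -grad_E(W)              # G(k): negative gradient of E(W) at W(k)
    delta_W = W - W_prev          # DeltaW(k) = W(k) - W(k-1)
    W_next = W + eta * G_k + alpha * delta_W
    return W_next, W              # return W(k+1) and W(k) for the next call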
6. The variable-cycle engine multiple fault diagnoser based on the auto-associative neural network of claim 1, wherein: the correction of the network weights starts from a small number of neurons in the mapping, bottleneck and demapping layers, and the number of neurons is increased gradually. The weights and thresholds of the network are updated iteratively so that the mean square error of the whole network is minimized and the output values approach the input values as closely as possible.
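An illustrative sketch of the structure search implied by this claim: the neuron counts of the mapping, bottleneck and demapping layers are grown step by step, a candidate network is trained for each structure, and the structure with the smallest reconstruction mean square error is retained. train_aann is an assumed helper that builds and trains one network and returns its validation MSE; the demapping layer being sized like the mapping layer is also an assumption.

def search_structure(train_aann, X, max_map=20, max_bottleneck=8):
    """Return the (mapping, bottleneck, demapping) sizes with the lowest MSE on data X."""
    best_sizes, best_mse = None, float("inf")
    for n_map in range(2, max_map + 1):
        for n_bot in range(1, min(n_map, max_bottleneck) + 1):
            mse = train_aann(X, n_map, n_bot, n_map)   # demapping layer sized like mapping layer
            if mse < best_mse:
                best_sizes, best_mse = (n_map, n_bot, n_map), mse
    return best_sizes, best_mse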
7. The variable-cycle engine multiple fault diagnoser based on the auto-associative neural network of claim 1, wherein: the data used to train the networks come from a Matlab simulation model, and all input and output data are normalized by dispersion (min-max) normalization so that the training data are kept between -1 and 1.
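A short sketch of the dispersion (min-max) normalization described above, mapping each data channel into [-1, 1]; the paired denormalization helper is an assumption added for completeness.

import numpy as np

def normalize(X):
    """Scale each column of X to [-1, 1] (assumes x_max > x_min for every channel)."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    Xn = 2.0 * (X - x_min) / (x_max - x_min) - 1.0
    return Xn, x_min, x_max

def denormalize(Xn, x_min, x_max):
    """Invert the normalization to recover physical units."""
    return (Xn + 1.0) / 2.0 * (x_max - x_min) + x_min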
8. The variable-cycle engine multiple fault diagnoser based on the auto-associative neural network of claim 1, wherein: after the network structure has been determined, the auto-associative neural network is retrained if its ability to correct deviations is insufficient. The weights of the network are corrected using data from the faulty sensor as input until the output data equal the corresponding correct values; that is, retraining makes the output an error-free, noise-free value even when the input contains deviations and offsets.
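A hedged sketch of this retraining idea: pairs of biased/offset sensor data and the corresponding correct values are used so that the network learns to output the error-free value. aann_grad is an assumed helper, and the update reuses the momentum rule sketched under claim 5.

def retrain(W, X_faulty, X_correct, aann_grad, eta=0.01, alpha=0.9, epochs=100):
    """Adjust the weights so that faulty inputs are mapped to their correct values.

    aann_grad(W, x_in, x_target) is assumed to return dE/dW for
    E = 0.5 * ||output(W, x_in) - x_target||^2.
    """
    W_prev = W
    for _ in range(epochs):
        for x_in, x_target in zip(X_faulty, X_correct):
            G_k = -aann_grad(W, x_in, x_target)                    # negative gradient for this sample
            W, W_prev = W + eta * G_k + alpha * (W - W_prev), W    # momentum update
    return W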
9. The variable-cycle engine multiple fault diagnoser based on the auto-associative neural network of claim 1, wherein: the engine faults comprise sensor faults and component faults. The sensors comprise temperature and pressure sensors at the inlet outlet, the fan outlet, the compressor outlet, and behind the high-pressure and low-pressure turbines, as well as fan speed and compressor speed sensors. Component faults comprise changes in the following 8 health parameters: the efficiency and flow capacity of the low-pressure compressor, the efficiency and flow capacity of the high-pressure compressor, the efficiency and flow capacity of the low-pressure turbine, and the efficiency and flow capacity of the high-pressure turbine.
CN202011600688.9A 2020-12-30 2020-12-30 Variable-cycle engine multiple fault diagnosis device based on self-association neural network Pending CN112818461A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011600688.9A CN112818461A (en) 2020-12-30 2020-12-30 Variable-cycle engine multiple fault diagnosis device based on self-association neural network

Publications (1)

Publication Number Publication Date
CN112818461A true CN112818461A (en) 2021-05-18

Family

ID=75855259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011600688.9A Pending CN112818461A (en) 2020-12-30 2020-12-30 Variable-cycle engine multiple fault diagnosis device based on self-association neural network

Country Status (1)

Country Link
CN (1) CN112818461A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6804600B1 (en) * 2003-09-05 2004-10-12 Honeywell International, Inc. Sensor error detection and compensation system and method
CN101561442A (en) * 2009-05-11 2009-10-21 江南大学 Restructured Pichia pastoris in expression period two-phase on line fault diagnostic method based on artificial neural network
CN106441888A (en) * 2016-09-07 2017-02-22 广西大学 High-speed train rolling bearing fault diagnosis method
CN108196444A (en) * 2017-12-08 2018-06-22 重庆邮电大学 Based on the control of the variable pitch wind energy conversion system of feedback linearization sliding formwork and SCG and discrimination method
CN111474919A (en) * 2020-04-27 2020-07-31 西北工业大学 Aeroengine control system sensor fault diagnosis method based on AANN network group

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HUIHUI LI et al.: "Multiple Fault Diagnosis of Aeroengine Control System Based on Autoassociative Neural Network", 《2020 11TH INTERNATIONAL CONFERENCE ON MECHANICAL AND AEROSPACE ENGINEERING (ICMAE)》 *
吕景秀 et al.: "基于模糊神经网络的异步电动机故障诊断研究" [Research on fault diagnosis of asynchronous motors based on fuzzy neural networks], 《电机技术》 *
袁曾燕 et al.: "神经网络在悬索桥故障诊断中的应用" [Application of neural networks in fault diagnosis of suspension bridges], 《江苏电器》 *

Similar Documents

Publication Publication Date Title
CN110118128B (en) Fault diagnosis and fault-tolerant control method for micro gas turbine sensor
Volponi et al. The use of Kalman filter and neural network methodologies in gas turbine performance diagnostics: a comparative study
CN105868467A (en) Method for establishing dynamic and static aero-engine onboard model
Merrill Sensor failure detection for jet engines using analytical redundancy
Kobayashi et al. Evaluation of an enhanced bank of Kalman filters for in-flight aircraft engine sensor fault diagnostics
Kong Review on advanced health monitoring methods for aero gas turbines using model based methods and artificial intelligent methods
CN111880403A (en) Fault-tolerant two-degree-of-freedom [ mu ] controller for maximum thrust state of aircraft engine
Volponi Data fusion for enhanced aircraft engine prognostics and health management
CN116106021A (en) Precision improving method for digital twin of aeroengine performance
CN112906855A (en) Dynamic threshold variable cycle engine multiple fault diagnosis device
CN112749789A (en) Aero-engine multiple fault diagnosis device based on self-association neural network
CN113761803A (en) Method for training compensation model of gas turbine and use
Tayarani-Bathaie et al. Fault detection of gas turbine engines using dynamic neural networks
CN111382500B (en) Safety analysis and verification method for turbocharging system of aircraft engine
CN112818461A (en) Variable-cycle engine multiple fault diagnosis device based on self-association neural network
Sampath et al. Fault diagnosis of a two spool turbo-fan engine using transient data: A genetic algorithm approach
CN112801267A (en) Multiple fault diagnosis device for aircraft engine with dynamic threshold value
CN110985216A (en) Intelligent multivariable control method for aero-engine with online correction
Bin et al. An investigation of artificial neural network (ANN) in quantitative fault diagnosis for turbofan engine
Viassolo et al. Advanced estimation for aircraft engines
Cao et al. A two-layer multi-model gas path fault diagnosis method
CN113722989B (en) CPS-DP model-based aeroengine service life prediction method
KrishnaKumar et al. Jet engine performance estimation using intelligent system technologies
Zhu et al. Application of adaptive square root cubature Kalman filter in turbofan engine gas path performance monitoring
Bettocchi et al. Artificial Intelligence for the Diagnostics of Gas Turbines: Part I—Neural Network Approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20210518