CN110516394A - Aero-engine steady-state model modeling method based on deep neural network - Google Patents

Aero-engine steady-state model modeling method based on deep neural network

Info

Publication number
CN110516394A
CN110516394A
Authority
CN
China
Prior art keywords
neural network
layer
steady
state model
deep neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910823633.5A
Other languages
Chinese (zh)
Inventor
郑前钢
金崇文
陈浩颖
汪勇
房娟
项德威
胡忠志
张海波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201910823633.5A priority Critical patent/CN110516394A/en
Publication of CN110516394A publication Critical patent/CN110516394A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an aero-engine steady-state model modeling method based on a deep neural network. The aero-engine steady-state model is constructed with a layer-by-layer batch-normalized deep neural network, in which a batch normalization layer is inserted between adjacent hidden layers to standardize the output of the preceding hidden layer. By introducing batch normalization layers into the deep neural network, the number of network layers can be increased, which improves the fitting ability of the network and thereby the accuracy of the aero-engine steady-state model.

Description

Aero-engine steady-state model modeling method based on deep neural network
Technical Field
The invention relates to the technical field of control of aero-engines, in particular to a steady-state model modeling method for an aero-engine.
Background
An aero-engine is a multivariable, strongly nonlinear, and complex aerothermodynamic system, and its safe and stable operation places high demands on the engine control system. To control an aero-engine well, a mathematical model must first be established. Using a mathematical model in place of a real engine as the controlled object for simulation studies saves a large amount of expensive test expense and avoids the accidental loss-of-control incidents that may occur when a real engine is used to debug a control system. In addition, advanced aero-engine control technologies, such as model-based control, flight/propulsion system performance-seeking control, direct thrust control, life-extending control, emergency control, and performance recovery, all rely on a high-accuracy on-board real-time engine model.
There are many modeling methods for aero-engines. Currently popular approaches include the component-level model, the piecewise linearized model, the support vector machine (SVM), and the traditional neural network. The component-level model has the highest accuracy and is generally used as a simulation object, but its real-time performance is poor, making it difficult to use as an on-board model. The piecewise linearized model has good real-time performance, but because the engine is a strongly nonlinear object, the modeling error introduced by linearization is large. The real-time performance and modeling accuracy of SVMs and traditional neural networks lie between those of the component-level model and the linearized model. A traditional neural network easily falls into a local optimum, causing the model to overfit. An SVM has strong generalization ability but is difficult to apply to large training samples; since the engine is multivariable, operates in a complex environment, degrades over time, and is strongly nonlinear, the training data required to build an on-board model valid over the full flight envelope inevitably grow large, which limits the application of SVMs to aero-engine modeling.
Neural networks attract great interest because they can, in theory, fit arbitrary functions. A traditional neural network generally has three layers. Its fitting ability grows as the number of layers increases, but once the network becomes deep, the phenomena of gradient vanishing and gradient explosion appear. With the development of neural network technology over the last decade, especially after Hinton G. E. proposed deep learning with the deep belief network, neural networks have achieved breakthroughs in many key technologies and found wide engineering application, such as speech recognition, pattern recognition, target detection, and character recognition. However, deep neural networks have so far rarely been applied to steady-state modeling of aero-engines.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an aero-engine steady-state model modeling method based on a deep neural network, so that the accuracy of the aero-engine steady-state model can be effectively improved.
The invention specifically adopts the following technical scheme to solve the technical problems:
A deep neural network is used to construct the aero-engine steady-state model. The deep neural network is a layer-by-layer batch-normalized deep neural network: a batch normalization layer is added between adjacent hidden layers to standardize the output of the preceding hidden layer.
Preferably, the normalization process is specifically as follows:
$$\hat{x}_i^l = \frac{x_i^l - \mu_B}{\sqrt{\sigma_B^2 + \varepsilon}}, \qquad y_i^l = \gamma\,\hat{x}_i^l + \beta$$
where $\hat{x}_i^l$ is the output after the normalization process, $\varepsilon$ is a small positive constant, $x_i^l$ is the neural network output before entering the batch normalization layer, $\mu_B$ and $\sigma_B^2$ are the mean and variance of the mini-batch of sample data, respectively, and $\gamma$ and $\beta$ are two learnable parameters.
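For illustration, a minimal NumPy sketch of this batch-normalization transform (the function name and array layout are assumptions made for the example, not part of the patent):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-8):
    """Batch-normalize the pre-activation outputs of one hidden layer.

    x     : (batch_size, n_units) outputs entering the BN layer
    gamma : (n_units,) learnable scale
    beta  : (n_units,) learnable shift
    eps   : small positive constant that avoids division by zero
    """
    mu = x.mean(axis=0)                    # mini-batch mean, mu_B
    var = x.var(axis=0)                    # mini-batch variance, sigma_B^2
    x_hat = (x - mu) / np.sqrt(var + eps)  # standardized output
    return gamma * x_hat + beta            # scale and shift with gamma, beta
```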
Further preferably, the modeling method comprises the following steps (a code skeleton of this training loop is sketched after the list):
Step 1, acquiring training data for the aero-engine steady-state model;
Step 2, determining the structure of the layer-by-layer batch-normalized deep neural network;
Step 3, performing forward computation of the layer-by-layer batch-normalized deep neural network to obtain the loss function value;
Step 4, calculating the gradients of the layer-by-layer batch-normalized deep neural network with a back-propagation algorithm and updating the weights;
Step 5, judging whether the layer-by-layer batch-normalized deep neural network has converged; if so, outputting the steady-state model, otherwise continuing to iterate and returning to Step 3.
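A minimal end-to-end sketch of Steps 1-5, written here in PyTorch with random placeholder data standing in for the engine training data; the layer sizes follow the BN-MGD-DNN structure reported later in the description, while the optimizer settings, epoch count, and convergence tolerance are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Step 1: placeholder training data (in practice: test-run data or component-level model outputs).
X = torch.randn(10000, 6)   # inputs:  H, Ma, Wfb, A8, alpha_f, alpha_c
Y = torch.randn(10000, 7)   # outputs: Sfc, Fin, Nf, Nc, Smf, Smc, T4

# Step 2: layer-by-layer batch-normalized deep neural network, structure [6, 10, 15, 15, 10, 7].
sizes = [6, 10, 15, 15, 10, 7]
layers = []
for i in range(len(sizes) - 2):                  # hidden layers: Linear -> BatchNorm -> Sigmoid
    layers += [nn.Linear(sizes[i], sizes[i + 1]),
               nn.BatchNorm1d(sizes[i + 1]),
               nn.Sigmoid()]
layers.append(nn.Linear(sizes[-2], sizes[-1]))   # linear output layer
net = nn.Sequential(*layers)

loss_fn = nn.MSELoss()
opt = torch.optim.SGD(net.parameters(), lr=1e-2, weight_decay=1e-6)

prev = float("inf")
for epoch in range(1000):
    opt.zero_grad()
    loss = loss_fn(net(X), Y)              # Step 3: forward computation, loss value
    loss.backward()                        # Step 4: back-propagation of gradients
    opt.step()                             #         weight update
    if abs(prev - loss.item()) < 1e-6:     # Step 5: convergence check
        break
    prev = loss.item()
```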
Preferably, the training data of the aero-engine steady-state model are obtained through engine test-run experiments and/or an engine nonlinear component-level model.
Preferably, the aero-engine steady-state model takes the flight altitude, Mach number, fuel flow, nozzle throat area, fan guide vane angle, and compressor guide vane angle as model inputs, and takes the engine specific fuel consumption, installed thrust, fan rotor speed, compressor rotor speed, fan surge margin, compressor surge margin, and high-pressure turbine inlet temperature as model outputs.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
the method utilizes the deep neural network to carry out modeling on the steady-state model of the aircraft engine, and increases the number of neural network layers by introducing the batch normalization layer in the deep neural network, thereby improving the fitting capability of the network and further improving the precision of the steady-state model of the aircraft engine.
Drawings
FIG. 1 is a schematic diagram of a five-layer neural network structure;
FIG. 2 is a schematic structural diagram of a layer-by-layer batch normalization deep neural network;
FIG. 3 is a data distribution diagram;
FIG. 4 is a graph of the Sigmoid function;
FIG. 5 is a schematic diagram of the back-propagation principle;
FIG. 6 is a graph of the relative training error of the deep neural network;
FIG. 7 is a graph of the relative training error of the three-layer BP neural network;
FIG. 8 is a graph of the relative test error of the deep neural network;
FIG. 9 is a graph of the relative test error of the three-layer BP neural network.
Detailed Description
The invention addresses the difficulty of further improving the accuracy of traditional steady-state modeling methods for aero-engines and provides an aero-engine steady-state model modeling method based on a deep neural network.
The invention provides an aero-engine steady-state model modeling method based on a deep neural network, which mainly comprises the following steps:
Step 1, acquiring training data for the aero-engine steady-state model;
Step 2, determining the structure of the layer-by-layer batch-normalized deep neural network;
Step 3, performing forward computation of the layer-by-layer batch-normalized deep neural network to obtain the loss function value;
Step 4, calculating the gradients of the layer-by-layer batch-normalized deep neural network with a back-propagation algorithm and updating the weights;
Step 5, judging whether the layer-by-layer batch-normalized deep neural network has converged; if so, outputting the steady-state model, otherwise continuing to iterate and returning to Step 3.
The steady-state data of the engine are the engine parameters recorded when the engine runs steadily. These data can be obtained from engine test-run experiments and/or an engine nonlinear component-level model; because test-run experiments are expensive, the steady-state data are at present generally obtained from the nonlinear component-level model.
Taking a five-layer neural network as an example, the basic structure is shown in FIG. 1, where $W_i$ and $b_i$ ($i = 1, 2, 3, 4$) are the weights and biases and $J$ is the loss function. Computing the partial derivatives of $J$ with respect to $w_1$ and $b_1$ by the chain rule yields a product of factors of the form $w_i\,\sigma'(h_i)$ over the subsequent layers.
Because $\sigma'(h_i)$ is always less than 0.25, when $w_i \le 1$ each factor satisfies $w_i\,\sigma'(h_i) \le 0.25$, so the more layers there are, the more the gradient decays exponentially; this problem is called gradient vanishing. Likewise, if $w_i \ge 100$ and $\sigma'(h_i) = 0.1$, then $w_i\,\sigma'(h_i) \ge 10$, and as the number of layers increases the gradient grows exponentially; this problem is called gradient explosion.
To overcome the gradient vanishing problem, the invention adopts layer-by-layer batch normalization (BN, batch normalization), adding a BN layer between every two hidden layers, which effectively avoids both gradient vanishing and gradient explosion; the network structure is shown in FIG. 2.
When the weights of the neural network are initialized, they typically follow a Gaussian distribution with zero mean and unit variance; at the same time the input data are normalized and the output data are normalized or standardized. After the layer mappings and training, however, the data distribution of each layer changes and differs greatly from layer to layer, so the weight distributions of the layers also differ greatly. Since the same learning rate is used for every layer during training, this greatly slows down the convergence of the network. One remedy is to whiten the data: FIG. 3a shows the distribution of the training data, which deviates from a Gaussian distribution; after subtracting the mean, decorrelating, and similar operations, the data shown in FIG. 3b are obtained, which follow a Gaussian distribution and speed up the learning of the neural network.
There are many kinds of whitening operations, PCA whitening being a common one; it makes the data zero-mean, unit-variance, and weakly correlated. Whitening, however, is not practical here: it requires computing a covariance matrix and its inverse, which is computationally expensive, and the whitening operation is not always differentiable during back-propagation. Batch normalization is therefore adopted instead, normalizing every hidden-layer node after the weight multiplication and before the activation function. Let $x_{i,j}^l$ denote the forward-propagation output of the $i$-th node of layer $l$ for training sample $j$, where $j \in \chi_k$ and $\chi_k$ denotes the $k$-th mini-batch of training data; then
$$\hat{x}_{i,j}^l = \frac{x_{i,j}^l - \mu_B}{\sqrt{\sigma_B^2 + \varepsilon}}$$
where $\hat{x}_{i,j}^l$ is the output after normalization, $\varepsilon$ is a small positive constant, $x_{i,j}^l$ is the neural network output before entering the BN layer, and $\mu_B$ and $\sigma_B^2$ are the mini-batch mean and variance, respectively, computed as
$$\mu_B = \frac{1}{m}\sum_{j \in \chi_k} x_{i,j}^l, \qquad \sigma_B^2 = \frac{1}{m}\sum_{j \in \chi_k}\left(x_{i,j}^l - \mu_B\right)^2$$
where $m$ is the mini-batch size.
$$h^l = \sigma\!\left(W^{l-1} h^{l-1} + b^{l-1}\right) \qquad (4)$$
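A NumPy sketch of this forward pass, applying the affine map, then batch normalization, then the activation at every hidden layer; the weight initialization and the random input batch are illustrative assumptions, while the layer sizes are the BN-MGD-DNN structure used later in the description:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bn_dnn_forward(x, weights, biases, gammas, betas, eps=1e-8):
    """Forward pass of a layer-by-layer batch-normalized DNN.

    x              : (batch_size, n_in) input mini-batch
    weights/biases : per-layer affine parameters
    gammas/betas   : per-hidden-layer BN scale and shift parameters
    """
    h = x
    for l, (W, b) in enumerate(zip(weights, biases)):
        z = h @ W + b                              # affine map W h + b
        if l < len(weights) - 1:                   # hidden layers: BN, then activation
            mu, var = z.mean(axis=0), z.var(axis=0)
            z_hat = (z - mu) / np.sqrt(var + eps)  # batch normalization
            h = sigmoid(gammas[l] * z_hat + betas[l])
        else:
            h = z                                  # linear output layer
    return h

sizes = [6, 10, 15, 15, 10, 7]
rng = np.random.default_rng(0)
weights = [rng.normal(0.0, 1.0, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases  = [np.zeros(n) for n in sizes[1:]]
gammas  = [np.ones(n)  for n in sizes[1:-1]]
betas   = [np.zeros(n) for n in sizes[1:-1]]
print(bn_dnn_forward(rng.normal(size=(32, 6)), weights, biases, gammas, betas).shape)  # (32, 7)
```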
However, if only the standardization operation is performed, the expressive power of the network is reduced. As shown in FIG. 4, when the activation function is the sigmoid, restricting the data to zero mean and unit variance is equivalent to using only the nearly linear part of the activation function; the nonlinear parts on both sides are rarely reached, which clearly reduces the expressive power of the network.
For this reason, the invention adds two learnable parameters $\gamma$ and $\beta$ to preserve the expressive power of the network:
$$y_{i,j}^l = \gamma\,\hat{x}_{i,j}^l + \beta$$
The $\mu_B$ and $\sigma_B^2$ in the formulas above are computed over a mini-batch (min-batch), whereas in theory they should be the mean and variance of the entire data set.
The deep neural network is also trained with the back-propagation algorithm. The network parameters to be updated are $W$, $b$, $\gamma$, and $\beta$; gradient descent is used, updating each parameter $\theta$ as $\theta \leftarrow \theta - \eta\,\partial J/\partial\theta$, where $\eta$ is the learning rate.
by using back propagation algorithmsGradient, principle as shown in FIG. 5, assumingIs composed of
Suppose deltalIs composed of
Wherein l is nnet,nnet-2,…,2
For l ═ nnet-1,nnet2, …,2Is provided with
The gradient of the network parameters is thus obtained as:
Except for the BN layers, the derivation formulas for the other layers are the same as those of MGD-NN (the mini-batch-gradient-descent-trained network defined below).
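For reference, a NumPy sketch of the standard batch-normalization backward pass; this follows the commonly used BN derivation and is not claimed to match the patent's formulas term for term:

```python
import numpy as np

def batch_norm_backward(dout, x, gamma, eps=1e-8):
    """Gradients through y = gamma * x_hat + beta for one BN layer.

    dout : (m, n) gradient of the loss with respect to the BN output y
    x    : (m, n) BN-layer input (pre-normalization activations)
    Returns the gradients with respect to x, gamma, and beta.
    """
    m = x.shape[0]
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)

    dgamma = np.sum(dout * x_hat, axis=0)                 # dJ/dgamma
    dbeta = np.sum(dout, axis=0)                          # dJ/dbeta
    dx_hat = dout * gamma                                 # back through the scale
    dvar = np.sum(dx_hat * (x - mu) * -0.5 * (var + eps) ** -1.5, axis=0)
    dmu = (np.sum(-dx_hat / np.sqrt(var + eps), axis=0)
           + dvar * np.mean(-2.0 * (x - mu), axis=0))
    dx = (dx_hat / np.sqrt(var + eps)
          + dvar * 2.0 * (x - mu) / m
          + dmu / m)
    return dx, dgamma, dbeta
```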
To verify the effectiveness and advancement of the proposed aero-engine steady-state modeling method, a component-level model of a small-bypass-ratio aero-engine is taken as the simulation object. An on-board engine model for performance optimization is built with the proposed method and compared with MGD-NN, which trains the network with the mini-batch gradient descent (MGD) method and thus overcomes the inability of the traditional neural network to handle large sample data. So that the deep neural network can also be trained on large samples, the MGD method is likewise adopted here, and the proposed deep neural network steady-state model is hereinafter called BN-MGD-DNN. The network structure of BN-MGD-DNN, obtained by cross-validation and screening, is [6, 10, 15, 15, 10, 7], and the network structure of MGD-NN is [6, 40, 7]; the mini-batch size in the MGD algorithm is 3000 and the regularization constant is 10⁻⁶.
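A sketch of one training epoch of mini-batch gradient descent (MGD) with the batch size 3000 and regularization constant 10⁻⁶ stated above; grad_fn is an assumed placeholder for the BN-DNN back-propagation step:

```python
import numpy as np

def mgd_epoch(X, Y, params, grad_fn, lr=1e-3, batch_size=3000, reg=1e-6):
    """One epoch of mini-batch gradient descent with L2 regularization.

    params  : list of NumPy parameter arrays (W, b, gamma, beta) updated in place
    grad_fn : callable(params, X_batch, Y_batch) -> gradients with the same shapes as params
    """
    idx = np.random.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        grads = grad_fn(params, X[batch], Y[batch])
        for p, g in zip(params, grads):
            p -= lr * (g + reg * p)      # gradient step with L2 penalty
    return params
```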
When the aircraft is cruising, the flight altitude $H$ and Mach number $Ma$ change slowly. Besides the fuel flow $W_{fb}$ and the nozzle throat area $A_8$, the fan and compressor guide vane angles also have a large effect on the fuel consumption. Therefore $H$, $Ma$, $W_{fb}$, $A_8$, the fan guide vane angle $\alpha_f$, and the compressor guide vane angle $\alpha_c$ are taken as the model inputs, and the engine specific fuel consumption $S_{fc}$, installed thrust $F_{in}$, fan rotor speed $N_f$, compressor rotor speed $N_c$, fan surge margin $S_{mf}$, compressor surge margin $S_{mc}$, and high-pressure turbine inlet temperature $T_4$ are taken as the model outputs. The prediction model of the engine parameters is constructed as:
$$y = f_{BN\text{-}MGD\text{-}DNN}(x) \qquad (22)$$
where $x = [H, Ma, W_{fb}, A_8, \alpha_f, \alpha_c]$ and $y = [S_{fc}, F_{in}, N_f, N_c, S_{mf}, S_{mc}, T_4]$.
Because a neural network behaves like a nonlinear interpolator, its accuracy is high when interpolating and low when extrapolating, so the selected training samples should contain the maximum and minimum values of the input parameters as far as possible; in addition, to avoid overfitting, as many training samples as possible are used. For subsonic and supersonic cruise, $H$ ranges from 9 to 13 km and $Ma$ from 0.7 to 1.5; the range of $W_{fb}$ varies with PLA and $Ma$; $A_8$ ranges from the design-point nozzle throat area $A_{8,ds}$ to $1.3\,A_{8,ds}$; and $\alpha_f$ and $\alpha_c$ range from -3° to 3°. A total of 3,726,498 training samples and 7,536 test samples are selected.
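A sketch of how such a training set can be assembled over the stated cruise ranges; the grid resolutions, the fuel-flow bounds, and component_level_model() are illustrative placeholders rather than the patent's actual sampling scheme:

```python
import numpy as np
from itertools import product

A8_ds = 1.0                                # design-point nozzle throat area (normalized)
grid = {
    "H":   np.linspace(9.0, 13.0, 5),      # flight altitude, km
    "Ma":  np.linspace(0.7, 1.5, 5),       # Mach number
    "Wfb": np.linspace(0.3, 1.0, 5),       # fuel flow (placeholder bounds; varies with PLA and Ma)
    "A8":  np.linspace(A8_ds, 1.3 * A8_ds, 4),
    "af":  np.linspace(-3.0, 3.0, 4),      # fan guide vane angle, deg
    "ac":  np.linspace(-3.0, 3.0, 4),      # compressor guide vane angle, deg
}

def build_training_set(component_level_model):
    """Evaluate the engine model on a grid that includes the range endpoints."""
    X, Y = [], []
    for x in product(*grid.values()):
        X.append(x)
        Y.append(component_level_model(*x))   # returns [Sfc, Fin, Nf, Nc, Smf, Smc, T4]
    return np.array(X), np.array(Y)
```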
FIGS. 6 and 7 show the relative training errors of BN-MGD-DNN and MGD-NN, respectively. The training error of BN-MGD-DNN is essentially below 3%, which meets the accuracy requirement, and its training accuracy is clearly higher than that of MGD-NN; for $S_{fc}$, $N_f$, $S_{mf}$, and $S_{mc}$ in particular, the training accuracy is roughly twice that of MGD-NN. FIGS. 8 and 9 show the relative test errors of BN-MGD-DNN and MGD-NN. The test error of BN-MGD-DNN is within 2% for $S_{mf}$ and $S_{mc}$ and within 1% for the other outputs, meeting the accuracy requirement. It can also be seen from FIGS. 8 and 9 that the accuracy of the deep neural network is greatly improved over that of the traditional BP neural network, especially for the fan and compressor rotor speeds and surge margins, which shows that BN-MGD-DNN has stronger generalization ability.
Table 1 shows the average relative test error and average relative training error of BN-MGD-DNN and MGD-NN; BN-MGD-DNN has higher training and test accuracy than MGD-NN. For the proposed BN-MGD-DNN modeling method, the average relative training errors of $S_{fc}$, $N_f$, $N_c$, $F_{in}$, $T_4$, $S_{mf}$, and $S_{mc}$ are reduced by factors of 1.4, 2.17, 2.0, 1.3, 1.13, 2.4, and 2.8 compared with MGD-NN, and the average relative test errors, which are particularly relevant to the generalization performance of the model, are reduced by factors of 1.75, 2.0, 2.3, 1.3, 2.3, and 3.3, respectively.
The data storage, computational complexity, and average test time of MGD-NN and BN-MGD-DNN are shown in Table 2. The table shows that the algorithm complexity, data storage, and average test time of both MGD-NN and BN-MGD-DNN are low and meet on-board requirements.
The data storage of MGD-NN is 567 entries (520 weights (6 × 40 + 40 × 7) + 47 biases (40 + 7)); the data storage of BN-MGD-DNN is 940 entries.
The computational complexity of MGD-NN is 614 operations (520 multiplications (6 × 40 + 40 × 7) + 47 additions (40 + 7) + 47 activation-function evaluations (40 + 7)).
The computational complexity of BN-MGD-DNN is 940 operations (712 multiplications (6 × 10 + 10 × 15 + 15 × 15 + 15 × 10 + 10 × 7 + 10 + 15 + 15 + 10 + 7) + 57 divisions (10 + 15 + 15 + 10 + 7) + 57 additions (10 + 15 + 15 + 10 + 7) + 57 subtractions (10 + 15 + 15 + 10 + 7) + 57 activation-function evaluations (10 + 15 + 15 + 10 + 7)).
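The storage counts quoted above can be reproduced with a few lines of arithmetic; the grouping of the BN scale/shift parameters and stored batch statistics below is inferred from the totals given in the patent:

```python
def storage(sizes, with_bn):
    """Count stored parameters for a fully connected network with the given layer sizes."""
    weights = sum(m * n for m, n in zip(sizes[:-1], sizes[1:]))  # weight matrices
    units = sum(sizes[1:])                                       # one bias per non-input unit
    total = weights + units
    if with_bn:
        total += 2 * units   # gamma and beta
        total += 2 * units   # stored mean and variance
    return total

print(storage([6, 40, 7], with_bn=False))             # 567  (MGD-NN)
print(storage([6, 10, 15, 15, 10, 7], with_bn=True))  # 940  (BN-MGD-DNN)
```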
Both programs were executed in the same environment: operating system Windows 7 Ultimate with Service Pack 1 (x64); CPU Intel(R) Core(TM) i5-4590H at 3.30 GHz; 8 GB of RAM; software MATLAB 2016a (the simulation environment for the performance-optimization mode is the same and is not repeated below). The test times of MGD-NN and BN-MGD-DNN are 0.067 ms and 0.223 ms, respectively.
TABLE 1 Average relative test error and average relative training error
TABLE 2 Comparison of the MGD-NN and BN-MGD-DNN algorithms

Claims (5)

1. An aero-engine steady-state model modeling method based on a deep neural network, in which an aero-engine steady-state model is constructed using a deep neural network, characterized in that the deep neural network is a layer-by-layer batch-normalized deep neural network, and a batch normalization layer is added between adjacent hidden layers to standardize the output of the preceding hidden layer.
2. The aero-engine steady-state model modeling method according to claim 1, characterized in that the normalization process is specifically:
$$\hat{x}_i^l = \frac{x_i^l - \mu_B}{\sqrt{\sigma_B^2 + \varepsilon}}, \qquad y_i^l = \gamma\,\hat{x}_i^l + \beta$$
where $\hat{x}_i^l$ is the output after the normalization process, $\varepsilon$ is a small positive constant, $x_i^l$ is the neural network output before entering the batch normalization layer, $\mu_B$ and $\sigma_B^2$ are the mean and variance of the mini-batch of sample data, respectively, and $\gamma$ and $\beta$ are two learnable parameters.
3. The aero-engine steady-state model modeling method according to claim 2, characterized in that the modeling method comprises the following steps:
Step 1, acquiring training data for the aero-engine steady-state model;
Step 2, determining the structure of the layer-by-layer batch-normalized deep neural network;
Step 3, performing forward computation of the layer-by-layer batch-normalized deep neural network to obtain the loss function value;
Step 4, calculating the gradients of the layer-by-layer batch-normalized deep neural network with a back-propagation algorithm and updating the weights;
Step 5, judging whether the layer-by-layer batch-normalized deep neural network has converged; if so, outputting the steady-state model, otherwise continuing to iterate and returning to Step 3.
4. The aero-engine steady-state model modeling method according to claim 3, characterized in that the training data of the aero-engine steady-state model are obtained through engine test-run experiments and/or an engine nonlinear component-level model.
5. The modeling method of the steady-state model of the aircraft engine according to any one of claims 1 to 4, wherein the steady-state model of the aircraft engine takes the flight altitude, the Mach number, the fuel flow, the area of the throat of the tail nozzle, the guide vane angle of the fan and the guide vane angle of the compressor as model input quantities, and takes the fuel consumption rate of the engine, the installation thrust, the rotating speed of the fan rotor, the rotating speed of the compressor rotor, the surge margin of the fan, the surge margin of the compressor and the inlet temperature of the high-pressure turbine as model output quantities.
CN201910823633.5A 2019-09-02 2019-09-02 Aero-engine steady-state model modeling method based on deep neural network Withdrawn CN110516394A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910823633.5A CN110516394A (en) 2019-09-02 2019-09-02 Aero-engine steady-state model modeling method based on deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910823633.5A CN110516394A (en) 2019-09-02 2019-09-02 Aero-engine steady-state model modeling method based on deep neural network

Publications (1)

Publication Number Publication Date
CN110516394A true CN110516394A (en) 2019-11-29

Family

ID=68630337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910823633.5A Withdrawn CN110516394A (en) 2019-09-02 2019-09-02 Aero-engine steady-state model modeling method based on deep neural network

Country Status (1)

Country Link
CN (1) CN110516394A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111486009A (en) * 2020-04-23 2020-08-04 南京航空航天大学 Aero-engine control method and device based on deep reinforcement learning
CN111914461A (en) * 2020-09-08 2020-11-10 北京航空航天大学 Intelligent assessment method for one-dimensional cold efficiency of turbine guide vane
CN113282004A (en) * 2021-05-20 2021-08-20 南京航空航天大学 Neural network-based aeroengine linear variable parameter model establishing method
CN113485117A (en) * 2021-07-28 2021-10-08 沈阳航空航天大学 Multivariable reinforcement learning control method for aircraft engine based on input and output information
CN113741170A (en) * 2021-08-17 2021-12-03 南京航空航天大学 Aero-engine direct thrust inverse control method based on deep neural network
CN113804446A (en) * 2020-06-11 2021-12-17 卓品智能科技无锡有限公司 Diesel engine performance prediction method based on convolutional neural network

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598220A (en) * 2018-11-26 2019-04-09 山东大学 A kind of demographic method based on the polynary multiple dimensioned convolution of input

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598220A (en) * 2018-11-26 2019-04-09 山东大学 A kind of demographic method based on the polynary multiple dimensioned convolution of input

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHENG, QIANGANG et al.: "Aero-Engine On-Board Model Based on Batch Normalize Deep Neural Network", IEEE ACCESS *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111486009A (en) * 2020-04-23 2020-08-04 南京航空航天大学 Aero-engine control method and device based on deep reinforcement learning
CN113804446A (en) * 2020-06-11 2021-12-17 卓品智能科技无锡有限公司 Diesel engine performance prediction method based on convolutional neural network
CN111914461A (en) * 2020-09-08 2020-11-10 北京航空航天大学 Intelligent assessment method for one-dimensional cold efficiency of turbine guide vane
CN113282004A (en) * 2021-05-20 2021-08-20 南京航空航天大学 Neural network-based aeroengine linear variable parameter model establishing method
CN113282004B (en) * 2021-05-20 2022-06-10 南京航空航天大学 Neural network-based aeroengine linear variable parameter model establishing method
CN113485117A (en) * 2021-07-28 2021-10-08 沈阳航空航天大学 Multivariable reinforcement learning control method for aircraft engine based on input and output information
CN113485117B (en) * 2021-07-28 2024-03-15 沈阳航空航天大学 Multi-variable reinforcement learning control method for aeroengine based on input and output information
CN113741170A (en) * 2021-08-17 2021-12-03 南京航空航天大学 Aero-engine direct thrust inverse control method based on deep neural network

Similar Documents

Publication Publication Date Title
CN110516394A (en) Aero-engine steady-state model modeling method based on deep neural network
EP3121742A1 (en) Methods and apparatus to model thermal mixing for prediction of multi-stream flows
CN108256173B (en) Gas circuit fault diagnosis method and system for dynamic process of aircraft engine
Xu et al. Improved hybrid modeling method with input and output self-tuning for gas turbine engine
CN110502840B (en) Online prediction method for gas circuit parameters of aero-engine
WO2023115598A1 (en) Planar cascade steady flow prediction method based on generative adversarial network
CN111042928B (en) Variable cycle engine intelligent control method based on dynamic neural network
CN109031951B (en) Method for establishing state variable model of aero-engine on line based on accurate partial derivative
Pang et al. A hybrid onboard adaptive model for aero-engine parameter prediction
CN103489032A (en) Aero-engine gas path component health diagnosis method based on particle filtering
Junying et al. Compressor geometric uncertainty quantification under conditions from near choke to near stall
CN113221237B (en) Large attack angle flutter analysis method based on reduced order modeling
Yanhui et al. Performance improvement of optimization solutions by POD-based data mining
CN111860791A (en) Aero-engine thrust estimation method and device based on similarity transformation
Robinson et al. Nacelle design for ultra-high bypass ratio engines with CFD based optimisation
Goulos et al. Design optimisation of separate-jet exhausts for the next generation of civil aero-engines
Teng et al. Generative adversarial surrogate modeling framework for aerospace engineering structural system reliability design
Xu et al. An adaptive on-board real-time model with residual online learning for gas turbine engines using adaptive memory online sequential extreme learning machine
Sanchez Moreno et al. Optimization of installed compact and robust nacelles using surrogate models
Kerestes et al. LES Modeling of High-Lift High-Work LPT Blades: Part II—Validation and Application
Huang et al. Gas path deterioration observation based on stochastic dynamics for reliability assessment of aeroengines
CN110985216A (en) Intelligent multivariable control method for aero-engine with online correction
Tsilifis et al. Inverse design under uncertainty using conditional normalizing flows
CN114996863A (en) Turbofan engine T-S fuzzy modeling method based on feature extraction
CN115114731A (en) Aircraft engine dynamic modeling method based on pseudo Jacobian matrix

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20191129

WW01 Invention patent application withdrawn after publication