CN110516394A - Steady-state modeling method of aero-engine based on deep neural network - Google Patents

Steady-state modeling method of aero-engine based on deep neural network

Info

Publication number
CN110516394A
CN110516394A
Authority
CN
China
Prior art keywords
neural network
layer
steady
deep neural
aero
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910823633.5A
Other languages
Chinese (zh)
Inventor
郑前钢
金崇文
陈浩颖
汪勇
房娟
项德威
胡忠志
张海波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201910823633.5A priority Critical patent/CN110516394A/en
Publication of CN110516394A publication Critical patent/CN110516394A/en
Withdrawn legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/044 - Recurrent networks, e.g. Hopfield networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an aero-engine steady-state model modeling method based on a deep neural network. The aero-engine steady-state model is constructed with a deep neural network that applies layer-by-layer batch normalization: a batch normalization layer is added between every pair of adjacent hidden layers to standardize the output of the preceding hidden layer. By introducing batch normalization layers, the number of network layers can be increased, which improves the fitting capability of the network and thus the accuracy of the aero-engine steady-state model.

Description

Steady-state modeling method of aero-engine based on deep neural network

Technical Field

The invention relates to the technical field of aero-engine control, and in particular to a method for modeling an aero-engine steady-state model.

Background

An aero-engine is a multivariable, strongly nonlinear, and complex aerothermodynamic system, and its safe and stable operation places high demands on the engine control system. To control it well, a mathematical model must first be established. Using a mathematical model instead of the real engine as the controlled object in simulation studies saves a large amount of expensive test expenditure and avoids the loss-of-control accidents that may occur when the control system is debugged on a real engine. In addition, advanced aero-engine control technologies, such as model-based control, flight/propulsion system performance-seeking control, direct thrust control, life-extending control, emergency control, and performance recovery, all rely on a high-accuracy on-board real-time engine model.

There are many aero-engine modeling methods; the most common are component-level models, piecewise linearized models, support vector machines, and traditional neural networks. The greatest advantage of a component-level model is its high accuracy, and it is generally used as the simulation object, but its real-time performance is poor, so it is difficult to use as an on-board model. A piecewise linearized model runs in real time, but because the engine is a strongly nonlinear object, the modeling error introduced by linearization is relatively large. Support vector machines and traditional neural networks lie between component-level and linearized models in both real-time performance and modeling accuracy: a traditional neural network easily falls into local optima and overfits, while a support vector machine generalizes well but is difficult to apply to large training sets. Since the engine is multivariable, operates in a complex environment, degrades over time, and is strongly nonlinear, an on-board model covering a large flight envelope inevitably requires more training data, which limits the application of support vector machines to aero-engine modeling.

Neural networks have attracted wide attention because, in theory, they can fit arbitrary functions. Traditional neural networks generally use three layers; as the number of layers increases, the fitting capability of the network grows, but deeper networks suffer from vanishing and exploding gradients. With the development of neural network technology over the past decade or so, especially after Hinton proposed deep learning with deep belief networks, neural networks have achieved breakthroughs in many key technologies and found large-scale engineering applications such as speech recognition, image recognition, object detection, and text recognition. However, deep neural networks have so far seen little application in aero-engine steady-state modeling.

Summary of the Invention

The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art and provide an aero-engine steady-state model modeling method based on a deep neural network that can effectively improve the accuracy of the aero-engine steady-state model.

To solve the above technical problem, the present invention adopts the following technical solution:

A method for modeling an aero-engine steady-state model based on a deep neural network, in which the aero-engine steady-state model is constructed with a deep neural network; the deep neural network is a layer-by-layer batch-normalized deep neural network in which a batch normalization layer is added between every pair of adjacent hidden layers to standardize the output of the preceding hidden layer.

Preferably, the standardization is as follows:

ĥ = γ (h − μ_B) / √(σ_B² + ε) + β

where ĥ is the output after standardization, ε is a small positive constant, h is the neural network output before entering the batch normalization layer, μ_B and σ_B² are the mean and variance of the sample data set, and γ and β are two learnable parameters.

Further preferably, the modeling method comprises the following steps:

Step 1: obtain training data for the aero-engine steady-state model;

Step 2: determine the structure of the layer-by-layer batch-normalized deep neural network;

Step 3: perform a forward pass through the layer-by-layer batch-normalized deep neural network and compute the loss function value;

Step 4: compute the network gradients with the backpropagation algorithm and update the weights;

Step 5: check whether the network has converged; if so, output the steady-state model, otherwise return to Step 3 and continue iterating.

Preferably, the training data of the aero-engine steady-state model are obtained from engine test-bench experiments and/or an engine nonlinear component-level model.

Preferably, the aero-engine steady-state model takes flight altitude, Mach number, fuel flow, nozzle throat area, fan guide vane angle, and compressor guide vane angle as model inputs, and takes engine specific fuel consumption, installed thrust, fan rotor speed, compressor rotor speed, fan surge margin, compressor surge margin, and high-pressure turbine inlet temperature as model outputs.

Compared with the prior art, the technical solution of the present invention has the following beneficial effects:

The present invention models the aero-engine steady state with a deep neural network and increases the number of network layers by introducing batch normalization layers, which improves the fitting capability of the network and thus the accuracy of the aero-engine steady-state model.

Description of the Drawings

Fig. 1 is a schematic diagram of a five-layer neural network structure;

Fig. 2 is a schematic diagram of the structure of the layer-by-layer batch-normalized deep neural network;

Fig. 3 shows the data distribution diagrams;

Fig. 4 is a plot of the sigmoid function;

Fig. 5 is a schematic diagram of the backpropagation principle;

Fig. 6 shows the relative training error of the deep neural network;

Fig. 7 shows the relative training error of the three-layer BP neural network;

Fig. 8 shows the relative test error of the deep neural network;

Fig. 9 shows the relative test error of the three-layer BP neural network.

Detailed Description of the Embodiments

The present invention addresses the difficulty of improving the accuracy of traditional aero-engine steady-state modeling methods by proposing a modeling method based on a deep neural network with layer-by-layer batch normalization. A batch normalization layer is added between every pair of adjacent hidden layers to standardize the output of the preceding hidden layer, so the network can use more layers and has stronger fitting capability, which in turn improves the modeling accuracy of the aero-engine steady-state model.

The proposed aero-engine steady-state modeling method based on a deep neural network mainly comprises the following steps:

Step 1: obtain training data for the aero-engine steady-state model;

Step 2: determine the structure of the layer-by-layer batch-normalized deep neural network;

Step 3: perform a forward pass through the layer-by-layer batch-normalized deep neural network and compute the loss function value;

Step 4: compute the network gradients with the backpropagation algorithm and update the weights;

Step 5: check whether the network has converged; if so, output the steady-state model, otherwise return to Step 3 and continue iterating.
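As a non-limiting illustration of Steps 3 to 5, the training loop can be organized as in the following Python/NumPy sketch; the `net` object, its `forward`/`backward` methods, and all other names are illustrative assumptions rather than part of the claimed method.

```python
import numpy as np

def train_bn_mgd_dnn(net, x_train, y_train, lr=1e-3, batch_size=3000,
                     max_epochs=200, tol=1e-4):
    """Steps 3-5: forward pass and loss, backpropagation and update, convergence check."""
    # `net` is assumed to expose forward(x) -> (y_pred, cache), backward(cache, dy) -> grads,
    # and a params list aligned with those grads (illustrative interface).
    n = x_train.shape[0]
    prev_loss = np.inf
    for epoch in range(max_epochs):
        order = np.random.permutation(n)
        epoch_loss = 0.0
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            xb, yb = x_train[idx], y_train[idx]
            y_pred, cache = net.forward(xb)              # Step 3: forward computation
            loss = 0.5 * np.mean((y_pred - yb) ** 2)     # Step 3: loss function value
            grads = net.backward(cache, (y_pred - yb) / len(idx))  # Step 4: backpropagation
            for p, g in zip(net.params, grads):          # Step 4: weight update
                p -= lr * g
            epoch_loss += loss * len(idx)
        epoch_loss /= n
        if abs(prev_loss - epoch_loss) < tol:            # Step 5: convergence check
            return net                                   # output the steady-state model
        prev_loss = epoch_loss
    return net
```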

The engine steady-state data are engine parameters recorded while the engine operates in steady state. They can be obtained from engine test-bench experiments and/or an engine nonlinear component-level model; because test-bench experiments are expensive, steady-state data are currently most often generated with the engine nonlinear component-level model.

Take a five-layer neural network as an example; its basic structure is shown in Fig. 1. In the figure, w_i and b_i (i = 1, 2, 3, 4) are the weights and biases, and J is the loss function. Taking the partial derivatives of J with respect to w_1 and b_1 by the chain rule yields a product of factors of the form w_i σ′(h_i) over the intermediate layers.

Because the derivative of the sigmoid satisfies σ′(h_i) ≤ 0.25, when w_i ≤ 1 we have w_i σ′(h_i) ≤ 0.25, so the gradient decreases exponentially as the number of layers grows; this problem is called gradient vanishing. Conversely, if for example w_i ≥ 100 and σ′(h_i) = 0.1, then w_i σ′(h_i) ≥ 10, and the gradient grows exponentially with the number of layers; this problem is called gradient explosion.
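A quick numerical illustration of this effect (the depth, the pre-activation value of 0.5, and the two weight values are assumed for illustration only):

```python
import numpy as np

def sigmoid_deriv(h):
    s = 1.0 / (1.0 + np.exp(-h))
    return s * (1.0 - s)              # never larger than 0.25

n_layers = 10
for w, label in [(1.0, "w_i <= 1   (vanishing)"), (100.0, "w_i >= 100 (exploding)")]:
    factor = 1.0
    for _ in range(n_layers):
        factor *= w * sigmoid_deriv(0.5)   # sigma'(0.5) is about 0.235
    print(f"{label}: accumulated gradient factor over {n_layers} layers ~ {factor:.2e}")
```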

To overcome the vanishing-gradient problem, the present invention adopts layer-by-layer batch normalization (BN, Batch Normalization), which adds a BN layer between every two hidden layers and effectively avoids both gradient vanishing and gradient explosion. The network structure is shown in Fig. 2.

When neural network weights are initialized, they are usually drawn from a Gaussian distribution with zero mean and unit variance; at the same time the input data are standardized and the output data are normalized or standardized. After mapping and training, however, the data distribution of each layer changes, and the changes differ greatly between layers, so the weight distributions of the layers also differ greatly. Because the learning rate is usually the same for every layer during training, this greatly reduces the convergence speed of the network. A whitening operation addresses this: Fig. 3a shows the distribution of the training data, which deviates from a Gaussian; after subtracting the mean, decorrelating, and similar operations, the data shown in Fig. 3b conform to a Gaussian distribution, which speeds up learning.
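The per-feature standardization described here (zero mean, unit variance; the decorrelation step of full whitening is omitted) can be sketched as follows in Python/NumPy; the sample values are illustrative.

```python
import numpy as np

def standardize(x, eps=1e-8):
    """Shift each input feature to zero mean and scale it to unit variance."""
    mu = x.mean(axis=0)
    sigma = x.std(axis=0)
    return (x - mu) / (sigma + eps), mu, sigma

# Example with two features of very different scales (altitude in m, Mach number)
x_raw = np.array([[9000.0, 0.7], [11000.0, 1.1], [13000.0, 1.5]])
x_std, mu, sigma = standardize(x_raw)
print(x_std.mean(axis=0))   # approximately [0, 0]
print(x_std.std(axis=0))    # approximately [1, 1]
```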

There are many kinds of whitening; PCA whitening, which makes the data zero-mean, unit-variance, and weakly correlated, is a common choice. Full whitening is not practical here, mainly because it requires computing covariance matrices and their inverses, which is computationally expensive, and because the whitening operation is not necessarily differentiable during backpropagation. Batch normalization is therefore used to standardize each hidden-layer node, applied after the weighted sum and before the activation function. Let x_j denote the output of the i-th node of layer l (before the activation) for training sample j, where j ∈ χ_k and χ_k is the k-th mini-batch of the training set; then

x̂_j = (x_j − μ_B) / √(σ_B² + ε)    (1)

where x̂_j is the output after normalization, ε is a small positive constant, x_j is the neural network output before entering the BN layer, and μ_B and σ_B² are the mini-batch mean and variance, computed as

μ_B = (1/m) Σ_{j∈χ_k} x_j    (2)

σ_B² = (1/m) Σ_{j∈χ_k} (x_j − μ_B)²    (3)

where m is the number of samples in mini-batch χ_k. The forward propagation through a layer itself is

h_l = σ(W_{l-1} h_{l-1} + b_{l-1})    (4)

However, if only the standardization step is applied, the expressive power of the network is reduced. As shown in Fig. 4, when the activation function is the sigmoid, restricting the data to zero mean and unit variance is equivalent to using only the approximately linear part of the activation function, while the nonlinear parts on both sides are rarely reached, which clearly reduces the expressive power of the network.

For this reason, the present invention further introduces the two learnable parameters γ and β to preserve the expressive power of the network:

y_j = γ x̂_j + β    (5)

The μ_B and σ_B² obtained above are computed over a mini-batch, whereas in theory they should be the mean and variance of the entire data set.
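A minimal sketch of the BN forward transform of Eqs. (1)-(3) and (5) in Python/NumPy follows; the function and variable names are illustrative, and ε = 1e-5 is an assumed value for the small constant.

```python
import numpy as np

def batch_norm_forward(h, gamma, beta, eps=1e-5):
    """Batch-normalize hidden-layer outputs h of shape (batch_size, n_nodes)."""
    mu_b = h.mean(axis=0)                        # Eq. (2): mini-batch mean
    var_b = h.var(axis=0)                        # Eq. (3): mini-batch variance
    h_hat = (h - mu_b) / np.sqrt(var_b + eps)    # Eq. (1): normalization
    out = gamma * h_hat + beta                   # Eq. (5): scale and shift
    cache = (h, h_hat, mu_b, var_b, gamma, eps)  # saved for the backward pass
    return out, cache
```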

The deep neural network is likewise trained with the backpropagation algorithm. The network parameters to be updated are W, b, γ, and β, and they are updated by gradient descent:

W ← W − η ∂J/∂W,  b ← b − η ∂J/∂b,  γ ← γ − η ∂J/∂γ,  β ← β − η ∂J/∂β,

where η is the learning rate.

The gradients are obtained with the backpropagation algorithm, whose principle is shown in Fig. 5. Let z_l = W_{l-1} h_{l-1} + b_{l-1} denote the pre-activation input of layer l and define the error term of layer l as δ_l = ∂J/∂z_l. For the output layer l = n_net, δ_l follows directly from the derivative of the loss function; for the remaining layers

δ_l = (W_l^T δ_{l+1}) ⊙ σ′(z_l),  l = n_net − 1, n_net − 2, …, 2,

where ⊙ denotes the element-wise product. The gradients of the network parameters are therefore

∂J/∂W_{l-1} = δ_l h_{l-1}^T,  ∂J/∂b_{l-1} = δ_l,

and the gradients with respect to the γ and β of each BN layer are obtained by propagating δ through the corresponding batch-normalization transform in the same way. Apart from the BN layers, the derivation for the other layers is the same as for MGD-NN.
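For illustration, the backward pass through one BN layer can be sketched as follows, using the standard batch-normalization gradient derivation under the same assumptions as the forward sketch above (names are illustrative, not the patent's notation):

```python
import numpy as np

def batch_norm_backward(dout, cache):
    """Gradients of the loss w.r.t. the BN input and the parameters gamma and beta."""
    h, h_hat, mu_b, var_b, gamma, eps = cache
    m = h.shape[0]
    dgamma = np.sum(dout * h_hat, axis=0)        # gradient for the scale parameter
    dbeta = np.sum(dout, axis=0)                 # gradient for the shift parameter
    inv_std = 1.0 / np.sqrt(var_b + eps)
    dh_hat = dout * gamma
    dvar = np.sum(dh_hat * (h - mu_b) * -0.5 * inv_std**3, axis=0)
    dmu = np.sum(-dh_hat * inv_std, axis=0) + dvar * np.mean(-2.0 * (h - mu_b), axis=0)
    dh = dh_hat * inv_std + dvar * 2.0 * (h - mu_b) / m + dmu / m
    return dh, dgamma, dbeta
```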

To verify the effectiveness and advancement of the proposed aero-engine steady-state modeling method, a component-level model of a low-bypass-ratio turbofan is taken as the simulation object, an on-board engine model for performance-seeking control is built with the proposed method, and it is compared with MGD-NN. MGD-NN trains the network with mini-batch gradient descent (MGD), which overcomes the difficulty of applying traditional neural networks to large sample sets; to make the deep neural network suitable for large-sample training as well, it is also trained with MGD. The deep neural network steady-state model proposed here is hereafter called BN-MGD-DNN. After cross-validation, the selected network structure of BN-MGD-DNN is [6, 10, 15, 15, 10, 7] and that of MGD-NN is [6, 40, 7]; the mini-batch size in the MGD algorithm is 3000 and the regularization constant is 10^-6.
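A sketch of how the cross-validated layer sizes could be laid out in Python/NumPy is given below; the Gaussian weight initialization and the placement of BN parameters only between hidden layers are assumptions, not details stated in the patent.

```python
import numpy as np

def build_net(layer_sizes, with_bn, seed=0):
    """Allocate W, b and (optionally) BN parameters for the given layer sizes."""
    rng = np.random.default_rng(seed)
    layers = []
    last = len(layer_sizes) - 2                  # index of the output weight layer
    for i, (n_in, n_out) in enumerate(zip(layer_sizes[:-1], layer_sizes[1:])):
        layer = {"W": rng.normal(0.0, np.sqrt(1.0 / n_in), (n_in, n_out)),
                 "b": np.zeros(n_out)}
        if with_bn and i < last:                 # BN sits between adjacent hidden layers
            layer["gamma"] = np.ones(n_out)
            layer["beta"] = np.zeros(n_out)
        layers.append(layer)
    return layers

bn_mgd_dnn = build_net((6, 10, 15, 15, 10, 7), with_bn=True)
mgd_nn = build_net((6, 40, 7), with_bn=False)
```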

During cruise, the flight altitude H and Mach number Ma change slowly. Besides the fuel flow W_fb and nozzle throat area A_8, the fan and compressor guide vane angles also have a strong influence on the specific fuel consumption. Therefore H, Ma, W_fb, A_8, the fan guide vane angle α_f, and the compressor guide vane angle α_c are taken as model inputs, and the specific fuel consumption S_fc, installed thrust F_in, fan rotor speed N_f, compressor rotor speed N_c, fan surge margin S_mf, compressor surge margin S_mc, and high-pressure turbine inlet temperature T_4 are taken as model outputs. The prediction model of the engine parameters is constructed as follows:

y = f_BN-MGD-DNN(x)    (22)

where x = [H, Ma, W_fb, A_8, α_f, α_c]^T and y = [S_fc, F_in, N_f, N_c, S_mf, S_mc, T_4]^T.

Because a neural network behaves like a nonlinear interpolator, accurate when interpolating and inaccurate when extrapolating, the training samples are chosen to cover the maximum and minimum values of the input parameters as far as possible, and, to avoid overfitting, as many training samples as possible are used. For subsonic and supersonic cruise, H ranges from 9 to 13 km, Ma from 0.7 to 1.5, W_fb varies with PLA and Ma, A_8 ranges from the design-point nozzle throat area A_8,ds to 1.3 A_8,ds, and α_f and α_c range from -3° to 3°. The training set contains 3,726,498 samples and the test set contains 7536 samples.
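One way the stated envelope could be sampled to generate candidate model inputs is sketched below; uniform random sampling and the placeholder value for the design-point throat area are assumptions, and the fuel-flow schedule W_fb(PLA, Ma) is engine-specific, so it is left to the component-level model or test data.

```python
import numpy as np

def sample_cruise_inputs(n, a8_design, rng=None):
    """Draw candidate model inputs over the stated training envelope (uniform sampling assumed)."""
    rng = rng or np.random.default_rng(0)
    H = rng.uniform(9.0, 13.0, n)                # flight altitude, km
    Ma = rng.uniform(0.7, 1.5, n)                # Mach number
    A8 = rng.uniform(1.0, 1.3, n) * a8_design    # nozzle throat area, A8,ds to 1.3*A8,ds
    alpha_f = rng.uniform(-3.0, 3.0, n)          # fan guide vane angle, deg
    alpha_c = rng.uniform(-3.0, 3.0, n)          # compressor guide vane angle, deg
    # W_fb varies with PLA and Ma; its schedule is engine-specific and must come from
    # the component-level model or test data, so it is not sampled here.
    return np.column_stack([H, Ma, A8, alpha_f, alpha_c])

x_candidates = sample_cruise_inputs(10000, a8_design=1.0)   # a8_design is a placeholder value
```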

Figs. 6 and 7 show the relative training errors of BN-MGD-DNN and MGD-NN respectively. The training error of BN-MGD-DNN is essentially below 3%, which meets the accuracy requirement, and its training accuracy is clearly higher than that of MGD-NN; for S_fc, N_f, S_mf, and S_mc in particular, the training accuracy is roughly twice that of MGD-NN. Figs. 8 and 9 show the relative test errors: except for S_mf and S_mc, which are within 2%, the test errors of BN-MGD-DNN are within 1%, which meets the accuracy requirement. Figs. 8 and 9 also show that the deep neural network is considerably more accurate than the traditional BP neural network, especially for the fan and compressor rotor speeds and surge margins, which indicates that BN-MGD-DNN has stronger generalization capability.

Table 1 gives the average relative test errors and average relative training errors of BN-MGD-DNN and MGD-NN; compared with MGD-NN, BN-MGD-DNN achieves higher training and test accuracy. The average relative training errors of S_fc, N_f, N_c, F_in, T_4, S_mf, and S_mc for the proposed BN-MGD-DNN are smaller than those of MGD-NN by factors of 1.4, 2.17, 2.0, 1.3, 1.13, 2.4, and 2.8 respectively, and the average relative test errors, which are particularly relevant to generalization performance, are smaller by factors of 1.75, 2.0, 2.3, 1.3, 1.3, 2.3, and 3.3 respectively.

Table 2 gives the data storage, computational complexity, and average test time of MGD-NN and BN-MGD-DNN. Both have low algorithmic complexity, small data storage, and short average test time, and both meet on-board requirements.

The data storage of MGD-NN is 567 (520 weights (6×40 + 40×7) + 47 biases (40 + 7)); the data storage of BN-MGD-DNN is 940.

The computational complexity of MGD-NN is 614 (520 multiplications (6×40 + 40×7) + 47 additions (40 + 7) + 47 activation-function evaluations (40 + 7)).

The computational complexity of BN-MGD-DNN is 940 (712 multiplications (6×10 + 10×15 + 15×15 + 15×10 + 10×7 + 10 + 15 + 15 + 10 + 7) + 57 divisions (10 + 15 + 15 + 10 + 7) + 57 additions (10 + 15 + 15 + 10 + 7) + 57 subtractions (10 + 15 + 15 + 10 + 7) + 57 activation-function evaluations (10 + 15 + 15 + 10 + 7)).
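A small sketch that reproduces the storage counts quoted above is given below; counting γ, β and the saved mini-batch mean and variance for every non-input layer is an assumption made here so that the BN-MGD-DNN total matches the quoted figure of 940.

```python
def storage_count(layer_sizes, with_bn):
    """Count stored values for a fully connected network with the given layer sizes."""
    weights = sum(a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))
    biases = sum(layer_sizes[1:])
    if not with_bn:
        return weights + biases               # MGD-NN: 520 + 47 = 567
    bn = 4 * sum(layer_sizes[1:])             # gamma, beta, stored mean and variance
    return weights + biases + bn              # BN-MGD-DNN: 655 + 57 + 4*57 = 940

print(storage_count([6, 40, 7], with_bn=False))              # 567
print(storage_count([6, 10, 15, 15, 10, 7], with_bn=True))   # 940
```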

Both programs were run in the same environment: Windows 7 Ultimate with Service Pack 1 (x64), an Intel(R) Core(TM) i5-4590 CPU at 3.30 GHz, 8 GB of RAM, and MATLAB 2016a; the simulation environment for the performance-seeking mode is the same and is not repeated below. As the table shows, the average test times of MGD-NN and BN-MGD-DNN are 0.067 ms and 0.223 ms respectively.

Table 1. Average relative test and training errors

Table 2. Comparison of the MGD-NN and BN-MGD-DNN algorithms

Claims (5)

1. A method for modeling an aero-engine steady-state model based on a deep neural network, in which the aero-engine steady-state model is constructed with a deep neural network, characterized in that the deep neural network is a layer-by-layer batch-normalized deep neural network in which a batch normalization layer is added between every pair of adjacent hidden layers to standardize the output of the preceding hidden layer.

2. The aero-engine steady-state model modeling method according to claim 1, characterized in that the standardization is as follows:

ĥ = γ (h − μ_B) / √(σ_B² + ε) + β

where ĥ is the output after standardization, ε is a small positive constant, h is the neural network output before entering the batch normalization layer, μ_B and σ_B² are the mean and variance of the sample data set, and γ and β are two learnable parameters.

3. The aero-engine steady-state model modeling method according to claim 2, characterized in that the modeling method comprises the following steps:

Step 1: obtain training data for the aero-engine steady-state model;

Step 2: determine the structure of the layer-by-layer batch-normalized deep neural network;

Step 3: perform a forward pass through the layer-by-layer batch-normalized deep neural network and compute the loss function value;

Step 4: compute the network gradients with the backpropagation algorithm and update the weights;

Step 5: check whether the network has converged; if so, output the steady-state model, otherwise return to Step 3 and continue iterating.

4. The aero-engine steady-state model modeling method according to claim 3, characterized in that the training data of the aero-engine steady-state model are obtained from engine test-bench experiments and/or an engine nonlinear component-level model.

5. The aero-engine steady-state model modeling method according to any one of claims 1 to 4, characterized in that the aero-engine steady-state model takes flight altitude, Mach number, fuel flow, nozzle throat area, fan guide vane angle, and compressor guide vane angle as model inputs, and takes engine specific fuel consumption, installed thrust, fan rotor speed, compressor rotor speed, fan surge margin, compressor surge margin, and high-pressure turbine inlet temperature as model outputs.
CN201910823633.5A 2019-09-02 2019-09-02 Steady-state modeling method of aero-engine based on deep neural network Withdrawn CN110516394A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910823633.5A CN110516394A (en) 2019-09-02 2019-09-02 Steady-state modeling method of aero-engine based on deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910823633.5A CN110516394A (en) 2019-09-02 2019-09-02 Steady-state modeling method of aero-engine based on deep neural network

Publications (1)

Publication Number Publication Date
CN110516394A true CN110516394A (en) 2019-11-29

Family

ID=68630337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910823633.5A Withdrawn CN110516394A (en) 2019-09-02 2019-09-02 Steady-state modeling method of aero-engine based on deep neural network

Country Status (1)

Country Link
CN (1) CN110516394A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111486009A (en) * 2020-04-23 2020-08-04 南京航空航天大学 An aero-engine control method and device based on deep reinforcement learning
CN111914461A (en) * 2020-09-08 2020-11-10 北京航空航天大学 Intelligent assessment method for one-dimensional cold efficiency of turbine guide vane
CN113282004A (en) * 2021-05-20 2021-08-20 南京航空航天大学 Neural network-based aeroengine linear variable parameter model establishing method
CN113485117A (en) * 2021-07-28 2021-10-08 沈阳航空航天大学 Multivariable reinforcement learning control method for aircraft engine based on input and output information
CN113741170A (en) * 2021-08-17 2021-12-03 南京航空航天大学 Aero-engine direct thrust inverse control method based on deep neural network
CN113804446A (en) * 2020-06-11 2021-12-17 卓品智能科技无锡有限公司 Diesel engine performance prediction method based on convolutional neural network
CN114154234A (en) * 2021-11-04 2022-03-08 中国人民解放军海军航空大学青岛校区 An aero-engine modeling method, system, and storage medium
CN114580267A (en) * 2022-01-25 2022-06-03 南京航空航天大学 Turbofan engine dynamic thrust estimation method based on recurrent neural network

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598220A (en) * 2018-11-26 2019-04-09 山东大学 A kind of demographic method based on the polynary multiple dimensioned convolution of input

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598220A (en) * 2018-11-26 2019-04-09 山东大学 A kind of demographic method based on the polynary multiple dimensioned convolution of input

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHENG, QIANGANG et al.: "Aero-Engine On-Board Model Based on Batch Normalize Deep Neural Network", IEEE ACCESS *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111486009A (en) * 2020-04-23 2020-08-04 南京航空航天大学 An aero-engine control method and device based on deep reinforcement learning
CN113804446A (en) * 2020-06-11 2021-12-17 卓品智能科技无锡有限公司 Diesel engine performance prediction method based on convolutional neural network
CN111914461A (en) * 2020-09-08 2020-11-10 北京航空航天大学 Intelligent assessment method for one-dimensional cold efficiency of turbine guide vane
CN113282004A (en) * 2021-05-20 2021-08-20 南京航空航天大学 Neural network-based aeroengine linear variable parameter model establishing method
CN113282004B (en) * 2021-05-20 2022-06-10 南京航空航天大学 Neural network-based aeroengine linear variable parameter model establishing method
CN113485117A (en) * 2021-07-28 2021-10-08 沈阳航空航天大学 Multivariable reinforcement learning control method for aircraft engine based on input and output information
CN113485117B (en) * 2021-07-28 2024-03-15 沈阳航空航天大学 Multi-variable reinforcement learning control method for aeroengine based on input and output information
CN113741170A (en) * 2021-08-17 2021-12-03 南京航空航天大学 Aero-engine direct thrust inverse control method based on deep neural network
CN114154234A (en) * 2021-11-04 2022-03-08 中国人民解放军海军航空大学青岛校区 An aero-engine modeling method, system, and storage medium
CN114580267A (en) * 2022-01-25 2022-06-03 南京航空航天大学 Turbofan engine dynamic thrust estimation method based on recurrent neural network

Similar Documents

Publication Publication Date Title
CN110516394A (en) Steady-state modeling method of aero-engine based on deep neural network
US11823057B2 (en) Intelligent control method for dynamic neural network-based variable cycle engine
Ekradi et al. Performance improvement of a transonic centrifugal compressor impeller with splitter blade by three-dimensional optimization
CN103306822B (en) Aerial turbofan engine control method based on surge margin estimation model
CN111679574B (en) A Transition State Optimization Method of Variable Cycle Engine Based on Large-scale Global Optimization Technology
CN110219736B (en) Direct thrust control method of aero-engine based on nonlinear model predictive control
WO2023168821A1 (en) Reinforcement learning-based optimization control method for aeroengine transition state
CN110502840A (en) On-line Prediction Method of Gas Path Parameters of Aeroengine
WO2022037157A1 (en) Narma-l2 multi-variable control method based on neural network
CN109709792A (en) Aero-engine stable state circuit pi controller and its design method and device
CN110516391A (en) A Neural Network-Based Modeling Method of Aeroengine Dynamic Model
CN110516395A (en) An Aeroengine Control Method Based on Nonlinear Model Prediction
Hu et al. The application of support vector regression and virtual sample generation technique in the optimization design of transonic compressor
Shuang et al. An adaptive compressor characteristic map method based on the Bézier curve
Zheng et al. A study on aero-engine direct thrust control with nonlinear model predictive control based on deep neural network
CN112149233A (en) Aero-engine dynamic thrust estimation method based on echo state network
CN105785791B (en) The modeling method of airborne propulsion system under a kind of supersonic speed state
Zhou et al. Design methods and strategies for forward and inverse problems of turbine blades based on machine learning
CN114415506A (en) Design method of dual-mode tracking and predicting control system of aircraft engine based on self-correcting model
Chen et al. Fuzzy logic-based adaptive tracking weight-tuned direct performance predictive control method of aero-engine
Zheng et al. On-board real-time optimization control for turbo-fan engine life extending
CN110362960B (en) Aero-engine system identification method based on multi-cell reduced balanced manifold expansion model
Wang et al. Data-driven framework for prediction and optimization of gas turbine blade film cooling
CN114047692B (en) A robust fault-tolerant and anti-interference model reference dynamic output feedback control method for turbofan engines
Du et al. Nonlinear model predictive control strategy for limit management of aero-engines

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20191129