WO2022151154A1 - Method for stability analysis of combustion chamber of gas turbine engine based on image sequence analysis - Google Patents

Method for stability analysis of combustion chamber of gas turbine engine based on image sequence analysis

Info

Publication number
WO2022151154A1
WO2022151154A1 · PCT/CN2021/071766
Authority
WO
WIPO (PCT)
Prior art keywords
image
network
combustion chamber
prediction model
model
Prior art date
Application number
PCT/CN2021/071766
Other languages
English (en)
French (fr)
Inventor
孙希明
唐琦
赵宏阳
全福祥
丁子尧
郭迪
Original Assignee
大连理工大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大连理工大学 filed Critical 大连理工大学
Priority to US17/606,180 priority Critical patent/US20220372891A1/en
Priority to PCT/CN2021/071766 priority patent/WO2022151154A1/zh
Publication of WO2022151154A1 publication Critical patent/WO2022151154A1/zh

Classifications

    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F01MACHINES OR ENGINES IN GENERAL; ENGINE PLANTS IN GENERAL; STEAM ENGINES
    • F01DNON-POSITIVE DISPLACEMENT MACHINES OR ENGINES, e.g. STEAM TURBINES
    • F01D21/00Shutting-down of machines or engines, e.g. in emergency; Regulating, controlling, or safety means not otherwise provided for
    • F01D21/003Arrangements for testing or measuring
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/15Vehicle, aircraft or watercraft design
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/28Design optimisation, verification or simulation using fluid dynamics, e.g. using Navier-Stokes equations or computational fluid dynamics [CFD]
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F05INDEXING SCHEMES RELATING TO ENGINES OR PUMPS IN VARIOUS SUBCLASSES OF CLASSES F01-F04
    • F05DINDEXING SCHEME FOR ASPECTS RELATING TO NON-POSITIVE-DISPLACEMENT MACHINES OR ENGINES, GAS-TURBINES OR JET-PROPULSION PLANTS
    • F05D2240/00Components
    • F05D2240/35Combustors or associated equipment
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F05INDEXING SCHEMES RELATING TO ENGINES OR PUMPS IN VARIOUS SUBCLASSES OF CLASSES F01-F04
    • F05DINDEXING SCHEME FOR ASPECTS RELATING TO NON-POSITIVE-DISPLACEMENT MACHINES OR ENGINES, GAS-TURBINES OR JET-PROPULSION PLANTS
    • F05D2260/00Function
    • F05D2260/81Modelling or simulation
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F05INDEXING SCHEMES RELATING TO ENGINES OR PUMPS IN VARIOUS SUBCLASSES OF CLASSES F01-F04
    • F05DINDEXING SCHEME FOR ASPECTS RELATING TO NON-POSITIVE-DISPLACEMENT MACHINES OR ENGINES, GAS-TURBINES OR JET-PROPULSION PLANTS
    • F05D2270/00Control
    • F05D2270/40Type of control system
    • F05D2270/44Type of control system active, predictive, or anticipative
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F05INDEXING SCHEMES RELATING TO ENGINES OR PUMPS IN VARIOUS SUBCLASSES OF CLASSES F01-F04
    • F05DINDEXING SCHEME FOR ASPECTS RELATING TO NON-POSITIVE-DISPLACEMENT MACHINES OR ENGINES, GAS-TURBINES OR JET-PROPULSION PLANTS
    • F05D2270/00Control
    • F05D2270/70Type of control algorithm
    • F05D2270/709Type of control algorithm with neural networks
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F05INDEXING SCHEMES RELATING TO ENGINES OR PUMPS IN VARIOUS SUBCLASSES OF CLASSES F01-F04
    • F05DINDEXING SCHEME FOR ASPECTS RELATING TO NON-POSITIVE-DISPLACEMENT MACHINES OR ENGINES, GAS-TURBINES OR JET-PROPULSION PLANTS
    • F05D2270/00Control
    • F05D2270/80Devices generating input signals, e.g. transducers, sensors, cameras or strain gauges
    • F05D2270/804Optical devices
    • F05D2270/8041Cameras

Definitions

  • the invention relates to a gas turbine engine combustion chamber stability analysis method based on image sequence prediction, and belongs to the field of aero-engine fault prediction and health management.
  • Aero-engines are exposed to high temperature, high speed and heavy load for a long time, so the engine is prone to various failures.
  • Combustion chamber is one of the key components of aero-engine. Because the working state of aero-engine changes greatly in a short time, the combustion chamber will alternate between lean and rich working states in a short time, resulting in unstable combustion. Combustion chamber failure can wreak havoc on an engine, so it is desirable to perform predictive analysis of combustion chamber operating conditions before the failure occurs, so that repairs can be performed before the lower performance limit is reached.
  • the traditional combustion chamber failure prediction is based on time-series data measured by various sensors; these data lag in time and contain measurement errors, which leads to delayed and inaccurate predictions.
  • the flow field distribution inside the combustion chamber can best characterize the engine operating state. Analyzing and processing the flow field distribution image rather than based on time series data such as gas path parameters can retain the original information to the greatest extent and improve the predictive analysis ability.
  • the present invention provides a method for analyzing the stability of a gas turbine engine combustion chamber based on image sequence analysis using deep learning.
  • a method for analyzing the stability of a gas turbine engine combustion chamber based on image sequence analysis comprising the following steps:
  • S1.2 samples the simulation process at equal time intervals to obtain a single frame of image, wherein 30 frames are sampled per second.
  • N is the number of observations, and N is 3 in this method;
  • I_j(x, y) is the instantaneous image acquired at time j; Ī_t(x, y) is the average image at time t;
  • w_j is the weight coefficient, which can be determined according to a Gaussian distribution.
  • S2.2 Denoise the image obtained by the weighted average to obtain a clearer flow field image.
  • This method uses median filtering: a 3×3 window is slid over the image, the pixel values in the window are sorted, and the median replaces the original grayscale of the window's center pixel.
  • S2.3 stores the denoised image in the form of a matrix and converts it into a floating-point tensor to obtain an image set, normalizing the pixel values by dividing by 255 in order to reduce the computational load.
  • step S2.4 assigns labels "0" and "1" to each frame of the image according to whether the flow field state obtained in step S2.3 is stable, where "0" represents instability and "1" represents normal, so as to construct the discriminant model dataset.
  • the discriminant model data set is shuffled and divided into training set and test set according to the ratio of 4:1.
  • step S2.6 Construct a sample set using a window of length 129 on the image set obtained in step S2.3; take the data falling within this window as a sample, with the first 128 data of each sample as input and the last one as output, and build the prediction model dataset from this.
  • the prediction model data set is shuffled and divided into training set and test set according to the ratio of 4:1.
  • n_steps = 128 is the input data dimension of the prediction model dataset; rows is the number of rows of the picture, cols is the number of columns of the picture, and the flow field image is a black and white image, so the number of channels is 1.
  • step S4.1 In order to ensure that the discriminant network of the prediction model can process the data output from step S3.2, the input dimension of the network is consistent with the output dimension of step S3.2.
  • Convolutional layers are used for feature extraction.
  • a batch normalization layer is added after each convolutional layer.
  • S4.2 uses a fully connected layer and uses the sigmoid function to output a probability value to represent the probability that the input image is a real image.
  • S4.3 uses the binary cross-entropy loss function as the loss function when the network is trained.
  • step S5.2 Use the training set in the prediction model data set obtained in step S2.7 to train the prediction model network, and use the test set to evaluate the model after training finishes.
  • the input of the discriminant model is the discriminant model data set obtained in step S2.4.
  • the convolution layer is used to extract the image features; adding a max pooling layer reduces the data dimensionality while retaining the regional characteristics of the image, and adding a dropout layer avoids overfitting.
  • S6.2 uses the sigmoid function as the activation function to output a probability value to indicate whether the flow field in the combustion chamber is normal.
  • step S6.3 uses the training set obtained in step S2.5 to train the discriminant model, and uses the test set to evaluate the discriminant model.
  • the present invention uses the method based on image sequence prediction to analyze the stability of the combustion chamber; the rawest data contain more information, so the analysis is more accurate.
  • Another innovation is the use of 3D convolution modules in the WaveNet architecture, which can capture the temporal and spatial information of image frames to help process time-series image data.
  • the idea of generative adversarial network is used, and the discriminator is added to the prediction model to train the generator to obtain more realistic generated images.
  • the generated prediction image is input into the discriminant model to obtain the probability that the current state can run stably, and different control measures can be taken according to the probability value, so as to conduct the stability analysis.
  • the invention innovatively applies the image sequence prediction technology to the combustion chamber stability analysis, which can effectively improve the prediction accuracy and stability.
  • Fig. 1 is the flow chart of the stability analysis method of gas turbine engine combustion chamber based on image sequence prediction analysis
  • Fig. 2 is the data preprocessing flow chart
  • Figure 3 is the 3DWaveNet network structure diagram
  • Fig. 4 is the discriminant network structure diagram of the prediction model
  • Figure 5 is a structural diagram of a prediction model added to a discriminant network
  • Figure 6 is a network structure diagram of the discriminant model.
  • S1.1 CFD is used to simulate the flow field of the combustion chamber; the images are consistent with PIV experimental results in certain characteristics and can serve as an approximation of the real data, so CFD simulation is used to obtain the data;
  • the simulation process is sampled at equal time intervals to obtain a single frame of image, and the present invention samples 30 frames per second.
  • Figure 2 is a data preprocessing flow chart. The data preprocessing steps are as follows:
  • N is the number of observations (N is 3 in this method), I_j(x, y) is the instantaneous image acquired at time j, Ī_t(x, y) is the average image at time t, and w_j is the weight coefficient, which can be determined according to a Gaussian distribution.
  • S2.2 Denoise the image obtained by the weighted average to obtain a clearer flow field image.
  • This method uses median filtering: a 3×3 window is slid over the image, the pixel values in the window are sorted, and the median replaces the original grayscale of the window's center pixel; to ensure that the size of the image after denoising remains unchanged, zero-padding is performed on the edge of the image;
  • S2.3 stores the denoised image in the form of a matrix and converts it into a floating-point tensor, normalizing the pixel values by dividing by 255 in order to reduce the computational load;
  • S2.4 assigns labels "0" and "1" to each frame of the picture according to whether the flow field state obtained in S2.3 is stable, where "0" means instability and "1" means normal, so as to construct the discriminant model data set;
  • the discriminant model data set is shuffled and divided into training set and test set according to the ratio of 4:1;
  • S2.6 uses a window of length 129 to construct a sample set on the image set obtained in S2.3; the data falling within this window form a sample, with the first 128 data of each sample as input and the last one as output, to build the prediction model dataset;
  • the prediction model data set is shuffled and divided into training set and test set according to the ratio of 4:1.
  • n_steps is the time step
  • rows is the number of rows of the picture
  • cols is the number of columns
  • the flow field image is represented by a streamline diagram, which is a black and white image, so the number of channels is 1;
  • FIG. 3 shows part of the dilated convolution network layer
  • the present invention sets two identical dilated convolution modules, and the dilation factor of each dilated convolution module increases as 2^n, with a maximum dilation factor of 64.
  • the 3D convolution kernel is set to (2, 3, 3), where 2 represents the time step, sliding in a 3×3 window, and each convolutional layer uses 32 filters.
  • Each layer uses residual and skip connections to ensure that the gradient can flow over long ranges, speeding up convergence.
  • as the convolutions proceed layer by layer, the extracted features become progressively higher-level, while low-level features are effectively preserved through skip connections, yielding rich feature information.
  • Each layer of convolution introduces a gated activation unit to gate the information flow effectively.
  • the specific formula is: z = tanh(W_{f,k} * x) ⊙ σ(W_{g,k} * x)
  • tanh represents the hyperbolic tangent activation function, σ is the sigmoid function, * represents the convolution operator, ⊙ represents the element-wise multiplication operator, k represents the layer index, and W represents the learnable convolution kernels.
  • FIG. 4 is a structural diagram of the discriminant network, including the following steps:
  • the Leaky ReLU activation is used in the discriminant network: y_i = x_i for x_i ≥ 0 and y_i = x_i / a_i for x_i < 0, where x_i is the input, y_i is the output, and a_i is a parameter greater than 1.
  • S4.2 finally uses the fully connected layer, and uses the sigmoid function as the activation function to output a probability value to represent the probability that the input image is a real image;
  • S4.3 uses the binary cross-entropy loss function as the loss function when the network is trained.
  • Figure 5 is a structural diagram of the prediction model with the adversarial network added, including the following steps:
  • S5.1 first set the discriminator to the non-trainable mode, input the input samples of the prediction model data set obtained in S2.6 into the generator, and then input the generated image into the discriminator to construct the prediction model network;
  • S5.2 trains the discriminator separately: the training set in the prediction model data set obtained in S2.7 is input to the generator to generate prediction images, which are assigned the label "0" to mark them as generated.
  • the corresponding real images (the output data of the training set) are given the label "1"; the real and fake pictures are mixed, noise is added to the labels, and the discriminator is then trained;
  • S5.3 sets the discriminator to be non-trainable and trains the entire prediction model network: the input data of the training set obtained in S2.7 is fed into the prediction network with the output label set to "1", i.e. the discriminant network is expected to judge the prediction images generated by the generation network as real images.
  • the generation network and the discriminant network are trained alternately, and this cycle repeats until the training iterations end.
  • the test set obtained in S2.7 is used to evaluate the prediction model; the accuracy of the discriminant network is expected to be around 50%, which proves that the images generated by the generating network are so realistic that the discriminative network cannot distinguish them.
  • Figure 6 is a network structure diagram of the discriminant model, including the following steps:
  • the input of this model is the discriminative model data set obtained in S2.4, and the output is the corresponding "0" and "1" labels.
  • the convolutional layer is used to extract image features; a max pooling layer is added to reduce the data dimensionality while preserving the regional features of the image; a dropout layer is added to prevent over-fitting; and the loss function is the binary cross-entropy function;
  • S6.2 uses the sigmoid function as the activation function to output a probability value to indicate whether the flow field in the combustion chamber is normal;
  • S6.3 uses the training set obtained in S2.5 to train the discriminant model, and uses the test set to evaluate the model

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Evolutionary Computation (AREA)
  • Computer Hardware Design (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Fluid Mechanics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computational Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

A method for analyzing the stability of a gas turbine engine combustion chamber based on image sequence analysis, belonging to the field of aero-engine fault prediction and health management. First, flow field data from inside the combustion chamber of the gas turbine engine are acquired. Next, the combustion chamber flow field images are preprocessed to obtain a discriminant-model dataset and a prediction-model dataset, each of which is shuffled and split into a training set and a test set. Then, a 3DWaveNet module is built as the generative network of the prediction model, its discriminant network is built, and the two are combined into the prediction model, which is trained on the training set of the prediction-model dataset and evaluated on the test set. Finally, a discriminant model is built from the discriminant-model dataset, trained on its training set, and evaluated on its test set. The invention applies image sequence prediction to combustion chamber stability analysis, effectively improving prediction accuracy and stability.

Description

Method for stability analysis of a gas turbine engine combustion chamber based on image sequence analysis — Technical Field
The invention relates to a gas turbine engine combustion chamber stability analysis method based on image sequence prediction, and belongs to the field of aero-engine fault prediction and health management.
Background Art
Aero-engines operate for long periods under high temperature, high speed and heavy load, so the engine is prone to various failures. The combustion chamber is one of the key components of an aero-engine. Because the operating state of an aero-engine changes greatly over a short time, the combustion chamber alternates between lean and rich states within a short period, causing unstable combustion. A combustion chamber failure can severely damage the engine, so it is desirable to predictively analyze the combustion chamber's operating state before a failure occurs, so that maintenance can be carried out before the lower performance limit is reached.
Traditional combustion chamber fault prediction is based on time-series data measured by various sensors; these data lag in time and contain measurement errors, leading to delayed and inaccurate predictions. The flow field distribution inside the combustion chamber, as raw data, best characterizes the engine's operating state; analyzing flow field images rather than time-series data such as gas path parameters preserves the original information to the greatest extent and improves predictive capability.
Traditional time-series image prediction techniques, such as single-centroid tracking and optical flow, require manual image preprocessing, which loses part of the information and produces inaccurate predictions.
Summary of the Invention
To address the low prediction accuracy of the prior art, the invention provides a deep-learning-based method for analyzing the stability of a gas turbine engine combustion chamber using image sequence analysis.
To achieve the above object, the invention adopts the following technical solution:
A method for analyzing the stability of a gas turbine engine combustion chamber based on image sequence analysis, comprising the following steps:
S1. Acquire flow field data from inside the gas turbine engine combustion chamber, comprising the following steps:
S1.1 Considering that particle image velocimetry (PIV) equipment is difficult to obtain, use computational fluid dynamics (CFD) to simulate the combustion chamber flow field.
S1.2 Sample the simulation process at equal time intervals to obtain single-frame images, at 30 frames per second.
S2. Preprocess the combustion chamber flow field images, comprising the following steps:
S2.1 Since combustion is a dynamic process subject to various random disturbances, first apply a weighted average to the images: take several consecutive images within a small time interval and use their average to characterize the image over that period:

Ī_t(x, y) = Σ_{j=1}^{N} w_j · I_j(x, y)

where N is the number of observations (N = 3 in this method); I_j(x, y) is the instantaneous image acquired at time j; Ī_t is the average image at time t; and w_j is a weight coefficient, which can be determined from a Gaussian distribution.
S2.2 Denoise the weighted-average image to obtain a clearer flow field image. This method uses median filtering: slide a 3×3 window, sort the pixel values within the window, and replace the original grayscale of the window's center pixel with the median.
S2.3 Store the denoised images as matrices and convert them to floating-point tensors to obtain an image set; normalize the pixel values by dividing by 255 to reduce the computational load.
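The weighted averaging of S2.1 and the 3×3 median filtering of S2.2 can be sketched in plain Python on nested-list grayscale images (a minimal illustration: the function names and the Gaussian σ are assumptions, not the patent's implementation; the zero padding follows the description above):

```python
import math
import statistics

def gaussian_weights(n, sigma=1.0):
    """Weights w_j drawn from a Gaussian centred on the window, normalised to sum to 1."""
    c = (n - 1) / 2.0
    raw = [math.exp(-((j - c) ** 2) / (2 * sigma ** 2)) for j in range(n)]
    s = sum(raw)
    return [r / s for r in raw]

def weighted_average(frames, weights):
    """Pixel-wise weighted average of N grayscale frames (lists of lists)."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(w * f[r][c] for w, f in zip(weights, frames))
             for c in range(cols)] for r in range(rows)]

def median_filter_3x3(img):
    """3x3 median filter with zero padding so the image size is unchanged."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            window = []
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    # out-of-bounds neighbours are treated as zero (zero padding)
                    window.append(img[rr][cc] if 0 <= rr < rows and 0 <= cc < cols else 0)
            out[r][c] = statistics.median(window)
    return out
```

Note that with zero padding the median filter darkens border pixels of a bright image, since the 3×3 window at a corner contains five padded zeros; the patent only requires that the image size be preserved.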
S2.4 According to whether the flow field state of each frame in the image set from step S2.3 is stable, assign label "0" or "1" to each frame, where "0" denotes instability and "1" denotes normal, thereby constructing the discriminant-model dataset.
S2.5 Shuffle the discriminant-model dataset and split it into a training set and a test set at a ratio of 4:1.
S2.6 On the image set obtained in step S2.3, construct a sample set using a window of length 129: the data falling within the window form one sample, the first 128 frames of each sample are the input and the last frame is the output, thereby constructing the prediction-model dataset.
S2.7 Shuffle the prediction-model dataset and split it into a training set and a test set at a ratio of 4:1.
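The window-of-129 sample construction of S2.6 and the shuffled 4:1 split of S2.5/S2.7 can be sketched as follows (frames are stand-ins for image tensors; the helper names and the fixed seed are illustrative assumptions):

```python
import random

def build_prediction_dataset(frames, window=129):
    """Slide a window of length `window` over the frame sequence; the first
    window-1 frames of each sample are the input, the last frame is the target."""
    samples = []
    for start in range(len(frames) - window + 1):
        chunk = frames[start:start + window]
        samples.append((chunk[:-1], chunk[-1]))
    return samples

def shuffle_and_split(samples, train_ratio=0.8, seed=0):
    """Shuffle the samples, then split 4:1 into training and test sets."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```

For 200 frames this yields 72 overlapping samples, each a (128-frame input, 1-frame target) pair, split 57/15.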
S3. Build the 3DWaveNet module as the generative network of the prediction model, comprising the following steps:
S3.1 Reshape each sample to (n_steps, rows, cols, 1) as the input of the generative network, where n_steps is the time step (n_steps = 128 in the invention, i.e. the input dimension of the prediction-model dataset obtained in step S2.6), rows is the number of image rows, cols is the number of image columns, and the flow field image is black and white, so the number of channels is 1.
S3.2 Build a dilated convolution module based on causal and dilated convolutions; use 3D convolution to add a time dimension for capturing temporal features, use residual connections to keep the gradient from vanishing, introduce gated activations, and use skip connections to preserve each layer's features and combine them at the end to output one frame of image.
S3.3 Use the mean squared error (mse) as the loss function when training the network:

L_mse = (1/Q) Σ_{n=1}^{Q} Σ_{i,j} ( I_n(i,j) − Î_n(i,j) )²

where Q is the number of training samples, I_n(i,j) is the pixel value at point (i,j) of the n-th real image, Î_n is the n-th image generated by the generative network, and L_mse is the loss function.
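The mse loss of S3.3 reads directly as code (here over nested-list images, with the per-image pixel sum averaged over the Q samples, matching the formula above):

```python
def mse_loss(real_batch, generated_batch):
    """L_mse = (1/Q) * sum_n sum_{i,j} (I_n(i,j) - I_hat_n(i,j))^2:
    squared pixel differences summed per image, averaged over the Q samples."""
    q = len(real_batch)
    total = 0.0
    for real, gen in zip(real_batch, generated_batch):
        for row_r, row_g in zip(real, gen):
            for pr, pg in zip(row_r, row_g):
                total += (pr - pg) ** 2
    return total / q
```

Deep-learning frameworks usually also average over pixels; the formula as written only averages over the batch, so the two differ by a constant factor of rows·cols.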
S4. Build the discriminant network of the prediction model, comprising the following steps:
S4.1 To ensure the discriminant network can process the data output from step S3.2, its input dimension is kept consistent with the output dimension of step S3.2. Convolutional layers are used for feature extraction, and a batch normalization layer is added after each convolutional layer so that the input of every layer has the same distribution.
S4.2 Use a fully connected layer with a sigmoid activation to output a probability value representing the probability that the input image is a real image.
S4.3 Use the binary cross-entropy loss function when training the network.
S5. Combine the generative network and the discriminant network into the prediction model, comprising the following steps:
S5.1 Set the discriminator to non-trainable; feed the input samples of the prediction-model dataset obtained in step S2.6 into the generator, then feed the generated images into the discriminant network, thereby constructing the prediction model network.
S5.2 Train the prediction model network with the training set of the prediction-model dataset obtained in step S2.7, and evaluate the model with the test set after training finishes.
S6. Build the discriminant model, comprising the following steps:
S6.1 The input of the discriminant model is the discriminant-model dataset obtained in step S2.4. Convolutional layers extract image features; max pooling layers reduce the data dimensionality while preserving regional image features; and a dropout layer is added to avoid overfitting.
S6.2 Use the sigmoid activation to output a probability value indicating whether the combustion chamber flow field is normal.
S6.3 Train the discriminant model with the training set obtained in step S2.5, and evaluate it with the test set.
S6.4 Finally, feed the prediction images generated by the prediction model into the trained discriminant model to obtain the probability that the current state can run normally.
Beneficial effects of the invention: compared with traditional stability analysis based on time-series data, the invention analyzes combustion chamber stability with image sequence prediction; the rawest data contain more information, making the analysis more accurate. Another innovation is the use of 3D convolution modules in the WaveNet architecture, which capture both the temporal and the spatial information of image frames and thus help process time-series image data. In building the overall network, the idea of generative adversarial networks is used: a discriminator is added to the prediction model to train the generator toward more realistic generated images. The generated prediction images are fed into the discriminant model to obtain the probability that the current state can run stably, and different control measures can be taken according to the probability value, thereby performing the stability analysis. The invention innovatively applies image sequence prediction to combustion chamber stability analysis, effectively improving prediction accuracy and stability.
Brief Description of the Drawings
Fig. 1 is the flow chart of the gas turbine engine combustion chamber stability analysis method based on image sequence prediction analysis;
Fig. 2 is the data preprocessing flow chart;
Fig. 3 is the 3DWaveNet network structure diagram;
Fig. 4 is the structure diagram of the prediction model's discriminant network;
Fig. 5 is the structure diagram of the prediction model with the discriminant network added;
Fig. 6 is the network structure diagram of the discriminant model.
Detailed Description of the Embodiments
The invention is further described below with reference to the drawings. The invention relies on CFD-simulated turbine engine combustion chamber flow field images; the flow of the stability analysis method based on image sequence analysis is shown in Fig. 1.
S1. Acquisition of flow field data inside the gas turbine engine combustion chamber, comprising the following steps:
S1.1 Use CFD to simulate the combustion chamber flow field. The resulting images are consistent with PIV experimental results in certain characteristics and can serve as an approximation of real data, so CFD simulation is used for data acquisition;
S1.2 Sample the simulation process at equal time intervals to obtain single-frame images; the invention samples 30 frames per second.
S2. Preprocess the combustion chamber flow field images. Fig. 2 is the data preprocessing flow chart; the steps are as follows:
S2.1 Combustion is a dynamic process with various random disturbances, so an ideally stable flow field does not exist. To obtain a stable image of the flow field at a given moment, the invention takes several consecutive images within a small time interval and uses their average to characterize the image over that period:

Ī_t(x, y) = Σ_{j=1}^{N} w_j · I_j(x, y)

where N is the number of observations (N = 3 in this method), I_j(x, y) is the instantaneous image acquired at time j, Ī_t is the average image at time t, and w_j is a weight coefficient, which can be determined from a Gaussian distribution.
S2.2 Denoise the weighted-average image to obtain a clearer flow field image. This method uses median filtering: slide a 3×3 window, sort the pixel values within it, and replace the original grayscale of the window's center pixel with the median. To keep the image size unchanged after denoising, the image edges are zero-padded;
S2.3 Store the denoised images as matrices and convert them to floating-point tensors; normalize the pixel values by dividing by 255 to reduce the computational load;
S2.4 Assign label "0" or "1" to each frame from S2.3 according to whether its flow field state is stable, where "0" denotes instability and "1" denotes normal, thereby constructing the discriminant-model dataset;
S2.5 Shuffle the discriminant-model dataset and split it into a training set and a test set at a ratio of 4:1;
S2.6 On the image set obtained in S2.3, construct a sample set using a window of length 129: the data falling within the window form one sample, the first 128 frames are the input and the last frame is the output, thereby constructing the prediction-model dataset;
S2.7 Shuffle the prediction-model dataset and split it into a training set and a test set at a ratio of 4:1.
S3. Build the 3DWaveNet module as the generative network of the prediction model. Fig. 3 is the 3DWaveNet network structure diagram; the steps are as follows:
S3.1 Reshape each sample to (n_steps, rows, cols, 1) as the input of the 3DWaveNet module, where n_steps is the time step (n_steps = 128, the input dimension of the prediction-model dataset obtained in S2.6), rows is the number of image rows, and cols the number of columns; the flow field image is represented by a streamline diagram, which is black and white, so the number of channels is 1;
S3.2 Build a dilated convolution module based on causal and dilated convolutions (Fig. 3 shows only part of the dilated convolution layers). The invention sets two identical dilated convolution modules; within each, the dilation factor increases as 2^n up to a maximum of 64. The 3D convolution kernel is set to (2, 3, 3), where 2 is the time step and the 3×3 window slides spatially; each convolutional layer uses 32 filters. Each layer uses residual and skip connections so that the gradient can flow over long ranges and convergence is accelerated; as the layer-by-layer convolutions proceed, the extracted features become progressively higher-level, while low-level features are effectively preserved through skip connections, yielding rich feature information. Each convolutional layer introduces a gated activation unit to select information effectively:

z = tanh(W_{f,k} * x) ⊙ σ(W_{g,k} * x)

where tanh is the hyperbolic tangent activation function, σ is the sigmoid function, * is the convolution operator, ⊙ is the element-wise multiplication operator, k is the layer index, and W denotes the learnable convolution kernels.
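The gated activation unit can be illustrated on the outputs of the two convolution branches (the convolutions themselves are omitted; the function assumes the filter and gate responses W_{f,k}*x and W_{g,k}*x are already computed):

```python
import math

def sigmoid(v):
    """Logistic sigmoid σ(v) = 1 / (1 + e^(-v))."""
    return 1.0 / (1.0 + math.exp(-v))

def gated_activation(filter_out, gate_out):
    """z = tanh(W_f,k * x) ⊙ σ(W_g,k * x): element-wise product of the tanh
    'filter' branch and the sigmoid 'gate' branch over flattened activations."""
    return [math.tanh(f) * sigmoid(g) for f, g in zip(filter_out, gate_out)]
```

A strongly negative gate response drives σ toward 0 and suppresses the corresponding filter output, which is how the unit "selects" information.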
S3.3 Use the mean squared error (mse) as the loss function when training the network:

L_mse = (1/Q) Σ_{n=1}^{Q} Σ_{i,j} ( I_n(i,j) − Î_n(i,j) )²

where Q is the number of training samples, I_n(i,j) is the pixel value at point (i,j) of the n-th real image, Î_n is the n-th image generated by the generative network, and L_mse is the loss function.
S4. Build the discriminant network of the prediction model. Fig. 4 is the discriminant network structure diagram; the steps are as follows:
S4.1 To ensure the discriminant network can process the data output from S3.2, its input is kept consistent with the output dimension of S3.2. Convolutional layers are used for feature extraction; to keep every layer's inputs identically distributed, a batch normalization layer is introduced after each convolutional layer, normalizing the inputs to a zero-mean, unit-variance distribution and avoiding vanishing gradients. Leaky ReLU is used as the activation function so that the derivative remains nonzero for negative inputs:

y_i = x_i,        x_i ≥ 0
y_i = x_i / a_i,  x_i < 0

where x_i is the input, y_i is the output, and a_i is a parameter greater than 1.
S4.2 Finally use a fully connected layer with a sigmoid activation to output a probability value representing the probability that the input image is a real image;
S4.3 Use the binary cross-entropy loss function when training the network.
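The Leaky ReLU of S4.1, with the negative slope written as division by a parameter a_i > 1 as in the formula above, is one line of code (a = 5 is an arbitrary illustrative value; the patent does not specify a_i):

```python
def leaky_relu(x, a=5.0):
    """Leaky ReLU as defined in S4.1: y = x for x >= 0, y = x / a for x < 0,
    with a > 1 so negative inputs keep a small nonzero slope (and gradient)."""
    return x if x >= 0 else x / a
```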
S5. Combine the generative network and the adversarial network into the prediction model. Fig. 5 is the structure diagram of the prediction model with the adversarial network added; the steps are as follows:
S5.1 First set the discriminator to non-trainable mode; feed the input samples of the prediction-model dataset obtained in S2.6 into the generator, then feed the generated images into the discriminator, thereby constructing the prediction model network;
S5.2 Train the discriminator separately: the training set of the prediction-model dataset obtained in S2.7 is fed into the generator to produce prediction images, which are labeled "0" (generated); the corresponding real images (the training set's output data) are labeled "1". Mix the real and fake images, add noise to the labels, and then train the discriminator;
S5.3 Set the discriminator to non-trainable and train the whole prediction model network: feed the input data of the training set obtained in S2.7 into the prediction network with the output labels set to "1", i.e. the discriminant network is expected to judge the generator's prediction images as real. The generative and discriminant networks are trained alternately, cycling until the set number of training iterations ends. Evaluate the prediction model with the test set obtained in S2.7; the discriminant network's accuracy is expected to be around 50%, showing that the generated images are realistic enough that the discriminant network cannot tell them apart.
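The discriminator batch construction of S5.2 — real images labeled "1", generated images labeled "0", mixed together with noise added to the labels — can be sketched as follows (the helper name, noise amplitude and seed are illustrative assumptions; label jitter is a common GAN stabilization trick):

```python
import random

def discriminator_batch(real_images, fake_images, noise=0.05, seed=0):
    """Build one discriminator training batch: real images get label 1.0,
    generated images get label 0.0; the batch is shuffled and the labels are
    jittered by a small uniform noise, then clipped back into [0, 1]."""
    rng = random.Random(seed)
    data = [(img, 1.0) for img in real_images] + [(img, 0.0) for img in fake_images]
    rng.shuffle(data)
    images = [img for img, _ in data]
    labels = [min(1.0, max(0.0, lbl + rng.uniform(-noise, noise))) for _, lbl in data]
    return images, labels
```

With noise = 0.05 the jittered labels stay near their targets, so thresholding at 0.5 still separates real from generated samples.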
S6. Build the discriminant model. Fig. 6 is the network structure diagram of the discriminant model; the steps are as follows:
S6.1 The input of this model is the discriminant-model dataset obtained in S2.4, and the output is the corresponding "0"/"1" labels. Convolutional layers extract image features; max pooling layers reduce the data dimensionality while preserving regional image features; a dropout layer is added to prevent overfitting; the loss function is binary cross-entropy;
S6.2 Use the sigmoid activation to output a probability value indicating whether the combustion chamber flow field is normal;
S6.3 Train the discriminant model with the training set obtained in S2.5, and evaluate it with the test set;
S6.4 Feed the prediction images generated by the prediction model into the trained discriminant model to obtain the probability that the current state can run normally (i.e. whether it is stable), and different control measures can be taken according to the probability value.
The above embodiment only expresses an implementation of the invention and should not be construed as limiting the scope of the patent. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the invention, all of which fall within the protection scope of the invention.

Claims (3)

  1. A method for analyzing the stability of a gas turbine engine combustion chamber based on image sequence analysis, characterized by comprising the following steps:
    S1. Acquire flow field data from inside the gas turbine engine combustion chamber, comprising the following steps:
    S1.1 Use computational fluid dynamics to simulate the combustion chamber flow field;
    S1.2 Sample the simulation process at equal time intervals to obtain single-frame images;
    S2. Preprocess the combustion chamber flow field images, comprising the following steps:
    S2.1 Apply a weighted average to the images: take several consecutive images within a small time interval and use their average to characterize the image over that period:

    Ī_t(x, y) = Σ_{j=1}^{N} w_j · I_j(x, y)

    where N is the number of observations; I_j(x, y) is the instantaneous image acquired at time j; Ī_t is the average image at time t; w_j is a weight coefficient, which can be determined from a Gaussian distribution;
    S2.2 Denoise the weighted-average images to obtain the flow field images;
    S2.3 Store the denoised images as matrices and convert them to floating-point tensors to obtain an image set;
    S2.4 According to whether the flow field state of each frame in the image set from step S2.3 is stable, assign label "0" or "1" to each frame, where "0" denotes instability and "1" denotes normal, thereby constructing the discriminant-model dataset;
    S2.5 Shuffle the discriminant-model dataset and split it into a training set and a test set at a ratio of 4:1;
    S2.6 On the image set obtained in step S2.3, construct a sample set using a window of length 129: the data falling within the window form one sample, the first 128 frames are the input and the last frame is the output, thereby constructing the prediction-model dataset;
    S2.7 Shuffle the prediction-model dataset and split it into a training set and a test set at a ratio of 4:1;
    S3. Build the 3DWaveNet module as the generative network of the prediction model, comprising the following steps:
    S3.1 Reshape each sample to (n_steps, rows, cols, 1) as the input of the generative network, where n_steps is the time step, n_steps = 128, i.e. the input dimension of the prediction-model dataset obtained in step S2.6; rows is the number of image rows; cols is the number of image columns; the flow field image is black and white, so the number of channels is 1;
    S3.2 Build a dilated convolution module based on causal and dilated convolutions; use 3D convolution to add a time dimension for capturing temporal features, use residual connections to keep the gradient from vanishing, introduce gated activations, and use skip connections to preserve each layer's features and combine them at the end to output one frame of image;
    S3.3 Use the mean squared error (mse) as the loss function when training the network;
    S4. Build the discriminant network of the prediction model, comprising the following steps:
    S4.1 To ensure the discriminant network can process the data output from step S3.2, its input dimension is kept consistent with the output dimension of step S3.2; convolutional layers are used for feature extraction, and a batch normalization layer is added after each convolutional layer so that the input of every layer has the same distribution;
    S4.2 Use a fully connected layer with a sigmoid activation to output a probability value representing the probability that the input image is a real image;
    S4.3 Use the binary cross-entropy loss function when training the network;
    S5. Combine the generative network and the discriminant network into the prediction model, comprising the following steps:
    S5.1 Set the discriminator to non-trainable; feed the input samples of the prediction-model dataset obtained in step S2.6 into the generator, then feed the generated images into the discriminant network, thereby constructing the prediction model network;
    S5.2 Train the prediction model network with the training set of the prediction-model dataset obtained in step S2.7, and evaluate the model with the test set after training finishes;
    S6. Build the discriminant model, comprising the following steps:
    S6.1 The input of the discriminant model is the discriminant-model dataset obtained in step S2.4; convolutional layers extract image features; max pooling layers reduce the data dimensionality while preserving regional image features; a dropout layer is added to avoid overfitting;
    S6.2 Use the sigmoid activation to output a probability value indicating whether the combustion chamber flow field is normal;
    S6.3 Train the discriminant model with the training set obtained in step S2.5, and evaluate it with the test set;
    S6.4 Finally, feed the prediction images generated by the prediction model into the trained discriminant model to obtain the probability that the current state can run normally.
  2. The method for analyzing the stability of a gas turbine engine combustion chamber based on image sequence analysis according to claim 1, characterized in that in step S1.2, 30 frames are sampled per second.
  3. The method for analyzing the stability of a gas turbine engine combustion chamber based on image sequence analysis according to claim 1 or 2, characterized in that the loss function in step S3.3 is calculated as:

    L_mse = (1/Q) Σ_{n=1}^{Q} Σ_{i,j} ( I_n(i,j) − Î_n(i,j) )²

    where Q is the number of training samples, I_n(i,j) is the pixel value at point (i,j) of the n-th real image, Î_n is the n-th image generated by the generative network, and L_mse is the loss function.
PCT/CN2021/071766 2021-01-14 2021-01-14 Method for stability analysis of combustion chamber of gas turbine engine based on image sequence analysis WO2022151154A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/606,180 US20220372891A1 (en) 2021-01-14 2021-01-14 Method for stability analysis of combustion chamber of gas turbine engine based on image sequence analysis
PCT/CN2021/071766 WO2022151154A1 (zh) 2021-01-14 2021-01-14 Method for stability analysis of combustion chamber of gas turbine engine based on image sequence analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/071766 WO2022151154A1 (zh) 2021-01-14 2021-01-14 Method for stability analysis of combustion chamber of gas turbine engine based on image sequence analysis

Publications (1)

Publication Number Publication Date
WO2022151154A1 true WO2022151154A1 (zh) 2022-07-21

Family

ID=82447713

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/071766 WO2022151154A1 (zh) 2021-01-14 2021-01-14 Method for stability analysis of combustion chamber of gas turbine engine based on image sequence analysis

Country Status (2)

Country Link
US (1) US20220372891A1 (zh)
WO (1) WO2022151154A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117740384B * 2024-02-07 2024-04-16 中国航发四川燃气涡轮研究院 Combustion performance sensitivity evaluation method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951919A * 2017-03-02 2017-07-14 浙江工业大学 Flow velocity monitoring implementation method based on generative adversarial networks
US20190227049A1 * 2017-03-13 2019-07-25 Lucidyne Technologies, Inc. Method of board lumber grading using deep learning techniques
CN110163278A * 2019-05-16 2019-08-23 东南大学 Flame stability monitoring method based on image recognition
CN111027626A * 2019-12-11 2020-04-17 西安电子科技大学 Flow field identification method based on deformable convolutional networks

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951919A * 2017-03-02 2017-07-14 浙江工业大学 Flow velocity monitoring implementation method based on generative adversarial networks
US20190227049A1 * 2017-03-13 2019-07-25 Lucidyne Technologies, Inc. Method of board lumber grading using deep learning techniques
CN110163278A * 2019-05-16 2019-08-23 东南大学 Flame stability monitoring method based on image recognition
CN111027626A * 2019-12-11 2020-04-17 西安电子科技大学 Flow field identification method based on deformable convolutional networks

Also Published As

Publication number Publication date
US20220372891A1 (en) 2022-11-24

Similar Documents

Publication Publication Date Title
CN112765908B Gas turbine engine combustion chamber stability analysis method based on image sequence analysis
CN109308522B GIS fault prediction method based on a recurrent neural network
CN110889343B Crowd density estimation method and device based on an attention-type deep neural network
CN111814704B Fully convolutional examination-room target detection method based on cascaded attention and a point supervision mechanism
CN111709292A Compressor vibration fault detection method based on recurrence plots and deep convolutional networks
CN109190537A Multi-person pose estimation method based on mask-aware deep reinforcement learning
CN110348059B In-channel flow field reconstruction method based on structured grids
CN112232486A Optimization method for YOLO spiking neural networks
Xiao et al. Addressing Overfitting Problem in Deep Learning‐Based Solutions for Next Generation Data‐Driven Networks
CN112116002A Detection model determination method, verification method and device
WO2022151154A1 Method for stability analysis of combustion chamber of gas turbine engine based on image sequence analysis
CN115346149A Rope skipping counting method and system based on spatio-temporal graph convolutional networks
WO2022188425A1 Deep learning fault diagnosis method incorporating prior knowledge
CN114897138A System fault diagnosis method based on an attention mechanism and a deep residual network
CN111814403B Reliability evaluation method for distributed state sensors of main power distribution equipment
CN113987910A Residential load identification method and device coupling a neural network with dynamic time warping
CN117591950A Fault diagnosis method, device, terminal and storage medium for rolling bearings under variable operating conditions
CN115048873B Remaining useful life prediction system for aircraft engines
CN116579468A Typhoon formation prediction method, device, equipment and medium based on cloud system memory
CN113255789B Video quality assessment method based on adversarial networks and multi-subject EEG signals
CN106778558B Facial age estimation method based on deep classification networks
CN115099135A Improved artificial neural network method for predicting power consumption of multiple job types
CN114397521A Fault diagnosis method and system for electronic equipment
CN113723482A Hyperspectral target detection method based on multi-instance Siamese networks
CN112434614A Sliding-window action detection method based on the Caffe framework

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21918363

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21918363

Country of ref document: EP

Kind code of ref document: A1