WO2023115598A1 - Method for predicting the steady flow of a planar cascade based on a generative adversarial network - Google Patents

A method for predicting the steady flow of a planar cascade based on a generative adversarial network

Info

Publication number
WO2023115598A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
encoding
image
module
flow field
Prior art date
Application number
PCT/CN2021/141541
Other languages
English (en)
French (fr)
Inventor
杨斌
张鑫源
孙希明
全福祥
Original Assignee
大连理工大学
Priority date
Filing date
Publication date
Application filed by 大连理工大学 filed Critical 大连理工大学
Priority to US17/920,167 priority Critical patent/US20240012965A1/en
Publication of WO2023115598A1 publication Critical patent/WO2023115598A1/zh


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/20 - Design optimisation, verification or simulation
    • G06F30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/10 - Geometric CAD
    • G06F30/17 - Mechanical parametric or variational design
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/20 - Design optimisation, verification or simulation
    • G06F30/28 - Design optimisation, verification or simulation using fluid dynamics, e.g. using Navier-Stokes equations or computational fluid dynamics [CFD]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00 - Details relating to CAD techniques
    • G06F2111/08 - Probabilistic or stochastic CAD
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2113/00 - Details relating to the application field
    • G06F2113/08 - Fluids
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00 - Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/14 - Force analysis or force optimisation, e.g. static or dynamic forces
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00 - Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Definitions

  • The invention relates to a method for predicting the steady flow of a planar cascade based on a generative adversarial network, and belongs to the technical field of aero-engine modeling and simulation.
  • The aero-engine is the jewel in the crown of modern industry and is of great significance to national military and civil development.
  • The axial compressor is the core component of the aero-engine, and its stable operation directly determines the operating performance of the engine.
  • Rotating stall and surge are two common unstable flow phenomena of axial compressors. These abnormal flows can cause compressor failure and in turn affect the working state of the aero-engine. Therefore, timely prediction of unstable fluid flow inside the axial compressor is vital to ensuring stable aero-engine operation.
  • Compared with the limited-range data collected by sensors at fixed measurement points, the flow field image of the planar cascade of the axial compressor reflects the flow field changes inside the entire compressor more intuitively and clearly.
  • With the development of artificial intelligence, image sequence data have become an extremely important type of real-world data, and the application of deep learning to image sequence prediction has gradually matured.
  • At present, image sequence prediction is mostly applied in autonomous driving and weather forecasting, where good progress has been made.
  • Flow field prediction, at home and abroad, is still at a preliminary exploratory stage, so applying image sequence prediction technology to steady planar cascade flow forecasting holds great promise.
  • The present invention provides a method for predicting the steady flow of a planar cascade based on a generative adversarial network.
  • A method for predicting the steady flow of a planar cascade based on a generative adversarial network comprises the following steps:
  • The flow field image data of the steady flow of the planar cascade of the axial compressor are obtained through CFD simulation experiments.
  • The simulation data involve blade profile, Mach number, and inlet flow angle conditions; the inlet angle of attack varies with time as 0°, 1°, 2°, …, 9°, 10°, …, positively correlated with time.
  • Therefore, the flow field images of the inlet angle of attack changing with time under the same blade profile, Mach number, and inlet flow angle conditions constitute an image sequence as one sample.
  • This experiment uses equal-length sequence inputs, so redundant data in the samples are removed to ensure that the image sequence length in every sample is the same.
  • Before processing, the simulation experiment data are divided into a test data set and a training data set;
  • The image sequence length of each sample is 11 frames; the first 10 frames are used as network input values, and the last frame as the true value of the image prediction target;
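  As a rough illustration (not code from the patent), the input/target split described above can be sketched in NumPy; the arrays follow the (N, seq, c, h, w) convention used later in the document, with a toy resolution standing in for 256×256:

```python
import numpy as np

def split_sequences(samples: np.ndarray, seq_input: int = 10):
    """Split equal-length image sequences into network inputs and
    last-frame prediction targets, as in the preprocessing step."""
    # samples: (N, seq_len, c, h, w); here seq_len is 11 frames
    inputs = samples[:, :seq_input]    # first 10 frames -> network input
    targets = samples[:, seq_input:]   # remaining frame(s) -> ground truth
    return inputs, targets

# Toy data standing in for the CFD flow field images: 12 samples,
# 11 frames, 1 channel, 8x8 resolution instead of 256x256 for brevity.
samples = np.random.rand(12, 11, 1, 8, 8)
X, Y = split_sequences(samples)
```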
  • The Encoding network is composed of multiple encoding modules. The flow field image sequence of the planar cascade has high-dimensional features; the encoding modules reduce the dimensionality of these features, remove secondary features from the sequence, and extract effective spatio-temporal features.
  • the low-level encoding module can extract the local flow field spatial structure features, thereby capturing the details of the flow field area changes.
  • Higher-level encoding modules extract a wider range of spatial features by enlarging the receptive field and capture the abruptly changing flow structures near the blade leading edge in the planar cascade flow field image. Each encoding module consists of a downsampling layer and a ConvLSTM layer: the downsampling layer reduces computation and enlarges the receptive field, while the ConvLSTM layer captures the nonlinear spatio-temporal evolution of the flow field. Each ConvLSTM layer contains multiple ConvLSTM units.
  • The output of the downsampling layer is fed into the ConvLSTM layer through a gated activation unit, and the encoding modules are connected by gated activation units. Each encoding module learns the high-dimensional spatio-temporal features of the flow field image sequence, outputs lower-dimensional spatio-temporal features, and passes them to the next encoding module;
  • The Forecasting network is composed of multiple decoding modules. The function of a decoding module is to expand the low-dimensional flow field spatio-temporal features extracted by the encoding modules into higher-dimensional features, so as to finally reconstruct the high-dimensional flow field image. Each decoding module consists of an upsampling layer and a ConvLSTM layer; the upsampling layer expands the feature dimension. Each ConvLSTM layer contains multiple ConvLSTM units, and the ConvLSTM layer output is fed into the upsampling layer through a gated activation unit.
  • each decoding module will decode the spatio-temporal features of the input image sequence extracted by the encoding module at the same position of the Encoding network, obtain the feature information of the historical moment and pass it to the next decoding module;
  • the different encoding layers of the Encoding network output the extracted spatio-temporal features of the planar cascade flow field image sequence of different dimensions, and the Forecasting network takes the spatio-temporal features of different dimensions as the initial state input of different decoding layers;
  • Step S3.1: Adjust the dimension of the image prediction target true value from step S1.4 and the Encoding-Forecasting network prediction result obtained in step S2.5 to (N*seq_target, c, h, w), and use them as the input of the deep convolutional network;
  • the deep convolutional network module consists of multiple convolutional modules and an output mapping module.
  • The role of the output mapping module is to pass the features extracted by the convolution modules through a convolutional layer, apply the sigmoid activation function to obtain an output value between 0 and 1, and then reshape the output into a probability output value, which is the final output of the deep convolutional network module with dimension (N*seq_target, 1).
  • the probability value represents the probability that the deep convolutional network determines that the image is a real image. It is marked as 1 for the real image and 0 for the Encoding-Forecasting network prediction image.
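  A minimal sketch of such an output mapping head, assuming hypothetically that the convolution modules have already produced a feature tensor of shape (M, C, h, w); the 1×1 convolution is emulated as a weighted sum over channels, followed by spatial averaging and a sigmoid (the pooling choice is an assumption, not stated in the patent):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def output_mapping(features: np.ndarray, weights: np.ndarray, bias: float = 0.0):
    """Map conv features (M, C, h, w) to one real/fake probability per
    image, i.e. an output of dimension (M, 1) as in the description."""
    # 1x1 convolution across channels == weighted channel sum at each pixel
    mapped = np.tensordot(features, weights, axes=([1], [0])) + bias  # (M, h, w)
    pooled = mapped.mean(axis=(1, 2))                                 # (M,)
    return sigmoid(pooled).reshape(-1, 1)                             # (M, 1)

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4, 8, 8))   # M = N*seq_target = 6 stand-in images
w = rng.normal(size=4)
probs = output_mapping(feats, w)
```

  The sigmoid guarantees each entry lies strictly between 0 and 1, matching the real-image label 1 and predicted-image label 0.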
  • Generative adversarial training is adopted so that the deep convolutional network module provides learning gradients for the Encoding-Forecasting network, further optimizing its parameters. The Encoding-Forecasting network constructed in step S2 serves as the generator of the generative adversarial network, denoted G; the deep convolutional network module constructed in step S3 serves as the discriminator, denoted D;
  • Since the Encoding-Forecasting network module can be used as an independent prediction network, its flow field image predictions already have a degree of reliability. In addition, applying the discriminator too early destabilizes training. The present invention therefore first trains the Encoding-Forecasting network alone and, once its error value is less than 0.001, adds the deep convolutional network module as a discriminator to form a jointly trained generative adversarial network, so as to stabilize the training process and further restore flow field image detail.
  • The MSE loss function is L_MSE = (1/N) Σ_{i=1}^{N} ||G(X_i) − Y_i||², where:
  • G(X) represents the predicted image sequence of the Encoding-Forecasting network
  • N is the number of samples;
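  For illustration only (the patent renders the formula as an image), the MSE objective described here can be written as a small NumPy function over a batch of predicted and target frames:

```python
import numpy as np

def mse_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """L_MSE = (1/N) * sum_i ||G(X_i) - Y_i||^2, averaged over N samples."""
    n = pred.shape[0]
    diff = (pred - target).reshape(n, -1)   # flatten each sample
    return float((diff ** 2).sum(axis=1).mean())

# Two stand-in samples of shape (c=1, h=4, w=4); every pixel differs by 1,
# so each sample contributes 16 and the mean over samples is 16.0.
pred = np.zeros((2, 1, 4, 4))
target = np.ones((2, 1, 4, 4))
loss = mse_loss(pred, target)  # -> 16.0
```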
  • Step S4.3: When the training error of the Encoding-Forecasting network module in step S4.2 is less than 0.001, the network module and the deep convolutional network module form a generative adversarial network for further training.
  • The optimization objective function of the traditional generative adversarial network is composed of the optimization objectives of the generator and the discriminator; its specific form is:
  • D(·) represents the probability value output by the deep convolutional network module after processing the input data.
  • The discriminator in the present invention is trained using the discriminator part L_D of the traditional generative adversarial network loss function, computed as follows:
  • the improved generator loss function consists of two parts:
  • the other part is the MSE error loss function L MSE , which is used to ensure the stability of the generator model training.
  • The weight parameters λ_adv and λ_MSE adjust the losses L_adv and L_MSE to balance training stability against the sharpness of the prediction results. The final loss function of the generator is therefore:
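  Assuming the standard GAN formulation the text appears to follow, the discriminator loss L_D and the improved generator loss L_G can be sketched numerically as below; the weights λ_adv = 0.05 and λ_MSE = 0.95 are illustrative assumptions, not values given in the patent:

```python
import numpy as np

def d_loss(d_real: np.ndarray, d_fake: np.ndarray) -> float:
    """Traditional discriminator loss L_D: push D(Y)->1 and D(G(X))->0."""
    return float(-(np.log(d_real) + np.log(1.0 - d_fake)).mean())

def g_loss(d_fake: np.ndarray, mse: float,
           lam_adv: float = 0.05, lam_mse: float = 0.95) -> float:
    """Improved generator loss L_G = lam_adv * L_adv + lam_mse * L_MSE,
    with L_adv the traditional generator term log(1 - D(G(X)))."""
    l_adv = float(np.log(1.0 - d_fake).mean())
    return lam_adv * l_adv + lam_mse * mse

d_real = np.array([0.9, 0.8])   # discriminator outputs on real images
d_fake = np.array([0.2, 0.1])   # discriminator outputs on generated images
ld = d_loss(d_real, d_fake)
lg = g_loss(d_fake, mse=0.5)
```

  The MSE term dominates L_G with these weights, which is one way to keep generator training stable while the adversarial term sharpens detail.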
  • μ_x is the mean value of x
  • μ_y is the mean value of y
  • σ_x² is the variance of x and σ_y² is the variance of y
  • σ_xy is the covariance of x and y.
  • L is the dynamic range of pixel values.
  • the value range of SSIM is [0,1], and the closer the value is to 1, the more similar the two image structures are.
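  The SSIM index described above can be computed over a single global window as follows (the patent's exact windowing is not specified here); the constants k1 = 0.01 and k2 = 0.03 match the values given in the description:

```python
import numpy as np

def ssim(x: np.ndarray, y: np.ndarray, dynamic_range: float = 1.0) -> float:
    """Global structural similarity between two images whose pixel values
    span [0, dynamic_range]; closer to 1 means more similar structures."""
    k1, k2 = 0.01, 0.03
    c1 = (k1 * dynamic_range) ** 2   # stabilizing constants
    c2 = (k2 * dynamic_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) /
                 ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)))

rng = np.random.default_rng(1)
img = rng.random((16, 16))   # stand-in for a flow field image
```

  An image compared with itself gives SSIM of exactly 1, while an inverted copy scores lower because the covariance term turns negative.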
  • step S5.1 Preprocess the test data set in step S1.1 according to the steps in S1, and adjust the data dimension of the test data set according to the input requirements in step S2.1 and step S3.1;
  • Step S5.2: Use the final generative adversarial network prediction model from step S4.4 to predict the last-frame image of each test sample, obtaining the planar cascade flow field prediction image at an inlet angle of attack of 10°.
  • the method provided by the present invention is used to predict the flow field image of the steady flow of the axial flow compressor plane cascade.
  • Compared with traditional methods, the present invention effectively extracts and exploits the spatio-temporal characteristics of the flow field image sequence and, while maintaining prediction accuracy, reflects the flow field changes inside the axial compressor directly and clearly.
  • The model predictions of the present invention agree well with the CFD results, learn how the planar cascade flow field varies with inlet angle of attack under different blade profiles and Mach numbers, and consume fewer computational resources than CFD; with validity ensured, the model can replace CFD in generating the required flow field simulation data.
  • The invention is data-driven, and the model can be conveniently applied to flow field prediction for axial compressors with different blade profiles by training on different data sets, giving it a degree of generality.
  • Figure 1 is a flow chart of the method for predicting the steady flow of a planar cascade based on a generative adversarial network
  • Fig. 2 is a flow chart of data preprocessing
  • FIG. 3 is a structural diagram of the ConvLSTM unit
  • Figure 4 is a structural diagram of the Encoding-Forecasting model
  • Figure 5 is a structural diagram of the generative adversarial network model
  • Figure 6 shows three samples selected from the prediction results of the generative adversarial network on the test data, where (a), (c) and (e) are real flow field images of planar cascades with different blade profiles at an inlet angle of attack of 10°, and (b), (d) and (f) are the predicted flow field images of planar cascades with different blade profiles at an inlet angle of attack of 10°.
  • The present invention relies on CFD simulation data of the flow field of the planar cascade of an axial compressor; the process flow of the method for predicting the steady flow of a planar cascade based on a generative adversarial network is shown in FIG. 1.
  • FIG. 2 is a flow chart of data preprocessing, and the steps of data preprocessing are as follows:
  • The flow field image data of the steady flow of the planar cascade of the axial compressor are obtained through CFD simulation experiments.
  • The simulation data involve blade profile, Mach number, and inlet flow angle conditions; the inlet angle of attack varies with time as 0°, 1°, 2°, …, 9°, 10°, …, positively correlated with time.
  • Therefore, the flow field images of the inlet angle of attack changing with time under the same blade profile, Mach number, and inlet flow angle conditions constitute an image sequence as one sample.
  • This experiment uses equal-length sequence inputs, so redundant data in the samples are removed to ensure that the image sequence length in every sample is the same.
  • The image sequence length of each sample is 11 frames; the first 10 frames are used as network input values, and the last frame as the true value of the prediction target;
  • S1.5 divides the training data set into a training set and a validation set at a ratio of 4:1.
  • The validation set needs to contain samples of different blade profiles.
  • Figure 3 shows the internal structure of the ConvLSTM unit:
  • the main disadvantage of the traditional LSTM unit in processing spatiotemporal data is that it uses full connections in the input-to-state and state-to-state transitions, where there is no spatial information encoding.
  • ConvLSTM uses convolution operators in input-to-state and state-to-state transitions to realize the function of determining the future state of a unit through the input and historical hidden state information near a unit in the space.
  • the input, cell output, and cell state of ConvLSTM will be three-dimensional tensors, the first dimension is the number of channels, and the second and third dimensions represent the image resolution of the output.
  • the input, unit output, and unit state of traditional LSTM can be regarded as three-dimensional tensors with the last two dimensions being 1. In this sense, traditional LSTM is actually a special case of ConvLSTM. If the states of units in space are regarded as hidden representations of moving objects, a ConvLSTM with a larger kernel should be able to capture faster motion, while a ConvLSTM with a smaller kernel should be able to capture slower motion.
  • h t represents the output of the unit at the current time
  • h t-1 represents the output of the unit at the previous time
  • c t is the state of the unit at the current time
  • c t-1 represents the state of the unit at the previous time
  • ⊙ represents the Hadamard product
  • Conv(·) represents the convolution operation
  • i_t, f_t, o_t represent the input gate, forget gate and output gate respectively
  • w represents the weights
  • b represents the bias
  • Tanh() represents hyperbolic tangent activation function
  • sigmoid( ) represents the sigmoid activation function.
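  A generic single-step ConvLSTM update matching the gate equations sketched above, written in plain NumPy with a naive same-padding convolution standing in for Conv(·); this illustrates the mechanism (without the peephole terms some variants include) and is not the patent's implementation — kernel shapes and values are illustrative:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 2D convolution with zero 'same' padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + kh, j:j + kw] * k).sum()
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h_prev, c_prev, kernels):
    """One ConvLSTM update; kernels[g] holds the (input, hidden) kernel
    pair for gate g. Gates come from convolutions; states combine
    through Hadamard products, as in the Figure 3 equations."""
    def gate(name, act):
        kx, kh = kernels[name]
        return act(conv2d_same(x, kx) + conv2d_same(h_prev, kh))
    i = gate("i", sigmoid)    # input gate
    f = gate("f", sigmoid)    # forget gate
    o = gate("o", sigmoid)    # output gate
    g = gate("g", np.tanh)    # candidate cell input
    c = f * c_prev + i * g    # c_t = f ⊙ c_{t-1} + i ⊙ g
    h = o * np.tanh(c)        # h_t = o ⊙ tanh(c_t)
    return h, c

rng = np.random.default_rng(2)
ks = {n: (rng.normal(size=(3, 3)) * 0.1, rng.normal(size=(3, 3)) * 0.1)
      for n in "ifog"}
x = rng.random((8, 8))                      # one single-channel frame
h, c = convlstm_step(x, np.zeros((8, 8)), np.zeros((8, 8)), ks)
```

  Because the state update is convolutional, each output pixel depends only on its spatial neighborhood, which is exactly the spatial-encoding property the text contrasts with the fully connected LSTM.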
  • The Encoding-Forecasting network structure diagram is shown in Figure 4, where the encoder is the Encoding network and the decoder is the Forecasting network. Adjust the dimension of each input sample in the training set to (seq_input, c, h, w) and the dimension of the image prediction target true value to (seq_target, c, h, w), where seq_input is the length of the input image sequence, seq_target is the length of the predicted image sequence, c is the number of image channels, and (h, w) is the image resolution;
  • S2.2: The Encoding network is composed of multiple encoding modules. The flow field image sequence of the planar cascade has high-dimensional features; the encoding modules reduce the dimensionality of these features, remove secondary features from the sequence, and extract effective spatio-temporal features.
  • the low-level encoding module can extract the local flow field spatial structure features, thereby capturing the details of the flow field area changes.
  • Higher-level encoding modules extract a wider range of spatial features by enlarging the receptive field and capture the abruptly changing flow structures near the blade leading edge in the planar cascade flow field image. Each encoding module consists of a downsampling layer and a ConvLSTM layer: the downsampling layer reduces computation and enlarges the receptive field, while the ConvLSTM layer captures the nonlinear spatio-temporal evolution of the flow field. Each ConvLSTM layer contains multiple ConvLSTM units.
  • The output of the downsampling layer is fed into the ConvLSTM layer through a gated activation unit, and the encoding modules are connected by gated activation units. Each encoding module learns the high-dimensional spatio-temporal features of the flow field image sequence, outputs lower-dimensional spatio-temporal features, and passes them to the next encoding module;
  • The Forecasting network is composed of multiple decoding modules. The function of a decoding module is to expand the low-dimensional flow field spatio-temporal features extracted by the encoding modules into higher-dimensional features, so as to finally reconstruct the high-dimensional flow field image. Each decoding module consists of an upsampling layer and a ConvLSTM layer; the upsampling layer expands the feature dimension. Each ConvLSTM layer contains multiple ConvLSTM units; the ConvLSTM layer output is fed into the upsampling layer through a gated activation unit, and the decoding modules are connected by gated activation units. Each decoding module decodes the spatio-temporal features of the input image sequence extracted by the encoding module at the corresponding position of the Encoding network, obtains feature information from historical instants, and passes it to the next decoding module;
  • the different encoding layers of the Encoding network output the spatio-temporal features of the plane cascade flow field image sequence extracted in different dimensions, and the Forecasting network takes the spatio-temporal features of different dimensions as the initial state input of different decoding layers;
  • Step S3.1: Adjust the dimension of the image prediction target true value from step S1.4 and the Encoding-Forecasting network prediction result obtained in step S2.5 to (N*seq_target, c, h, w), and use them as the input of the deep convolutional network;
  • the deep convolutional network module consists of multiple convolutional modules and an output mapping module.
  • The function of the output mapping module is to pass the features extracted by the convolution modules through a convolutional layer, apply the sigmoid activation function to obtain an output value between 0 and 1, and then reshape the output into a probability output value, which is the final output of the deep convolutional network module with dimension (N*seq_target, 1).
  • the probability value represents the probability that the deep convolutional network determines that the image is a real image. It is marked as 1 for the real image and 0 for the Encoding-Forecasting network prediction image.
  • Generative adversarial training is adopted so that the deep convolutional network module provides learning gradients for the Encoding-Forecasting network and further optimizes its parameters. The Encoding-Forecasting network built in step S2 is used as the generator of the generative adversarial network, denoted G; the deep convolutional network module constructed in step S3 is used as the discriminator, denoted D;
  • Since the Encoding-Forecasting network module can be used as an independent prediction network, its flow field image predictions already have a degree of reliability. In addition, applying the discriminator too early destabilizes training. The present invention therefore first trains the Encoding-Forecasting network alone and, when its error value reaches 0.0009, adds the deep convolutional network module as a discriminator to form a jointly trained generative adversarial network, so as to stabilize the training process and further restore flow field image detail.
  • The MSE loss function is L_MSE = (1/N) Σ_{i=1}^{N} ||G(X_i) − Y_i||², where:
  • G(X) represents the predicted image sequence of the Encoding-Forecasting network
  • N is the number of samples.
  • Step S4.3: When the training error of the Encoding-Forecasting network module in step S4.2 reaches 0.0009, the network module and the deep convolutional network module form a generative adversarial network for further training. The optimization objective function of the traditional generative adversarial network consists of the optimization objectives of the generator and the discriminator; its specific form is:
  • D(·) represents the probability value output by the deep convolutional network module after processing the input data.
  • The discriminator in the present invention is trained using the discriminator part L_D of the traditional generative adversarial network loss function, computed as follows:
  • the improved generator loss function consists of two parts:
  • the other part is the MSE error loss function L MSE , which is used to ensure the stability of the generator model training.
  • The weight parameters λ_adv and λ_MSE adjust the losses L_adv and L_MSE to balance training stability against the sharpness of the prediction results. The final loss function of the generator is therefore:
  • μ_x is the mean value of x
  • μ_y is the mean value of y
  • σ_x² is the variance of x and σ_y² is the variance of y
  • σ_xy is the covariance of x and y.
  • L is the dynamic range of pixel values.
  • the value range of SSIM is [0,1], and the closer the value is to 1, the more similar the two image structures are.
  • step S5.1 Preprocess the test data set in step S1.1 according to the steps in S1, and adjust the data dimension of the test data set according to the input requirements in step S2.1 and step S3.1;
  • step S5.2 Use the final generative confrontation network prediction model in step S4.4 to predict the image of the last frame of each test sample, and obtain the flow field prediction image of the plane cascade when the inlet angle of attack is 10°.


Abstract

A method for predicting the steady flow of a planar cascade based on a generative adversarial network. First, the CFD simulation data of the planar cascade are preprocessed and divided into a test data set and a training data set. Next, an Encoding-Forecasting network module, a deep convolutional network module, and a generative adversarial network prediction model are constructed in turn. Finally, predictions are made on the test set: the test data are preprocessed in the same way, and their dimensions are adjusted to the input requirements of the saved optimal prediction model, which yields the planar cascade flow field image at an inlet angle of attack of 10°. The method effectively avoids the limited measurement range of the sensors inside the axial compressor; its prediction results agree closely with the CFD calculations and achieve high prediction accuracy. Being data-driven, the model can be conveniently applied to flow field prediction for axial compressors with different blade profiles by training on different data sets, and thus has a degree of generality.

Description

A method for predicting the steady flow of a planar cascade based on a generative adversarial network

Technical Field

The present invention relates to a method for predicting the steady flow of a planar cascade based on a generative adversarial network, and belongs to the technical field of aero-engine modeling and simulation.

Background Art

The aero-engine is the jewel in the crown of modern industry and is of extreme importance to national military and civil development. The axial compressor is the core component of the aero-engine, and whether it operates stably directly determines engine performance. Rotating stall and surge are two relatively common unstable flow phenomena of axial compressors; these abnormal flows can cause compressor failure and in turn affect the working state of the aero-engine. Therefore, timely prediction of unstable fluid flow inside the axial compressor plays a vital role in ensuring stable aero-engine operation.

There are two traditional methods for detecting and judging axial compressor stability. The first studies the mechanisms of rotating stall and surge inside the compressor and uses mathematical and physical methods to establish systems of equations, obtaining a model that simulates the compressor flow field; however, because of the system uncertainty and the complexity of internal evolution caused by the intricate interaction of factors in the compressor system, such models cannot accurately reflect the changing trend of the compressor flow field. The second analyzes the state characteristics of signals collected by sensors at different measurement points inside the compressor with time-domain, frequency-domain, and time-frequency analysis algorithms, so as to avoid the onset of flow instability. Compared with the limited-range data collected by sensors at fixed measurement points, planar cascade flow field images of the axial compressor reflect the flow field changes inside the entire compressor more intuitively and clearly. With the development of artificial intelligence, image sequence data have become an extremely important class of real-world data, and the application of deep learning to image sequence prediction has gradually matured. At present, image sequence prediction is mostly applied in autonomous driving and weather forecasting, where good progress has been made, while flow field prediction at home and abroad is still at a preliminary exploratory stage; applying image sequence prediction technology to steady planar cascade flow prediction therefore has a very bright prospect.

Because the aero-engine is a piece of high-precision equipment and its experiments are complex to operate, experimental image data of the flow field inside an axial compressor are difficult to obtain. Computational fluid dynamics (CFD) has made major progress on this problem: CFD simulation experiments yield image sequence data of planar cascade flow field changes under different conditions. Based on a data-driven approach using CFD simulation data, this invention uses a generative adversarial network model to extract representations of planar cascade steady-flow field images at historical instants and predicts the rapidly changing flow field inside the axial compressor, effectively avoiding the limited measurement range of sensors inside the compressor.
Summary of the Invention

Aiming at the problems of low accuracy and poor reliability in the prior art, the present invention provides a method for predicting the steady flow of a planar cascade based on a generative adversarial network.

The technical solution of the present invention:

A method for predicting the steady flow of a planar cascade based on a generative adversarial network, comprising the following steps:
S1. Preprocess the planar cascade flow field simulation image data of the axial compressor, comprising the following steps:

S1.1 Since experimental data of the flow field inside the axial compressor of an aero-engine are difficult to obtain, the flow field image data of the steady flow of the planar cascade of the axial compressor are obtained through CFD simulation experiments. The simulation data involve blade profile, Mach number, and inlet flow angle conditions; the inlet angle of attack varies with time as 0°, 1°, 2°, …, 9°, 10°, …, positively correlated with time, so the flow field images of the inlet angle of attack changing with time under the same blade profile, Mach number, and inlet flow angle conditions constitute an image sequence as one sample. The experiment uses equal-length sequence inputs, so redundant data in the samples are removed to ensure that the image sequence length in every sample is the same. The sample data set contains 12 groups, and the image sequence length of each sample is 11 frames, i.e., the planar cascade flow field image sequences at inlet angles of attack of 0°, 1°, 2°, 3°, …, 9°, 10°. To ensure objective test results, the simulation experiment data are divided into a test data set and a training data set before processing;

S1.2 Denoise the flow field image data with median filtering, mean filtering, and Gaussian filtering;

S1.3 Crop the filtered flow field images to obtain the flow field at the edges of the planar cascade, uniformly resize the cropped images to a resolution of 256×256 with linear interpolation, and normalize the training set data;

S1.4 The image sequence length of each sample is 11 frames; the first 10 frames are used as the network input values and the last frame as the true value of the image prediction target;

S1.5 Divide the training data set into a training set and a validation set at a ratio of 4:1.
S2. Construct the Encoding-Forecasting network module, comprising the following steps:

S2.1 Adjust the dimension of each input sample in the training set to (seq_input, c, h, w) and the dimension of the image prediction target true value to (seq_target, c, h, w), where seq_input is the length of the input image sequence, seq_target is the length of the predicted image sequence, c is the number of image channels, and (h, w) is the image resolution;

S2.2 The Encoding network consists of multiple encoding modules. The planar cascade flow field image sequence has high-dimensional features; the encoding modules reduce the dimensionality of these features, remove secondary features from the sequence, and extract effective spatio-temporal features. In addition, the steady planar cascade flow field images contain large flow field regions that move slowly and change little; low-level encoding modules extract local spatial structure features of the flow field, thereby capturing the details of changes in those regions, while high-level encoding modules extract a wider range of spatial features by enlarging the receptive field and capture the abruptly changing flow structures near the blade leading edge in the planar cascade flow field images. Each encoding module consists of a downsampling layer and a ConvLSTM layer: the downsampling layer reduces computation and enlarges the receptive field, and the ConvLSTM layer captures the nonlinear spatio-temporal evolution of the flow field. Each ConvLSTM layer contains multiple ConvLSTM units; the output of the downsampling layer is fed into the ConvLSTM layer through a gated activation unit, and the encoding modules are connected by gated activation units. Each encoding module learns the high-dimensional spatio-temporal features of the flow field image sequence, outputs lower-dimensional spatio-temporal features, and passes them to the next encoding module;

S2.3 The Forecasting network consists of multiple decoding modules. The function of a decoding module is to expand the low-dimensional flow field spatio-temporal features extracted by the encoding modules into higher-dimensional features, so as to finally reconstruct the high-dimensional flow field image. Each decoding module consists of an upsampling layer and a ConvLSTM layer; the upsampling layer expands the feature dimension. Each ConvLSTM layer contains multiple ConvLSTM units; the output of the ConvLSTM layer is fed into the upsampling layer through a gated activation unit, and the decoding modules are connected by gated activation units. Each decoding module decodes the spatio-temporal features of the input image sequence extracted by the encoding module at the corresponding position of the Encoding network, obtains feature information from historical instants, and passes it to the next decoding module;

S2.4 The different encoding layers of the Encoding network output the extracted spatio-temporal features of the planar cascade flow field image sequence at different dimensions, and the Forecasting network takes these spatio-temporal features of different dimensions as the initial state inputs of the corresponding decoding layers;

S2.5 To ensure that the input images and the predicted images have the same resolution, the output features of the last decoding module of the Forecasting network are passed through a convolutional layer with ReLU activation to generate and output the final predicted image, which serves as the prediction result of the Encoding-Forecasting network with dimension (N, seq_target, c, h, w), where N is the number of samples.
S3. Construct the deep convolutional network module, comprising the following steps:

S3.1 Adjust the dimensions of the image prediction target true values from step S1.4 and the Encoding-Forecasting network prediction results obtained in step S2.5 to (N*seq_target, c, h, w) and use them as the input of the deep convolutional network;

S3.2 A convolutional layer, a batch normalization layer, and a LeakyReLU activation function are connected in sequence to form a convolution module. The deep convolutional network module consists of multiple convolution modules and an output mapping module. The output mapping module passes the features extracted by the convolution modules through a convolutional layer, applies the sigmoid activation function to obtain an output value between 0 and 1, and then reshapes the output into a probability output value, which is the final output of the deep convolutional network module with dimension (N*seq_target, 1). The probability value represents the probability that the deep convolutional network judges the image to be real; real images are labeled 1, and Encoding-Forecasting network predictions are labeled 0.
S4. Construct the generative adversarial network prediction model, comprising the following steps:

S4.1 Since flow field images predicted by the Encoding-Forecasting network alone suffer from blurred details, generative adversarial training is adopted so that the deep convolutional network module provides learning gradients for the Encoding-Forecasting network and further optimizes its parameters. The Encoding-Forecasting network constructed in step S2 serves as the generator of the generative adversarial network, denoted G; the deep convolutional network module constructed in step S3 serves as the discriminator, denoted D;

S4.2 The Encoding-Forecasting network module can serve as an independent prediction network and already predicts flow field images with a degree of reliability; moreover, applying the discriminator too early destabilizes the training process. The present invention therefore first trains the Encoding-Forecasting network alone and, once its error value is less than 0.001, adds the deep convolutional network module as a discriminator to form a jointly trained generative adversarial network, so as to stabilize the training process and further restore flow field image detail.
首先利用MSE损失函数单独训练Encoding-Forecasting网络,MSE损失函数为:
Figure PCTCN2021141541-appb-000001
其中,X=(X 1,…,X m)表示输入图像序列,Y=(Y 1,…Y n)表示预测目标图像序列,G(X)表示Encoding-Forecasting网络的预测图像序列,N为样本数量;
S4.3 When the training error of the Encoding-Forecasting network module in step S4.2 falls below 0.001, this module and the deep convolutional network module are combined into a GAN and trained further. The optimization objective of a traditional GAN comprises the objectives of the generator and the discriminator:
min_G max_D V(D, G) = E_Y[log D(Y)] + E_X[log(1 − D(G(X)))]
where D(·) is the probability output by the deep convolutional network module after processing its input.
The discriminator in the present invention is trained with the discriminator part L_D of the traditional GAN loss, computed as:
L_D = −E_Y[log D(Y)] − E_X[log(1 − D(G(X)))]
To address the instability of generator training in adversarial learning, an improved generator loss is designed, consisting of two parts:
One part is the generator term L_adv of the traditional GAN loss, computed as:
L_adv = −E_X[log D(G(X))]
The other part is the MSE loss L_MSE, which guarantees the stability of generator training; the weight parameters λ_adv and λ_MSE adjust L_adv and L_MSE to balance training stability against prediction sharpness. The final generator loss is therefore:
L_G = λ_adv·L_adv + λ_MSE·L_MSE
where λ_adv ∈ (0, 1) and λ_MSE ∈ (0, 1);
The loss of the whole GAN is thus:
L_total = L_D + L_G
S4.4 Save the GAN trained in step S4.3 and test it on the validation set; tune the model hyperparameters according to the validation metric, for which the structural similarity (SSIM) index is used, and keep the model that optimizes the metric as the final GAN prediction model;
Given two images x and y, the SSIM index is:
SSIM(x, y) = (2μ_x μ_y + c_1)(2σ_xy + c_2) / ((μ_x² + μ_y² + c_1)(σ_x² + σ_y² + c_2))
where μ_x and μ_y are the means of x and y, σ_x² and σ_y² their variances, and σ_xy their covariance. c_1 = (k_1 L)² and c_2 = (k_2 L)² are constants that keep the expression stable; L is the dynamic range of the pixel values; k_1 = 0.01 and k_2 = 0.03. SSIM lies in [0, 1]; the closer the value is to 1, the more similar the structures of the two images.
S5. Predict the test data with the prediction model:
S5.1 Preprocess the test data set of step S1.1 following the steps in S1, and reshape it to the input dimensions required in steps S2.1 and S3.1;
S5.2 Use the final GAN prediction model from step S4.4 to predict the last frame of each test sample, obtaining the predicted flow-field image of the plane cascade at an inlet angle of attack of 10°.
Beneficial effects of the invention: the method predicts flow-field images of steady flow in the plane cascade of an axial compressor. Compared with traditional methods, it effectively exploits the spatiotemporal features of the image sequence and, while maintaining prediction accuracy, gives an intuitive and clear picture of the flow changes inside the axial compressor. The model's predictions agree well with CFD results; it learns how the cascade flow field varies with inlet angle of attack across different blade profiles and Mach numbers, consumes far fewer computational resources than CFD, and can replace CFD for generating the required flow simulation data while remaining effective. Being data-driven, the model can be conveniently applied to flow prediction for axial compressors with different blade profiles by training on different data sets, and thus has a degree of generality.
Description of the drawings
Fig. 1 is the flow chart of the GAN-based method for predicting steady flow in a plane cascade;
Fig. 2 is the data preprocessing flow chart;
Fig. 3 shows the structure of a ConvLSTM cell;
Fig. 4 shows the structure of the Encoding-Forecasting model;
Fig. 5 shows the structure of the GAN model;
Fig. 6 shows three examples selected from the prediction results on the test data, where (a), (c) and (e) are real flow-field images of plane cascades with different blade profiles at an inlet angle of attack of 10°, and (b), (d) and (f) are the corresponding predicted flow-field images.
Detailed description
The invention is further described below with reference to the drawings. The background of the invention is CFD simulation data of the plane-cascade flow field of an axial compressor; the overall flow of the GAN-based method for predicting steady flow in a plane cascade is shown in Fig. 1.
Fig. 2 shows the data preprocessing flow chart; the preprocessing steps are as follows:
S1. Preprocess the plane-cascade flow-field image data of the axial compressor, comprising the following steps:
S1.1 Because experimental data of the internal flow field of an aero-engine axial compressor are hard to obtain, the flow-field images of steady flow in the plane cascade are generated by CFD simulation. The simulations cover blade-profile, Mach-number and inlet flow-angle conditions; the inlet angle of attack varies with time as 0°, 1°, 2°, …, 9°, 10°, …, i.e. it is positively correlated with time. The flow-field images taken under identical blade-profile, Mach-number and inlet flow-angle conditions, with the angle of attack varying over time, therefore form one image sequence, which constitutes one sample. This experiment uses equal-length sequence inputs, so redundant data are removed from the samples to keep the image sequence length identical across samples. The sample data set comprises 12 groups; each group is an 11-frame image sequence, i.e. the cascade flow-field images at angles of attack 0°, 1°, 2°, 3°, …, 9°, 10°. To keep the test results objective, the simulation data are divided into a test data set and a training data set before any processing;
S1.2 Denoise the flow-field image data with median, mean and Gaussian filtering;
S1.3 Crop the filtered flow-field images to obtain the flow field at the cascade edge, resize the cropped images uniformly to 256×256 by linear interpolation, and normalize the training data;
S1.4 Each sample is an 11-frame image sequence; the first 10 frames are the network input and the last frame is the ground-truth prediction target;
S1.5 Divide the training data set into a training set and a validation set at a ratio of 4:1; to ensure the model adapts to the various blade profiles, the validation set must contain samples of different profiles.
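Steps S1.3 through S1.5 can be sketched in a few lines of numpy. This is an illustrative toy only: the random data, 8×8 resolution and array shapes are placeholders standing in for the denoised, cropped and resized 256×256 flow-field images.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder for 12 samples of 11 frames each, 1 channel, toy 8x8 resolution
data = rng.random((12, 11, 1, 8, 8)).astype(np.float32)

# S1.3: min-max normalisation of the training data to [0, 1]
data = (data - data.min()) / (data.max() - data.min() + 1e-8)

# S1.4: the first 10 frames are the network input, the last frame is the target
inputs, targets = data[:, :10], data[:, 10:]

# S1.5: 4:1 split of the training data into training and validation sets
n_train = len(data) * 4 // 5
train_x, val_x = inputs[:n_train], inputs[n_train:]
train_y, val_y = targets[:n_train], targets[n_train:]
```

With 12 samples the 4:1 split yields 9 training and 3 validation sequences; a real split would additionally be stratified over blade profiles, as the text requires.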
Fig. 3 shows the internal structure of a ConvLSTM cell. The main drawback of the traditional LSTM cell for spatiotemporal data is that its input-to-state and state-to-state transitions are fully connected, encoding no spatial information. ConvLSTM instead uses convolution operators in both transitions, so that the future state of a cell is determined by the inputs and past hidden states within its spatial neighbourhood.
The inputs, cell outputs and cell states of a ConvLSTM are therefore 3-D tensors whose first dimension is the number of channels and whose last two dimensions are the output image resolution. The inputs, cell outputs and cell states of a traditional LSTM can be viewed as 3-D tensors whose last two dimensions equal 1; in this sense the traditional LSTM is a special case of ConvLSTM. If the states of the cells in space are viewed as hidden representations of moving objects, a ConvLSTM with a larger convolution kernel should capture faster motion, while one with a smaller kernel captures slower motion.
The forward propagation of a ConvLSTM is:
i_t = Sigmoid(Conv(x_t; w_xi) + Conv(h_{t−1}; w_hi) + b_i)
f_t = Sigmoid(Conv(x_t; w_xf) + Conv(h_{t−1}; w_hf) + b_f)
o_t = Sigmoid(Conv(x_t; w_xo) + Conv(h_{t−1}; w_ho) + b_o)
g_t = Tanh(Conv(x_t; w_xg) + Conv(h_{t−1}; w_hg) + b_g)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ g_t
h_t = o_t ⊙ Tanh(c_t)
where h_t is the cell output at the current time step, h_{t−1} the output at the previous step, c_t the current cell state, c_{t−1} the previous cell state, ⊙ the Hadamard product, Conv(·) the convolution operation, i_t, f_t and o_t the input, forget and output gates, w the weights, b the biases, Tanh(·) the hyperbolic-tangent activation, and Sigmoid(·) the sigmoid activation.
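The gate equations above can be sketched in plain numpy. This is a minimal illustration, not the patent's implementation: the naive `conv2d`, the weight shapes, and the toy dimensions are assumptions, and (as is conventional in deep-learning frameworks) cross-correlation stands in for strict convolution.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'same' convolution of a (C_in, H, W) map with (C_out, C_in, k, k) kernels."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((c_out, x.shape[1], x.shape[2]))
    for i in range(x.shape[1]):
        for j in range(x.shape[2]):
            out[:, i, j] = np.tensordot(w, xp[:, i:i + k, j:j + k],
                                        axes=([1, 2, 3], [0, 1, 2]))
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x_t, h_prev, c_prev, W):
    """One ConvLSTM forward step following the gate equations above."""
    i_t = sigmoid(conv2d(x_t, W['xi']) + conv2d(h_prev, W['hi']) + W['bi'])
    f_t = sigmoid(conv2d(x_t, W['xf']) + conv2d(h_prev, W['hf']) + W['bf'])
    o_t = sigmoid(conv2d(x_t, W['xo']) + conv2d(h_prev, W['ho']) + W['bo'])
    g_t = np.tanh(conv2d(x_t, W['xg']) + conv2d(h_prev, W['hg']) + W['bg'])
    c_t = f_t * c_prev + i_t * g_t      # element-wise (Hadamard) products
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

rng = np.random.default_rng(0)
cin, chid, H, Wd, k = 1, 4, 8, 8, 3     # toy channel counts, resolution, kernel size
W = {}
for g in "ifog":
    W["x" + g] = rng.normal(0, 0.1, (chid, cin, k, k))
    W["h" + g] = rng.normal(0, 0.1, (chid, chid, k, k))
    W["b" + g] = np.zeros((chid, 1, 1))

x_t = rng.normal(size=(cin, H, Wd))
h_t, c_t = convlstm_step(x_t, np.zeros((chid, H, Wd)), np.zeros((chid, H, Wd)), W)
```

The cell output `h_t` keeps the spatial resolution of the input, which is what lets the encoder stack ConvLSTM layers between its downsampling layers.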
S2. Build the Encoding-Forecasting network module, comprising the following steps:
S2.1 The structure of the Encoding-Forecasting network is shown in Fig. 4, where the encoder is the Encoding network and the decoder is the Forecasting network. Reshape each input sample in the training set to (seq_input, c, h, w) and the ground-truth prediction target to (seq_target, c, h, w), where seq_input is the input image sequence length, seq_target the predicted image sequence length, c the number of image channels, and (h, w) the image resolution;
S2.2 The Encoding network consists of multiple encoding modules. The flow-field image sequence of the plane cascade is high-dimensional; the encoding modules reduce this dimensionality, discarding secondary features of the sequence and extracting effective spatiotemporal features. In addition, the steady-flow images contain large flow regions that move slowly and change little: low-level encoding modules extract local spatial structure of the flow and thus capture the details of change in those regions, while high-level encoding modules enlarge the receptive field to extract larger-scale spatial features and capture the abruptly changing flow structures near the blade leading edge. Each encoding module consists of one downsampling layer and one ConvLSTM layer; the downsampling layer reduces computation and enlarges the receptive field, and the ConvLSTM layer captures the nonlinear spatiotemporal evolution of the flow field. Each ConvLSTM layer contains multiple ConvLSTM cells; the output of the downsampling layer is fed into the ConvLSTM layer through a gated activation unit, and adjacent encoding modules are connected through gated activation units. Each encoding module learns the high-dimensional spatiotemporal features of the image sequence and passes lower-dimensional spatiotemporal features to the next encoding module;
S2.3 The Forecasting network consists of multiple decoding modules, which expand the low-dimensional flow-field spatiotemporal features extracted by the encoding modules back into higher-dimensional features so that the high-dimensional flow-field image can finally be reconstructed. Each decoding module consists of one upsampling layer and one ConvLSTM layer; the upsampling layer expands the feature dimension. Each ConvLSTM layer contains multiple ConvLSTM cells; the output of the ConvLSTM layer is fed into the upsampling layer through a gated activation unit, and adjacent decoding modules are connected through gated activation units. Each decoding module decodes the spatiotemporal features of the input image sequence extracted by the encoding module at the same level of the Encoding network, obtains the feature information of the historical time steps, and passes it to the next decoding module;
S2.4 The different encoding layers of the Encoding network output spatiotemporal features of the cascade flow-field image sequence at different dimensions; the Forecasting network takes these features as the initial states of the corresponding decoding layers;
S2.5 To guarantee that the input and predicted images share the same resolution, the output features of the last decoding module of the Forecasting network are passed through a convolutional layer with a ReLU activation to generate and output the final predicted image, which is the prediction result of the Encoding-Forecasting network, with dimensions (N, seq_target, c, h, w), where N is the number of samples.
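The mirrored shape flow of S2.2 through S2.5 can be illustrated with simple stand-ins: here 2×2 average pooling plays the role of a downsampling layer and nearest-neighbour upsampling the role of an upsampling layer, while the ConvLSTM layers and gated activation units of the actual model are omitted. This is a shape-level sketch only, not the patent's architecture.

```python
import numpy as np

def downsample(x):
    """2x2 average pooling: stand-in for an encoding module's downsampling layer."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample(x):
    """Nearest-neighbour 2x upsampling: stand-in for a decoding module's upsampling layer."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

frame = np.random.default_rng(0).random((1, 256, 256))

# Encoding network: each module halves the spatial resolution; in the full model
# the ConvLSTM states at each level would initialise the decoding module at the
# same level (S2.4).
e1 = downsample(frame)   # (1, 128, 128)
e2 = downsample(e1)      # (1, 64, 64)

# Forecasting network mirrors the encoder so the predicted image recovers the
# input resolution (S2.5).
d2 = upsample(e2)        # (1, 128, 128)
d1 = upsample(d2)        # (1, 256, 256)
```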
S3. Build the deep convolutional network module, comprising the following steps:
S3.1 Reshape the ground-truth prediction targets from step S1.4 and the Encoding-Forecasting predictions from step S2.5 to (N*seq_target, c, h, w) and use them as the input of the deep convolutional network;
S3.2 A convolutional module is formed by connecting, in order, a convolutional layer, a batch-normalization layer, and a LeakyReLU activation. The deep convolutional network module consists of several such convolutional modules and one output-mapping module. The output-mapping module passes the features extracted by the convolutional modules through a convolutional layer, applies a sigmoid activation to obtain output values between 0 and 1, and reshapes them into a probability output of dimension (N*seq_target, 1), which is the final output of the deep convolutional network module. This probability is the network's estimate that an image is real; real images are labeled 1 and Encoding-Forecasting predictions are labeled 0.
S4. Build the generative adversarial network prediction model, comprising the following steps:
S4.1 The structure of the GAN model is shown in Fig. 5, in which the encoder is the Encoding network and the decoder is the Forecasting network.
The flow-field images predicted by the Encoding-Forecasting network alone suffer from blurred details, so generative adversarial training is adopted: the deep convolutional network module supplies learning gradients to the Encoding-Forecasting network and further optimizes its parameters. The Encoding-Forecasting network built in step S2 serves as the generator of the GAN, denoted G; the deep convolutional network module built in step S3 serves as the discriminator, denoted D;
S4.2 The Encoding-Forecasting network module can act as an independent prediction network with a certain reliability for flow-field images, and applying the discriminator too early destabilizes training. The present invention therefore first trains the Encoding-Forecasting network alone and, when its error value reaches 0.0009, adds the deep convolutional network module as the discriminator and trains the resulting GAN jointly, so as to stabilize the training process and further restore flow-field image detail.
The Encoding-Forecasting network is first trained alone with the MSE loss:
L_MSE = (1/N) Σ_{i=1}^{N} ‖Y^(i) − G(X^(i))‖²
where X = (X_1, …, X_m) is the input image sequence, Y = (Y_1, …, Y_n) the target image sequence, G(X) the image sequence predicted by the Encoding-Forecasting network, and N the number of samples.
S4.3 When the training error of the Encoding-Forecasting network module in step S4.2 reaches 0.0009, this module and the deep convolutional network module are combined into a GAN and trained further. The optimization objective of a traditional GAN comprises the objectives of the generator and the discriminator:
min_G max_D V(D, G) = E_Y[log D(Y)] + E_X[log(1 − D(G(X)))]
where D(·) is the probability output by the deep convolutional network module after processing its input.
The discriminator in the present invention is trained with the discriminator part L_D of the traditional GAN loss, computed as:
L_D = −E_Y[log D(Y)] − E_X[log(1 − D(G(X)))]
To address the instability of generator training in adversarial learning, an improved generator loss is designed, consisting of two parts:
One part is the generator term L_adv of the traditional GAN loss, computed as:
L_adv = −E_X[log D(G(X))]
The other part is the MSE loss L_MSE, which guarantees the stability of generator training; the weight parameters λ_adv and λ_MSE adjust L_adv and L_MSE to balance training stability against prediction sharpness. The final generator loss is therefore:
L_G = λ_adv·L_adv + λ_MSE·L_MSE
where λ_adv ∈ (0, 1) and λ_MSE ∈ (0, 1).
The loss of the whole GAN is thus:
L_total = L_D + L_G
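The loss bookkeeping of S4.3 can be illustrated with a few lines of numpy on scalar toy values. This is a sketch, not the patent's training code: the probability values, the λ weights (0.05 and 0.95 here are arbitrary placeholders), and the toy tensors are all assumptions, and L_adv uses the common −log D(G(X)) form of the generator term.

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator part L_D of the traditional GAN loss (means over the batch)."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def g_loss(d_fake, y_true, y_pred, lam_adv=0.05, lam_mse=0.95):
    """Improved generator loss L_G = lam_adv * L_adv + lam_mse * L_MSE."""
    l_adv = -np.mean(np.log(d_fake))          # adversarial term
    l_mse = np.mean((y_true - y_pred) ** 2)   # MSE term that stabilises training
    return lam_adv * l_adv + lam_mse * l_mse

# Toy values: D's probability outputs on two real and two generated frames
d_real = np.array([0.9, 0.8])
d_fake = np.array([0.2, 0.1])
y_true = np.zeros((2, 1, 4, 4))
y_pred = np.full((2, 1, 4, 4), 0.1)

L_D = d_loss(d_real, d_fake)
L_G = g_loss(d_fake, y_true, y_pred)
L_total = L_D + L_G   # loss of the whole GAN
```

In actual training G and D would be updated alternately, each minimizing its own part of L_total.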
S4.4 Save the GAN trained in step S4.3 and test it on the validation set; tune the model hyperparameters according to the validation metric, for which the structural similarity (SSIM) index is used, and keep the model that optimizes the metric as the final GAN prediction model;
Given two images x and y, the SSIM index is:
SSIM(x, y) = (2μ_x μ_y + c_1)(2σ_xy + c_2) / ((μ_x² + μ_y² + c_1)(σ_x² + σ_y² + c_2))
where μ_x and μ_y are the means of x and y, σ_x² and σ_y² their variances, and σ_xy their covariance. c_1 = (k_1 L)² and c_2 = (k_2 L)² are constants that keep the expression stable; L is the dynamic range of the pixel values; k_1 = 0.01 and k_2 = 0.03. SSIM lies in [0, 1]; the closer the value is to 1, the more similar the structures of the two images.
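The SSIM formula can be computed directly from global image statistics, as below. This is a minimal sketch: practical SSIM implementations average the same expression over local windows, and L = 1.0 assumes pixel values normalized to [0, 1], as in step S1.3.

```python
import numpy as np

def ssim(x, y, L=1.0, k1=0.01, k2=0.03):
    """SSIM from global image statistics (windowed variants average this locally)."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

a = np.random.default_rng(0).random((32, 32))
print(round(ssim(a, a), 4))   # identical images score 1.0
```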
S5. Predict the test data with the prediction model:
S5.1 Preprocess the test data set of step S1.1 following the steps in S1, and reshape it to the input dimensions required in steps S2.1 and S3.1;
S5.2 Use the final GAN prediction model from step S4.4 to predict the last frame of each test sample, obtaining the predicted flow-field image of the plane cascade at an inlet angle of attack of 10°.
S5.3 Three examples selected from the test results are shown in Fig. 6: (a), (c) and (e) are flow-field images computed by CFD at an inlet angle of attack of 10° for different blade profiles and Mach numbers, and (b), (d) and (f) are the corresponding predictions. The predicted images closely resemble the real ones: the accelerated regions and turbulence around the blades, as well as the slowly moving flow, are all well predicted. Over the whole test set the MSE is 0.0012 and the mean SSIM is 0.8667. The experiment shows that every part of the prediction network achieves its intended goal: the model predicts the steady flow field, captures its evolution, expands low-dimensional features into higher-dimensional representations, and predicts the spatiotemporal evolution of the flow.
The embodiments described above merely illustrate implementations of the invention and are not to be construed as limiting the scope of the patent. It should be noted that those skilled in the art may make various modifications and improvements without departing from the concept of the invention, all of which fall within the scope of protection of the invention.

Claims (5)

  1. A method for predicting steady flow in a plane cascade based on a generative adversarial network, characterized by comprising the following steps:
    S1. Preprocessing the simulated steady-flow image data of the plane cascade of an axial compressor, comprising the following steps:
    S1.1 Obtaining flow-field image data of steady flow in the plane cascade of the axial compressor through CFD simulation experiments; the flow-field images taken under identical blade-profile, Mach-number and inlet flow-angle conditions, with the inlet angle of attack varying over time, form an image sequence, which constitutes one sample; the inputs are equal-length sequences; to keep the test results objective, the simulation data are divided into a test data set and a training data set before any processing;
    S1.2 Denoising the flow-field image data;
    S1.3 Cropping the filtered flow-field images to obtain the flow field at the cascade edge, unifying the resolution of the cropped images, and normalizing the training data;
    S1.4 In the image sequence of each sample, taking the last frame as the ground-truth prediction target and the other frames as the network input;
    S1.5 Dividing the training data set into a training set and a validation set;
    S2. Building the Encoding-Forecasting network module, comprising the following steps:
    S2.1 Reshaping each input sample in the training set to (seq_input, c, h, w) and the ground-truth prediction target to (seq_target, c, h, w), where seq_input is the input image sequence length, seq_target the predicted image sequence length, c the number of image channels, and (h, w) the image resolution;
    S2.2 The Encoding network consists of multiple encoding modules; each encoding module comprises one downsampling layer and one ConvLSTM layer; each ConvLSTM layer contains multiple ConvLSTM cells; the output of the downsampling layer is fed into the ConvLSTM layer through a gated activation unit, and the encoding modules are connected through gated activation units; each encoding module learns the high-dimensional spatiotemporal features of the flow-field image sequence and passes lower-dimensional spatiotemporal features to the next encoding module;
    S2.3 The Forecasting network consists of multiple decoding modules, which expand the low-dimensional flow-field spatiotemporal features extracted by the encoding modules into higher-dimensional features so that the high-dimensional flow-field image can finally be reconstructed; each decoding module comprises one upsampling layer and one ConvLSTM layer; each ConvLSTM layer contains multiple ConvLSTM cells; the output of the ConvLSTM layer is fed into the upsampling layer through a gated activation unit, and the decoding modules are connected through gated activation units; each decoding module decodes the spatiotemporal features of the input image sequence extracted by the encoding module at the same level of the Encoding network, obtains the feature information of the historical time steps, and passes it to the next decoding module;
    S2.4 The different encoding layers of the Encoding network output spatiotemporal features of the cascade flow-field image sequence at different dimensions; the Forecasting network takes these features as the initial states of the corresponding decoding layers;
    S2.5 To guarantee that the input and predicted images share the same resolution, passing the output features of the last decoding module of the Forecasting network through a convolutional layer with a ReLU activation to generate and output the final predicted image as the prediction result of the Encoding-Forecasting network, with dimensions (N, seq_target, c, h, w), where N is the number of samples;
    S3. Building the deep convolutional network module, comprising the following steps:
    S3.1 Reshaping the ground-truth prediction targets from step S1.4 and the Encoding-Forecasting predictions from step S2.5 to (N*seq_target, c, h, w) and using them as the input of the deep convolutional network;
    S3.2 Forming a convolutional module by connecting, in order, a convolutional layer, a batch-normalization layer, and a LeakyReLU activation; the deep convolutional network module consists of several such convolutional modules and one output-mapping module; the output-mapping module passes the features extracted by the convolutional modules through a convolutional layer, applies a sigmoid activation to obtain output values between 0 and 1, and reshapes them into a probability output of dimension (N*seq_target, 1), which is the final output of the deep convolutional network module; this probability is the network's estimate that an image is real; real images are labeled 1 and Encoding-Forecasting predictions are labeled 0;
    S4. Building the generative adversarial network prediction model, comprising the following steps:
    S4.1 Adopting generative adversarial training so that the deep convolutional network module supplies learning gradients to the Encoding-Forecasting network and optimizes its parameters; the Encoding-Forecasting network built in step S2 serves as the generator of the GAN, denoted G; the deep convolutional network module built in step S3 serves as the discriminator, denoted D;
    S4.2 Training the Encoding-Forecasting network alone; when its error value falls below 0.001, adding the deep convolutional network module as the discriminator to form a GAN for joint training, so as to stabilize the training process and further restore the details of the flow-field images;
    S4.3 When the training error of the Encoding-Forecasting network module in step S4.2 falls below 0.001, combining this module with the deep convolutional network module into a GAN for further training, during which:
    the discriminator is trained with the discriminator part L_D of the traditional GAN loss, computed as:
    L_D = −E_Y[log D(Y)] − E_X[log(1 − D(G(X)))]
    to address the instability of generator training in adversarial learning, an improved generator loss is provided, consisting of two parts:
    one part is the generator term L_adv of the traditional GAN loss, computed as:
    L_adv = −E_X[log D(G(X))]
    the other part is the MSE loss L_MSE, which guarantees the stability of generator training; the weight parameters λ_adv and λ_MSE adjust L_adv and L_MSE to balance training stability against prediction sharpness, so the final generator loss is:
    L_G = λ_adv·L_adv + λ_MSE·L_MSE
    where λ_adv ∈ (0, 1) and λ_MSE ∈ (0, 1);
    the loss of the whole GAN is therefore:
    L_total = L_D + L_G
    S4.4 Saving the GAN trained in step S4.3 and testing it on the validation set; tuning the model hyperparameters according to the validation metric, for which the structural similarity (SSIM) index is used, and keeping the model that optimizes the metric as the final GAN prediction model;
    S5. Predicting the test data with the prediction model:
    S5.1 Preprocessing the test data set of step S1.1 following the steps in S1, and reshaping it to the input dimensions required in steps S2.1 and S3.1;
    S5.2 Using the final GAN prediction model from step S4.4 to predict the last frame of each test sample, obtaining the predicted flow-field image of the plane cascade at an inlet angle of attack of 10°.
  2. The method for predicting steady flow in a plane cascade based on a generative adversarial network according to claim 1, characterized in that, in step S1.2, the flow-field image data are denoised with median, mean and Gaussian filtering.
  3. The method according to claim 1, characterized in that, in step S1.5, the training data set is divided into a training set and a validation set at a ratio of 4:1.
  4. The method according to claim 1, characterized in that, in step S4.2, the Encoding-Forecasting network is trained alone with the MSE loss:
    L_MSE = (1/N) Σ_{i=1}^{N} ‖Y^(i) − G(X^(i))‖²
    where X = (X_1, …, X_m) is the input image sequence, Y = (Y_1, …, Y_n) the target image sequence, G(X) the image sequence predicted by the Encoding-Forecasting network, and N the number of samples.
  5. The method according to claim 1, characterized in that, in step S4.4, given two images x and y, the SSIM index is:
    SSIM(x, y) = (2μ_x μ_y + c_1)(2σ_xy + c_2) / ((μ_x² + μ_y² + c_1)(σ_x² + σ_y² + c_2))
    where μ_x and μ_y are the means of x and y, σ_x² and σ_y² their variances, and σ_xy their covariance; c_1 = (k_1 L)² and c_2 = (k_2 L)² are constants that keep the expression stable; L is the dynamic range of the pixel values; k_1 = 0.01, k_2 = 0.03; SSIM lies in [0, 1], and the closer the value is to 1, the more similar the structures of the two images.
Bibliographic data
PCT/CN2021/141541, filed 2021-12-27 (priority 2021-12-22): Steady flow prediction method in plane cascade based on generative adversarial network, published as WO2023115598A1.
Priority application: CN202111577346.4, filed 2021-12-22, published as CN114329826A.
US national-phase application: US 17/920,167, published as US20240012965A1.
