CN115859781A - Flow field prediction method based on attention and convolutional neural network codec - Google Patents

Flow field prediction method based on attention and convolutional neural network codec

Info

Publication number
CN115859781A
CN115859781A
Authority
CN
China
Prior art keywords
flow field
attention
representing
generate
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202211430114.0A
Other languages
Chinese (zh)
Inventor
黄宏宇
肖鸿飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University
Priority to CN202211430114.0A (published as CN115859781A)
Priority to CN202310242725.0A (published as CN116227359A)
Publication of CN115859781A
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00: Computer-aided design [CAD]
    • G06F30/10: Geometric CAD
    • G06F30/17: Mechanical parametric or variational design
    • G06F30/20: Design optimisation, verification or simulation
    • G06F30/27: Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G06F30/28: Design optimisation, verification or simulation using fluid dynamics, e.g. using Navier-Stokes equations or computational fluid dynamics [CFD]
    • G06F2113/00: Details relating to the application field
    • G06F2113/08: Fluids
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00: Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Algebra (AREA)
  • Computing Systems (AREA)
  • Fluid Mechanics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Computational Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a flow field prediction method based on attention and a convolutional neural network codec, which comprises the following steps: inputting a signed distance function map into a trained flow field prediction model and outputting a corresponding flow field prediction image. When the model is trained: firstly, a training data set containing signed distance function maps and real flow field images is input into the flow field prediction model; then, image features of the signed distance function map are extracted by an encoder based on an attention mechanism and a convolutional neural network to generate an attention feature map; the attention feature map is then fused with the angle of attack and Mach number associated with the signed distance function map by a fusion mapping module, and a high-level feature map is generated through mapping; the high-level feature map is then decoded by a decoder to generate a flow field prediction image; finally, the training loss is calculated from the flow field prediction image and the real flow field image, and the parameters of the flow field prediction model are optimized through the training loss. The invention can improve the prediction accuracy and prediction efficiency of flow field prediction.

Description

Flow field prediction method based on attention and convolutional neural network codec
Technical Field
The invention relates to the technical field of flow field prediction, and in particular to a flow field prediction method based on attention and a convolutional neural network codec.
Background
Airfoil optimization design for a wing usually selects airfoils of the same series derived from a reference airfoil. The design methodology has evolved from early wind tunnel experiments to computational fluid dynamics (CFD), effectively shortening the design cycle; however, CFD-based airfoil optimization still involves a large number of flow field analysis computations. In many practical engineering applications, flow field analysis is the most computationally intensive and time-consuming part.
Deep learning has a strong ability to learn high-order complex functions, has unique advantages in feature extraction, and can make fast and accurate predictions. Accordingly, Chinese patent publication CN112784508A discloses a deep-learning-based method for rapidly predicting an airfoil flow field, which comprises: generating the sample data set required for building a neural network; building and training a deep learning neural network model based on the sample data set; and using the built deep neural network to rapidly predict the airfoil flow field. That model applies deep learning to airfoil flow field prediction and can reduce time cost and resource consumption; it is a feasible new approach with broad application prospects.
Convolutional neural networks (CNNs) are a class of deep neural networks commonly used to analyze visual images. Much work has shown that convolutional neural networks can learn high-level features even when the data has strong spatial and channel correlations. Convolutional neural network models are attracting increasing interest in fluid mechanics because of their significant advantages in shape representation and scalability. However, a convolutional neural network is severely limited in acquiring global information and cannot effectively give the model a global view, so the accuracy of model prediction is insufficient. Meanwhile, other existing methods suffer from insufficient prediction efficiency. Therefore, how to design a method capable of improving flow field prediction accuracy and prediction efficiency is a technical problem that urgently needs to be solved.
Disclosure of Invention
In view of the deficiencies of the prior art, the technical problem to be solved by the invention is: how to provide a flow field prediction method based on attention and a convolutional neural network codec which can analyze the main factors influencing the flow field around an object through the convolutional neural network, improve the prediction speed of the model through the codec structure, and extract important features from global information through the attention mechanism, so that the prediction accuracy and prediction efficiency of flow field prediction can be improved, thereby providing effective technical support for the airfoil optimization design of wings.
In order to solve the technical problems, the invention adopts the following technical scheme:
The flow field prediction method based on attention and convolutional neural network codec comprises the following steps:
S1: obtaining a signed distance function map of the object to be predicted;
S2: inputting the signed distance function map into the trained flow field prediction model and outputting a corresponding flow field prediction image;
when training the flow field prediction model: firstly, a training data set containing signed distance function maps and the corresponding real flow field images is input into the flow field prediction model; then, image features of the signed distance function map are extracted by an encoder based on an attention mechanism and a convolutional neural network to generate a corresponding attention feature map; the attention feature map is then fused with the angle of attack and Mach number associated with the corresponding signed distance function map by a fusion mapping module, and a corresponding high-level feature map is generated through mapping; the high-level feature map is then decoded by a decoder to generate a corresponding flow field prediction image; finally, the training loss is calculated from the flow field prediction image and the corresponding real flow field image, and the parameters of the flow field prediction model are optimized through the training loss;
S3: taking the predicted values of the velocity field and the pressure field in the flow field prediction image as the flow field prediction result for the object to be predicted.
Preferably, the training data set is constructed by the following steps:
S201: parameterizing target airfoils and adding perturbations to generate new airfoils, and taking the various airfoils as prediction objects;
S202: for a single prediction object, generating the corresponding signed distance function map by a Cartesian grid method;
S203: computing the velocity field and pressure field of a single prediction object at different angles of attack and different Mach numbers by numerical simulation of the Reynolds-averaged N-S equations, as the flow field data;
S204: interpolating the flow field data of a single prediction object onto a Cartesian grid of corresponding size by triangle-based scattered data interpolation to generate the corresponding real flow field image;
S205: repeating steps S202 to S204, taking the signed distance function map and real flow field image corresponding to each prediction object as a group of training data, to generate a training data set containing signed distance function maps and the corresponding real flow field images.
Preferably, in step S202, the signed distance function map is generated by the following steps:
S2021: meshing the corresponding prediction object by a Cartesian grid method to generate the corresponding airfoil mesh;
S2022: calculating the signed distance between a given Cartesian grid point and the boundary points of the prediction object in the airfoil mesh;
S2023: searching the boundary points of the prediction object, calculating the scalar product between the normal vector at the boundary point closest to the given Cartesian grid point and the vector from the given Cartesian grid point to that closest boundary point, and determining the sign of the function from the value of the scalar product;
S2024: generating the signed distance function map corresponding to the prediction object from the calculated signed distances and function signs.
Preferably, the encoder generates the attention feature map by the following steps:
S211: inputting the signed distance function map into a convolutional layer for convolution filtering to generate a corresponding original feature map;
S212: inputting the original feature map into a channel attention module to extract channel-importance features and generate a corresponding channel attention feature map;
S213: inputting the channel attention feature map into a spatial attention module to extract spatial features and generate a corresponding spatial attention feature map;
S214: inputting the spatial attention feature map into a convolutional layer for convolution filtering to generate the corresponding attention feature map.
Preferably, in step S212, the channel attention module generates the channel attention feature map by the following steps:
S2121: inputting the original feature map into a max pooling layer and an average pooling layer arranged in parallel to generate a corresponding first feature map and second feature map;
S2122: compressing the number of channels of the first and second feature maps to 1/r of the original through a shared multilayer perceptron and then expanding it back to the original number of channels, generating a corresponding first perception feature map and second perception feature map;
S2123: activating the first and second perception feature maps through a ReLU activation function, then adding the activated results element-wise and applying a sigmoid activation function to generate a corresponding channel attention map;
S2124: multiplying the channel attention map by the original feature map to generate the corresponding channel attention feature map.
Preferably, in step S213, the spatial attention module generates the spatial attention feature map by the following steps:
S2131: inputting the channel attention feature map into a max pooling layer to generate a corresponding third feature map;
S2132: inputting the third feature map into a convolutional layer for convolution filtering to generate a corresponding channel feature map;
S2133: applying sigmoid activation to the channel feature map to generate a corresponding spatial attention map;
S2134: multiplying the spatial attention map by the original feature map to generate the corresponding spatial attention feature map.
Preferably, the fusion mapping module first reshapes the attention feature map into an original feature vector of corresponding dimension; then appends the corresponding angle of attack and Mach number to the original feature vector as a new vector, generating a corresponding fused feature vector; finally, it maps the dimension of the fused feature vector back to that of the original feature vector through a fully connected layer and reshapes the result to generate the corresponding high-level feature map.
Preferably, the decoder decodes the high-level feature map sequentially through three convolution filters to generate a flow field prediction image containing the predicted values of the x-direction velocity field, the y-direction velocity field and the pressure field around the object to be predicted.
Preferably, the training loss is calculated by the following formulas:
$$\mathrm{Cost}=\lambda_{\mathrm{MSE}}\times\mathrm{MSE}+\lambda_{\mathrm{GS}}\times\mathrm{GS}+\lambda_{L2}\times L2_{\mathrm{regularization}}$$
in the formula: Cost represents the training loss; MSE represents the mean square error; GS represents gradient sharpening; $L2_{\mathrm{regularization}}$ represents the L2 regularization term; $\lambda_{\mathrm{MSE}}$, $\lambda_{\mathrm{GS}}$ and $\lambda_{L2}$ represent the set weight coefficients;
$$\mathrm{MSE}=\frac{1}{m\,n_{x}n_{y}}\sum_{l=1}^{m}\sum_{i=1}^{n_{x}}\sum_{j=1}^{n_{y}}\Big[\big(U_{l,i,j}^{\mathrm{true}}-U_{l,i,j}^{\mathrm{pred}}\big)^{2}+\big(V_{l,i,j}^{\mathrm{true}}-V_{l,i,j}^{\mathrm{pred}}\big)^{2}+\big(p_{l,i,j}^{\mathrm{true}}-p_{l,i,j}^{\mathrm{pred}}\big)^{2}\Big]$$
$$\mathrm{GS}=\frac{1}{m\,n_{x}n_{y}}\sum_{l=1}^{m}\sum_{i=1}^{n_{x}}\sum_{j=1}^{n_{y}}\sum_{\phi\in\{U,V,p\}}\Big[\big(\nabla_{x}\phi_{l,i,j}^{\mathrm{true}}-\nabla_{x}\phi_{l,i,j}^{\mathrm{pred}}\big)^{2}+\big(\nabla_{y}\phi_{l,i,j}^{\mathrm{true}}-\nabla_{y}\phi_{l,i,j}^{\mathrm{pred}}\big)^{2}\Big]$$
$$L2_{\mathrm{regularization}}=\sum_{l=1}^{L}\sum_{k=1}^{n_{l}}\theta_{l,k}^{2}$$
in the formulas: U and V represent the x- and y-components of the velocity field, respectively; p represents the scalar pressure field; m represents the batch size; $n_x$ and $n_y$ represent the number of grid points in the x and y directions; the subscripts l, i, j index the grid point in row i and column j of the l-th training sample, and the superscripts "true" and "pred" denote the real and predicted values of the corresponding field at that grid point; $\nabla_x$ and $\nabla_y$ denote the gradient of a field in the x and y directions; in the regularization term, L represents the number of layers with trainable weights, $n_l$ represents the number of trainable weights in layer l, and θ represents the model parameters to be trained.
Preferably, the flow field prediction model learns weights in the training phase to predict the flow field: in each iteration, a batch of training data undergoes a feedforward pass; when the output is inconsistent with the expected value, back-propagation is performed: the error between the output and the expected value is computed, propagated back layer by layer to obtain the error of each layer, and the training loss computed from this error is used to update the network weights, until the flow field prediction model converges.
Compared with the prior art, the flow field prediction method based on attention and convolutional neural network codec has the following beneficial effects:
the method can automatically detect the basic characteristics of the predicted object under less manual supervision through the neural network based on the coder-decoder, and can more quickly estimate the speed field and the pressure field around the object compared with the existing Reynolds average N-S method, namely, the prediction speed of the model can be improved, so that the prediction efficiency of the model can be effectively improved, and effective technical support is provided for the wing profile optimization design of the wing.
The codec structure based on the convolutional neural network has better performance in the aspects of image segmentation and reconstruction, and the flow field prediction is based on the reconstruction of the real image of the flow field by the features extracted from the original object, so the codec structure based on the convolutional neural network can well extract the features from the global information, thereby improving the global accuracy of the flow field prediction model. Meanwhile, by the advantages of the convolutional neural network in the aspects of shape representation and scalability, advanced features can be learned from flow field data with strong space and channel correlation, main influence factors of the flow field around the prediction object can be effectively analyzed, and the precision of flow field prediction can be improved.
The codec structure based on the attention mechanism can better pay attention to the part with severe flow field change, and the attention mechanism can extract more important characteristics in flow field prediction from global information, so that the prediction precision of the part with severe flow field change can be effectively improved. Meanwhile, the encoder structure can acquire global information to realize global reference of the model, so that more accurate model prediction precision and faster model convergence speed can be obtained, and the prediction precision and the prediction efficiency of the flow field prediction model can be further improved.
The attack angle and the Mach number are important factors influencing a real image of a flow field to be predicted, but the characteristics of the attack angle and the Mach number cannot be extracted through an encoder based on an attention mechanism and a convolutional neural network, so that the attention characteristic diagram, the attack angle and the Mach number are fused to generate a high-level characteristic diagram, more accurate flow field characteristics can be extracted, and the flow field prediction precision can be further improved.
The training data set generation method effectively widens the training data set, and can ensure the effectiveness of the training data, thereby improving the training effect and the prediction precision of the flow field prediction model.
Because the codec structure has great limitation on the prediction precision of local information, the invention realizes the global reference of the model from two levels of space and channel importance respectively through a space attention mechanism and a channel attention mechanism, thereby better increasing the accuracy of flow field prediction and accelerating the convergence speed of the model.
Drawings
For purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made in detail to the present invention as illustrated in the accompanying drawings, in which:
FIG. 1 is a logic block diagram of a flow field prediction method based on attention and convolutional neural network codecs;
FIG. 2 is a network architecture diagram of a flow field prediction model;
FIG. 3 is the airfoil mesh corresponding to a prediction object;
FIG. 4 is the signed distance function map corresponding to the airfoil mesh;
FIG. 5 is a network architecture diagram of the channel attention module and the spatial attention module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that like reference numerals and letters denote like items in the following figures; thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. In the description of the present invention, it should be noted that terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings, or those in which the product is conventionally placed in use, and are used only for convenience of describing the present invention and simplifying the description; they do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be construed as limiting the present invention. Furthermore, the terms "first", "second", "third" and the like are used solely to distinguish one item from another and are not to be construed as indicating or implying relative importance. Furthermore, terms such as "horizontal" and "vertical" do not require that the components be absolutely horizontal or suspended; they may be slightly inclined. For example, "horizontal" merely means that a direction is more nearly horizontal than "vertical"; it does not mean that the structure must be perfectly horizontal, as it may be slightly inclined. In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed", "mounted", "connected" and "coupled" are to be construed broadly: for example, a connection may be fixed, detachable or integral; it may be mechanical or electrical; it may be direct, indirect through an intermediate medium, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
The following is further detailed by way of specific embodiments:
example (b):
before specifically explaining the embodiments of the present invention, 4 concepts related to the present embodiment are explained.
1) A convolutional neural network is a feedforward neural network with a deep structure that includes convolution computations.
2) A codec structure means that the network as a whole contains one main encoder and one decoder. The encoder is mainly used to extract feature maps from the input, while the decoder performs convolutions with trainable convolution kernels, so that dense feature maps can be generated and the features obtained by the encoder from the input can be further refined and processed for the task.
3) The attention mechanism is a special structure embedded in a machine learning model and is used for automatically learning and calculating the contribution of input data to output data.
4) A structured grid is a mesh partitioning the computational fluid dynamics domain in which all interior points have the same adjacent cells; it has the advantages of fast grid generation, good grid quality and a simple data structure.
The embodiment discloses a flow field prediction method based on attention and convolutional neural network codec.
As shown in fig. 1, the flow field prediction method based on attention and convolutional neural network codec includes:
S1: obtaining a signed distance function map of the object to be predicted;
S2: inputting the signed distance function map into the trained flow field prediction model and outputting a corresponding flow field prediction image;
referring to fig. 2, when training the flow field prediction model: firstly, a training data set containing signed distance function maps and the corresponding real flow field images is input into the flow field prediction model; then, image features of the signed distance function map are extracted by an encoder based on an attention mechanism and a convolutional neural network to generate a corresponding attention feature map; the attention feature map is then fused with the angle of attack and Mach number associated with the corresponding signed distance function map by a fusion mapping module, and a corresponding high-level feature map is generated through mapping; the high-level feature map is then decoded by a decoder to generate a corresponding flow field prediction image; finally, the training loss is calculated from the flow field prediction image and the corresponding real flow field image, and the parameters of the flow field prediction model are optimized through the training loss;
S3: taking the predicted values of the velocity field and the pressure field in the flow field prediction image as the flow field prediction result for the object to be predicted.
The invention extracts image features of the signed distance function map through the encoder based on the attention mechanism and the convolutional neural network, generates a high-level feature map, and then decodes the encoder output through the decoder, so that computational fluid dynamics, the codec structure, the convolutional neural network and the attention mechanism are organically combined, yielding a flow field prediction model that predicts the aerodynamic forces and the flow field around the wing for different airfoil shapes and operating conditions.
Through the codec-based neural network, the method can automatically detect the basic features of the prediction object with little manual supervision, and can estimate the velocity field and pressure field around the object much faster than the existing Reynolds-averaged N-S method; that is, the prediction speed of the model is improved, so the prediction efficiency of the model is effectively improved, providing effective technical support for the airfoil optimization design of wings.
The codec structure based on the convolutional neural network performs well in image segmentation and reconstruction, and flow field prediction amounts to reconstructing the real flow field image from features extracted from the original object, so this codec structure can extract features from global information well, thereby improving the global accuracy of the flow field prediction model. Meanwhile, owing to the advantages of the convolutional neural network in shape representation and scalability, high-level features can be learned from flow field data with strong spatial and channel correlations, the main factors influencing the flow field around the prediction object can be effectively analyzed, and the accuracy of flow field prediction can be improved.
The codec structure based on the attention mechanism can better attend to regions where the flow field changes sharply, and the attention mechanism can extract the features most important to flow field prediction from global information, so the prediction accuracy in regions of sharp flow field change can be effectively improved. Meanwhile, the encoder structure can acquire global information and give the model a global view, yielding higher model prediction accuracy and faster model convergence, which further improves the prediction accuracy and prediction efficiency of the flow field prediction model.
The angle of attack and the Mach number are important factors influencing the real flow field image to be predicted, but their features cannot be extracted by the encoder based on the attention mechanism and the convolutional neural network; therefore, the attention feature map is fused with the angle of attack and Mach number to generate the high-level feature map, so that more accurate flow field features can be extracted and the flow field prediction accuracy further improved.
In the specific implementation process, a training data set is constructed through the following steps:
S201: parameterizing target airfoils and adding perturbations to generate new airfoils, and taking the various airfoils as prediction objects;
in this embodiment, the existing S805, S809, and S814 airfoils are selected as the target airfoils.
S202: for a single prediction object, generating the corresponding signed distance function map by a Cartesian grid method;
the signed distance function map is generated by the following steps:
S2021: meshing the corresponding prediction object by a Cartesian grid method to generate the airfoil mesh shown in FIG. 3;
S2022: calculating the signed distance between a given Cartesian grid point and the boundary points of the prediction object in the airfoil mesh;
S2023: to determine whether a given Cartesian grid point lies inside, outside, or exactly on the surface of the prediction object, searching the boundary points of the prediction object, calculating the scalar product between the normal vector at the boundary point closest to the given Cartesian grid point and the vector from the given Cartesian grid point to that closest boundary point, and determining the sign of the function from the value of the scalar product;
S2024: generating the signed distance function map shown in FIG. 4 from the calculated signed distances and function signs.
The size of the signed distance function map is 150 × 150.
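The following NumPy sketch illustrates steps S2021 to S2024 under stated assumptions: the airfoil boundary is given as discrete points with outward unit normals, and the grid extent is an illustrative placeholder rather than a value fixed by this embodiment.

```python
import numpy as np

def signed_distance_map(boundary_pts, boundary_normals, n=150, extent=2.0):
    """Sketch of S2021-S2024: signed distance from each Cartesian grid
    point to the nearest airfoil boundary point.

    boundary_pts:     (B, 2) airfoil surface points
    boundary_normals: (B, 2) outward unit normals at those points
    """
    # Cartesian grid (S2021); the extent is an assumed placeholder.
    xs = np.linspace(-extent, extent, n)
    ys = np.linspace(-extent, extent, n)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)       # (n*n, 2)

    # Unsigned distance to the nearest boundary point (S2022).
    diff = grid[:, None, :] - boundary_pts[None, :, :]      # (n*n, B, 2)
    dist = np.linalg.norm(diff, axis=2)                     # (n*n, B)
    nearest = dist.argmin(axis=1)
    d_min = dist[np.arange(len(grid)), nearest]

    # Sign via the scalar product between the outward normal at the nearest
    # boundary point and the vector grid point -> boundary (S2023): a
    # positive product means the grid point lies inside, so the SDF is
    # taken negative there.
    to_boundary = boundary_pts[nearest] - grid
    dot = np.sum(boundary_normals[nearest] * to_boundary, axis=1)
    sign = np.where(dot > 0.0, -1.0, 1.0)

    return (sign * d_min).reshape(n, n)                     # SDF map (S2024)
```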
S203: computing the velocity field and pressure field of a single prediction object at different angles of attack and different Mach numbers by numerical simulation of the Reynolds-averaged N-S equations, as the flow field data;
numerical simulation of the Reynolds-averaged N-S equations is an existing, mature technique.
The Reynolds-averaged N-S equations are the governing equations for the averaged flow field variables, and the associated simulation theory is called turbulence-model theory. Turbulence-model theory assumes that a flow variable in turbulent flow consists of a time-averaged quantity and a fluctuating quantity; treating the N-S equations from this viewpoint yields the Reynolds-averaged N-S equations. After further introducing the Boussinesq assumption, namely that the turbulent Reynolds stress is proportional to the strain rate, the turbulence computation reduces to computing the proportionality coefficient between Reynolds stress and strain rate (i.e., the turbulent viscosity coefficient). Statistically averaging the governing equations means that turbulent fluctuations at every scale need not be resolved; only the mean motion must be computed, which lowers the required spatial and temporal resolution and reduces the computational workload. In the solver setup, the inviscid term is computed using the third-order Monotonic Upstream-centered Scheme for Conservation Laws (MUSCL) with the Koren limiter and Roe flux-difference splitting. The viscous term uses a second-order accurate central difference. The Reynolds-averaged N-S equations are closed with the Spalart-Allmaras turbulence model. A no-slip boundary condition is applied on the airfoil surface.
Specifically, computing the flow field data of a prediction object can be accomplished with existing mature software, and comprises:
firstly, meshing the corresponding prediction object to generate the corresponding airfoil mesh;
this comprises defining the mesh boundaries and nodes, defining the inlet, outlet, upper wall, lower wall, front wall and rear wall around the prediction object, and defining the mesh nodes and mesh cells.
Then, an angle of attack and a Mach number are randomly generated and taken as the initial angle of attack and Mach number of the prediction object.
The airfoil mesh and its initial angle of attack and Mach number are input into an existing fluid mechanics flow solver and the number of iterations is defined; the discretization schemes, including the interpolation schemes, are then set; the parameters for solving the algebraic equation system and for the velocity-pressure coupling algorithm are then set.
Finally, the existing fluid mechanics flow solver generates the velocity field and pressure field of the prediction object as the flow field data.
S204: interpolating the flow field data of a single prediction object onto a Cartesian grid of corresponding size (consistent with the size of the signed distance function map) by triangle-based scattered data interpolation, generating the corresponding real flow field image;
triangle-based scattered data interpolation is an existing, mature technique, and specifically comprises the following steps:
firstly, a scattered point set is defined, distributed uniformly over the flow field computation result to be interpolated.
The scattered points serve as the vertices of a set of triangles. For each point $(x_i, y_i)$ in the scattered point set, the corresponding quadratic interpolation polynomial is computed:
$$q_{i}(x,y)=a_{1}+a_{2}x+a_{3}y+a_{4}x^{2}+a_{5}xy+a_{6}y^{2}$$
Substituting the point $(x_i, y_i)$ itself and its 5 nearest neighbors in the scattered point set into this equation yields 6 equations for the parameters a, where x and y denote coordinates and z denotes the value at a point.
From these 6 equations, $[a_1, a_2, a_3, a_4, a_5, a_6]$ can be solved, giving the interpolation polynomial of the scattered point.
To compute the interpolated value z at an interpolation point (x, y), the triangle containing the point is first located after triangulation; the value is then determined from the three interpolation polynomials associated with the vertices of that triangle:
$$z(x,y)=\sum_{i=1}^{3}w_{i}\,q_{i}(x,y)$$
To ensure a continuous transition from one triangle to the next, it suffices that each weight $w_i$ vanishes on the triangle edge opposite the i-th vertex. This can be achieved by making $w_i$ proportional to the k-th power of the distance $d_i$ from the point (x, y) to that edge, with k typically 3, and with $d_i$ scaled so that $d_i = 1$ at vertex i. Therefore,
$$w_{i}=\frac{d_{i}^{3}}{d_{1}^{3}+d_{2}^{3}+d_{3}^{3}}$$
Once the $w_i$ are determined, the value at the interpolation point (x, y) follows from the three interpolation polynomials.
The flow field data are interpolated by the above method to generate the corresponding real flow field images.
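The following Python sketch illustrates this triangle-based interpolation under stated assumptions: SciPy's Delaunay triangulation and k-d tree stand in for the triangulation and nearest-neighbor search, and the normalized distances d_i are taken to be the barycentric coordinates of the query point.

```python
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def fit_quadratics(pts, vals):
    """Fit z = a1 + a2*x + a3*y + a4*x^2 + a5*x*y + a6*y^2 at every
    scattered point from itself and its 5 nearest neighbors (6 equations)."""
    tree = cKDTree(pts)
    coeffs = np.empty((len(pts), 6))
    for i, p in enumerate(pts):
        _, idx = tree.query(p, k=6)            # the point plus 5 neighbors
        x, y = pts[idx, 0], pts[idx, 1]
        A = np.stack([np.ones(6), x, y, x * x, x * y, y * y], axis=1)
        coeffs[i] = np.linalg.lstsq(A, vals[idx], rcond=None)[0]
    return coeffs

def eval_quadratic(c, x, y):
    return c[0] + c[1]*x + c[2]*y + c[3]*x*x + c[4]*x*y + c[5]*y*y

def interpolate(pts, vals, query, k=3):
    """Blend the three vertex polynomials with weights d_i^k (d_i the
    barycentric coordinate: 1 at vertex i, 0 on the opposite edge)."""
    tri = Delaunay(pts)
    coeffs = fit_quadratics(pts, vals)
    simplex = tri.find_simplex(query)
    out = np.full(len(query), np.nan)          # points outside the hull: NaN
    for n, (q, s) in enumerate(zip(query, simplex)):
        if s < 0:
            continue
        verts = tri.simplices[s]
        T = tri.transform[s]                   # affine map to barycentric coords
        b = T[:2].dot(q - T[2])
        d = np.append(b, 1.0 - b.sum())        # barycentric d_1, d_2, d_3
        w = d ** k
        w /= w.sum()
        out[n] = sum(w[i] * eval_quadratic(coeffs[v], *q)
                     for i, v in enumerate(verts))
    return out
```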
S205: repeating steps S202 to S204, taking the signed distance function map and real flow field image corresponding to each prediction object as a group of training data, to generate a training data set containing signed distance function maps and the corresponding real flow field images.
This training data set generation method effectively expands the training data set and ensures the validity of the training data, thereby improving the training effect and prediction accuracy of the flow field prediction model.
In a specific implementation process, the encoder generates the attention feature map through the following steps:
S211: inputting the signed distance function map into a convolutional layer for convolution filtering to generate a corresponding original feature map;
in this embodiment, the input to the attention-based encoder is a 150 × 150 signed distance function map. The convolutional layer consists of 300 5 × 5 convolution filters, each convolution is wrapped by a nonlinear Swish activation function, and the original feature map obtained after the convolution filters has size 30 × 30 × 300.
S212: inputting the original feature map into a channel attention module to extract channel-importance features and generate a corresponding channel attention feature map;
as shown in fig. 5, the channel attention module generates the channel attention feature map through the following steps:
S2121: inputting the original feature map into a max pooling layer and an average pooling layer arranged in parallel to generate a corresponding first feature map and second feature map;
in this embodiment, the max pooling layer and the average pooling layer change the original feature map from size 30 × 30 × 300 to 1 × 1 × 300.
S2122: compressing the number of channels of the first and second feature maps to 1/r of the original through a shared multilayer perceptron and then expanding it back to the original number of channels, generating a corresponding first perception feature map and second perception feature map;
S2123: activating the first and second perception feature maps through a ReLU activation function, then adding the activated results element-wise and applying a sigmoid activation function to generate a corresponding channel attention map;
S2124: multiplying the channel attention map by the original feature map to generate the corresponding channel attention feature map.
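A minimal PyTorch sketch of steps S2121 to S2124 for the 30 × 30 × 300 feature map follows; the reduction ratio r = 16 and the use of 1 × 1 convolutions for the shared multilayer perceptron are assumptions, since the embodiment does not fix them.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """S2121-S2124: channel attention over a (B, 300, 30, 30) feature map."""
    def __init__(self, channels: int = 300, r: int = 16):
        super().__init__()
        self.max_pool = nn.AdaptiveMaxPool2d(1)      # -> (B, C, 1, 1)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        # Shared multilayer perceptron: compress to C/r, expand back to C.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // r, kernel_size=1, bias=False),
            nn.Conv2d(channels // r, channels, kernel_size=1, bias=False),
        )
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.relu(self.mlp(self.max_pool(x)))    # first perception map
        b = self.relu(self.mlp(self.avg_pool(x)))    # second perception map
        attn = torch.sigmoid(a + b)                  # channel attention map
        return x * attn                              # channel attention feature map
```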
S213: inputting the channel attention feature map into a spatial attention module to extract spatial features and generate a corresponding spatial attention feature map;
as shown in fig. 5, the spatial attention module generates the spatial attention feature map through the following steps:
S2131: inputting the channel attention feature map into a max pooling layer to generate a corresponding third feature map;
in this embodiment, the channel attention feature map is input into the max pooling layer to obtain a 30 × 30 × 1 feature map.
S2132: inputting the third feature map into a 7 × 7 convolutional layer for convolution filtering to generate a corresponding channel feature map;
S2133: applying sigmoid activation to the channel feature map to generate a corresponding spatial attention map;
S2134: multiplying the spatial attention map by the original feature map to generate the corresponding spatial attention feature map.
In this embodiment, the spatial attention feature map has size 30 × 30 × 300.
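A matching sketch of steps S2131 to S2134; the max pooling is taken across the channel dimension to reach the stated 30 × 30 × 1 shape, and, following the text, the resulting map multiplies the original feature map rather than the channel attention feature map.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """S2131-S2134: spatial attention; the 7x7 convolution follows the text."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size, padding=kernel_size // 2,
                              bias=False)

    def forward(self, x_ca: torch.Tensor, x_orig: torch.Tensor) -> torch.Tensor:
        # Max pooling over the channel dimension: (B, C, H, W) -> (B, 1, H, W).
        third = x_ca.max(dim=1, keepdim=True).values
        attn = torch.sigmoid(self.conv(third))       # spatial attention map
        # Per the text, the map multiplies the *original* feature map.
        return x_orig * attn                         # spatial attention feature map
```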
S214: inputting the spatial attention feature map into a convolutional layer for convolution filtering to generate the corresponding attention feature map.
In this embodiment, the spatial attention feature map is input sequentially into a convolutional layer consisting of 300 5 × 5 convolution filters and a convolutional layer consisting of 300 3 × 3 convolution filters, each convolution wrapped by a nonlinear Swish activation function, to generate a 2 × 2 × 300 attention feature map.
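Combining the two modules above (this sketch assumes the ChannelAttention and SpatialAttention classes just defined), the encoder path of steps S211 to S214 could look as follows; the convolution strides are inferred from the stated feature sizes (150 → 30 → 6 → 2) and are not given explicitly in the text.

```python
import torch
import torch.nn as nn

class AttentionEncoder(nn.Module):
    """S211-S214: SDF map (B, 1, 150, 150) -> attention feature map
    (B, 300, 2, 2).  Strides are inferred from the stated feature sizes."""
    def __init__(self):
        super().__init__()
        self.conv_in = nn.Conv2d(1, 300, kernel_size=5, stride=5)    # 150 -> 30
        self.ca = ChannelAttention(300)
        self.sa = SpatialAttention()
        self.conv1 = nn.Conv2d(300, 300, kernel_size=5, stride=5)    # 30 -> 6
        self.conv2 = nn.Conv2d(300, 300, kernel_size=3, stride=3)    # 6 -> 2
        self.act = nn.SiLU()                         # Swish activation

    def forward(self, sdf: torch.Tensor) -> torch.Tensor:
        f0 = self.act(self.conv_in(sdf))             # original feature map
        f1 = self.ca(f0)                             # channel attention feature map
        f2 = self.sa(f1, f0)                         # spatial attention feature map
        return self.act(self.conv2(self.act(self.conv1(f2))))
```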
The encoder based on the attention mechanism and the convolutional neural network can learn high-level features from flow field data with strong spatial and channel correlations, effectively analyze the main factors influencing the flow field around the prediction object, and better attend to regions where the flow field changes sharply; it gives the model a global view, yielding higher model prediction accuracy and faster model convergence.
Because the codec structure is severely limited in the prediction accuracy of local information, the invention gives the model a global view at the two levels of space and channel importance, respectively, through the spatial attention mechanism and the channel attention mechanism, thereby better increasing the accuracy of flow field prediction and accelerating the convergence of the model.
In a specific implementation process, the fusion mapping module first reshapes the 2 × 2 × 300 attention feature map into a 1 × 1200 feature vector; then appends the corresponding angle of attack and Mach number to this original feature vector as a new vector, generating a 1 × 1202 fused feature vector; finally, it maps the dimension of the fused feature vector back to that of the original feature vector, namely 1 × 1200, through a fully connected layer and reshapes the result to generate the corresponding high-level feature map.
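A sketch of the fusion mapping module under the dimensions stated above; the per-sample angle of attack and Mach number are assumed to arrive as shape-(B,) tensors.

```python
import torch
import torch.nn as nn

class FusionMapping(nn.Module):
    """Flatten the (B, 300, 2, 2) attention feature map to (B, 1200),
    append angle of attack and Mach number -> (B, 1202), map back to
    (B, 1200) with a fully connected layer, reshape to (B, 300, 2, 2)."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(1202, 1200)

    def forward(self, feat, aoa, mach):
        v = feat.flatten(start_dim=1)                           # (B, 1200)
        fused = torch.cat([v, aoa[:, None], mach[:, None]], 1)  # (B, 1202)
        return self.fc(fused).view(-1, 300, 2, 2)               # high-level feature map
```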
In a specific implementation process, the decoder decodes the high-level feature map sequentially through a 3 × 3 convolution filter and two 5 × 5 convolution filters to generate a flow field prediction image containing the predicted values of the x-direction velocity field, the y-direction velocity field and the pressure field around the object to be predicted.
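A sketch of the decoder; the embodiment specifies only the 3 × 3, 5 × 5 and 5 × 5 filters, so the use of transposed convolutions with strides mirroring the encoder (2 → 6 → 30 → 150) is an assumption.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Decode the (B, 300, 2, 2) high-level feature map into a 3-channel
    (u, v, p) flow image of size 150 x 150."""
    def __init__(self):
        super().__init__()
        self.up1 = nn.ConvTranspose2d(300, 300, kernel_size=3, stride=3)  # 2 -> 6
        self.up2 = nn.ConvTranspose2d(300, 300, kernel_size=5, stride=5)  # 6 -> 30
        self.up3 = nn.ConvTranspose2d(300, 3,   kernel_size=5, stride=5)  # 30 -> 150
        self.act = nn.SiLU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.up3(self.act(self.up2(self.act(self.up1(h)))))
```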
In a specific implementation process, the training loss function is composed of mean square error, gradient sharpening and L2 regularization.
The training loss is calculated by the following formulas:
$$\mathrm{Cost}=\lambda_{\mathrm{MSE}}\times\mathrm{MSE}+\lambda_{\mathrm{GS}}\times\mathrm{GS}+\lambda_{L2}\times L2_{\mathrm{regularization}}$$
in the formula: Cost represents the training loss; MSE represents the mean square error; GS represents gradient sharpening; $L2_{\mathrm{regularization}}$ represents the L2 regularization term; $\lambda_{\mathrm{MSE}}$, $\lambda_{\mathrm{GS}}$ and $\lambda_{L2}$ represent the set weight coefficients;
$$\mathrm{MSE}=\frac{1}{m\,n_{x}n_{y}}\sum_{l=1}^{m}\sum_{i=1}^{n_{x}}\sum_{j=1}^{n_{y}}\Big[\big(U_{l,i,j}^{\mathrm{true}}-U_{l,i,j}^{\mathrm{pred}}\big)^{2}+\big(V_{l,i,j}^{\mathrm{true}}-V_{l,i,j}^{\mathrm{pred}}\big)^{2}+\big(p_{l,i,j}^{\mathrm{true}}-p_{l,i,j}^{\mathrm{pred}}\big)^{2}\Big]$$
$$\mathrm{GS}=\frac{1}{m\,n_{x}n_{y}}\sum_{l=1}^{m}\sum_{i=1}^{n_{x}}\sum_{j=1}^{n_{y}}\sum_{\phi\in\{U,V,p\}}\Big[\big(\nabla_{x}\phi_{l,i,j}^{\mathrm{true}}-\nabla_{x}\phi_{l,i,j}^{\mathrm{pred}}\big)^{2}+\big(\nabla_{y}\phi_{l,i,j}^{\mathrm{true}}-\nabla_{y}\phi_{l,i,j}^{\mathrm{pred}}\big)^{2}\Big]$$
$$L2_{\mathrm{regularization}}=\sum_{l=1}^{L}\sum_{k=1}^{n_{l}}\theta_{l,k}^{2}$$
in the formulas: U and V represent the x- and y-components of the velocity field, respectively; p represents the scalar pressure field; m represents the batch size; $n_x$ and $n_y$ represent the number of grid points in the x and y directions; the subscripts l, i, j index the grid point in row i and column j of the l-th training sample, and the superscripts "true" and "pred" denote the real and predicted values of the corresponding field at that grid point; $\nabla_x$ and $\nabla_y$ denote the gradient of a field in the x and y directions; in the regularization term, L represents the number of layers with trainable weights, $n_l$ represents the number of trainable weights in layer l, and θ represents the model parameters to be trained.
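Under the formulas above, a PyTorch sketch of the training loss might look as follows; the finite-difference gradients and the lambda values are illustrative assumptions, not choices fixed by the embodiment.

```python
import torch

def flow_loss(pred, true, model, lam_mse=1.0, lam_gs=1.0, lam_l2=1e-5):
    """Cost = lam_MSE*MSE + lam_GS*GS + lam_L2*L2.  pred/true: (B, 3, H, W)
    tensors with channels (U, V, p); the lambda values are placeholders."""
    mse = ((pred - true) ** 2).mean()

    # Gradient sharpening: penalise the difference between the finite-
    # difference gradients of all three fields in the x and y directions.
    def grads(t):
        return t[..., :, 1:] - t[..., :, :-1], t[..., 1:, :] - t[..., :-1, :]

    (pgx, pgy), (tgx, tgy) = grads(pred), grads(true)
    gs = ((pgx - tgx) ** 2).mean() + ((pgy - tgy) ** 2).mean()

    # L2 regularization over all trainable weights theta.
    l2 = sum((w ** 2).sum() for w in model.parameters())
    return lam_mse * mse + lam_gs * gs + lam_l2 * l2
```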
In this embodiment, the grids and grid points refer to the grids and grid points obtained after the flow field prediction image and the real flow field image are gridded by a Cartesian grid method or another existing method.
The flow field prediction model learns weights in the training phase to predict the flow field: in each iteration, a batch of training data undergoes a feedforward pass; when the output (the flow field prediction image) is inconsistent with the expected value (the real flow field image), back-propagation is performed: the error between the output and the expected value (the training loss) is computed, propagated back layer by layer to obtain the error of each layer, and the training loss computed from this error is used to update the network weights, until the flow field prediction model converges.
The mean square error is included so that the flow field image becomes more accurate as training finishes, because the gradient of the mean square error loss is large when the loss value is high and decreases as the loss approaches 0. Gradient sharpening is included to penalize differences of gradients in the loss function and to address the lack of sharpness in flow field prediction. The L2 norm is included to prevent the situation where the error is small during training but large during testing, i.e., where the flow field prediction model fits all training samples with an overly complex function but performs poorly when predicting new samples. Composing the training loss from mean square error, gradient sharpening and L2 regularization therefore improves the training effect and performance of the flow field prediction model.
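Wiring the sketches above into one iteration (feedforward, back-propagation, weight update) might look as follows; the Adam optimizer and learning rate are assumptions, and the AttentionEncoder, FusionMapping, Decoder and flow_loss names refer to the earlier sketches.

```python
import torch
import torch.nn as nn

encoder, fusion, decoder = AttentionEncoder(), FusionMapping(), Decoder()
model = nn.ModuleList([encoder, fusion, decoder])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed optimizer/lr

def train_step(sdf, aoa, mach, true_flow):
    """One iteration: feedforward, loss, back-propagation, weight update.
    sdf: (B, 1, 150, 150); aoa, mach: (B,); true_flow: (B, 3, 150, 150)."""
    pred = decoder(fusion(encoder(sdf), aoa, mach))    # feedforward pass
    loss = flow_loss(pred, true_flow, model)           # Cost = MSE + GS + L2
    optimizer.zero_grad()
    loss.backward()                                    # propagate errors layer by layer
    optimizer.step()                                   # update network weights
    return loss.item()
```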
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention and not for limiting the technical solutions, and those skilled in the art should understand that modifications or equivalent substitutions can be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all that should be covered by the claims of the present invention.

Claims (10)

1. A flow field prediction method based on attention and convolutional neural network codec, characterized by comprising the following steps:
S1: obtaining a signed distance function map of the object to be predicted;
S2: inputting the signed distance function map into a trained flow field prediction model and outputting a corresponding flow field prediction image;
when training the flow field prediction model: firstly, a training data set containing signed distance function maps and the corresponding real flow field images is input into the flow field prediction model; then, image features of the signed distance function map are extracted by an encoder based on an attention mechanism and a convolutional neural network to generate a corresponding attention feature map; the attention feature map is then fused with the angle of attack and Mach number associated with the corresponding signed distance function map by a fusion mapping module, and a corresponding high-level feature map is generated through mapping; the high-level feature map is then decoded by a decoder to generate a corresponding flow field prediction image; finally, the training loss is calculated from the flow field prediction image and the corresponding real flow field image, and the parameters of the flow field prediction model are optimized through the training loss;
S3: taking the predicted values of the velocity field and the pressure field in the flow field prediction image as the flow field prediction result for the object to be predicted.
2. The attention and convolutional neural network codec-based flow field prediction method as claimed in claim 1, wherein the training data set is constructed by the following steps:
S201: parameterizing target airfoils and adding perturbations to generate new airfoils, and taking the various airfoils as prediction objects;
S202: for a single prediction object, generating the corresponding signed distance function map by a Cartesian grid method;
S203: computing the velocity field and pressure field of a single prediction object at different angles of attack and different Mach numbers by numerical simulation of the Reynolds-averaged N-S equations, as the flow field data;
S204: interpolating the flow field data of a single prediction object onto a Cartesian grid of corresponding size by triangle-based scattered data interpolation to generate the corresponding real flow field image;
S205: repeating steps S202 to S204, taking the signed distance function map and real flow field image corresponding to each prediction object as a group of training data, to generate a training data set containing signed distance function maps and the corresponding real flow field images.
3. The attention and convolutional neural network codec-based flow field prediction method as claimed in claim 2, wherein in step S202, the signed distance function map is generated by the following steps:
S2021: meshing the corresponding prediction object by a Cartesian grid method to generate the corresponding airfoil mesh;
S2022: calculating the signed distance between a given Cartesian grid point and the boundary points of the prediction object in the airfoil mesh;
S2023: searching the boundary points of the prediction object, calculating the scalar product between the normal vector at the boundary point closest to the given Cartesian grid point and the vector from the given Cartesian grid point to that closest boundary point, and determining the sign of the function from the value of the scalar product;
S2024: generating the signed distance function map corresponding to the prediction object from the calculated signed distances and function signs.
4. The attention and convolutional neural network codec-based flow field prediction method of claim 1, wherein the encoder generates the attention feature map by:
S211: inputting the signed distance function map into a convolutional layer for convolution filtering to generate a corresponding original feature map;
S212: inputting the original feature map into a channel attention module to extract channel importance features and generate a corresponding channel attention feature map;
S213: inputting the channel attention feature map into a spatial attention module to extract spatial features and generate a corresponding spatial attention feature map;
S214: inputting the spatial attention feature map into a convolutional layer for convolution filtering to generate the corresponding attention feature map.
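A sketch of this claim-4 data flow in PyTorch; the two attention modules default to identity placeholders here (their internals follow claims 5 and 6), the modules are simply chained in sequence, and channel counts and kernel sizes are assumed values:

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, in_ch=1, ch=32, channel_attn=None, spatial_attn=None):
        super().__init__()
        self.conv_in = nn.Conv2d(in_ch, ch, 3, padding=1)                       # S211
        self.channel_attn = channel_attn if channel_attn is not None else nn.Identity()  # S212
        self.spatial_attn = spatial_attn if spatial_attn is not None else nn.Identity()  # S213
        self.conv_out = nn.Conv2d(ch, ch, 3, stride=2, padding=1)               # S214

    def forward(self, sdf_map):
        raw = torch.relu(self.conv_in(sdf_map))    # original feature map
        x = self.channel_attn(raw)                 # channel attention feature map
        x = self.spatial_attn(x)                   # spatial attention feature map
        return torch.relu(self.conv_out(x))        # attention feature map

features = EncoderBlock()(torch.randn(1, 1, 64, 64))   # -> (1, 32, 32, 32)
```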
5. The attention and convolutional neural network codec-based flow field prediction method of claim 4, wherein: in step S212, the channel attention module generates a channel attention feature map by:
S2121: inputting the original feature map into parallel maximum pooling and average pooling layers to generate a corresponding first feature map and second feature map;
S2122: compressing the number of channels of the first feature map and the second feature map to 1/r of the original through a shared multilayer perceptron and then expanding it back to the original number of channels, to generate a corresponding first perception feature map and second perception feature map;
S2123: activating the first perception feature map and the second perception feature map through a ReLU activation function, then adding the activated results element by element and applying a sigmoid activation function to generate a corresponding channel attention map;
S2124: multiplying the channel attention map by the original feature map to generate the corresponding channel attention feature map.
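A sketch of this channel attention module in PyTorch, following the claimed order of operations (shared MLP, then ReLU, element-wise addition, sigmoid); the reduction ratio r and channel count are assumed hyperparameters:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, r=8):
        super().__init__()
        self.max_pool = nn.AdaptiveMaxPool2d(1)   # S2121: global max pooling branch
        self.avg_pool = nn.AdaptiveAvgPool2d(1)   # S2121: parallel average pooling branch
        self.mlp = nn.Sequential(                 # S2122: shared MLP, C -> C/r -> C
            nn.Conv2d(channels, channels // r, kernel_size=1, bias=False),
            nn.Conv2d(channels // r, channels, kernel_size=1, bias=False),
        )

    def forward(self, raw):
        p1 = self.mlp(self.max_pool(raw))                       # first perception feature map
        p2 = self.mlp(self.avg_pool(raw))                       # second perception feature map
        attn = torch.sigmoid(torch.relu(p1) + torch.relu(p2))   # S2123: channel attention map
        return attn * raw                                       # S2124: channel attention feature map

out = ChannelAttention(32)(torch.randn(1, 32, 16, 16))
```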
6. The attention and convolutional neural network codec-based flow field prediction method of claim 4, wherein: in step S213, the spatial attention module generates a spatial attention feature map by:
S2131: inputting the channel attention feature map into a maximum pooling layer to generate a corresponding third feature map;
S2132: inputting the third feature map into a convolutional layer for convolution filtering to generate a corresponding channel feature map;
S2133: applying a sigmoid activation to the channel feature map to generate a corresponding spatial attention map;
S2134: multiplying the spatial attention map by the original feature map to generate the corresponding spatial attention feature map.
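A sketch of this spatial attention module in PyTorch, assuming the "maximum pooling layer" pools across the channel dimension (as in CBAM) and a 7x7 convolution; note that per S2134 the spatial attention map multiplies the *original* feature map, so the forward pass takes both tensors:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size, padding=kernel_size // 2)  # S2132

    def forward(self, channel_feat, raw):
        third, _ = channel_feat.max(dim=1, keepdim=True)   # S2131: third feature map, (B, 1, H, W)
        attn = torch.sigmoid(self.conv(third))             # S2132-S2133: spatial attention map
        return attn * raw                                  # S2134: spatial attention feature map

raw = torch.randn(1, 32, 16, 16)
out = SpatialAttention()(raw, raw)  # channel-attended map set equal to raw for illustration
```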
7. The attention and convolutional neural network codec-based flow field prediction method of claim 1, wherein: the fusion mapping module first reshapes the attention feature map into an original feature vector of the corresponding dimension; it then appends the corresponding angle of attack and Mach number to the original feature vector as new elements to generate a corresponding fused feature vector; finally, it maps the dimension of the fused feature vector back to that of the original feature vector through a fully connected layer and reshapes the result to generate a corresponding high-level feature map.
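A sketch of this fusion mapping module in PyTorch; the feature map shape (C, H, W) is an assumed example, and no activation is applied after the fully connected layer since the claim names none:

```python
import torch
import torch.nn as nn

class FusionMapping(nn.Module):
    def __init__(self, c, h, w):
        super().__init__()
        self.shape = (c, h, w)
        dim = c * h * w
        self.fc = nn.Linear(dim + 2, dim)   # maps the fused vector back to the original length

    def forward(self, feat, aoa, mach):
        flat = feat.flatten(1)                          # reshape the map into a feature vector
        fused = torch.cat([flat, aoa, mach], dim=1)     # append angle of attack and Mach number
        return self.fc(fused).view(-1, *self.shape)     # reshape into the high-level feature map

high = FusionMapping(64, 8, 8)(torch.randn(2, 64, 8, 8), torch.rand(2, 1), torch.rand(2, 1))
```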
8. The attention and convolutional neural network codec-based flow field prediction method of claim 1, wherein: the decoder decodes the high-level feature map sequentially through three convolution filters to generate a flow field prediction image containing the predicted values of the x-direction velocity field, the y-direction velocity field, and the pressure field around the object to be predicted.
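A sketch of this three-stage decoder in PyTorch, assuming transposed convolutions perform the upsampling; kernel sizes and channel widths are illustrative, not taken from the patent:

```python
import torch
import torch.nn as nn

decoder = nn.Sequential(
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1),  # u_x, u_y, p channels
)
pred = decoder(torch.randn(1, 64, 8, 8))  # -> (1, 3, 64, 64) flow field prediction image
```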
9. The attention and convolutional neural network codec-based flow field prediction method of claim 1, wherein: the training loss is calculated by the following formula:
$$\mathrm{Cost} = \lambda_{\mathrm{MSE}} \cdot \mathrm{MSE} + \lambda_{\mathrm{GS}} \cdot \mathrm{GS} + \lambda_{\mathrm{L2}} \cdot \mathrm{L2}_{\mathrm{regularization}}$$
in the formula: Cost represents the training loss; MSE represents the mean square error term; GS represents the gradient sharpening term; $\mathrm{L2}_{\mathrm{regularization}}$ represents the L2 regularization term; $\lambda_{\mathrm{MSE}}$, $\lambda_{\mathrm{GS}}$, and $\lambda_{\mathrm{L2}}$ represent the set weight coefficients;
$$\mathrm{MSE} = \frac{1}{m\, n_x n_y} \sum_{k=1}^{m} \sum_{i=1}^{n_y} \sum_{j=1}^{n_x} \left[ \left(U_{i,j}^{k} - \hat{U}_{i,j}^{k}\right)^{2} + \left(V_{i,j}^{k} - \hat{V}_{i,j}^{k}\right)^{2} + \left(P_{i,j}^{k} - \hat{P}_{i,j}^{k}\right)^{2} \right]$$
$$\mathrm{GS} = \frac{1}{m\, n_x n_y} \sum_{k=1}^{m} \sum_{i=1}^{n_y} \sum_{j=1}^{n_x} \sum_{d \in \{x,y\}} \left[ \left(\nabla_{d} U_{i,j}^{k} - \nabla_{d} \hat{U}_{i,j}^{k}\right)^{2} + \left(\nabla_{d} V_{i,j}^{k} - \nabla_{d} \hat{V}_{i,j}^{k}\right)^{2} + \left(\nabla_{d} P_{i,j}^{k} - \nabla_{d} \hat{P}_{i,j}^{k}\right)^{2} \right]$$
$$\mathrm{L2}_{\mathrm{regularization}} = \sum_{l=1}^{L} \sum_{p=1}^{n_l} \theta_{l,p}^{2}$$
in the formulas: U and V represent the x- and y-components of the velocity field, respectively, and P represents the scalar pressure field; m represents the batch size; $n_x$ represents the number of grid points in the x direction and $n_y$ the number of grid points in the y direction; $U_{i,j}^{k}$ and $\hat{U}_{i,j}^{k}$ represent the true and predicted values of the x-direction velocity field at the grid point in row i, column j of the k-th training sample, and $V_{i,j}^{k}$, $\hat{V}_{i,j}^{k}$ and $P_{i,j}^{k}$, $\hat{P}_{i,j}^{k}$ the corresponding true and predicted values of the y-direction velocity field and the pressure field; $\nabla_x$ and $\nabla_y$ denote the gradients of the true and predicted fields in the x and y directions at each grid point; L represents the number of layers with trainable weights; $n_l$ represents the number of trainable weights in layer l; θ represents the model parameters to be trained.
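A minimal PyTorch sketch of this loss, consistent with the reconstructed terms above; first-order finite differences stand in for the grid-point gradients, and the weight coefficients shown are illustrative defaults, not values from the patent:

```python
import torch

def training_loss(pred, true, model, w_mse=1.0, w_gs=1.0, w_l2=1e-5):
    # MSE over the u, v, p channels of the predicted and true flow field images
    mse = ((pred - true) ** 2).mean()
    # gradient sharpening: penalize mismatched x- and y-direction gradients,
    # approximated here by first-order finite differences along the grid axes
    gs = (((pred.diff(dim=3) - true.diff(dim=3)) ** 2).mean()
          + ((pred.diff(dim=2) - true.diff(dim=2)) ** 2).mean())
    # L2 regularization over all trainable parameters theta
    l2 = sum((theta ** 2).sum() for theta in model.parameters())
    return w_mse * mse + w_gs * gs + w_l2 * l2
```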
10. The attention and convolutional neural network codec-based flow field prediction method as claimed in claim 9, wherein in the training phase the flow field prediction model learns different weights for predicting the flow field: in each iteration, a batch of training data undergoes a feedforward pass; when the output result is inconsistent with the expected value, a back-propagation pass is performed, in which the error between the result and the expected value is computed and propagated back layer by layer so that the error of each layer is obtained, and the training loss calculated from this error is used to update the network weights, until the flow field prediction model converges.
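A sketch of this training iteration, assuming the FlowFieldPredictor and training_loss sketches given earlier are in scope; the optimizer choice, learning rate, epoch count, and the stand-in single-batch loader are all illustrative assumptions:

```python
import torch

model = FlowFieldPredictor()                        # encoder-fusion-decoder sketch from claim 1
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# a stand-in single-batch loader; a real one would iterate the training data set
loader = [(torch.randn(2, 1, 64, 64), torch.rand(2, 1), torch.rand(2, 1),
           torch.randn(2, 3, 64, 64))]

for epoch in range(100):                                # until the model converges
    for sdf, aoa, mach, true_field in loader:           # one batch per iteration
        pred = model(sdf, aoa, mach)                    # feedforward pass
        loss = training_loss(pred, true_field, model)   # error vs. the expected output
        optimizer.zero_grad()
        loss.backward()                                 # propagate the error layer by layer
        optimizer.step()                                # update the network weights
```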
CN202211430114.0A 2022-11-15 2022-11-15 Flow field prediction method based on attention and convolutional neural network codec Withdrawn CN115859781A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211430114.0A CN115859781A (en) 2022-11-15 2022-11-15 Flow field prediction method based on attention and convolutional neural network codec
CN202310242725.0A CN116227359A (en) 2022-11-15 2023-03-14 Flow field prediction method based on attention and convolutional neural network codec

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211430114.0A CN115859781A (en) 2022-11-15 2022-11-15 Flow field prediction method based on attention and convolutional neural network codec

Publications (1)

Publication Number Publication Date
CN115859781A true CN115859781A (en) 2023-03-28

Family

ID=85663577

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202211430114.0A Withdrawn CN115859781A (en) 2022-11-15 2022-11-15 Flow field prediction method based on attention and convolutional neural network codec
CN202310242725.0A Pending CN116227359A (en) 2022-11-15 2023-03-14 Flow field prediction method based on attention and convolutional neural network codec

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202310242725.0A Pending CN116227359A (en) 2022-11-15 2023-03-14 Flow field prediction method based on attention and convolutional neural network codec

Country Status (1)

Country Link
CN (2) CN115859781A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117540489A (en) * 2023-11-13 2024-02-09 重庆大学 Airfoil pneumatic data calculation method and system based on multitask learning
CN117540664A (en) * 2024-01-10 2024-02-09 中国空气动力研究与发展中心计算空气动力研究所 Two-dimensional flow field prediction and correction method based on graph neural network
CN117540664B (en) * 2024-01-10 2024-04-05 中国空气动力研究与发展中心计算空气动力研究所 Two-dimensional flow field prediction and correction method based on graph neural network
CN117574029A (en) * 2024-01-19 2024-02-20 中国空气动力研究与发展中心计算空气动力研究所 Compatible method of high-resolution Reynolds stress and Reynolds average Navier-Stokes equation solver
CN117574029B (en) * 2024-01-19 2024-04-26 中国空气动力研究与发展中心计算空气动力研究所 Compatible method of high-resolution Reynolds stress and Reynolds average Navier-Stokes equation solver

Also Published As

Publication number Publication date
CN116227359A (en) 2023-06-06

Similar Documents

Publication Publication Date Title
Liu et al. Supervised learning method for the physical field reconstruction in a nanofluid heat transfer problem
CN115859781A (en) Flow field prediction method based on attention and convolutional neural network codec
Georgiou et al. Learning fluid flows
Chen et al. Crom: Continuous reduced-order modeling of pdes using implicit neural representations
Zhang et al. MeshingNet3D: Efficient generation of adapted tetrahedral meshes for computational mechanics
CN109726433B (en) Three-dimensional non-adhesive low-speed streaming numerical simulation method based on curved surface boundary conditions
Miyanawala et al. A novel deep learning method for the predictions of current forces on bluff bodies
Jacob et al. Deep learning for real-time aerodynamic evaluations of arbitrary vehicle shapes
Loeven et al. Airfoil analysis with uncertain geometry using the probabilistic collocation method
Renn et al. Forecasting subcritical cylinder wakes with Fourier Neural Operators
Bonnet et al. An extensible benchmarking graph-mesh dataset for studying steady-state incompressible Navier-Stokes equations
Li et al. Fast flow field prediction of hydrofoils based on deep learning
CN117786286A (en) Fluid mechanics equation solving method based on physical information neural network
Naderibeni et al. Learning solutions of parametric Navier-Stokes with physics-informed neural networks
CN111159956B (en) Feature-based flow field discontinuity capturing method
Olson et al. Turbulence-parameter estimation for current-energy converters using surrogate model optimization
Zhu et al. Hydrodynamic design of a circulating water channel based on a fractional-step multi-objective optimization
Strönisch et al. Flow field prediction on large variable sized 2D point clouds with graph convolution
Shahane et al. Convolutional neural network for flow over single and tandem elliptic cylinders of arbitrary aspect ratio and angle of attack
Hočevar et al. A turbulent-wake estimation using radial basis function neural networks
CN116415482A (en) Wing flow field analysis method based on graph neural network MeshGraphNet
Xu et al. A novel model with an improved loss function to predict the velocity field from the pressure on the surface of the hydrofoil
Domínguez-Vázquez et al. Adjoint-based particle forcing reconstruction and uncertainty quantification
Díaz-Morales et al. Deep learning combined with singular value decomposition to reconstruct databases in fluid dynamics
Zhao et al. Prediction of confined flow field around a circular cylinder and its force based on convolution neural network

Legal Events

Date Code Title Description
PB01 Publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20230328