CN115293050A - Method, device and system for establishing fluid flow reduced-order model and storage medium

Info

Publication number: CN115293050A
Authority: CN (China)
Prior art keywords: flow field, model, snapshot, matrix, flow
Legal status: Pending
Application number: CN202211005481.6A
Other languages: Chinese (zh)
Inventors: 武频, 邱丰, 翁龙杰, 张波
Current Assignee: University of Shanghai for Science and Technology
Original Assignee: University of Shanghai for Science and Technology
Application filed by University of Shanghai for Science and Technology
Priority to CN202211005481.6A
Publication of CN115293050A

Classifications

    • G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING; G06F30/00 Computer-aided design [CAD]; G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G06F30/28 Design optimisation, verification or simulation using fluid dynamics, e.g. using Navier-Stokes equations or computational fluid dynamics [CFD]
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS; G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 Learning methods
    • G06F2111/10 Numerical modelling
    • G06F2113/08 Fluids
    • G06F2119/12 Timing analysis or timing optimisation
    • G06F2119/14 Force analysis or force optimisation, e.g. static or dynamic forces


Abstract

The invention discloses a method, a device and a system for constructing a fluid flow reduced-order model, and a storage medium; the model is based on proper orthogonal decomposition (POD) and a Transformer neural network. Proper orthogonal decomposition is used to generate the basis functions of the low-dimensional flow field, and the coefficients are used as the low-dimensional flow field features. A Transformer neural network is used to construct a prediction model of the low-dimensional features. The reduced-order model presented herein relies solely on the solution of the flow field. Compared with an RNN, the Transformer neural network obtains the correlation information among the flow field sequences through an attention mechanism, which improves the robustness of the model and relieves the backward propagation of model errors in the autoregressive process, so that the variation law of the flow can be better captured. In the whole calculation process, the flow field sequences are computed in parallel in matrix form, giving the model an advantage in online computation time.

Description

Method, device and system for establishing fluid flow reduced-order model and storage medium
Technical Field
The invention relates to the technical field of computers, in particular to a method, a device and a system for establishing a fluid flow reduced-order model and a storage medium.
Background
Numerical simulation is widely used as a predictive tool to better understand complex physical processes and aid engineering design. However, obtaining high-precision, high-reliability simulation results from complex dynamical models consumes a large amount of computing resources and computation time. In this situation, the reduced-order model (ROM) plays an important role: it replaces the original complex computing system with a simple one, performs the simulation within an acceptable time and a limited storage capacity, and obtains sufficiently reliable results. At present, reduced-order models are widely used in various complex computational fluid dynamics fields, such as environmental science, aerospace, and industrial applications.
Generally, reduced-order models can be divided into two categories according to the construction method. The first category comprises system-identification-based methods, which rely only on the inputs and outputs of the fluid dynamic system and replace the complex original model with a low-order functional model. The second category comprises methods based on flow field characteristics; these map the full-order system to a low-dimensional space through modal decomposition or feature extraction, and can better simulate the details and nonlinearity of fluid motion. Reduced-order models based on flow field characteristics can further be divided into intrusive and non-intrusive reduced-order models. The intrusive reduced-order model depends on the original governing equations and requires modifying them, so it is difficult to implement and suffers from instability and nonlinear-efficiency problems. The non-intrusive reduced-order model is completely decoupled from the original governing equations during construction and achieves rapid prediction by capturing the basic features in the data. Being entirely data-driven, the non-intrusive reduced-order model is easier to implement and improve, and has therefore attracted more attention.
A ROM based on proper orthogonal decomposition (POD) is a typical ROM based on flow field characteristics. Since POD was first successfully applied to the construction of reduced-order models in 1996, researchers have proposed various POD-based reduced-order models, including intrusive and non-intrusive approaches. Typical intrusive ROMs include those based on POD and Galerkin projection. Non-intrusive ROMs are constructed mainly by combining POD with interpolation methods such as kriging, RBF, and Smolyak; their predictive power is determined by the choice of interpolation function and the sample data. In recent years, some scholars have used autoencoders and dynamic mode decomposition (DMD) instead of POD. Hamidreza Eivazi, Kookjin Lee, et al. constructed non-intrusive ROMs using autoencoders; DMD, proposed by P. J. Schmid and used by J. Kou et al. to extract flow features, has also achieved good simulation results in non-intrusive ROMs. However, autoencoders have too many network parameters, which makes the model difficult to train and prone to overfitting, so they offer no significant advantage over conventional methods such as POD and DMD.
In recent years, with the development of deep learning, constructing non-intrusive ROMs with neural networks has become a new research focus. Deep learning is good at mining relationships between data and has been successfully applied in fields such as natural language processing and computer vision. Neural networks have a strong fitting ability; in theory they can approximate any bounded function to arbitrary accuracy, which makes it possible to simulate complex flows with neural networks. Deep learning has had an impact on fluid dynamics, and more and more research combines reduced-order modeling with deep learning. These studies show that applying deep learning gives reduced-order modeling better research prospects.
In the construction of non-intrusive ROMs, deep learning is mainly used for time series modeling, which used to be done with interpolation methods. The recurrent neural network (RNN), particularly the long short-term memory network (LSTM) [28], performs well in sequence data modeling. Therefore, constructing a reduced-order model by combining an RNN with POD has become a major deep learning technique. Mannarino proposed a nonlinear aerodynamic ROM based on a recurrent neural network. Wang proposed a deep learning reduced-order model (DLROM) based on POD and LSTM; DLROMs are capable of capturing complex fluid flows, and experimental results show that the LSTM reduced-order model has better prediction capability than a combination of POD and radial basis functions (RBF). Mohan also proposed a bidirectional-LSTM ROM for use in turbulence control. Besides RNNs, the temporal convolutional network (TCN) also performs well in sequence modeling, and Wu proposed a structurally simpler ROM based on POD and TCN. Other machine learning techniques have also been applied to ROM construction, classic examples being Gaussian process regression, LS-SVM, feed-forward neural networks, and RBF neural networks.
Although RNNs have advantages in sequence modeling, their internal structure is complex and they cannot be computed in parallel; RNNs also suffer from vanishing gradients and long-range dependence problems during training. Convolutional neural networks (CNNs) do not have these problems thanks to their network structure, but in feature extraction CNNs are limited by local receptive fields and can only acquire local information. Vaswani et al. described the Transformer neural network for sequence modeling; through its self-attention computation it avoids the problems of LSTM and TCN, achieves parallel computation, and expands the computation domain from local to global. Recent studies have shown that, through parallel computation, the Transformer has higher computational efficiency than recurrent neural networks, and that self-attention in the Transformer captures global information, making it more powerful than the convolution computation in a CNN. Transformers have already been used for prediction in related studies, and experimental results show that they perform better. Therefore, this application proposes a new reduced-order model that builds on the efficiency and robustness of the Transformer.
Disclosure of Invention
The present invention has been made to solve the above-mentioned problems in the prior art. Therefore, what is needed is a method, apparatus, and system for establishing a reduced-order fluid flow model, and a storage medium, which rely only on snapshots of the solution obtained through simulation: POD is used to generate the basis functions that optimally represent the solution, and a Transformer neural network is used to learn the physical dynamics.
The invention specifically adopts the following technical scheme:
according to a first aspect of the present invention, there is provided a method of building a reduced-order fluid flow model including basis functions of POD and a neural network model, the method including:
constructing, from flow field snapshots $X^{(i)}$ at consecutive time steps, a dataset D for training the neural network model, where the vector $X^{(i)}$ represents the flow field data at the i-th time instant and the flow field snapshot dataset D is expressed as:

$D = \{X^{(1)}, X^{(2)}, \ldots, X^{(T)}\}$

$X^{(t)} = (x_1^{(t)}, x_2^{(t)}, \ldots, x_n^{(t)})^{\mathrm{T}}$

where $X^{(t)}$ represents the flow field snapshot at time t in the dataset, $X^{(T)}$ represents the last flow field snapshot in the dataset, $x_n^{(t)}$ represents the value at the n-th node of the flow field, the subscript n denotes the number of grid points in the flow field, and the superscript T denotes the number of time steps in the dataset;
obtaining basis functions through singular value decomposition based on the flow field snapshot matrix M, the flow field snapshot matrix M consisting of m flow field snapshots sampled from the flow field snapshot dataset D;
and training the neural network model with the flow field snapshot dataset D on the basis of the basis functions to obtain the fluid flow reduced-order model.
Further, obtaining the basis functions σ through singular value decomposition based on the flow field snapshot matrix M includes:

First, the flow field snapshot matrix is zero-averaged. Specifically, the mean vector $\bar{X}$ is subtracted from the flow field snapshot matrix M to obtain the zero-averaged snapshot matrix $\tilde{M}$:

$\tilde{M} = M - \bar{X}$

where $M \in \mathbb{R}^{n \times m}$ and $\bar{X} \in \mathbb{R}^{n \times 1}$ is the mean vector of each row.

Singular value decomposition is then performed by the following formula:

$\tilde{M} = U \Sigma V^{\mathrm{T}}$

where $U \in \mathbb{R}^{n \times n}$ is the eigenvector matrix of $\tilde{M}\tilde{M}^{\mathrm{T}}$, $V \in \mathbb{R}^{m \times m}$ is the eigenvector matrix of $\tilde{M}^{\mathrm{T}}\tilde{M}$, and $\Sigma \in \mathbb{R}^{n \times m}$ is a diagonal matrix whose diagonal elements are the singular values of $\tilde{M}$, representing the energy contained in the corresponding basis functions; the superscript T denotes matrix transposition. The singular vectors and singular values satisfy

$\tilde{M} v_i = \sqrt{\lambda_i}\, u_i$

where $v_i$ is the i-th vector of the V matrix, $u_i$ is the i-th vector of the U matrix, and $\lambda_i$ is the eigenvalue of the $\tilde{M}\tilde{M}^{\mathrm{T}}$ matrix, i.e. the square of the singular value $\Sigma_i$. Sorting the eigenvalues from largest to smallest and selecting the first j eigenvectors in turn gives the basis functions σ:

$\sigma = \{u_1, u_2, \ldots, u_j\}$

where $\sigma \in \mathbb{R}^{j \times n}$ is the set of basis functions. With j basis functions, most of the information of the flow field $\tilde{M}$ can be retained:

$E(j) = \sum_{i=1}^{j} \lambda_i \Big/ \sum_{i=1}^{m} \lambda_i$

After the basis functions are generated, the flow field snapshot $X^{(t)}$ at the t-th time instant may be expressed as:

$\alpha^{(t)} = \sigma X^{(t)}$

where $\alpha^{(t)} \in \mathbb{R}^{j \times 1}$, $X^{(t)} \in \mathbb{R}^{n \times 1}$, and σ is the basis function matrix.
Further, converting the flow field snapshot dataset D into a low-dimensional flow field snapshot dataset α on the basis of the basis functions and training the neural network model to obtain the fluid flow reduced-order model includes:

describing the flow field prediction problem by the following equation:

$\alpha^{(t+1)} = f(\alpha^{(t-k)}, \ldots, \alpha^{(t-2)}, \alpha^{(t-1)}, \alpha^{(t)})$

where $\alpha^{(t)}$ represents the low-dimensional flow field features of the flow field snapshot at time t, k controls the time span of the input to the reduced-order model, the flow field segments at the previous k+1 time instants are input into the function, and f establishes a low-dimensional time-series feature regression model that predicts the data at the next time instant from the low-dimensional historical data.
Further, the neural network model introduces a residual structure to assist the training and convergence of the network, wherein the data at the previous k+1 time instants are organized as a two-dimensional array whose rows represent time instants and whose columns are the corresponding modes.
Further, the neural network model predicts the output through a fully connected layer, the output being the coefficients at the next time instant.
Further, the flow field data may include one of pressure, temperature, and velocity.
According to a second aspect of the present invention, there is provided an apparatus for creating a reduced-order fluid flow model, the apparatus comprising:
a dataset construction unit configured to construct, from flow field snapshots $X^{(i)}$ at consecutive time steps, a dataset D for training the neural network model, where $X^{(i)}$ represents the flow field data at the i-th time instant and the flow field snapshot dataset D is expressed as:

$D = \{X^{(1)}, X^{(2)}, \ldots, X^{(T)}\}$

$X^{(t)} = (x_1^{(t)}, x_2^{(t)}, \ldots, x_n^{(t)})^{\mathrm{T}}$

where $X^{(t)}$ represents the flow field snapshot at time t in the dataset, $X^{(T)}$ represents the last flow field snapshot in the dataset, $x_n^{(t)}$ represents the value at the n-th node of the flow field, the subscript n denotes the number of grid points in the flow field, and the superscript T denotes the number of time steps in the dataset;
the base function acquisition unit is configured to obtain a base function through singular value decomposition based on the flow field snapshot matrix M; the flow field snapshot matrix M comprises M flow field snapshots sampled from the flow field snapshot dataset D;
and a model training unit configured to train the neural network model with the flow field snapshot dataset D on the basis of the basis functions to obtain the fluid flow reduced-order model.
According to a third aspect of the present invention, there is provided a system for establishing a reduced-order fluid flow model, the system comprising: a memory for storing a computer program; a processor for executing the computer program to implement the method as described above.
According to a fourth aspect of the present invention, there is provided a non-transitory computer readable storage medium having stored thereon instructions which, when executed by a processor, perform the method as described above.
The method, apparatus, system, and storage medium for establishing a reduced-order fluid flow model according to the various aspects of the invention achieve at least the following technical effects:

The method generates the basis functions of the low-dimensional flow field through proper orthogonal decomposition, takes the coefficients of the basis functions as the low-dimensional flow field features, and constructs a prediction model of the low-dimensional features using a Transformer neural network. The reduced-order model presented herein relies solely on the solution of the flow field. Compared with an RNN, the Transformer neural network obtains the correlation information among the flow field sequences through an attention mechanism, improves the robustness of the model, and relieves the backward propagation of model errors in the autoregressive process. In the whole calculation process, the flow field sequence is computed in parallel in matrix form, giving an advantage in computation time. Furthermore, the invention evaluates the new reduced-order model on two-dimensional flow past a cylinder and two-dimensional flow through a building complex in Chongqing. The experimental results show that the time cost is reduced by one order of magnitude and the prediction error is reduced by 40%. The self-attention structure in the Transformer neural network is useful for constructing reduced-order models. In addition, the model is suitable not only for 2D flow fields but also for 3D complex flow fields.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. Like reference numerals having alphabetic suffixes or different alphabetic suffixes may represent different instances of similar components. The drawings illustrate various embodiments, by way of example and not by way of limitation, and together with the description and claims, serve to explain the inventive embodiments. The same reference numbers will be used throughout the drawings to refer to the same or like parts, where appropriate. Such embodiments are illustrative and not intended to be exhaustive or exclusive embodiments of the present apparatus or method.
FIG. 1 shows the canonical architecture of a Transformer.

Fig. 2 shows an exemplary diagram of adding position information to an input sequence.
Fig. 3 shows a self-attention calculation process.
FIG. 4 illustrates a flow chart of a method of building a reduced order fluid flow model according to an embodiment of the invention.
FIG. 5 is a block diagram illustrating a Transformer neural network model in a reduced-order fluid flow model, according to an embodiment of the invention.
Fig. 6 shows a schematic diagram of the calculation region of the flow field of the test case 1.
FIG. 7 illustrates the meshing and boundary layer encryption of test case 1.
Fig. 8 shows the flow field prediction results of the ROM at t = 2 s for test case 1 using different numbers of basis functions.
FIG. 9 shows the RMSE between the predicted ROM and the high fidelity numerical simulation solution for test case 1 using different numbers of basis functions.
FIG. 10 shows the prediction results of test case 1 using the LSTM ROM and the Transformer ROM.
FIG. 11 shows the RMSE between the numerical solution of test case 1 and the predicted results using LSTM ROM and Transformer ROM.
FIG. 12 shows a grid schematic of the building set for test case 2.
Fig. 13 shows a graph of the prediction results of the flow field at t =400s for the ROM under different basis functions for test case 2.
FIG. 14 shows the RMSE values between the numerical solution of test case 2 and the ROM solution using different basis functions.
FIG. 15 is a graph showing the prediction results of test case 2 using LSTM ROM and Transformer ROM.
FIG. 16 shows the RMSE between the numerical solution of test case 2 and the predicted results using LSTM ROM and Transformer ROM.
Fig. 17 shows a three-dimensional model diagram of test case 3 constructed according to the real scale and size of LSBU.
Fig. 18 shows a three-dimensional unstructured grid map of the LSBU model of test case 3.
FIG. 19 shows a graph of the predicted results of test case 3 using LSTM ROM and Transformer ROM for a thousand time steps of an LSBU streaming scenario.
Fig. 20 shows RMSE values between the prediction results and numerical simulation results of test case 3 using LSTM ROM and Transformer ROM.
Fig. 21 is a block diagram illustrating an apparatus for building a reduced-order fluid flow model according to an embodiment of the present invention.
Detailed Description
To make the technical solutions of the present invention better understood, the invention is described in detail below with reference to the accompanying drawings and specific embodiments. The following detailed description is provided in connection with the drawings but is not intended to limit the invention. The order in which the steps are described is given as an example and should not be construed as a limitation where the steps have no contextual dependence on each other; one skilled in the art will know that the order may be adjusted without destroying the logical relationship between the steps or rendering the overall process impractical.
Description of related art terms:
Transformer neural network:
The Transformer is a class of seq2seq networks: the encoder maps an input sequence $(x_1, \ldots, x_n)$ to a sequence of continuous representations $z = (z_1, \ldots, z_n)$, and, given z, the decoder generates an output sequence $(y_1, \ldots, y_m)$ one element at a time. At each step the model is autoregressive. The Transformer follows this overall architecture but, unlike traditional CNNs and RNNs, the entire network is composed entirely of self-attention and feed-forward layers. In general, a Transformer network consists of two parts, an encoder and a decoder. FIG. 1 shows the canonical architecture of a Transformer.

The encoder is formed by stacking N identical encoder blocks, and the decoder by stacking N identical decoder blocks. The function of the encoder is to compute the correlations of the data within the input sequence. Each encoder block includes a multi-head attention layer for computing global-scope information of the input data. Because self-attention adopts a parallel computing strategy and processes all input sequence data simultaneously, the computation efficiency of the model is very high. Stacking multiple encoder layers can better mine the latent connections of the data within the sequence. The decoder combines the global correlation information of the previous sequence and outputs the sequence to be predicted.

Add & Norm layers serve as auxiliaries: Add denotes a residual connection, which prevents network degradation and vanishing gradients during training, and Norm denotes layer normalization, which accelerates model convergence. Through position encoding, the input sequence still retains position information during parallel computation, a significant difference from the serial computation process of the LSTM. It is this series of advantages that leads the proposed reduced-order model to adopt the Transformer neural network as its main framework.
Position Encoding (Positional Encoding):
The position code represents the position of each datum in the input sequence. Because the parallel computing mode of the Transformer neural network ignores the order information of the data in the sequence, the relative or absolute position information of the data is preserved through position encoding. The position information function PE is computed as follows:

$PE_{(pos, 2i)} = \sin\!\left(pos / 10000^{2i/d_{model}}\right)$

$PE_{(pos, 2i+1)} = \cos\!\left(pos / 10000^{2i/d_{model}}\right)$

The process of adding location information is shown in fig. 2. Here pos represents the position in the sequence, length represents the input sequence length, and $d_{model}$ represents the computation dimension of the Transformer neural network, which can be set according to the dimension of the flow field features. The positional encoding consists of length vectors of $d_{model}$ dimensions; 2i indexes the even-numbered entries of the vector at row pos, and 2i+1 the odd-numbered entries. For an input sequence $X \in \mathbb{R}^{length \times d_{model}}$, the process of adding position codes can be defined as:

$Y = X \oplus PE$

where $\oplus$ represents matrix addition and Y denotes the sequence containing position information. The PE can accommodate input sequences of various lengths and easily encodes the absolute position of each datum in the sequence.
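As an illustration, the sinusoidal position code defined above can be computed in a few lines of NumPy; the names length and d_model follow the symbols in the formulas. This is a minimal sketch, assuming an even d_model, not the patent's own code.

```python
import numpy as np

def positional_encoding(length: int, d_model: int) -> np.ndarray:
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(...)."""
    pos = np.arange(length)[:, None]                  # (length, 1)
    i = np.arange(0, d_model, 2)[None, :]             # even dimension indices 2i
    angle = pos / np.power(10000.0, i / d_model)      # (length, d_model // 2)
    pe = np.zeros((length, d_model))
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe

# Y = X + PE: add position information to an input sequence X of shape (length, d_model).
```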
Self-Attention module (Self-Attention):
Self-attention computes, in parallel, the weight relationship between each vector and all other vectors in the input sequence, i.e. a relationship on the global scale. This is one reason we choose the Transformer to build the reduced-order model: by solving for the weight relationships among flow field snapshots at different time instants, the degree of association between them is discovered, which improves the prediction accuracy. Moreover, the end-to-end matrixed parallel computation of the Transformer greatly improves the computational efficiency of the reduced-order model. The self-attention module is shown in fig. 3.
When the Transformer neural network performs the self-attention computation, three network parameter matrices $W^Q$, $W^K$, $W^V$ are randomly initialized (and finally determined after training on the dataset) and multiplied with the input matrix Y containing position information to obtain the query, key, and value matrices that support the self-attention computation:

$Q = Y W^Q, \quad K = Y W^K, \quad V = Y W^V$
The self-attention weight coefficients are obtained from the three matrices Q, K, and V by the following formula:

$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\mathrm{T}}}{\sqrt{d_{model}}}\right) V$

where $d_{model}$ is the dimension of the input Y. To prevent an excessively large inner product of Q and K from driving the self-attention weight coefficients to extremes, the computation divides by $\sqrt{d_{model}}$, which plays a buffering role. The attention function computes the degree of attention of the datum at a single position relative to the data at all positions in the input sequence; the higher the association between two elements, the larger the corresponding attention weight. The whole computation does not depend on the order of the input vectors and is parallelized through matrix operations.
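The formula above translates directly into NumPy. In this minimal sketch, W_q, W_k, and W_v stand for the trainable matrices $W^Q$, $W^K$, $W^V$; it illustrates the computation, not the patent's implementation.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))   # subtract max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(Y: np.ndarray, W_q, W_k, W_v) -> np.ndarray:
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_model)) V for one input sequence Y."""
    Q, K, V = Y @ W_q, Y @ W_k, Y @ W_v               # (length, d_model) each
    d_model = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_model)               # pairwise weights over all positions
    return softmax(scores, axis=-1) @ V
```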
Multi-head attention is a derivative of self-attention, and its calculation process is consistent with self-attention. Just as a CNN uses multiple convolutional layers to better extract data features, the Transformer uses multiple self-attention heads (multi-head attention) to compute the weight relationships between the data in the input sequence. The specific method is to integrate the calculation results of several self-attention heads:

$O = \mathrm{Linear}(\mathrm{Concat}(\mathrm{Attention}_1, \ldots, \mathrm{Attention}_h))$

The multi-head attention result $O \in \mathbb{R}^{length \times d_{model}}$ includes the association information between any two units in the input sequence and allows the model to learn the relevant information in different representation subspaces.
The ROM is intended to extrapolate the flow field changes indefinitely from a small number of flow field snapshots. A drawback of the Transformer network is that, as the decoder iteratively extends the output sequence, the dimension of the attention matrix in the decoder also grows, increasing the amount of computation in the prediction process; the original Transformer is thus only suitable for sequence prediction of limited length. Therefore, the proposed model uses only the encoder module of the Transformer network.
Encoder: the encoder works similarly to feature extraction in a CNN, except that it computes information on a global scale. The encoder stacks several identical encoder blocks, so that the information attended to among the vectors becomes more accurate. Each encoder block involves the computation of the multi-head attention, Add & Norm, and feed-forward layers. Each Add & Norm layer computes:

$\mathrm{AddNorm}(x) = \mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$

The feed-forward layer computes:

$\mathrm{FeedForward}(x) = \max(0, x W_1 + b_1) W_2 + b_2$

Parallel computation: each encoder block accepts a matrix $Y \in \mathbb{R}^{length \times d_{model}}$ of sequence length length and dimension $d_{model}$, and outputs an information encoding matrix $O \in \mathbb{R}^{length \times d_{model}}$ of the same dimensions. Finally, the inference result is output through a linear mapping layer (feed-forward).
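A minimal PyTorch sketch of one encoder block as just described: multi-head attention followed by Add & Norm, then the feed-forward layer followed by Add & Norm. The head count and feed-forward width are assumptions for illustration.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One encoder block: Multi-Head Attention -> Add & Norm -> Feed Forward -> Add & Norm."""
    def __init__(self, d_model: int, n_heads: int = 8, d_ff: int = 2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(                      # FeedForward(x) = max(0, x W1 + b1) W2 + b2
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, length, d_model)
        x = self.norm1(x + self.attn(x, x, x)[0])        # residual connection + layer norm
        return self.norm2(x + self.ff(x))
```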
The invention provides a method for establishing a reduced-order fluid flow model comprising basis functions from proper orthogonal decomposition and a neural network model; referring to fig. 4, the method includes steps S100 to S300.

First, in step S100, a dataset D for training the neural network model is constructed from flow field snapshots $X^{(i)}$ at consecutive time steps, where the vector $X^{(i)}$ represents the flow field data at the i-th time instant and the flow field snapshot dataset D is expressed as:

$D = \{X^{(1)}, X^{(2)}, \ldots, X^{(T)}\}$

$X^{(t)} = (x_1^{(t)}, x_2^{(t)}, \ldots, x_n^{(t)})^{\mathrm{T}}$

where $X^{(t)}$ represents the flow field snapshot at time t in the dataset, $X^{(T)}$ represents the last flow field snapshot in the dataset, $x_n^{(t)}$ represents the value at the n-th node of the flow field, the subscript n denotes the number of grid points in the flow field, and the superscript T denotes the number of time steps in the dataset.
In some embodiments, the flow field data may include one of pressure, temperature, and velocity.
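For illustration, assembling the snapshot dataset D and the snapshot matrix M from a sequence of solved fields might look as follows. The loader load_snapshot, the array sizes, and the sampling stride are hypothetical stand-ins, not part of the patent.

```python
import numpy as np

n, T = 1000, 200   # placeholder sizes; test case 1 below uses n = 32119 nodes, T = 2000 steps

def load_snapshot(t: int) -> np.ndarray:
    """Hypothetical stand-in for reading one solved field (e.g. velocity) at time step t."""
    rng = np.random.default_rng(t)
    return rng.standard_normal(n)                     # placeholder: one value per grid node

# Dataset D = {X^(1), ..., X^(T)}: one flow field snapshot per column.
D = np.column_stack([load_snapshot(t) for t in range(1, T + 1)])   # shape (n, T)

# Snapshot matrix M: m snapshots sampled from D (here every other one, so m = T // 2).
M = D[:, ::2]
```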
Second, in step S200, the basis functions are obtained through singular value decomposition based on the flow field snapshot matrix M, where the flow field snapshot matrix M consists of m flow field snapshots sampled from the flow field snapshot dataset D.

In some embodiments, POD is used to reduce the dimension of the flow field data $X^{(t)}$ and reconstruct it; that is, the POD method aims to find a set of basis function sequences in a span of continuous flow field snapshots such that each sequence can express the complete information of one flow field snapshot.

First, the flow field snapshot matrix is zero-averaged: the mean vector $\bar{X}$ is subtracted from the flow field snapshot matrix M to obtain the zero-averaged snapshot matrix $\tilde{M}$:

$\tilde{M} = M - \bar{X}$

where $M \in \mathbb{R}^{n \times m}$ and $\bar{X} \in \mathbb{R}^{n \times 1}$ is the mean vector of each row.

Singular value decomposition is then performed by the following formula:

$\tilde{M} = U \Sigma V^{\mathrm{T}}$

where $U \in \mathbb{R}^{n \times n}$ is the eigenvector matrix of $\tilde{M}\tilde{M}^{\mathrm{T}}$, $V \in \mathbb{R}^{m \times m}$ is the eigenvector matrix of $\tilde{M}^{\mathrm{T}}\tilde{M}$, and $\Sigma \in \mathbb{R}^{n \times m}$ is a diagonal matrix whose diagonal elements are the singular values of $\tilde{M}$, representing the energy contained in the corresponding basis functions; the superscript T denotes matrix transposition. The singular vectors and singular values satisfy

$\tilde{M} v_i = \sqrt{\lambda_i}\, u_i$

where $v_i$ is the i-th vector of the V matrix, $u_i$ is the i-th vector of the U matrix, and $\lambda_i$ is the eigenvalue of the $\tilde{M}\tilde{M}^{\mathrm{T}}$ matrix, i.e. the square of the singular value $\Sigma_i$. Sorting the eigenvalues from largest to smallest and selecting the first j eigenvectors in turn gives the basis functions σ:

$\sigma = \{u_1, u_2, \ldots, u_j\}$

where $\sigma \in \mathbb{R}^{j \times n}$ is the set of basis functions. With j basis functions, most of the information of the flow field $\tilde{M}$ can be retained:

$E(j) = \sum_{i=1}^{j} \lambda_i \Big/ \sum_{i=1}^{m} \lambda_i$

After the basis functions are generated, the flow field snapshot $X^{(t)}$ at the t-th time instant may be expressed as:

$\alpha^{(t)} = \sigma X^{(t)}$

where $\alpha^{(t)} \in \mathbb{R}^{j \times 1}$, $X^{(t)} \in \mathbb{R}^{n \times 1}$, and σ is the basis function matrix.
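A minimal NumPy sketch of the POD steps above. The economy-size SVD (full_matrices=False) is an implementation choice that yields the same leading modes, and the snapshot matrix here is random placeholder data.

```python
import numpy as np

def pod_basis(M: np.ndarray, j: int):
    """First j POD basis functions of a snapshot matrix M (n x m), via economy-size SVD."""
    X_bar = M.mean(axis=1, keepdims=True)             # average vector of each row
    M_tilde = M - X_bar                               # zero-averaged snapshot matrix
    U, S, Vt = np.linalg.svd(M_tilde, full_matrices=False)
    sigma = U[:, :j].T                                # basis functions, shape (j, n)
    energy = (S[:j] ** 2).sum() / (S ** 2).sum()      # retained fraction E(j); lambda_i = S_i**2
    return sigma, X_bar, energy

M = np.random.default_rng(0).standard_normal((1000, 200))   # placeholder snapshot matrix
sigma, X_bar, energy = pod_basis(M, j=18)
alpha = sigma @ M                                     # low-dimensional features alpha^(t) = sigma X^(t)
```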
Finally, in step S300, the flow field snapshot dataset D is converted into low-dimensional flow field snapshot data α on the basis of the basis functions, and the neural network model is trained to obtain the fluid flow reduced-order model.

It should be noted that, in some embodiments, the flow field is reconstructed from the basis functions obtained in step S200, and the coefficients α are used as the low-dimensional features of the fluid simulation numerical solution.
Specifically, the flow field prediction problem is described by the following formula:

$\alpha^{(t+1)} = f(\alpha^{(t-k)}, \ldots, \alpha^{(t-2)}, \alpha^{(t-1)}, \alpha^{(t)})$

where $\alpha^{(t)}$ represents the low-dimensional flow field features of the flow field snapshot at time t, k controls the time span of the input to the reduced-order model, the flow field segments at the previous k+1 time instants are input into the function, and f establishes a low-dimensional time-series feature regression model that predicts the data at the next time instant from the low-dimensional historical data.
By way of example only, during training the function above is approximated by $\alpha^{(t+1)} = f_{NN}(\alpha^{(t-k)}, \ldots, \alpha^{(t-2)}, \alpha^{(t-1)}, \alpha^{(t)})$, where $f_{NN}$ is constructed using a Transformer neural network and k is 10. The structure of the Transformer neural network model in this example is shown in fig. 5.

It should be noted that the structure of the Transformer neural network model in fig. 5 is only an example; the invention can build a reduced-order model for any flow field on the basic framework shown in fig. 5.

In some embodiments, the neural network model introduces a residual structure to assist the training and convergence of the network, wherein the data at the previous k+1 time instants are organized as a two-dimensional array whose rows represent time instants and whose columns are the corresponding modes, as in the sketch below.
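As referenced above, the (k+1)-row input windows and next-instant targets can be assembled by sliding a window over the low-dimensional feature sequence. A sketch under assumed shapes; the array alpha is placeholder data with j = 18 modes.

```python
import numpy as np

def make_windows(alpha: np.ndarray, k: int = 10):
    """Build (input, target) pairs: input = alpha[t-k..t] (k+1 rows), target = alpha[t+1]."""
    X, y = [], []
    for t in range(k, len(alpha) - 1):
        X.append(alpha[t - k : t + 1])   # (k+1, j): rows are time instants, columns are modes
        y.append(alpha[t + 1])           # coefficients at the next time instant
    return np.stack(X), np.stack(y)

alpha = np.random.default_rng(0).standard_normal((2000, 18))   # placeholder low-dim features
X_train, y_train = make_windows(alpha, k=10)                   # X: (N, 11, 18), y: (N, 18)
```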
In some embodiments, hyperparameters in the network, such as $d_{model}$, are determined by the dimension of the input flow field. Finally, the output is predicted using a fully connected network; the output is the low-dimensional flow field coefficient $\alpha^{(t+1)}$ at time t+1.
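Putting the pieces together, a sketch of such a network: linear embedding of the j coefficients, sinusoidal position encoding, a stack of encoder blocks, and a fully connected output head emitting $\alpha^{(t+1)}$. The block count of 6 follows the experiments described later; d_model = 64 and 8 heads are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PODTransformerROM(nn.Module):
    """Sketch: encoder-only Transformer mapping k+1 coefficient vectors to the next one."""
    def __init__(self, j: int = 18, seq_len: int = 11, d_model: int = 64,
                 n_heads: int = 8, n_blocks: int = 6):
        super().__init__()
        self.embed = nn.Linear(j, d_model)
        # Sinusoidal position codes, precomputed for the fixed window length.
        pos = torch.arange(seq_len).unsqueeze(1).float()
        i = torch.arange(0, d_model, 2).float()
        pe = torch.zeros(seq_len, d_model)
        pe[:, 0::2] = torch.sin(pos / 10000 ** (i / d_model))
        pe[:, 1::2] = torch.cos(pos / 10000 ** (i / d_model))
        self.register_buffer("pe", pe)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_blocks)
        self.head = nn.Linear(seq_len * d_model, j)       # fully connected output layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, seq_len, j)
        z = self.encoder(self.embed(x) + self.pe)         # Y = X (+) PE, then encoder blocks
        return self.head(z.flatten(1))                    # coefficients alpha^(t+1)
```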
In summary, the fluid flow reduced-order model presented herein consists of proper orthogonal decomposition and a neural network. In concrete applications the method is divided into two stages: the first stage builds and trains the ROM, and the second stage performs data prediction. Data prediction is carried out on the constructed and trained ROM, predicting the flow field data at the next time instant in a manner consistent with the training procedure.
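A sketch of the second stage under the same assumptions: the trained network is applied autoregressively, each prediction is appended to the input window, and the full-order field is recovered from the POD basis. Here f_nn is a dummy stand-in for the trained network, and all shapes are assumed.

```python
import numpy as np

def rollout(f_nn, window: np.ndarray, steps: int) -> np.ndarray:
    """Autoregressive prediction from the last k+1 coefficient vectors (shape (k+1, j))."""
    preds = []
    for _ in range(steps):
        alpha_next = f_nn(window)                        # coefficients at the next instant
        preds.append(alpha_next)
        window = np.vstack([window[1:], alpha_next])     # slide the input window forward
    return np.stack(preds)                               # (steps, j)

k, j = 10, 18
f_nn = lambda w: w[-1]                                   # dummy stand-in for the trained network
alpha_future = rollout(f_nn, np.zeros((k + 1, j)), steps=400)
# Full-order reconstruction of a predicted snapshot: X^(t) ~ sigma.T @ alpha^(t),
# where sigma holds the j POD basis functions from the first stage.
```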
To more clearly illustrate the feasibility and advancement of the invention, three test cases are used to demonstrate the capability of the fluid flow reduced-order model presented herein: two-dimensional flow past a cylinder, two-dimensional flow through a building complex, and three-dimensional flow around London South Bank University. In all three test cases, numerical experiments were performed using structured meshes or unstructured triangular meshes with sufficient resolution to ensure an accurate solution. Fluent provides the high-fidelity numerical solutions, and the POD basis functions generate the low-dimensional flow field snapshot data. The singular value decomposition in POD and the neural network construction are carried out with the Scikit-learn and PyTorch libraries, respectively. In this demonstration, we compare an LSTM-based ROM with our model. In addition to comparing different deep learning methods, we also use the root mean square error (RMSE) to analyze the prediction accuracy of the ROM, which is calculated as:
$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(\hat{y}_i - y_i\right)^2}$

where n is the number of nodes of the grid, $\hat{y}_i$ is the value predicted by the reduced-order model at node i, and $y_i$ is the corresponding numerical simulation solution.
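In NumPy, the RMSE above is essentially one line; a minimal sketch:

```python
import numpy as np

def rmse(y_pred: np.ndarray, y_true: np.ndarray) -> float:
    """Root mean square error over the n grid nodes of one snapshot."""
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```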
All experiments used the same computing equipment: an Intel Xeon E5-2678 v3 processor and an NVIDIA RTX 3090 graphics card.
Test case 1: two-dimensional flow past a cylinder
In the first numerical example, we simulate two-dimensional flow past a cylinder. Flow field snapshots of 2000 consecutive time steps (Δt = 0.005 s) were chosen as the training and testing datasets of our model; the first 80% of the data were used as training samples and the remaining 20% as test data to validate the model. The computational domain of the flow field is a rectangular region 1 m long and 0.5 m wide, the radius of the cylinder is 0.025 m, and the specific position is shown in fig. 6. Gas with density ρ = 1 kg/m³ flows into the rectangular region from the left side and out at the right side with a flow speed of 1 m/s. The Reynolds number of this problem is Re = 1000. The computational domain was divided into 32119 grid points using the Fluent software, with the grid refined around the cylinder, as shown in fig. 7. In this experiment we build a reduced-order model only for the velocity vector; the model is equally applicable to other variables.
To demonstrate the effectiveness of the method, we selected three different numbers of basis functions: 6, 12, and 18. Specifically, the network structure consists of 6 encoder blocks and one fully connected layer. The number of training epochs is 200 and the batch size is 20. The loss function is the mean square error, and the optimizer of the neural network is Adam. We normalized the data to the range [-1, 1]. The activation functions are all ReLU.
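Under the stated settings (200 epochs, batch size 20, mean square error loss, Adam, data normalized to [-1, 1]), a minimal PyTorch training-loop sketch; model, X_train, and y_train refer to the earlier sketches and are otherwise assumptions, not the patent's code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train(model: nn.Module, X_train, y_train, epochs: int = 200, batch_size: int = 20):
    """Minimal loop matching the stated hyperparameters: MSE loss, Adam optimizer."""
    data = TensorDataset(torch.as_tensor(X_train, dtype=torch.float32),
                         torch.as_tensor(y_train, dtype=torch.float32))
    loader = DataLoader(data, batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(model.parameters())
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()

# Inputs are normalized to [-1, 1] beforehand, e.g.
# x_norm = 2 * (x - x.min()) / (x.max() - x.min()) - 1
```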
Figs. 8 and 9 show the predictive power of the method. Fig. 8 shows the prediction results of the flow field at t = 2 s for ROMs using different numbers of basis functions. We can clearly observe that the ROM with 18 basis functions achieves the best performance, because more basis functions contain more flow field information and the flow field reconstruction errors caused by POD are smaller. Fig. 9 shows the RMSE between the high-fidelity numerical solution and the ROM solutions using different numbers of basis functions. It can be seen that the errors of the ROM remain stable throughout the extrapolation of 400 time steps, with no accumulation of error as the time step increases, and stay stable over the whole set of test samples. Figs. 8 and 9 show that our method can predict the flow field very well.
LSTM has been widely validated and applied as a classical ROM construction method. We therefore compared the different network architectures to demonstrate the advantages of our approach. As described above, the RNN has advantages in sequence modeling, but its internal structure is complicated, and processing the model input serially is inefficient. The self-attention structure in the Transformer network processes the model input in parallel and, compared with a CNN, computes global features. This makes our model faster to compute and better suited to modeling complex time series data.
In the comparison of the self-attention and recurrent architectures, all ROMs are based on 18 basis functions, and the input data size is the same for all models; the number of training epochs and the batch size are also the same. The results of the experiment are shown in figs. 10 and 11. Fig. 10 shows the prediction results of the two methods. We can observe that both LSTM and Transformer predict the fluid flow well overall, but the errors of the LSTM are more pronounced, concentrated particularly in the region of large velocity variation (fig. 10 (c)). Although the errors of the Transformer network are concentrated in the same region, they are smaller (fig. 10 (e)), which shows that the Transformer network can better capture the flow characteristics of the fluid and learn the variation law of eddies large and small. Fig. 11 plots the RMSE between the high-fidelity numerical solution and the predicted values of the two neural network models over 400 prediction time steps. The ROM with the Transformer structure has more stable performance and smaller error.
As can be seen from table 1, the method proposed herein effectively reduces the error of the model prediction, which is 23.5% lower than that of the LSTM. The global information perception brought by the self-attention module gives the model high prediction accuracy, though the parameter count of the model is larger. In terms of computation time, thanks to matrix-based parallel computation over the input flow field sequence, the prediction time of the model over 400 time steps is reduced by 65% compared with the LSTM; from an algorithmic point of view, this trades space for time. It can be seen in fig. 11 that the prediction error of the LSTM model is mainly concentrated in the region behind the cylinder where the flow velocity changes strongly and in the wake vortices. The self-attention module in the Transformer avoids the forgetting of time-series information in the LSTM by capturing the global correlation information across the time series of the flow field changes and assigning a larger attention weight to the flow field at the corresponding time instant, so the model markedly reduces the error in this region.
Table 1: comparison of average RMSE, number of parameters, and prediction time

(Table 1 is provided as an image in the original publication: average RMSE, number of parameters, and prediction time for the LSTM ROM and the Transformer ROM.)
Test case 2: two-dimensional flow through a building complex
Two-dimensional flow past a cylinder is a classic example in computational fluid dynamics; the related physical experiments and numerical simulations have been well studied, so it is often used to validate new models. However, flow past a two-dimensional cylinder is a simple scenario with strongly periodic flow, which is not enough to reflect the robustness and stability of our model. In the second example, therefore, two-dimensional flow through a building complex is simulated. The flow field computational domain is a rectangular region 80 m long and 40 m wide, modeled on part of a square in Chongqing. The grid of the building complex is shown in fig. 12.
Air with density ρ = 1 kg/m³ flows into the rectangular region from the left side and out at the right side with a flow velocity of 1 m/s. The computational domain was divided into 32980 grid points using the Fluent software, and a high-fidelity numerical solution was again obtained by numerical simulation. We selected flow field snapshots of 2000 consecutive time steps (Δt = 1 s) as the dataset of our model; the first 80% of the data were used as training samples and the remaining 20% to validate the model. A reduced-order model of the velocity vector is built, the data snapshots of the last 10 time steps in the training sample are used as the input of the ROM, and the test samples of the last 400 time steps in the dataset are predicted.
As the flow becomes more complex, the three numbers of basis functions are set to 18, 24, and 30 in this numerical example; the network structure is the same as in experiment one. Figs. 13 and 14 show the predictive power of our model. Fig. 13 shows the results predicted by our model under different numbers of basis functions at t = 400 s. It is observed that the ROM using 30 basis functions performs best, because more basis functions contain more flow field information and the error caused by POD is smaller. Fig. 14 shows the RMSE values between the high-fidelity numerical solution and the ROM predictions. Over the 400 time steps, the ROM errors under all basis function counts remain stable, with no obvious accumulation of error as the time step increases, which shows that the performance of the model is very stable and that it can predict the flow field changes in a complex two-dimensional flow scene well.
In the comparative experiment with the LSTM network model, all ROMs were based on 30 basis functions, and the inputs to the network models, the epoch count, and the batch size were the same as in experiment one. The results are shown in figs. 15 and 16. Fig. 15 shows the numerical simulation results and the ROM predictions for the two network structures. It can be seen that both LSTM and Transformer predict the flow field well overall, but the LSTM errors are more pronounced (fig. 15 (c)), concentrated in areas of intense velocity variation such as the vortices of varying size and shape behind the building complex. Compared with the LSTM, the error of the Transformer-based ROM is markedly reduced and mainly distributed at the centers of a few scattered vortices; since the flow in those areas is irregular and the flow velocity changes very sharply, it is difficult for the model to learn that part of the flow variation. Because the Transformer captures the global correlation information among the flow sequences and assigns corresponding attention weights to the flow fields at different time instants, the model obtains more complete flow information. The LSTM forgets information when processing time series data and therefore, in contrast to the Transformer model, cannot learn the complete flow variation characteristics. Fig. 16 shows the RMSE between the high-fidelity numerical solution and the predictions of the two neural network models over 400 time steps; overall, the ROM with the Transformer structure performs better, with smaller and more stable error.
Table 2 gives some model statistics for the different ROMs on the test set, including the average RMSE, the number of model parameters, and the prediction time. Compared with the LSTM ROM, the Transformer ROM we propose reduces the error by 41.2%. For the prediction time over the 400-time-step flow field, the Transformer ROM is 64% faster than the LSTM ROM. The model needs to compute the global correlation information across the time series and assign different attention weights, so its parameter count is larger, but the computation time is reduced by matrixed, parallel computation over the whole input flow field sequence, which also benefits from the development of modern computer hardware.
Table 2: parameter comparison of different methods

(Table 2 is provided as an image in the original publication: average RMSE, number of parameters, and prediction time for the LSTM ROM and the Transformer ROM on test case 2.)
Compared with experiment one, the number of model parameters in experiment two is slightly larger because more basis functions are selected. The prediction time also increases slightly, owing to a small difference in model loading time.
Test case 3: three-dimensional flow around London South Bank University

Two-dimensional flow is far from practical three-dimensional application scenarios. To verify the performance of the ROM in a real scenario, we applied it to a three-dimensional city model of London South Bank University (LSBU) and predicted the air flow around the model. The 3D model of LSBU was constructed according to the true scale and size of LSBU by the Earth Science and Engineering team at Imperial College London. The three-dimensional model is shown in fig. 17. The computational domain of the numerical simulation of the three-dimensional model is 1000 m long, 1000 m wide, and 250 m high. There are 60 buildings of different heights and sizes in the 3D model; the top of the highest building is 81 m above the ground and that of the lowest 6 m.
Due to the complexity of the LSBU model, it is difficult to verify grid convergence. The mesh of the model therefore uses adaptive meshing to optimize the initial mesh: numerical simulation is carried out on the initial mesh with large eddy simulation and a mesh adaptivity method, and the mesh is optimized; once the calculation reaches a quasi-steady state, the mesh is fixed. The edge length of the initial mesh is set to a minimum of 0.3 m and a maximum of 50 m, and the number of nodes in the mesh is capped at 1 million. The three-dimensional unstructured mesh of the LSBU model is shown in fig. 18, and the number of nodes in the three-dimensional mesh is 306830. The software used for the high-fidelity numerical simulation of the three-dimensional model is Fluidity, also developed at Imperial College London. To ensure the reliability of the numerical simulation, its results were compared with wind tunnel experiments; the experimental information and comparison results can be found in the references (Xiao D, Heaney C E, Mottet L, et al. A reduced order model for turbulent flows in the urban environment using machine learning [J]. Building and Environment, 2019, 148: 323-337; Song J, Fan S, Lin W, et al. Natural ventilation in cities: the implications of fluid mechanics [J]. Building Research & Information, 2018, 46 (8): 809-828).
The ROM needs high-precision numerical simulation results as training and testing datasets, so numerical simulation is carried out with the Fluidity software on the unstructured mesh obtained after adaptive optimization (fig. 18), with large eddy simulation as the turbulence model. In fig. 17, the plane of the three-dimensional model in the negative x-axis direction is set as the inlet boundary of the flow field, and the symmetric plane opposite it as the outlet boundary. To reproduce the atmospheric boundary layer more accurately, the inlet boundary conditions are determined using a synthetic eddy method. The fluid velocity at the inlet boundary, the Reynolds stress u′u′, and the associated length scale L are prescribed as functions of the height z above the ground (the profiles are given as formula images in the original publication). The outlet boundary is set to a zero-pressure outlet. The bottom of the three-dimensional model and the building surfaces are set to zero-velocity fixed-wall boundaries, and since the far-field flow of the model is not of concern, the sides and top of the model are set to full-slip boundaries. The fluid is air, with density ρ = 1.225 kg/m³ and viscosity μ = 1.7894 × 10⁻⁵ kg·m⁻¹·s⁻¹.
After the numerical simulation reached a quasi-steady state, the simulation was continued, and flow field snapshots of 8000 consecutive time steps were taken as the training and testing datasets of the ROM, with a time step size of 3.8883 s. The dataset was divided into a training set and a test set in a 7:1 ratio, where the first 7000 steps (steps 1 to 7000) form the training set and the last 1000 steps (steps 7001 to 8000) form the test set for the extrapolated prediction of the model.
Since the application scenario of the ROM is more complex, changing from 2D to 3D, some hyperparameters of the POD-Transformer model and of the POD-LSTM comparison model need to be modified. To avoid excessive loss of information, both project the 306830-dimensional high-dimensional flow field snapshots into a 512-dimensional low-dimensional subspace. Due to the complexity of the flow laws, the length L of the model's input flow field sequence is increased, with L = 30 in this experiment.
The three-dimensional flow field is inconvenient to display directly, so contour plots of horizontal slices at heights of 10 m, 25 m, and 40 m are shown. The 10 m horizontal slice is close to the ground, contains almost all the buildings, and the air speed is very low; the 25 m slice contains only a few taller buildings, with moderate air speed; the 40 m slice contains only two very tall buildings, and the wind speed is very high, reaching 10 m/s. Fig. 19 shows the prediction results of the two reduced-order models for one thousand time steps of the air flow in the LSBU scenario. The RMSE values between the prediction results and the numerical simulation results are shown in fig. 20.
As can be seen from the error curves in fig. 20, the overall error of the LSTM ROM is larger than that of the Transformer ROM. Because the flow is so complex, the RMSE curves of both models fluctuate strongly, but remain stable overall, while the error curve of the LSTM tends to grow with the prediction time step. The flow field contours predicted at the 1000th time step in fig. 19 reveal the source of error for both models. A detailed comparison shows that the Transformer handles flow details much better than the LSTM. The flow field predicted by the LSTM has large-scale features that are substantially consistent with the numerical simulation, but very few small-scale features, as seen in fig. 19 in the flow over the tops of the two tallest buildings at a height of 40 m. This indicates that the LSTM does not fully capture the dynamics of the vortices and smooths them out. In contrast, the flow field predicted by the Transformer reproduces most of the small-scale features, showing that the Transformer accurately captures the dynamic characteristics of the vortices, which further improves the accuracy of the model.
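The per-step error curves of fig. 20 can be reproduced from predicted and simulated snapshots with a sketch of this kind (the function and array names are assumptions):

import numpy as np

def rmse_per_step(pred, truth):
    # RMSE over all grid nodes at each prediction step;
    # pred and truth have shape (n_steps, n_nodes).
    return np.sqrt(np.mean((pred - truth) ** 2, axis=1))

# Example: errors = rmse_per_step(transformer_pred, test)
# yields one value per time step, i.e. one point per curve in fig. 20.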
The flow through the LSBU site is very complex, so it is difficult to maintain a high prediction accuracy. In Table 3, the average RMSE of the Transformer over the 1000-step prediction is 1.27, an average prediction error 14.8% lower than that of the LSTM. Table 3 also lists the computation times of the two ROMs for predicting 1000 time steps of the flow field: the Transformer prediction takes 9.7 seconds, 79.6% faster than the LSTM. For comparison, simulating 1000 steps with the Fluidity software on the same machine requires 77058 seconds, so the Transformer prediction is 8111.4 times faster than the numerical simulation.
Table 3: model parameters of different models
Figure BDA0003808495930000231
In summary, the model proposed by the present invention extracts low-dimensional features from a high-fidelity numerical solution using POD and models these low-dimensional features as a time series with a Transformer neural network. To verify the effectiveness of the model, experiments were carried out on three flow scenarios of different complexity, and the two network models were compared. The results show that the proposed fluid flow reduced-order model has higher accuracy, faster prediction speed, and better stability, and can satisfy most real-time application scenarios.
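To make the workflow concrete, the following is a minimal end-to-end sketch of a POD + Transformer reduced-order model in Python with PyTorch. The array sizes are reduced stand-ins, and the network details (heads, layers, the omission of positional encoding and of a training loop) are illustrative assumptions rather than the exact configuration of the invention:

import numpy as np
import torch
import torch.nn as nn

# --- POD: extract low-dimensional features from the snapshots ---
snapshots = np.random.randn(1000, 4096).astype(np.float32)  # stand-in data
mean = snapshots.mean(axis=0)
centered = snapshots - mean
_, _, Vt = np.linalg.svd(centered, full_matrices=False)  # thin SVD
basis = Vt[:512]                # basis functions sigma, shape (512, 4096)
coeffs = centered @ basis.T     # low-dimensional features, shape (1000, 512)

# --- Transformer regression on the coefficient time series ---
class CoeffTransformer(nn.Module):
    def __init__(self, dim=512, nhead=8, layers=2):
        super().__init__()
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.head = nn.Linear(dim, dim)   # fully connected output layer

    def forward(self, seq):               # seq: (batch, L, dim)
        h = self.encoder(seq)
        return self.head(h[:, -1])        # coefficients at the next time step

model = CoeffTransformer()
window = torch.from_numpy(coeffs[-30:]).unsqueeze(0)  # last L = 30 steps
with torch.no_grad():
    next_coeffs = model(window)                       # shape (1, 512)

# --- Reconstruct the predicted full flow field ---
pred_field = next_coeffs.numpy() @ basis + mean       # shape (1, 4096)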
An embodiment of the present invention further provides an apparatus for establishing a fluid flow reduced-order model. As shown in fig. 21, the apparatus 200 includes:
a data set construction unit 201 configured to construct, based on flow field snapshots X^(i) at consecutive time steps, a data set D for training a neural network model; wherein X^(i) represents the flow field data at the ith time, and the flow field snapshot data set D is expressed as:
D = {X^(1), X^(2), …, X^(T)}
X^(t) = [x_1^(t), x_2^(t), …, x_n^(t)]^T
wherein X^(t) represents the snapshot of the flow field in the data set at time t, X^(T) represents the last flow field snapshot in the data set, x_n^(t) represents the value at the nth node of the flow field, the subscript n represents the number of grid points in the flow field, and the superscript T on X^(T) represents the number of time steps in the data set;
a basis function acquisition unit 202 configured to obtain basis functions through singular value decomposition based on a flow field snapshot matrix M, wherein the flow field snapshot matrix M consists of m flow field snapshots sampled from the flow field snapshot data set D; and
a model training unit 203 configured to train the neural network model with the flow field snapshot data set D on the basis of the basis functions, to obtain the fluid flow reduced-order model.
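Purely as a hedged illustration, the three units of apparatus 200 can be mirrored by a small Python class; every name below is an assumption for exposition, not part of the claimed apparatus:

import numpy as np

class ReducedOrderModelBuilder:
    # Illustrative counterpart of units 201-203 of apparatus 200.

    def build_dataset(self, snapshots):
        # Unit 201: flow field snapshots at consecutive time steps form D.
        return np.asarray(snapshots)              # shape (T, n)

    def basis_functions(self, D, m=100, j=16):
        # Unit 202: SVD of a matrix of m sampled snapshots yields basis functions.
        M = D[np.linspace(0, len(D) - 1, m, dtype=int)]
        mean = M.mean(axis=0)
        _, _, Vt = np.linalg.svd(M - mean, full_matrices=False)
        return Vt[:j], mean                       # sigma: shape (j, n)

    def train_model(self, D, sigma, mean, fit_fn):
        # Unit 203: train the neural network on the projected coefficients;
        # fit_fn is a user-supplied training routine (assumption).
        coeffs = (D - mean) @ sigma.T             # shape (T, j)
        return fit_fn(coeffs)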
It should be noted that the modules described in the embodiments of the present invention may be implemented by software or by hardware, and the described modules may also be disposed in a processor. The names of these modules do not, in some cases, constitute a limitation of the modules themselves.
The apparatus for establishing a fluid flow reduced-order model provided by this embodiment of the invention belongs to the same technical concept as the method described above, and its technical effects are substantially the same, so they are not repeated here.
The embodiment of the invention also provides a system for establishing the reduced-order fluid flow model, which comprises:
a memory for storing a computer program;
a processor for executing the computer program to implement the method of any of the embodiments of the invention.
Embodiments of the present invention also provide a non-transitory computer readable medium storing instructions that, when executed by a processor, perform a method according to any of the embodiments of the present invention.
Moreover, although exemplary embodiments have been described herein, the scope thereof includes any and all embodiments based on the invention with equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations or alterations. The elements of the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application, which examples are to be construed as non-exclusive. It is intended, therefore, that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.
The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more versions thereof) may be used in combination with each other. For example, other embodiments may be used by those of ordinary skill in the art upon reading the above description. In addition, in the above-described embodiments, various features may be grouped together to streamline the disclosure. This should not be interpreted as an intention that features of an unclaimed invention be essential to any of the claims. Rather, inventive subject matter may lie in less than all features of a particular inventive embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that the embodiments can be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (10)

1. A method for establishing a fluid flow reduced-order model, wherein the fluid flow reduced-order model comprises a proper orthogonal decomposition (POD) model and a neural network model, and the method comprises the following steps:
constructing, based on flow field snapshots X^(i) at consecutive time steps, a data set D for training the neural network model; wherein X^(i) represents the flow field data at the ith time, and the flow field snapshot data set D is expressed as:
D = {X^(1), X^(2), …, X^(T)}
X^(t) = [x_1^(t), x_2^(t), …, x_n^(t)]^T
wherein X^(t) represents the snapshot of the flow field in the data set at time t, X^(T) represents the last flow field snapshot in the data set, x_n^(t) represents the value at the nth node of the flow field, the subscript n represents the number of grid points in the flow field, and the superscript T on X^(T) represents the number of time steps in the data set;
obtaining basis functions through singular value decomposition based on a flow field snapshot matrix M, wherein the flow field snapshot matrix M consists of m flow field snapshots sampled from the flow field snapshot data set D; and
training the neural network model with the flow field snapshot data set D on the basis of the basis functions, to obtain the fluid flow reduced-order model.
2. The method according to claim 1, wherein the obtaining the basis functions σ through singular value decomposition based on the flow field snapshot matrix M comprises:
subtracting the mean vector X̄ from the flow field snapshot matrix M to obtain the zero-mean flow field snapshot matrix M̃:
M̃ = M − X̄
wherein M ∈ R^(n×m) and X̄ is the mean vector of each row;
performing singular value decomposition by the following formula:
M̃ = UΣV^T
wherein U ∈ R^(n×n) is the eigenvector matrix of M̃M̃^T, V ∈ R^(m×m) is the eigenvector matrix of M̃^T M̃, Σ ∈ R^(n×m) is a diagonal matrix whose diagonal elements are the singular values of M̃ and represent the energy contained in the basis functions, and the superscript T denotes matrix transposition;
u_i = M̃ v_i / √λ_i
wherein v_i is the ith column vector of the matrix V, u_i is the ith column vector of the matrix U, and λ_i is the ith eigenvalue of the matrix M̃^T M̃; and
sorting the eigenvalues λ_i from largest to smallest and selecting the first j eigenvectors in order, to obtain the basis functions σ:
σ = {u_1, u_2, …, u_j}
wherein σ ∈ R^(j×n) is a set of reduced basis functions through which most of the information of the flow field M̃ is retained, the retained energy fraction being:
∑_{i=1}^{j} λ_i / ∑_{i=1}^{m} λ_i
3. The method according to claim 2, wherein, after the basis functions are generated, the low-dimensional flow field feature of the flow field snapshot X^(t) at the tth time is expressed as:
α^(t) = σX^(t)
wherein α^(t) ∈ R^(j×1), X^(t) ∈ R^(n×1), and σ is the set of basis functions; and the low-dimensional flow field feature is reconstructed back to the original flow field by:
X^(t) = σ^T α^(t)
4. The method according to claim 3, wherein converting the flow field snapshot data set D into a low-dimensional flow field snapshot data set α according to the basis functions and training the neural network model to obtain the fluid flow reduced-order model comprises:
describing the flow field prediction problem by the following equation:
α^(t+1) = f(α^(t−k), …, α^(t−2), α^(t−1), α^(t))
wherein α^(t) represents the low-dimensional flow field feature of the flow field snapshot at time t, k controls the time span of the input of the reduced-order model, the flow field segments at the preceding k+1 times are input to the function, and f is a regression model of the low-dimensional time-series features that predicts the data at the next time from the low-dimensional historical data.
5. The method according to claim 4, wherein the neural network model introduces a residual structure to assist training and convergence of the neural network, and the data at the preceding k+1 times are organized as two-dimensional data whose rows represent the time instants and whose columns represent the corresponding modes.
6. The method according to claim 1, wherein the neural network model produces its prediction through a fully connected layer, the output being the coefficients at the next time instant.
7. The method of claim 1, wherein the flow field data may include one of pressure, temperature, and velocity.
8. An apparatus for establishing a fluid flow reduced-order model, the apparatus comprising:
a data set construction unit configured to construct, based on flow field snapshots X^(i) at consecutive time steps, a data set D for training a neural network model; wherein X^(i) represents the flow field data at the ith time, and the flow field snapshot data set D is expressed as:
D = {X^(1), X^(2), …, X^(T)}
X^(t) = [x_1^(t), x_2^(t), …, x_n^(t)]^T
wherein X^(t) represents the snapshot of the flow field in the data set at time t, X^(T) represents the last flow field snapshot in the data set, x_n^(t) represents the value at the nth node of the flow field, the subscript n represents the number of grid points in the flow field, and the superscript T on X^(T) represents the number of time steps in the data set;
a basis function acquisition unit configured to obtain basis functions through singular value decomposition based on a flow field snapshot matrix M, wherein the flow field snapshot matrix M consists of m flow field snapshots sampled from the flow field snapshot data set D; and
a model training unit configured to train the neural network model with the flow field snapshot data set D on the basis of the basis functions, to obtain the fluid flow reduced-order model.
9. A system for establishing a fluid flow reduced-order model, the system comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the method of any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having stored thereon instructions that, when executed by a processor, perform the method of any one of claims 1-7.
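For exposition only, a minimal NumPy sketch of the decomposition and projection defined in claims 2 and 3 follows; the matrix sizes are arbitrary assumptions:

import numpy as np

n, m, j = 2000, 100, 16        # illustrative grid size, snapshot count, rank
M = np.random.randn(n, m)      # snapshot matrix, one snapshot per column

X_bar = M.mean(axis=1, keepdims=True)
M_tilde = M - X_bar            # zero-mean snapshot matrix (claim 2)

U, S, Vt = np.linalg.svd(M_tilde, full_matrices=False)
sigma = U[:, :j].T             # first j left singular vectors as basis, (j, n)

x = M_tilde[:, [0]]            # one zero-mean snapshot, shape (n, 1)
alpha = sigma @ x              # projection alpha^(t) = sigma X^(t)   (claim 3)
x_rec = sigma.T @ alpha        # reconstruction X^(t) = sigma^T alpha^(t)
print(np.linalg.norm(x - x_rec) / np.linalg.norm(x))  # relative truncation error

Likewise, the regression f of claim 4 is applied autoregressively at prediction time; the following sketch assumes f maps a window of k+1 coefficient vectors to the next vector:

def rollout(f, history, steps, k):
    # Autoregressive extrapolation: feed the last k+1 coefficient vectors
    # into f, append the prediction, and slide the window forward.
    window = list(history[-(k + 1):])
    out = []
    for _ in range(steps):
        nxt = f(np.stack(window))   # alpha^(t+1) = f(alpha^(t-k), ..., alpha^(t))
        out.append(nxt)
        window = window[1:] + [nxt]
    return np.stack(out)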
CN202211005481.6A 2022-08-22 2022-08-22 Method, device and system for establishing fluid flow reduced-order model and storage medium Pending CN115293050A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211005481.6A CN115293050A (en) 2022-08-22 2022-08-22 Method, device and system for establishing fluid flow reduced-order model and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211005481.6A CN115293050A (en) 2022-08-22 2022-08-22 Method, device and system for establishing fluid flow reduced-order model and storage medium

Publications (1)

Publication Number Publication Date
CN115293050A true CN115293050A (en) 2022-11-04

Family

ID=83829167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211005481.6A Pending CN115293050A (en) 2022-08-22 2022-08-22 Method, device and system for establishing fluid flow reduced-order model and storage medium

Country Status (1)

Country Link
CN (1) CN115293050A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115795683A (en) * 2022-12-08 2023-03-14 四川大学 Wing profile optimization method fusing CNN and Swin transform network
CN115795683B (en) * 2022-12-08 2023-07-21 四川大学 Airfoil optimization method integrating CNN and Swin converter network
CN115828797A (en) * 2023-02-15 2023-03-21 中国船舶集团有限公司第七一九研究所 Submarine hydrodynamic load rapid forecasting method based on reduced order model
CN116070471A (en) * 2023-04-06 2023-05-05 浙江远算科技有限公司 Wind driven generator simulation acceleration method and system based on reduced order decomposition processing

Similar Documents

Publication Publication Date Title
Xu et al. Multi-level convolutional autoencoder networks for parametric prediction of spatio-temporal dynamics
CN115293050A (en) Method, device and system for establishing fluid flow reduced-order model and storage medium
Geneva et al. Multi-fidelity generative deep learning turbulent flows
CN114724012B (en) Tropical unstable wave early warning method and device based on space-time cross-scale attention fusion
Whalen et al. Toward reusable surrogate models: Graph-based transfer learning on trusses
Wu et al. A non-intrusive reduced order model with transformer neural network and its application
Hutchinson et al. Vector-valued Gaussian processes on Riemannian manifolds via gauge independent projected kernels
CN116703980A (en) Target tracking method and system based on pyramid pooling transducer backbone network
Panda et al. Evaluation of machine learning algorithms for predictive Reynolds stress transport modeling
Du et al. Super Resolution Generative Adversarial Networks for Multi-Fidelity Pressure Distribution Prediction
CN117076931B (en) Time sequence data prediction method and system based on conditional diffusion model
CN112465929B (en) Image generation method based on improved graph convolution network
CN117197632A (en) Transformer-based electron microscope pollen image target detection method
Maruani et al. VoroMesh: Learning Watertight Surface Meshes with Voronoi Diagrams
CN101567838A (en) Automatic correcting method of function chain neural network
CN114638048A (en) Three-dimensional spray pipe flow field rapid prediction and sensitivity parameter analysis method and device
CN114548400A (en) Rapid flexible full-pure embedded neural network wide area optimization training method
Brittain et al. Multifidelity Aerodynamic Flow Field Prediction Using Conditional Adversarial Networks
CN113011495A (en) GTN-based multivariate time series classification model and construction method thereof
Chu et al. Research on capsule network optimization structure by variable route planning
WO2024071377A1 (en) Information processing device, information processing method, and program
Li et al. Wind turbine wake prediction modelling based on transformer-mixed conditional generative adversarial network
CN117132006B (en) Energy consumption prediction method and system based on energy management system
CN114168320B (en) End-to-end edge intelligent model searching method and system based on implicit spatial mapping
He et al. Model order reduction for parameterized electromagnetic problems using matrix decomposition and deep neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination