CN115438567A - Data-driven dynamic boundary flow field reconstruction method and device and storage medium - Google Patents


Info

Publication number
CN115438567A
CN115438567A
Authority
CN
China
Prior art keywords
data
matrix
flow field
fluid
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210897466.0A
Other languages
Chinese (zh)
Inventor
徐笳森
张扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202210897466.0A priority Critical patent/CN115438567A/en
Publication of CN115438567A publication Critical patent/CN115438567A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00Details relating to CAD techniques
    • G06F2111/10Numerical modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2113/00Details relating to the application field
    • G06F2113/08Fluids
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/14Force analysis or force optimisation, e.g. static or dynamic forces

Abstract

The invention provides a data-driven dynamic boundary flow field reconstruction method, device and storage medium, wherein the method comprises the following steps: step 1, simulating the motion process of an object to obtain numerical simulation data; step 2, dividing the data set; step 3, constructing the input and output matrices; step 4, preprocessing the data; step 5, constructing a deep learning network; and step 6, carrying out model training and prediction. While ensuring the accuracy of the prediction result, the method reduces the calculation time by at least an order of magnitude compared with the traditional CFD method, greatly reducing computation cost while keeping the result reliable.

Description

Data-driven dynamic boundary flow field reconstruction method and device and storage medium
Technical Field
The invention belongs to the interdisciplinary field of computational fluid dynamics and artificial intelligence, and particularly relates to a data-driven dynamic boundary flow field reconstruction method, device and storage medium.
Background
Dynamic boundary flow problems, such as the flapping of insect wings or the swimming of fish-like bodies, are a hotspot of computational fluid dynamics research. Current solutions are mainly based on traditional CFD methods (such as the dynamic grid method) or newer numerical methods (such as the immersed boundary method, the lattice Boltzmann method, and smoothed particle hydrodynamics). These methods are all based on first principles; although they can solve dynamic boundary flow problems accurately, they suffer from long calculation times and low efficiency, and cannot meet the requirement of quasi-real-time calculation. In recent years, thanks to the rapid development of artificial intelligence and machine learning, constructing data-driven prediction models that significantly reduce computation time while preserving accuracy has become a popular means of replacing time-consuming first-principles flow models. However, data-driven rapid flow field reconstruction models currently target mainly the unsteady flow around static objects, and little research has addressed dynamic boundary flow problems, which often have more practical application scenarios (such as bionic micro unmanned aerial vehicles and bionic underwater vehicles). It is therefore urgent to construct corresponding data-driven models for such flow problems, so as to rapidly and accurately reconstruct and predict the relevant flow field information.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to overcome the defect of low efficiency of the traditional CFD method in solving the problem of the dynamic boundary flow, provides a convolutional neural network prediction method based on basnet, and can quickly and accurately realize the reconstruction of the dynamic boundary flow field. Input and output data of the neural network are obtained through a high-fidelity immersed boundary solver, a mapping relation between input and output is established through a convolutional neural network based on basnet, and corresponding reconstruction of the whole velocity field can be rapidly and accurately achieved through the trained neural network according to pressure snapshots of the surface of the moving object under any time sequence and shape and position information of the moving object.
The invention particularly provides a data-driven method for reconstructing a dynamic boundary flow field. First, a dynamic boundary unsteady flow field is simulated with an immersed boundary solver to obtain time-sequence flow field snapshots at different Reynolds numbers. The surface pressure, Reynolds number, and shape and position information of the moving object corresponding to the snapshots are then taken as input, and the whole velocity field is taken as output. A mapping between input and output is established by constructing a convolutional neural network based on basnet. The neural network is trained with a gradient descent algorithm and the hyper-parameters are tuned, so as to minimize the loss function and obtain a satisfactory training result. Finally, the trained model is used to rapidly and accurately reconstruct the dynamic boundary flow field at other Reynolds numbers. The method specifically comprises the following steps:
Step 1: simulate the motion process of an object to obtain numerical simulation data. For the dynamic boundary flow problem, an immersed boundary method suited to the problem is adopted, and an appropriate grid scale is selected through grid-independence verification. The open-source immersed boundary solver IB2d for two-dimensional problems is adopted to simulate the motion process of the object, and the numerical simulation data are calculated, comprising the real object surface pressure distribution, the object shape and position information, and the velocity field data of the whole fluid domain corresponding to the moving object at different Reynolds numbers, over a time sequence composed of K (8 ≤ K ≤ 15) periods. The immersed boundary solver IB2d simulates the motion process of the object using the Navier-Stokes equations as follows:
ρ(∂u(x, t)/∂t + (u·∇)u) = −∇p(x, t) + μΔu(x, t) + f(x, t)  (1)

∇·u(x, t) = 0  (2)
wherein the position vector x = (x, y) and the time t are the independent variables, x and y being respectively the abscissa and ordinate of the calculation grid; u(x, t) = (u(x, t), v(x, t)) is the fluid velocity, u(x, t) being the transverse velocity component and v(x, t) the longitudinal velocity component on the calculation grid; p(x, t) is the pressure, and f(x, t) is the force exerted by the moving object on the fluid at the immersed boundary; ρ denotes the density of the fluid, and μ denotes the dynamic viscosity; ∇ denotes the Hamiltonian (gradient) operator, Δ denotes the Laplace operator, and ∂u(x, t)/∂t denotes the first partial derivative of the fluid velocity u(x, t) with respect to time t;
the interaction equation between the moving object and the fluid is:
f(x,t)=∫F(r,t)δ(x-X(r,t))dr (3)
∂X(r, t)/∂t = U(X(r, t), t) = ∫ u(x, t) δ(x − X(r, t)) dx  (4)
wherein r is the Lagrange position, X(r, t) is the Cartesian coordinate at time t of the marker point located at r on the moving object, and dr denotes the differential of r; equation (4) sets the boundary velocity U(X(r, t), t) equal to the local fluid velocity, so as to satisfy the no-slip condition on the immersed structure; F(r, t) is the force per unit area exerted on the fluid by elastic deformation in the immersed structure, expressed as:
F(r,t)=F(X(r,t),t) (5)
the function δ in equations (3) and (4) is as follows:
δ_h(x) = (1/h²) φ(x/h) φ(y/h)  (6)
wherein φ is the kernel function embedded in δ_h(x), expressed as:
φ(r) = (3 − 2|r| + √(1 + 4|r| − 4r²)) / 8,  0 ≤ |r| < 1;
φ(r) = (5 − 2|r| − √(−7 + 12|r| − 4r²)) / 8,  1 ≤ |r| < 2;
φ(r) = 0,  |r| ≥ 2.  (7)
wherein h is the fluid grid dimension.
In step 1, the insect wing is simplified as a rigid line segment; a simplified schematic diagram of the insect wing and of its motion process is drawn, and the pressure distribution at the sampling points on the insect wing surface and the velocity field of the computational flow domain in which the insect wing is located can be obtained. The data are then processed; the result of steps 2-6 is a well-trained neural network, through which the velocity field of the flow domain where the insect wing is located, at any Reynolds number, can be rapidly and accurately reconstructed from the pressures at the wing surface sampling points, providing real-time surrounding flow field feedback for a flapping-wing machine imitating insect wings.
Step 2: the data set is partitioned. And taking 30% of numerical simulation data as a test set for evaluating the prediction performance of the trained model, and dividing the rest data into a training set and a verification set of the neural network according to the proportion of 80% and 20% for training the neural network model.
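The split in step 2 can be sketched as follows; this is a minimal, illustrative Python rendition (the function name and the seed handling are not specified in the patent):

```python
import random

def split_dataset(n_samples, seed=0):
    # Step 2 (illustrative sketch): 30% of the samples form the test set;
    # the remainder is split 80% / 20% into training and validation sets.
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    n_test = round(0.3 * n_samples)
    test, rest = idx[:n_test], idx[n_test:]
    n_train = round(0.8 * len(rest))
    return rest[:n_train], rest[n_train:], test  # train, validation, test
```

For example, 100 samples split into 56 training, 14 validation, and 30 test samples.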
Step 3: construct the input and output matrices. In order to simultaneously capture the temporal and spatial information of the dynamic boundary flow field, the invention designs a novel input matrix. The object surface pressure, the Reynolds number, and the geometry and position information of the moving object over a given time sequence are stored into three two-dimensional matrices of the same size, which are superposed to form the input matrix X = [G^(9), P^(9), Re^(9)] of the neural network. G^(9), P^(9), Re^(9) are respectively composed of the moving object geometry and position, the object surface pressure distribution, and the Reynolds number at S (S = x², where x ≥ 3 is an integer; here S = 9) consecutive instants (a certain instant T is taken as the first instant, and one instant is then taken every T/120, i.e. T, T + T/120, T + 2T/120, …, T + 8T/120). The velocity field of the entire flow domain at the next instant (i.e. T + 9T/120) is stored in a two-dimensional matrix, which serves as the output matrix.
Step 4: carry out data preprocessing. The input matrix and the output matrix are normalized.
Step 5: construct the deep learning network. A prediction model for dynamic boundary flow field reconstruction is constructed using a basnet convolutional neural network. First, the input matrix and the output matrix are converted into images serving respectively as the input image and the output image. The features of the input image are then extracted by encoding, and high-level semantic features capturing local information are obtained with a pooling method. To further acquire global information, a transition layer is added between encoding and decoding. The subsequent decoding part is responsible for gradually restoring and upsampling the high-level semantic information, so as to progressively reconstruct the velocity field information of the whole flow domain, which is finally resized to the output image.
Step 6: carry out model training and prediction. The values of epochs (one epoch means that all training data have been passed through the model once) and of the batch size (the number of samples per batch into which the training set is divided) are set, and the constructed flow field reconstruction model is trained on the training samples. For the hyper-parameters, the activation function is optimized to obtain the optimal deep neural network structure. The flow field in the test set is predicted with the optimal neural network obtained by training, and the prediction performance of the model is evaluated.
Step 2 comprises: using a proportion X_1 of the numerical simulation data as the test set, and dividing the remaining data into X_2 as the training set and X_3 as the verification set.
Step 3 comprises: storing the real object surface pressure distribution, the Reynolds number, and the object shape and position information over a time sequence into three two-dimensional matrices of the same size, which are superposed to form the input matrix X = [G^(9), P^(9), Re^(9)] of the deep learning network, wherein the matrices G^(9), P^(9), Re^(9) are respectively composed of the object shape and position information, the real object surface pressure distribution, and the Reynolds number at S consecutive instants; and storing the velocity field of the whole flow domain at the next instant in a two-dimensional matrix, which is the output matrix.
In step 3, the specific structures of the matrices G^(9), P^(9), Re^(9) are respectively:
G^(9) = [G_0 G_1 G_2; G_3 G_4 G_5; G_6 G_7 G_8]

P^(9) = [P_0 P_1 P_2; P_3 P_4 P_5; P_6 P_7 P_8]

Re^(9) = [Re_0 Re_1 Re_2; Re_3 Re_4 Re_5; Re_6 Re_7 Re_8]
In the interval [t, t + 8δt], where δt denotes the time interval and t is an arbitrary instant, G_i (i = 0, 1, …, 8) is a binary matrix of the object shape and position at time t + iδt, in which the grid cells occupied by the object are 1 and the remaining fluid region is 0; P_i is a diagonal matrix formed from the pressure snapshot of the sampling points on the surface of the moving object at time t + iδt, and the number N_i of surface pressure sampling points equals both the number of rows and the number of columns of P_i; Re_i denotes the Reynolds number of the flow domain at time t + iδt.
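Assuming the nine per-instant matrices are tiled row-major into a 3 × 3 block layout (consistent with S = x² and x = 3; the exact layout is an assumption, and all function names below are illustrative), the construction of a diagonal pressure matrix P_i and of one superposed channel such as G^(9) can be sketched in pure Python:

```python
def pressure_diag(p):
    # P_i: diagonal matrix whose diagonal holds the N_i-point pressure
    # snapshot, so P_i has N_i rows and N_i columns.
    n = len(p)
    return [[p[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

def tile3x3(blocks):
    # Tile nine equal-size 2-D matrices (instants t, t+dt, ..., t+8*dt)
    # row-major into one large matrix, e.g. G^(9) from G_0 ... G_8.
    assert len(blocks) == 9
    out = []
    for br in range(3):
        trio = blocks[3 * br:3 * br + 3]
        for r in range(len(trio[0])):
            out.append(trio[0][r] + trio[1][r] + trio[2][r])
    return out
```

Stacking the three tiled matrices G^(9), P^(9), Re^(9) as channels then yields the network input X.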
Step 4 comprises the following steps:
the input matrix is preprocessed using the following formula:
X̄_ji = (X_ji − X_min) / (X_max − X_min)
wherein X_ji is the pressure value of the i-th pressure sampling point on the surface of the moving object at the j-th instant, 0 ≤ j ≤ K·T; X_min is the minimum value of the surface pressure of the moving object in each sample, and X_max is the maximum value of the surface pressure of the moving object in each sample;
and X̄_ji is the normalized pressure value of the i-th pressure sampling point on the surface of the moving object at the j-th instant.
the output matrix is preprocessed by the following formula:
Ȳ_i = Y_i / V
wherein Y_i is the flow velocity at the i-th grid point in the computational flow domain, 1 ≤ i ≤ H, with H the grid dimension; V is the characteristic velocity of the flow field, i.e. the maximum speed of the object's motion;
and Ȳ_i is the dimensionless flow velocity at the i-th grid point in the computational flow domain.
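The min-max normalization of the input pressures and the non-dimensionalization of the output velocities in step 4 can be sketched as follows (illustrative Python; the function names are not from the patent):

```python
def minmax_normalize(values):
    # Input preprocessing: map each pressure X_ji to
    # (X_ji - X_min) / (X_max - X_min), so values fall in [0, 1].
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def nondimensionalize(speeds, v_char):
    # Output preprocessing: divide each flow velocity Y_i by the
    # characteristic velocity V (the maximum speed of the object's motion).
    return [y / v_char for y in speeds]
```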
Step 5 comprises: converting the input matrix and the output matrix into images serving respectively as the input and output of the deep learning network; extracting the features of the input image through the Encode coding layers, and obtaining high-level semantic features of the extracted local features with a Pooling method; adding a Bridge transition layer between the Encode coding layers and the Decode decoding layers; gradually restoring and upsampling the high-level semantic features through the Decode part so as to progressively reconstruct the flow field image; and finally integrating the flow field image into the output image in the Resize integration layer.
Step 6 comprises: setting the values of epochs and batch, wherein epochs denotes the number of times all batches pass through forward and backward propagation in training, and batch denotes the number of weight updates in a single pass of network training; training the constructed deep learning network on the training set; and optimizing the deep learning network with a mean square error loss function and an adaptive variant of the stochastic gradient descent algorithm, the loss function J being:
J = (1/N) Σ_{i=1}^{N} (ŷ_i − y_i)²
wherein N is the total number of data in the training set, ŷ_i is the flow field reconstruction result output by the model for the preprocessed fluid simulation data of the i-th training sample obtained in step 4, and y_i is the preprocessed true flow field result of the i-th training sample obtained in step 4.
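The mean square error loss J above can be sketched as (illustrative Python over flattened prediction/target values):

```python
def mse_loss(y_pred, y_true):
    # J = (1/N) * sum_i (yhat_i - y_i)^2 over the N training samples.
    assert len(y_pred) == len(y_true)
    n = len(y_pred)
    return sum((p - t) ** 2 for p, t in zip(y_pred, y_true)) / n
```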
The invention also provides a data-driven dynamic boundary flow field reconstruction device, which comprises:
the data acquisition module is used for simulating the motion process of the object and acquiring numerical simulation data;
a data dividing module for dividing the data set;
the input and output matrix construction module is used for constructing an input and output matrix;
the data preprocessing module is used for preprocessing data;
the deep learning network construction module is used for constructing a deep learning network;
and the model training and predicting module is used for performing model training and prediction.
The invention also provides a storage medium storing a computer program or instructions which, when executed, implement the method.
Advantageous effects: to overcome the low efficiency of traditional CFD methods in solving dynamic boundary problems, the invention provides a method for reconstructing a dynamic boundary flow field with a neural network. The method can learn general rules from a large amount of real dynamic boundary flow field snapshot data, and rapidly and accurately reconstruct the flow field information at unknown times. Compared with traditional CFD methods, it greatly reduces the calculation time while keeping the result reliable.
Flow field reconstruction generally adopts a traditional convolutional neural network, but the convolutional network required for dynamic boundary flow is too complex and prone to overfitting. The basnet prediction module adopts an Encode-Decode network, which can acquire high-level global information and low-level detail information simultaneously. The last layer of each decoder is supervised by the true value, and the last layer of the encoder uses a residual module, which effectively alleviates the overfitting problem.
The results show that, while ensuring the accuracy of the prediction result, this flow field reconstruction method reduces the calculation time by at least an order of magnitude compared with traditional CFD methods. The method is therefore expected to be used in research on insect wing flapping or fish-like swimming problems, and has both theoretical research significance and practical application value.
Drawings
The foregoing and/or other advantages of the invention will become further apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1 is an overall flow chart of the invention.
FIG. 2 is a simplified schematic view of an insect wing.
FIG. 3 is a diagram of the simplified insect wing motion trajectory in the computational grid.
Fig. 4 is a general framework diagram of a neural network.
Fig. 5 is a diagram of a neural network architecture.
Fig. 6a is a schematic diagram of the Conv_BN_Relu(1) calculation module.
FIG. 6b is a schematic diagram of the BasicBlock calculation module.
FIG. 6c is a schematic diagram of the BasicBlock_DownSample calculation module.
FIG. 6d is a schematic diagram of the Conv_BN_Relu(2) calculation module.
Fig. 7a is a comparison of the true and predicted velocities at the grid points of the computational flow domain where the insect wing is located at time T + T/4.
FIG. 7b is a comparison of the true and predicted velocities at the grid points of the computational flow domain where the insect wing is located at time T + 2T/4.
FIG. 7c is a comparison of the true and predicted velocities at the grid points of the computational flow domain where the insect wing is located at time T + 3T/4.
FIG. 7d is a comparison of the true and predicted velocities at the grid points of the computational flow domain where the insect wing is located at time T + T.
FIG. 8 is a gray-scale plot of the true and predicted velocity distributions of the computational flow domain over one period.
Detailed Description
As shown in fig. 1, the present invention provides a data-driven dynamic boundary flow field reconstruction method, which comprises the following steps:
step 1: aiming at the dynamic boundary flow problem of the embodiment, the translational motion of the insect wing, which is a typical case of flapping-wing motion, is selected as a research case, and an immersion boundary method (IB 2 d) suitable for the problem is adopted for solving. IB2d models the two-dimensional hydrodynamic boundary flow problem of this example using the Navier-Stokes equation.
ρ(∂u(x, t)/∂t + (u·∇)u) = −∇p(x, t) + μΔu(x, t) + f(x, t)  (1)

∇·u(x, t) = 0  (2)

Wherein the position vector x = (x, y) and the time t are the independent variables, x and y being respectively the abscissa and ordinate of the calculation grid. u(x, t) = (u(x, t), v(x, t)) is the fluid velocity, u(x, t) being the transverse velocity component and v(x, t) the longitudinal velocity component on the calculation grid. p(x, t) is the pressure, and f(x, t) is the force exerted by the moving object on the fluid at the immersed boundary; ρ denotes the density of the fluid, μ denotes the dynamic viscosity, ∇ denotes the Hamiltonian (gradient) operator, Δ denotes the Laplace operator, and ∂u(x, t)/∂t denotes the first partial derivative of the fluid velocity u(x, t) with respect to time t. Equation (1) follows from the momentum conservation law of the fluid, and equation (2) enforces the incompressibility condition. The IB2d far field is set as a periodic boundary condition, and the flow field calculation area is square.
The interaction equation between the moving object and the fluid is:
f(x,t)=∫F(r,t)δ(x-X(r,t))dr (3)
∂X(r, t)/∂t = U(X(r, t), t) = ∫ u(x, t) δ(x − X(r, t)) dx  (4)

The Lagrange position r and the time t are the independent variables; X(r, t) is the Cartesian coordinate of the marker point located at r on the moving object, and dx denotes the differential. Equation (4) sets the boundary velocity U(X(r, t), t) equal to the local fluid velocity to satisfy the no-slip condition on the immersed structure. F(r, t) is the force per unit area exerted on the fluid by elastic deformation in the immersed structure; as a function of the Lagrange position r and the time t, it can be expressed as:
F(r,t)=F(X(r,t),t) (5)
the following is the delta function of equations (3), (4):
δ_h(x) = (1/h²) φ(x/h) φ(y/h)  (6)

wherein φ is the kernel function embedded in δ_h(x), expressed as:

φ(r) = (3 − 2|r| + √(1 + 4|r| − 4r²)) / 8,  0 ≤ |r| < 1;
φ(r) = (5 − 2|r| − √(−7 + 12|r| − 4r²)) / 8,  1 ≤ |r| < 2;
φ(r) = 0,  |r| ≥ 2.  (7)
wherein h is the fluid grid dimension.
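For illustration, the regularized delta can be sketched in Python, assuming the standard Peskin four-point kernel commonly used as the default in immersed boundary solvers such as IB2d (an assumption; the function names are illustrative):

```python
import math

def phi(r):
    # Assumed kernel: Peskin's four-point regularized delta, a common
    # default choice in immersed boundary solvers such as IB2d.
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + math.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - math.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def delta_h(x, y, h):
    # Two-dimensional regularized delta on a grid of spacing h:
    # delta_h(x) = (1/h^2) * phi(x/h) * phi(y/h)
    return phi(x / h) * phi(y / h) / (h * h)
```

This kernel satisfies the discrete partition-of-unity property Σ_j φ(r − j) = 1, which keeps the spreading and interpolation operations of equations (3) and (4) consistent.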
In the framework of the immersion boundary method IB2d, the velocity and pressure value of the flow field at each moment are obtained by solving equations (1) to (4) in a coupling manner.
The appropriate grid dimensions are selected through the verification of the grid independence, and in the present embodiment, 512 × 512 grids are selected as the calculation grids. Simulating the translational motion process of the insect wing by adopting an open source immersion boundary solver IB2d aiming at the two-dimensional problem, wherein the motion equation of the insect wing is a sine wave function
[Equation image: the sinusoidal equation of motion of the insect wing is given in the original document.]
Where t represents the movement time. In the simulation process, the insect wings are simplified into rigid segments, and a simplified schematic diagram of the insect wings is shown in fig. 2. The length of the line segment is 1m, and the inclination angle is 45 degrees. The grid calculated by the insect wing movement has the movement track shown in figure 3. And calculating to obtain the real insect wing surface pressure distribution, the insect wing shape and position information and the speed field data of the fluid domain where the insect wing is located, which correspond to the insect wing under the time sequence consisting of 10 periods under different Reynolds numbers.
Step 2: the data set is partitioned. And taking 30% of insect wing motion simulation data as a test set for evaluating the prediction performance of the trained model, and dividing the rest data into a training set and a verification set of the neural network according to the proportion of 80% and 20% for training the neural network model.
Step 3: construct the input and output matrices. In order to simultaneously capture the temporal and spatial information of the surrounding flow field during the motion of the insect wing, the invention designs a novel input matrix. The real insect wing surface pressure distribution, the Reynolds number, and the wing shape and position information over a given time sequence are stored into three two-dimensional matrices of the same size, which are superposed to form the input matrix X = [G^(9), P^(9), Re^(9)] of the neural network. G^(9), P^(9), Re^(9) are respectively composed of the insect wing shape and position information, the real wing surface pressure distribution, and the Reynolds number at nine consecutive instants (a certain instant T is taken as the first instant, and one instant is then taken every T/120, i.e. T, T + T/120, T + 2T/120, …, T + 8T/120). The velocity field of the flow domain where the insect wing is located at the next instant (i.e. T + 9T/120) is stored in a two-dimensional matrix, which is the output matrix.
The specific structures of the matrices G^(9), P^(9), Re^(9) are respectively:
G^(9) = [G_0 G_1 G_2; G_3 G_4 G_5; G_6 G_7 G_8]

P^(9) = [P_0 P_1 P_2; P_3 P_4 P_5; P_6 P_7 P_8]

Re^(9) = [Re_0 Re_1 Re_2; Re_3 Re_4 Re_5; Re_6 Re_7 Re_8]
In the interval [t, t + 8δt] (t being an arbitrary instant), G_i (i = 0, 1, …, 8) is the binary matrix of the insect wing shape and position at time t + iδt: the grid cells occupied by the wing are 1 and the remaining fluid region is 0. In each G_i matrix the position information of the insect wing differs between instants, and the angle of attack differs for different inclination angles. P_i represents the pressure at the sampling points on the insect wing surface at time t + iδt (i = 0, 1, …, 8); it is the diagonal matrix formed from the pressure snapshot at time t + iδt, and the number N_i of wing surface pressure sampling points equals the number of rows and columns of P_i. Re_i denotes the Reynolds number of the flow domain at time t + iδt (i = 0, 1, …, 8).
Step 4: data preprocessing. The input matrix and the output matrix are normalized.
Preprocessing of an input matrix:
X̄_ji = (X_ji − X_min) / (X_max − X_min)
wherein X_ji is the pressure value of the i-th (0 ≤ i ≤ 81) pressure sampling point on the insect wing surface at the j-th (0 ≤ j ≤ 10T) instant, X_min is the minimum pressure of the training sample, and X_max is the maximum pressure of the training sample;
and X̄_ji is the normalized pressure value of the i-th (0 ≤ i ≤ 81) pressure sampling point on the insect wing surface at the j-th (0 ≤ j ≤ 10T) instant.
Preprocessing of an output matrix:
Ȳ_i = Y_i / V
wherein Y_i is the flow velocity at the i-th (1 ≤ i ≤ 7569) grid point in the computational domain, and V is the characteristic velocity of the flow field, i.e. the maximum speed of the wing motion;
and Ȳ_i is the dimensionless flow velocity at the i-th (1 ≤ i ≤ 7569) grid point in the computational domain.
Step 5: construct the deep learning network. A neural network for reconstructing the flow field around the insect wing is constructed as a prediction model using the basnet deep convolutional network. First, the input matrix and the output matrix are converted into images serving respectively as the input and output of the deep learning network. The features of the input image are then extracted through the Encode layers, and high-level semantic features capturing local information are obtained with a Pooling method. To further acquire global information, a Bridge layer is added between Encode and Decode. The subsequent Decode part is responsible for gradually restoring and upsampling the high-level semantic information so as to progressively reconstruct the flow field image, which is finally resized to the output image size in the Resize layer. The overall framework of the deep learning network is shown in figs. 4 and 5.
The Encode part consists of an input convolutional layer and 6 encoding layers built from basic residual modules. The input convolutional layer and the first 4 encoding layers largely reuse the layers of ResNet-34, after which 2 additional encoding layers are appended. The input convolutional layer comprises a Conv_BN_Relu(1) calculation module, whose algorithm flow chart is shown in FIG. 6a; the matrix A_0 undergoes a 2×2 convolution and BatchNorm batch normalization to yield the matrix A_1. The calculation function is given by equation (13):

A′ = Relu(BatchNorm(Conv_{2×2, padding=(1,1)}(A)))  (13)

where A′ is the output matrix of the calculation module, A is its input matrix, and Conv_{2×2, padding=(1,1)} denotes one convolution operation with a 2×2 convolution kernel (kernel) and edge padding of padding=(1,1), i.e. one row or one column of padding on each of the top, bottom, left and right sides. BatchNorm denotes one batch normalization, and Relu denotes one application of the Relu activation function.
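A toy single-channel NumPy version of this Conv_BN_Relu(1) step can make equation (13) concrete. This is illustrative only: real layers carry many channels with learned kernels, and BatchNorm statistics are taken over a batch, not a single map; the input and kernel below are made-up.

```python
import numpy as np

def conv2d(x, k, stride=1, pad=1):
    """Naive single-channel 2-D convolution with zero padding."""
    x = np.pad(x, pad)
    kh, kw = k.shape
    H = (x.shape[0] - kh) // stride + 1
    W = (x.shape[1] - kw) // stride + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * k)
    return out

def conv_bn_relu(x, k):
    """Toy form of equation (13): Relu(BatchNorm(Conv_{2x2, padding=(1,1)}(A)))."""
    y = conv2d(x, k, stride=1, pad=1)
    y = (y - y.mean()) / np.sqrt(y.var() + 1e-5)   # normalization over the single map
    return np.maximum(y, 0.0)                       # ReLU clamps negatives to zero

A0 = np.arange(64, dtype=float).reshape(8, 8)  # made-up 8x8 input map
K  = np.full((2, 2), 0.25)                      # made-up 2x2 averaging kernel
A1 = conv_bn_relu(A0, K)
```

Note the shape arithmetic: a 2×2 kernel with padding (1,1) grows an 8×8 map to 9×9.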
The Encoder_1 encoding layer consists of three BasicBlock calculation modules; the algorithm flow of this module is shown in FIG. 6b, and its calculation function is given by equation (14):

A′ = BatchNorm(Conv_{3×3, padding=(1,1)}(Relu(BatchNorm(Conv_{2×2, padding=(1,1)}(A))))) + A  (14)

where Conv_{3×3, padding=(1,1)} denotes one convolution operation with a 3×3 convolution kernel and edge padding of padding=(1,1). A_1 passes through the Encoder_1 encoding layer to yield the matrix A_2.
The matrix A_2 passes through the Encoder_2, Encoder_3 and Encoder_4 encoding layers to yield the matrix A_5. Encoder_2 and Encoder_4 each consist of one BasicBlock_downsample and two BasicBlock calculation modules, while Encoder_3 consists of one BasicBlock_downsample and five BasicBlock calculation modules.

The flow diagram of the BasicBlock_downsample calculation module is shown in FIG. 6c. Its calculation function is given by equation (15):

A′ = BatchNorm(Conv_{3×3, padding=(1,1)}(Relu(BatchNorm(Conv_{3×3, stride=2, padding=(1,1)}(A))))) + BatchNorm(Conv_{1×1, stride=2}(A))  (15)

where A′ is the output matrix of the calculation module, A is its input matrix, and Conv_{3×3, stride=2, padding=(1,1)} denotes one convolution operation with a 3×3 convolution kernel, an operation step (stride) of 2 and edge padding of padding=(1,1). BatchNorm denotes one batch normalization, and Relu denotes one application of the Relu activation function. Conv_{3×3, padding=(1,1)} denotes one convolution with a 3×3 kernel and edge padding of padding=(1,1), and Conv_{1×1, stride=2} denotes one convolution with a 1×1 kernel and a stride of 2.
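Why the 1×1 stride-2 shortcut in equation (15) can be added to the main branch is a matter of shape arithmetic. The helper below is a sketch using the standard convolution output-size formula (the 40×40 map size is a made-up example); it checks that both paths of BasicBlock_downsample halve the map identically, and, as an aside, that the Bridge layer's 3×3 dilation-2 convolution with padding (2,2) preserves size:

```python
def conv_out(size, kernel, stride=1, pad=0, dilation=1):
    """Output edge length of a 2-D convolution (standard size formula)."""
    effective_kernel = dilation * (kernel - 1) + 1
    return (size + 2 * pad - effective_kernel) // stride + 1

H = 40  # hypothetical feature-map edge length
main_branch = conv_out(H, kernel=3, stride=2, pad=1)              # 3x3, stride 2, padding (1,1)
shortcut    = conv_out(H, kernel=1, stride=2, pad=0)              # 1x1, stride 2
bridge      = conv_out(H, kernel=3, stride=1, pad=2, dilation=2)  # Bridge-layer dilated conv
```

Both branches give 20, so the elementwise residual sum is well defined; the dilated convolution returns 40, leaving the Bridge layer size-preserving.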
Encoder_5 and Encoder_6 each consist of one maximum pooling (Maxpool) operation with a 2×2 pooling kernel and a stride of 2, followed by two BasicBlock calculation modules. After Encoder_5 and Encoder_6 process the matrix A_5, the matrix B is obtained.
The Bridge layer consists of three Conv_BN_Relu(2) calculation modules; the flow chart of this module is shown in FIG. 6d, and its calculation function is given by equation (16):

A′ = Relu(BatchNorm(Conv_{3×3, padding=(2,2), dilation=(2,2)}(A)))  (16)

where Conv_{3×3, padding=(2,2), dilation=(2,2)} denotes one dilated convolution operation with a 3×3 convolution kernel, a row and column dilation rate (dilation) of 2, and edge padding of padding=(2,2).
The Decode part is symmetric to the Encode part. The input of each decoding layer is the superposition, in the third dimension, of the same-scale feature map output by the corresponding encoding layer and the upsampled feature map from the previous layer, so that the final output matrix takes both low-level and high-level semantic features into account.

The output of the Bridge layer is superposed with the matrix B to yield the matrix C_6. C_6 is processed by decoder_6 to obtain the matrix C_5. decoder_6 consists of one Conv_BN_Relu(1), two Conv_BN_Relu(2), and one upsampling operation with a row and column magnification factor (scale_factor) of 2, using the bilinear upsampling algorithm. The matrix obtained by superposing A_5 and C_5 is processed by three structurally identical decoding layers, decoder_5, decoder_4 and decoder_3, to obtain the matrix C_2. Each of these three layers consists of three Conv_BN_Relu(1) modules and one upsampling operation with a magnification factor of 2, again with the bilinear upsampling algorithm. Finally, the output matrix C_0 is obtained through decoder_2 (comprising three Conv_BN_Relu(1) calculation modules) and one convolution operation with a 3×3 convolution kernel and edge padding of padding=(1,1).
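The "superposition in the third dimension" of encoder and decoder feature maps is a channel-wise concatenation. A minimal sketch (the 24×24×64 map shapes are made-up values, not the network's actual dimensions):

```python
import numpy as np

# made-up feature maps at the same resolution: (height, width, channels)
encoder_skip = np.zeros((24, 24, 64))   # same-scale output of the matching encoder layer
decoder_up   = np.zeros((24, 24, 64))   # upsampled output of the previous decoder layer

# channel-wise concatenation lets the decoder see low-level and
# high-level semantic features together
merged = np.concatenate([encoder_skip, decoder_up], axis=2)
```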
Step 6: model training and prediction. The values of epochs (1 epoch means that all the training data have been fed through the model and trained once) and batch (the batch size of the training set) are set, and the constructed insect wing motion flow field reconstruction model is trained on the training samples. The neural network is optimized using a mean square error loss function and an adaptive variant of the stochastic gradient descent algorithm, where the loss function is
J = (1/N) · Σ_{i=1}^{N} (ŷ_i − y_i)²

where N is the total number of data, ŷ_i is the flow field reconstruction result corresponding to the preprocessed fluid simulation data of the i-th sample in the network model training data set obtained in step 4, and y_i is the preprocessed flow field result of the i-th sample in the network model training data set obtained in step 4.
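The mean square error loss J above can be sketched in a few lines (the prediction and target values are made-up):

```python
import numpy as np

def mse_loss(y_hat, y):
    """Mean square error J = (1/N) * sum_i (y_hat_i - y_i)^2."""
    y_hat = np.asarray(y_hat, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.mean((y_hat - y) ** 2))

loss = mse_loss([0.2, 0.5, 0.9], [0.0, 0.5, 1.0])  # made-up predictions vs. targets
```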
The hyper-parameters and the activation function are then optimized to obtain the optimal deep neural network structure.
The results show that: the invention adopts Reynolds numbers equal to 40, 70, 90, 120 and 150 as the training group and Reynolds numbers equal to 60 and 100 as the test group. After training the neural network, the real and predicted velocity distributions of the flow field around the insect wing over one period are compared on the test group; the gray-value images are shown in FIGS. 7a, 7b, 7c, 7d and 8.
From the pressures at the sampling points on the insect wing surface, the method can quickly and accurately reconstruct the velocity field of the flow domain around the insect wing at an arbitrary Reynolds number, and can provide real-time feedback of the surrounding flow field for a flapping-wing machine imitating insect wings.
The present invention provides a data-driven dynamic boundary flow field reconstruction method, device and storage medium. There are many methods and approaches for implementing this technical solution, and the above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several improvements and embellishments without departing from the principle of the present invention, and these improvements and embellishments should also be regarded as falling within the protection scope of the present invention. All components not specified in this embodiment can be realized by the prior art.

Claims (10)

1. The data-driven dynamic boundary flow field reconstruction method is characterized by comprising the following steps of:
step 1, simulating the motion process of an object to obtain numerical simulation data;
step 2, dividing a data set;
step 3, constructing an input/output matrix;
step 4, data preprocessing is carried out;
step 5, constructing a deep learning network;
and 6, carrying out model training and prediction.
2. The method of claim 1, wherein step 1 comprises: simulating the motion process of the object with the immersed boundary solver IB2d for the two-dimensional problem, and computing numerical simulation data comprising the real object surface pressure distribution, the object shape and position information, and the velocity field data of the whole fluid domain corresponding to the moving object at different Reynolds numbers over a time sequence formed by K periods, wherein the immersed boundary solver IB2d simulates the motion process of the object using the Navier-Stokes equations, as shown below:
ρ(∂u(x,t)/∂t + u(x,t)·∇u(x,t)) = −∇p(x,t) + μΔu(x,t) + f(x,t)  (1)

∇·u(x,t) = 0  (2)
wherein the position vector x = (x, y) and the time t are independent variables, x and y being respectively the abscissa and the ordinate of the computational grid; u(x, t) = (u(x, t), v(x, t)) is the fluid velocity, u(x, t) being the magnitude of the transverse velocity and v(x, t) the magnitude of the longitudinal velocity on the computational grid; p(x, t) is the pressure, f(x, t) is the force exerted on the fluid by the moving object on the immersed boundary, ρ represents the density of the fluid, and μ represents the dynamic viscosity; ∇ represents the Hamiltonian (gradient) operator, Δ represents the Laplace operator, and ∂u(x, t)/∂t represents the first partial derivative of the fluid velocity u(x, t) with respect to time t;
the interaction equation between the moving object and the fluid is:
f(x,t)=∫F(r,t)δ(x-X(r,t))dr (3)
∂X(r,t)/∂t = U(X(r,t), t) = u(X(r,t), t)  (4)
wherein r is the Lagrangian position, X(r, t) is the Cartesian coordinate at time t of the marker point labeled r on the moving object, and d denotes the differential; equation (4) sets the boundary velocity U(X(r, t), t) equal to the local fluid velocity to satisfy the no-slip condition on the immersed structure; F(r, t) is the force per unit area exerted on the fluid by elastic deformation of the immersed structure, expressed as:
F(r,t)=F(X(r,t),t) (5)
the function δ in equations (3) and (4) is as follows:
Figure FDA0003769495430000021
where φ is δ h (x) An embedded function, expressed as:
φ(r) = (3 − 2|r| + √(1 + 4|r| − 4r²))/8 for 0 <= |r| <= 1; φ(r) = (5 − 2|r| − √(−7 + 12|r| − 4r²))/8 for 1 <= |r| <= 2; φ(r) = 0 otherwise  (7)

wherein h is the fluid grid dimension.
3. The method of claim 2, wherein step 2 comprises: taking the data X_1 in the numerical simulation data as a test set, and, of the remaining data, taking X_2 as a training set and X_3 as a verification set.
4. The method of claim 3, wherein step 3 comprises: storing the real object surface pressure distribution, the Reynolds number, and the object shape and position information over the time sequence into three two-dimensional matrices of the same size, which are superposed to form the input matrix X = [G^(9) P^(9) Re^(9)] of the deep learning network, wherein the matrices G^(9), P^(9), Re^(9) are respectively composed of the object shape and position information, the real object surface pressure distribution, and the Reynolds number corresponding to S consecutive moments; and storing the velocity field of the whole flow domain at the next moment into a two-dimensional matrix, which is the output matrix.
5. The method of claim 4, wherein in step 3 the specific structures of the matrices G^(9), P^(9), Re^(9) are respectively:

G^(9) = [G_0  G_1  …  G_8]

P^(9) = [P_0  P_1  …  P_8]

Re^(9) = [Re_0  Re_1  …  Re_8]
In the interval [t, t+8δt], δt denotes the time interval; G_i denotes the binary matrix of the shape and position of the object at time t+iδt, where t is an arbitrary time and i = 0, 1, …, 8; the grid occupied by the moving object is set to 1 and the remaining fluid region to 0; P_i is a diagonal matrix formed from the pressure snapshot at the surface sampling points of the moving object at time t+iδt, and the number N_i of surface pressure sampling points of the moving object equals both the number of rows and the number of columns of P_i; Re_i denotes the Reynolds number of the flow domain at time t+iδt.
6. The method of claim 5, wherein step 4 comprises:
the input matrix is preprocessed using the following formula:
Figure FDA0003769495430000032
X ji is the pressure value of the ith pressure sampling point on the surface of the moving object at the j time point, 0<=j<=K*T,X min Is the minimum value of the surface pressure of the moving object in each sample, X max The maximum value of the surface pressure of the moving object in each sample is obtained;
Figure FDA0003769495430000033
for the ith pressure of the surface of the moving object at the jth momentThe normalized pressure value of the force sampling point;
the output matrix is preprocessed by the following formula:
Figure FDA0003769495430000034
Y i to calculate the flow velocity at the ith grid point in the flow Domain, 1<=i<H is a grid dimension; v is the characteristic speed of the flow field, namely the maximum speed of the object movement; y is i norm To calculate the dimensionless flow rate at the ith grid point in the flow domain.
7. The method of claim 6, wherein step 5 comprises: converting the input matrix and the output matrix into images, which serve respectively as the input and the output of the deep learning network; extracting features of the input image through the Encode coding layer, and obtaining high-level semantic features from the extracted local features by the pooling method; adding a Bridge transition layer between the Encode coding layer and the Decode decoding layer; progressively restoring and upsampling the high-level semantic features through the Decode part so as to gradually reconstruct the flow field image; and finally adjusting the flow field image to the output image size in the Resize integration layer.
8. The method of claim 7, wherein step 6 comprises: setting the values of epochs and batch, wherein epochs denotes the number of complete passes of all the training batches through one forward and one backward propagation, and batch denotes the number of samples used for a single weight update of the network; training the constructed deep learning network on the training set; and optimizing the deep learning network using a mean square error loss function and an adaptive variant of the stochastic gradient descent algorithm, the loss function J being:

J = (1/N) · Σ_{i=1}^{N} (ŷ_i − y_i)²

where N is the total number of data in the training set, ŷ_i is the flow field reconstruction result corresponding to the preprocessed fluid simulation data of the i-th sample in the training set obtained in step 4, and y_i is the preprocessed flow field result of the i-th sample in the training set obtained in step 4.
9. The data-driven dynamic boundary flow field reconstruction device is characterized by comprising:
the data acquisition module is used for simulating the motion process of the object and acquiring numerical simulation data;
a data dividing module for dividing the data set;
the input and output matrix construction module is used for constructing an input and output matrix;
the data preprocessing module is used for preprocessing data;
the deep learning network construction module is used for constructing a deep learning network;
and the model training and predicting module is used for carrying out model training and prediction.
10. A storage medium, storing a computer program or instructions which, when executed, implement the method of any one of claims 1 to 8.
CN202210897466.0A 2022-07-28 2022-07-28 Data-driven dynamic boundary flow field reconstruction method and device and storage medium Pending CN115438567A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210897466.0A CN115438567A (en) 2022-07-28 2022-07-28 Data-driven dynamic boundary flow field reconstruction method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210897466.0A CN115438567A (en) 2022-07-28 2022-07-28 Data-driven dynamic boundary flow field reconstruction method and device and storage medium

Publications (1)

Publication Number Publication Date
CN115438567A true CN115438567A (en) 2022-12-06

Family

ID=84241726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210897466.0A Pending CN115438567A (en) 2022-07-28 2022-07-28 Data-driven dynamic boundary flow field reconstruction method and device and storage medium

Country Status (1)

Country Link
CN (1) CN115438567A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116050303A (en) * 2023-03-06 2023-05-02 中国空气动力研究与发展中心计算空气动力研究所 Periodic boundary condition applying method under CFD parallel computing
CN116050303B (en) * 2023-03-06 2023-06-27 中国空气动力研究与发展中心计算空气动力研究所 Periodic boundary condition applying method under CFD parallel computing
CN116562330A (en) * 2023-05-15 2023-08-08 重庆交通大学 Flow field identification method of artificial intelligent fish simulation system
CN116562330B (en) * 2023-05-15 2024-01-12 重庆交通大学 Flow field identification method of artificial intelligent fish simulation system
CN118347697A (en) * 2024-06-17 2024-07-16 西北工业大学 Stress prediction method and system for underwater fixed platform in internal wave

Similar Documents

Publication Publication Date Title
CN115438567A (en) Data-driven dynamic boundary flow field reconstruction method and device and storage medium
Xu et al. Multi-level convolutional autoencoder networks for parametric prediction of spatio-temporal dynamics
US12050845B2 (en) Estimating physical parameters of a physical system based on a spatial-temporal emulator
Calzolari et al. Deep learning to replace, improve, or aid CFD analysis in built environment applications: A review
US20210064802A1 (en) Method and System for Increasing the Resolution of Physical Gridded Data
CN104063714B (en) A kind of for fast face recognizer video monitoring, based on CUDA parallel computation and rarefaction representation
Gupta et al. A hybrid partitioned deep learning methodology for moving interface and fluid–structure interaction
CN114638048A (en) Three-dimensional spray pipe flow field rapid prediction and sensitivity parameter analysis method and device
Miyanawala et al. A novel deep learning method for the predictions of current forces on bluff bodies
Lan et al. Scaling up bayesian uncertainty quantification for inverse problems using deep neural networks
Li et al. Fast flow field prediction of hydrofoils based on deep learning
Dalton et al. Emulation of cardiac mechanics using Graph Neural Networks
Peng et al. Spatial convolution neural network for efficient prediction of aerodynamic coefficients
Xu et al. Comparative studies of predictive models for unsteady flow fields based on deep learning and proper orthogonal decomposition
Yu et al. Simulation of unsteady flow around bluff bodies using knowledge-enhanced convolutional neural network
US20220202348A1 (en) Implementing brain emulation neural networks on user devices
Mitra et al. CreativeAI: Deep learning for graphics
Raj et al. Comparison of reduced order models based on dynamic mode decomposition and deep learning for predicting chaotic flow in a random arrangement of cylinders
Song et al. Reconstruction of RANS model and cross-validation of flow field based on tensor basis neural network
CN116992607A (en) Structural topology optimization method, system and device
Pan et al. A high resolution Physics-informed neural networks for high-dimensional convection–diffusion–reaction equations
KR20230147498A (en) Apparatus and method for performing learning for target lesion detection
Padmanabha et al. A Bayesian multiscale deep learning framework for flows in random media
Takbiri-Borujeni et al. A data-driven proxy to Stoke's flow in porous media
Liu Prediction of Capillary Pressure and Relative Permeability Curves using Conventional Pore-scale Displacements and Artificial Neural Networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination