CN115455838B - High-spatial-resolution flow field reconstruction method for time-course data

High-spatial-resolution flow field reconstruction method for time-course data

Info

Publication number
CN115455838B
Authority
CN
China
Prior art keywords
time
course
sample
layer
flow field
Prior art date
Legal status
Active
Application number
CN202211177018.XA
Other languages
Chinese (zh)
Other versions
CN115455838A (en)
Inventor
Zhan Qingliang (战庆亮)
Bai Chunjin (白春锦)
Current Assignee
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date
Filing date
Publication date
Application filed by Dalian Maritime University
Priority to CN202211177018.XA
Publication of CN115455838A
Application granted
Publication of CN115455838B
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/20 - Design optimisation, verification or simulation
    • G06F30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/20 - Design optimisation, verification or simulation
    • G06F30/28 - Design optimisation, verification or simulation using fluid dynamics, e.g. using Navier-Stokes equations or computational fluid dynamics [CFD]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2113/00 - Details relating to the application field
    • G06F2113/08 - Fluids
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00 - Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/14 - Force analysis or force optimisation, e.g. static or dynamic forces

Abstract

The invention discloses a high-spatial-resolution flow field reconstruction method for time-course data, which comprises the following steps: acquiring time-course sample signals in the flow field to be reconstructed and the time-course sample coordinates corresponding to them, so as to obtain a time-course sample signal set and a time-course sample coordinate set; constructing a deep learning network model based on full convolution calculation; obtaining an optimal deep learning network model; and calculating the time-course signals at all unknown measuring points of the flow field to be reconstructed, so as to reconstruct the flow field. According to the invention, a limited number of sample measuring points are selected in the flow field to be reconstructed, their one-dimensional time-course sample signals are acquired, and the time-course signal at any measuring point in the flow field is obtained through the optimal deep learning network model, which greatly reduces the requirements on flow field acquisition. At the same time, the required input data volume is small, the data type is simple, and the model preserves the time-sequence information of the sample measuring points, so the method is suitable for high-precision reconstruction of complex unsteady flow fields.

Description

High-spatial-resolution flow field reconstruction method for time-course data
Technical Field
The invention relates to the field of high-resolution generation and reconstruction of flow fields, in particular to a high-spatial-resolution flow field reconstruction method for time-course data.
Background
Accurate numerical reconstruction of turbulent flow fields is one of the frontier problems of fluid dynamics that urgently needs to be solved. At high Reynolds numbers, however, the flow field contains flow structures of many different scales, its characteristics are very complex, and reconstruction of the flow field is difficult. Reduced-order models of the flow field have long been an effective approach to flow reconstruction, for example the eigenvalue decomposition method and the dynamic mode decomposition method, but nonlinear turbulence characteristics are difficult to describe completely with linear transformations, and methods based on matrix decomposition face great difficulty with strongly nonlinear turbulence problems.
Deep learning methods are also a research hotspot for flow field characterization models, for example modal decomposition of flow field snapshots based on two-dimensional convolutional neural networks and autoencoding models of unsteady flow. Such studies make it possible to obtain results of high spatio-temporal resolution from partial data or from low-resolution data. However, no matter which method is used to establish the association between sparse measuring points and whole-field data, the model needs high-resolution flow field snapshots as supervision during training. In practical experiments, whole-field snapshots are often difficult to obtain, whereas what is more readily available are single-point time-course signals; this severely limits the application of such models to high-resolution turbulence reconstruction.
Disclosure of Invention
The invention provides a high-spatial-resolution flow field reconstruction method for time-course data, which aims to overcome the technical problem.
In order to achieve the above object, the technical scheme of the present invention is as follows:
a time-course data-oriented high-spatial-resolution flow field reconstruction method comprises the following steps:
step 1: acquiring a time-course sample signal and a time-course sample coordinate corresponding to the time-course sample signal in a flow field to be reconstructed so as to acquire a time-course sample signal set and a time-course sample coordinate set;
step 2: constructing a deep learning network model based on full convolution calculation;
step 3: acquiring an optimal deep learning network model according to the time-course sample signal set, the time-course sample coordinate set and the deep learning network model based on full convolution calculation;
and 4, acquiring time-course signals of all measuring points of the flow field to be reconstructed according to the optimal deep learning network model so as to reconstruct the flow field to be reconstructed.
Further, in the step 1, the method for acquiring the time-course sample signal set and the time-course sample coordinate set is as follows:
step 1.1: determining the range of a flow field to be reconstructed, and randomly selecting a plurality of sample measuring points in the range;
step 1.2: acquiring a time-course sample signal of the sample measuring point and a time-course sample coordinate corresponding to the time-course sample signal;
step 1.3: forming a time-course sample signal set by a plurality of time-course sample signals; and forming a time-course sample coordinate set corresponding to the time-course sample signal set by a plurality of time-course sample coordinates.
Further, the deep learning network model includes: an input layer, a full connection layer, a convolution layer, a coding layer, a deconvolution layer and an output layer;
the input layer comprises a coordinate input layer and a time-course input layer; the training time course sample coordinate set and the training time course sample signal set are respectively input;
the full-connection layer is used for receiving output data of the coordinate input layer to perform full-connection calculation, one side of the full-connection layer is connected with the coordinate input layer, and the other side of the full-connection layer is connected with the coding layer;
the convolution layer is used for receiving the output data of the time-course input layer and performing convolution calculation; one side of the convolution layer is connected with the time-interval input layer, and the other side of the convolution layer is connected with the coding layer;
the coding layer is used for receiving output data of the full-connection layer and the convolution layer to carry out coding operation, and the other side of the coding layer is connected with the deconvolution layer;
the deconvolution layer is used for receiving the output of the coding layer to carry out deconvolution operation, and the deconvolution layer is connected with the output layer.
Further, in the step 3, the method for obtaining the optimal deep learning network model is as follows:
step 3.1: acquiring a training set and a testing set according to the time-course sample signal set and the time-course sample coordinate set;
step 3.2: inputting a training time course sample signal set in the training set to the time course input layer, and simultaneously inputting a training time course sample coordinate set in the training set to a coordinate input layer, training the deep learning network model, and obtaining a trained deep learning network model;
step 3.3: and testing the trained deep learning network model through the test time course sample signal set and the test time course sample coordinate set:
if the output of the trained deep learning network model converges, the trained deep learning network model at the moment is an optimal deep learning network model; otherwise, repeating the steps 3.1-3.3.
Further, in the step 3.3, the loss function for determining whether the output of the trained deep learning network model converges is as follows:

$$L=\frac{1}{K\,R}\sum_{n=1}^{K}\sum_{s=1}^{R}\left(\hat{y}_{n}(s)-y_{n}(s)\right)^{2}\tag{1}$$

where $\hat{y}_{n}(s)$ is the output of the deep learning network model at the n-th sample measuring point at time s, $y_{n}(s)$ is the real time-course sample signal of the n-th sample measuring point at time s, K represents the number of samples in the sample set, n represents the sample number, s represents the time of the time-course signal in the samples, and R represents the length of the time-course signal.
The beneficial effects are that: according to the high-spatial-resolution flow field reconstruction method for time-course data, a limited number of sample measuring points are selected in the flow field to be reconstructed, their one-dimensional time-course sample signals are acquired, and the time-course signal at any measuring point in the flow field is obtained through the optimal deep learning network model. The requirements on flow field acquisition are thereby greatly reduced, so the method is practically operable. The input data volume is small; the deep learning network model based on full convolution calculation extracts and classifies features of the time-course sample signals and preserves the time-sequence information of the sample measuring points, so the reconstruction accuracy is high.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it will be obvious that the drawings in the following description are some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort to a person skilled in the art.
FIG. 1 is a flow chart of a high spatial resolution flow field reconstruction method for time-course data in an embodiment of the invention;
FIG. 2 is a schematic diagram of a flow field time course measurement point arrangement in an embodiment of the invention;
FIG. 3 is a schematic diagram of a deep learning model structure according to the present invention;
FIG. 4 is an error plot of a model in an embodiment of the present invention;
FIG. 5 is a graph showing the results of a real sample and a model reconstruction sample in an embodiment of the present invention;
FIG. 6 is a diagram of known flow field information at some instant in time in an embodiment of the present invention;
FIG. 7 is a diagram of a model reconstruction flow field for a transient in an embodiment of the invention;
FIG. 8 is a diagram of the actual flow field information at a certain instant in an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiment provides a high spatial resolution flow field reconstruction method for time-course data, as shown in fig. 1, comprising the following steps:
step 1, measuring and acquiring a time-course sample signal and a time-course sample coordinate corresponding to the time-course sample signal in a flow field to be reconstructed by adopting a time-course sensor so as to acquire a time-course sample signal set and a time-course sample coordinate set;
in the step 1, the method for acquiring the time-course sample signal set and the time-course sample coordinate set is as follows:
step 1.1: according to the application scene, determining the range of the flow field to be reconstructed, and randomly selecting a plurality of sample measuring points in the range; specifically, the range of the flow field to be reconstructed is empirically set by a person skilled in the art; for example, a certain range around a certain object is given by combining experience, and the range of the flow field to be reconstructed is selected reasonably, so that the subsequent prediction accuracy can be improved. The time-course sample coordinates corresponding to the time-course sample signals are positions in a relative coordinate system established by taking a flow field to be reconstructed as a reference; as shown in fig. 2, the range of the flow field to be reconstructed includes flow field measuring points of the upstream and downstream parts of the object, and the measuring points cover important regions of interest;
step 1.2: acquiring a time-course sample signal of the sample measuring point and a time-course sample coordinate corresponding to the time-course sample signal;
step 1.3: forming a time-course sample signal set by a plurality of time-course sample signals; forming a time-course sample coordinate set corresponding to the time-course sample signal set by a plurality of time-course sample coordinates;
specifically, in this embodiment, a wind tunnel test method, a water tunnel test method or a numerical simulation method is optionally adopted, and a time-course sample signal of a sample measurement point is obtained at the sample measurement point through a time-course sensor; selecting a time-course sample signal at a certain sample measuring point, and recording the time-course sample coordinates at the moment; and moving the position of the time-course sensor, acquiring a time-course sample signal of another sample measuring point, and recording the time-course sample coordinates at the moment. And repeating the steps to obtain all time-course sample signals in the flow field to be reconstructed to form a time-course sample signal set, and simultaneously obtaining a corresponding time-course coordinate set. In this example, a specific sample site arrangement is shown in fig. 2, with a total of 7200 sensors arranged,
step 2: constructing a deep learning network model based on full convolution calculation (FCN deep learning network model based on full convolution calculation);
specifically, in the deep learning network model based on full convolution calculation in this embodiment, a set of time-course sample signals and a set of time-course sample coordinates corresponding to the set of time-course sample signals are put into the deep learning network model, so as to obtain predicted time-course signals. The function of the deep learning network model is to extract the number of codes according to the number of the columns, and further obtain the output of the deep learning network model.
The deep learning network model based on the full convolution calculation comprises the following steps: an input layer, a full connection layer, a convolution layer, a coding layer, a deconvolution layer and an output layer;
the input layer comprises a coordinate input layer and a time-course input layer; the training time course sample coordinate set and the training time course sample signal set are respectively input;
the full-connection layer is used for receiving output data of the coordinate input layer to perform full-connection calculation, one side of the full-connection layer is connected with the coordinate input layer, and the other side of the full-connection layer is connected with the coding layer;
the convolution layer is used for receiving the output data of the time-course input layer and performing convolution calculation; one side of the convolution layer is connected with the time-interval input layer, and the other side of the convolution layer is connected with the coding layer;
the coding layer is used for receiving output data of the full-connection layer and the convolution layer to carry out coding operation, and the other side of the coding layer is connected with the deconvolution layer;
the deconvolution layer is used for receiving the output of the coding layer to carry out deconvolution operation, and the deconvolution layer is connected with the output layer.
Specifically, as shown in fig. 3, the input layer comprises a coordinate input layer and a time-course input layer; the one-dimensional training time-course sample signal set and the corresponding training time-course sample coordinate set are input through the time-course input layer and the coordinate input layer, respectively. The output of the coordinate input layer is passed through fully connected layer 1, and the output of fully connected layer 1 is passed through fully connected layer 2; the output of fully connected layer 2 is fed to the coding layer. In parallel, the training time-course sample signals from the time-course input layer are convolved by convolution layer 1, the output of convolution layer 1 is convolved again by convolution layer 2, and the output of convolution layer 2 is fed to the coding layer. The coding layer receives the output data of fully connected layer 2 and of convolution layer 2 simultaneously; after the coding operation, the output of the coding layer is deconvolved by deconvolution layer 1, the output of deconvolution layer 1 is deconvolved again by deconvolution layer 2, and finally the output layer produces the output of the deep learning network model, namely the predicted time-course sample signal;
specifically, the calculation methods of the full connection layer, the convolution layer, the coding layer and the deconvolution layer in this embodiment are all existing, and only the function of obtaining the output result of the output layer through the input data of the input layer according to the network structure related to the present invention is realized.
Step 3: training the deep learning network model according to the time-course sample signal set and the time-course sample coordinate set to obtain an optimal deep learning network model;
preferably, step 3.1: acquiring a training set and a testing set according to the time-course sample signal set and the time-course sample coordinate set;
specifically, a test time-course sample signal set and a training time-course sample signal set are obtained through the time-course sample signal set, and a test time-course sample coordinate set corresponding to the test time-course sample signal set and a training time-course sample coordinate set corresponding to the training time-course sample signal set are obtained through the time-course sample coordinate set;
in the example, 50% of samples in 7200 samples are randomly selected as training time course sample signal sets and used as time course input layer variables of the model; the corresponding training time course sample coordinate set is used as a coordinate input layer variable of the model;
step 3.2: inputting a training time course sample signal set in the training set to the time course input layer, and simultaneously inputting a training time course sample coordinate set in the training set to a coordinate input layer, training the deep learning network model, and obtaining a trained deep learning network model; specifically, the relation between the time-course sample coordinate set and the time-course sample signal set is obtained, and all parameters representing the functional relation between the time-course sample coordinate set and the time-course sample signal set are parameters of the deep learning network model.
Step 3.3: and testing the trained deep learning network model through the test time course sample signal set and the test time course sample coordinate set:
if the output of the trained deep learning network model converges, the trained deep learning network model at the moment is an optimal deep learning network model; otherwise, repeating the steps 3.1-3.3.
Specifically, the test time-course sample coordinate set of the test set is input into the trained deep learning network model, and the predicted time-course sample signals it outputs are compared with the real time-course signals of the test set to judge whether the model has converged;
preferably, in step 3.3, a loss function for determining whether the output of the trained deep learning network model converges is as follows:
$$L=\frac{1}{K\,R}\sum_{n=1}^{K}\sum_{s=1}^{R}\left(\hat{y}_{n}(s)-y_{n}(s)\right)^{2}\tag{1}$$

where $\hat{y}_{n}(s)$ is the output of the deep learning network model at the n-th sample measuring point at time s, $y_{n}(s)$ is the real time-course sample signal of the n-th sample measuring point at time s, K represents the number of samples in the sample set, n represents the sample number, s represents the time of the time-course signal in the samples, and R represents the length of the time-course signal;
specifically, the loss function is a target of calculation based on a deep learning network model of full convolution calculation, and the deep learning network model is converged through forward iteration and reverse iteration to obtain a trained deep learning network model; in the process, the deep learning network model can automatically extract the association features between the time-course sample signal set and the time-course sample coordinate set, perform feature dimension reduction and reconstruction, and simultaneously establish the relationship between the time-course sample signal and the sample measuring point coordinates; when L is smaller than a set value, acquiring characteristics of a relation between time-course sample signal sets through a time-course sample coordinate set as characteristic parameters of a deep learning network model; reducing the difference between the reconstruction identification result and the true value;
in this example, a total of 450 iterations were performed, the results of which are shown in FIG. 4. The loss function is small enough (1 e-4), the precision requirement is met, and the training of the deep learning network model is finished;
specifically, the reverse iteration is in the prior art, specifically, the error of the existing model is continuously corrected to the front end of the model, so that the reconstruction accuracy of the new model is better, the new model is continuously repeated, a better model can be obtained, and the iteration needs to be repeated. Firstly, forward iteration is carried out to obtain the difference between a reconstruction result and a true value, and then reverse iteration is carried out to reduce the existing difference; the iteration is repeated until the difference is sufficiently small. Typically 300 iterations. The model comprises two parts, one part is shown in fig. 3 and corresponds to a frame; another part is the parameters in this framework, which are to be obtained by the iteration described above, according to the specific example, i.e. the feature parameters of the deep-learning network model. The iterative acquisition of these parameters is based on a set of data, i.e. the input of a model, after fixing the parameters, the model shown in fig. 3 is used to reconstruct the time-course curve of the unknown points in combination with these parameters.
Step 4: acquiring the time-course signals at all measuring points of the flow field to be reconstructed according to the optimal deep learning network model, so as to reconstruct the flow field to be reconstructed and obtain a high-spatial-resolution flow field.
Specifically, the coordinates of all measuring points in the flow field to be reconstructed are input into the optimal deep learning network model, whereby the time-course signals at all measuring points in the flow field to be reconstructed are obtained. From the time-course signals of these measuring points, a flow field with high spatial resolution is obtained and the reconstruction of the flow field to be reconstructed is realized.
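A reconstruction sketch following the description above, continuing the earlier sketches: the trained model is queried on a dense grid of unmeasured positions and the value at one time index is taken to form an instantaneous high-resolution snapshot. The grid size is hypothetical, and the zero placeholder fed to the time-course input branch is an assumption of this sketch, since the description feeds only coordinates at this stage:
```python
import numpy as np
import torch

nx, ny = 500, 200                                         # hypothetical grid: 100 000 query points
xs = np.linspace(x_range[0], x_range[1], nx)
ys = np.linspace(y_range[0], y_range[1], ny)
query = np.stack(np.meshgrid(xs, ys), axis=-1).reshape(-1, 2).astype(np.float32)

model.eval()
chunks = []
with torch.no_grad():
    for batch in torch.split(torch.as_tensor(query), 2048):
        placeholder = torch.zeros(batch.shape[0], 1, tr_s.shape[-1])
        chunks.append(model(batch, placeholder).squeeze(1))
reconstructed = torch.cat(chunks).numpy()                 # (100000, R) predicted time courses

s = 100                                                   # arbitrary time index
snapshot = reconstructed[:, s].reshape(ny, nx)            # instantaneous high-resolution field
```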
In this embodiment, from a small number of time-course sample signals in the flow field to be reconstructed, flow field data at many more positions, namely the time-course signals at all measuring points in the flow field to be reconstructed, are obtained; a flow field of high-resolution time-course signals is thus obtained and the reconstruction of the flow field is realized.
In another embodiment of the present invention, 10000 points are randomly generated in the flow field to be reconstructed and flow field data are predicted at 100000 points: the coordinates of the 100000 points are taken as the coordinate input layer of the model, and the model output at these positions, namely the flow field time-course data predicted by the optimal deep learning network model, is obtained;
in this embodiment, the real data and model prediction data results of 6 measuring points are randomly selected, as shown in fig. 5, and meanwhile, the loss between the prediction time-course sample signal and the real time-course sample signal output by the optimal deep learning network model at each measuring point position is calculated, the calculation method is the same as that of formula (1), the error reaches 1e-4 orders of magnitude, and the result shows that the model successfully predicts the flow field;
the 100000 data are taken as values at the same time, the known data are shown in fig. 6, the flow field transient cloud image after high-resolution reconstruction is shown in fig. 7, the real cloud image is shown in fig. 8, and the high-resolution reconstruction is successfully performed by the method. The method can reconstruct and obtain the flow field data with high spatial resolution according to a few flow field time-course information and has high accuracy.
After the step 4, the method further comprises the following step:
step 5: and acquiring a time-course signal at any position in the fluid according to the reconstructed flow field so as to analyze the flow field.
From the reconstructed flow field, time-course signals can be acquired at any position where measuring points cannot be arranged, providing time-course data for subsequent flow field analysis and control, such as aerodynamic signals of the flow around a wing, water flow signals around a ship, or wake signals of a wind turbine. For example, for the flow field near an aircraft the sensors are insufficient and only 7200 points are measured; 5000 training points are used to obtain the parameters of the deep learning network model, and the remaining 2200 points are used to test the deep learning network model to obtain the optimal deep learning network model. Because these 2200 points did not participate in the training, accurate prediction at them indicates that the model is successful. The data at a vast number of unmeasured points can then be predicted with the successful model, yielding the desired large data set. The method can be used for aerodynamic shape optimization of aircraft, flow field detection and control, and the like.
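As a usage example, a single position where no sensor could be placed can be queried in the same way (the coordinates and the zero placeholder are again assumptions of this sketch):
```python
import torch

xy = torch.tensor([[3.5, 0.25]])                          # hypothetical point of interest
with torch.no_grad():
    time_course = model(xy, torch.zeros(1, 1, tr_s.shape[-1])).squeeze().numpy()
# `time_course` is the predicted time-course signal at that position, available for
# subsequent flow field analysis and control.
```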
The beneficial effects are that:
(1) The invention reconstructs the flow field from one-dimensional time-course signals, unlike traditional image processing methods that perform feature reconstruction on image data; the required input data volume is therefore small, and the constructed FCN deep learning network model based on full convolution calculation is fast to compute, realizing low-dimensional characterization and time-course reconstruction of the flow field to be reconstructed;
(2) The FCN deep learning network model based on full convolution calculation is used to extract and classify features of the time-course data, so the time-sequence information of the samples is preserved; the method has high identification accuracy and is a new high-precision method;
(3) The FCN deep learning network model based on full convolution calculation can predict time-course signals at any position within the range of the flow field, greatly improves the spatial resolution of the time-course signals, does not need to solve the fluid dynamics equations, and is fast to compute.
(4) The method has low dependence on the type of the known data, requires neither classification labels nor feature labels for the samples, can work directly on measured data, and is convenient for engineering application.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (4)

1. A time-course data-oriented high-spatial-resolution flow field reconstruction method is characterized by comprising the following steps:
step 1: acquiring a time-course sample signal and a time-course sample coordinate corresponding to the time-course sample signal in a flow field to be reconstructed so as to acquire a time-course sample signal set and a time-course sample coordinate set;
step 2: constructing a deep learning network model based on full convolution calculation;
the deep learning network model includes: an input layer, a full connection layer, a convolution layer, a coding layer, a deconvolution layer and an output layer;
the input layer comprises a coordinate input layer and a time-course input layer, which are used to input the training time-course sample coordinate set and the training time-course sample signal set, respectively;
the full-connection layer is used for receiving output data of the coordinate input layer to perform full-connection calculation, one side of the full-connection layer is connected with the coordinate input layer, and the other side of the full-connection layer is connected with the coding layer;
the convolution layer is used for receiving the output data of the time-course input layer and performing convolution calculation; one side of the convolution layer is connected with the time-course input layer, and the other side of the convolution layer is connected with the coding layer;
the coding layer is used for receiving output data of the full-connection layer and the convolution layer to carry out coding operation, and the other side of the coding layer is connected with the deconvolution layer;
the deconvolution layer is used for receiving the output of the coding layer to carry out deconvolution operation, and is connected with the output layer;
step 3: acquiring an optimal deep learning network model according to the time-course sample signal set, the time-course sample coordinate set and the deep learning network model based on full convolution calculation;
and 4, acquiring time-course signals of all measuring points of the flow field to be reconstructed according to the optimal deep learning network model so as to reconstruct the flow field to be reconstructed.
2. The method for reconstructing a high spatial resolution flow field for time-course data according to claim 1, wherein in step 1, the method for acquiring a time-course sample signal set and a time-course sample coordinate set is as follows:
step 1.1: determining the range of a flow field to be reconstructed, and randomly selecting a plurality of sample measuring points in the range;
step 1.2: acquiring a time-course sample signal of the sample measuring point and a time-course sample coordinate corresponding to the time-course sample signal;
step 1.3: forming a time-course sample signal set by a plurality of time-course sample signals; and forming a time-course sample coordinate set corresponding to the time-course sample signal set by a plurality of time-course sample coordinates.
3. The high spatial resolution flow field reconstruction method for time-course data according to claim 1, wherein in the step 3, the method for obtaining the optimal deep learning network model is as follows:
step 3.1: acquiring a training set and a testing set according to the time-course sample signal set and the time-course sample coordinate set;
step 3.2: inputting a training time course sample signal set in the training set to the time course input layer, and simultaneously inputting a training time course sample coordinate set in the training set to a coordinate input layer, training the deep learning network model, and obtaining a trained deep learning network model;
step 3.3: and testing the trained deep learning network model through the test time course sample signal set and the test time course sample coordinate set:
if the output of the trained deep learning network model converges, the trained deep learning network model at the moment is an optimal deep learning network model; otherwise, repeating the steps 3.1-3.3.
4. The method for reconstructing a high spatial resolution flow field for time-course data according to claim 3, wherein in step 3.3, a loss function for determining whether the output of the trained deep learning network model converges is as follows:
$$L=\frac{1}{K\,R}\sum_{n=1}^{K}\sum_{s=1}^{R}\left(\hat{y}_{n}(s)-y_{n}(s)\right)^{2}\tag{1}$$

where $\hat{y}_{n}(s)$ is the output of the deep learning network model at the n-th sample measuring point at time s, $y_{n}(s)$ is the real time-course sample signal of the n-th sample measuring point at time s, K represents the number of samples in the sample set, n represents the sample number, s represents the time of the time-course signal in the samples, and R represents the length of the time-course signal.
CN202211177018.XA 2022-09-26 2022-09-26 High-spatial-resolution flow field reconstruction method for time-course data Active CN115455838B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211177018.XA CN115455838B (en) 2022-09-26 2022-09-26 High-spatial-resolution flow field reconstruction method for time-course data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211177018.XA CN115455838B (en) 2022-09-26 2022-09-26 High-spatial-resolution flow field reconstruction method for time-course data

Publications (2)

Publication Number Publication Date
CN115455838A CN115455838A (en) 2022-12-09
CN115455838B (en) 2023-09-01

Family

ID=84307474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211177018.XA Active CN115455838B (en) 2022-09-26 2022-09-26 High-spatial-resolution flow field reconstruction method for time-course data

Country Status (1)

Country Link
CN (1) CN115455838B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116127844B (en) * 2023-02-08 2023-10-31 大连海事大学 Flow field time interval deep learning prediction method considering flow control equation constraint


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020093042A1 (en) * 2018-11-02 2020-05-07 Deep Lens, Inc. Neural networks for biomedical image analysis
CN109800516A (en) * 2019-01-24 2019-05-24 电子科技大学 A kind of porous material flow field model building method based on DCGAN
CN110222828A (en) * 2019-06-12 2019-09-10 西安交通大学 A kind of Unsteady Flow method for quick predicting based on interacting depth neural network
CN111027626A (en) * 2019-12-11 2020-04-17 西安电子科技大学 Flow field identification method based on deformable convolution network
CN111476572A (en) * 2020-04-09 2020-07-31 财付通支付科技有限公司 Data processing method and device based on block chain, storage medium and equipment
CN111932239A (en) * 2020-09-18 2020-11-13 腾讯科技(深圳)有限公司 Service processing method, device, node equipment and storage medium
CN113822201A (en) * 2021-09-24 2021-12-21 大连海事大学 Deep learning method for underwater object shape recognition based on flow field velocity component time course
CN113901927A (en) * 2021-10-12 2022-01-07 大连海事大学 Underwater object shape recognition method based on flow field pressure time course

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Characterization method of flow characteristics in complex flow fields based on time-course deep learning; Zhan Qingliang et al.; Acta Physica Sinica; pp. 1-13 *

Also Published As

Publication number Publication date
CN115455838A (en) 2022-12-09

Similar Documents

Publication Publication Date Title
CN110222828B (en) Unsteady flow field prediction method based on hybrid deep neural network
CN112099110B (en) Ocean internal wave forecasting method based on machine learning and remote sensing data
Sun et al. Development of a physics-informed doubly fed cross-residual deep neural network for high-precision magnetic flux leakage defect size estimation
CN113568055B (en) Aviation transient electromagnetic data inversion method based on LSTM network
Wang et al. Data-driven CFD modeling of turbulent flows through complex structures
CN115455838B (en) High-spatial-resolution flow field reconstruction method for time-course data
CN110633790A (en) Method and system for measuring residual oil quantity of airplane oil tank based on convolutional neural network
CN112180369B (en) Depth learning-based sea surface wind speed inversion method for one-dimensional synthetic aperture radiometer
CN109085643A (en) The early substep joint inversion method to wave
CN111611541B (en) Method and system for calculating data-free area precipitation data based on Copula function
CN111090907B (en) Flight test transition judgment method
CN113901927B (en) Underwater object shape recognition method based on flow field pressure time course
CN111707439A (en) Hyperbolic fitting method for compressible fluid turbulence measurement test data
Güemes et al. Super-resolution GANs of randomly-seeded fields
CN113642255A (en) Photovoltaic power generation power prediction method based on multi-scale convolution cyclic neural network
CN115510732B (en) Shelter infrared characteristic simulation rapid algorithm based on deep learning
CN113984880B (en) Method and device for generating three-dimensional profile for pipeline metal loss defect
CN112749470B (en) Layout optimization fitting method for structural deformation sensor
CN115546498B (en) Flow field time-varying data compression storage method based on deep learning
Bai et al. On the efficiency of a cfd-based full convolution neural network for the postprocessing of field data
Güemes Jiménez et al. Super-resolution generative adversarial networks of randomly-seeded fields
CN117454807B (en) Multi-scale CFD numerical simulation method based on optical equipment protective cover
CN116595381B (en) Reservoir layered water temperature simulation method and system
CN116432556A (en) Wing surface pressure reconstruction method, electronic equipment and storage medium
CN108595484B (en) Marine atmosphere waveguide data acquisition and visualization processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant