CN115455838A - Time-course data-oriented high spatial resolution flow field reconstruction method - Google Patents
- Publication number
- CN115455838A (application number CN202211177018.XA)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
- G06F30/28—Design optimisation, verification or simulation using fluid dynamics, e.g. using Navier-Stokes equations or computational fluid dynamics [CFD]
- G06N3/08—Learning methods
- G06F2113/08—Fluids
- G06F2119/14—Force analysis or force optimisation, e.g. static or dynamic forces
Abstract
The invention discloses a time-course-data-oriented high-spatial-resolution flow field reconstruction method comprising the following steps: acquiring time-course sample signals in the flow field to be reconstructed, together with the corresponding time-course sample coordinates, to form a time-course sample signal set and a time-course sample coordinate set; constructing a deep learning network model based on full convolution calculation; obtaining the optimal deep learning network model; and calculating the time-course signals at all unknown measurement points of the flow field so as to reconstruct it. The method selects a limited number of sample measuring points in the flow field to be reconstructed, acquires their one-dimensional time-course sample signals, and obtains the time-course signal of any measuring point in the flow field through the optimal deep learning network model, greatly reducing the data-acquisition requirement. The required input data volume is small, the data type is simple, the model preserves the time-sequence information of the sample measuring points, and the method is suited to high-precision reconstruction of complex unsteady flow fields.
Description
Technical Field
The invention relates to the field of high-resolution generation and reconstruction of flow fields, and in particular to a high-spatial-resolution flow field reconstruction method oriented to time-course data.
Background
Accurate numerical reconstruction of the turbulent flow field is one of the frontier problems of fluid mechanics that urgently needs to be solved. At high Reynolds numbers, however, the flow field contains flow structures of many different scales, its characteristics are very complex, and reconstruction is difficult. Reduced-order models of the flow field, such as proper orthogonal decomposition and dynamic mode decomposition, have long been effective tools for studying flow reconstruction. However, because nonlinear turbulence characteristics are difficult to describe completely with linear transformations, such matrix-decomposition-based methods face great difficulty on strongly nonlinear turbulence problems.
Deep learning is also a research hotspot for flow-field characterization models, for example modal decomposition of flow-field snapshots with two-dimensional convolutional neural networks, and autoencoder models of unsteady flow. Such studies make it possible to obtain spatio-temporally high-resolution results from partial data or low-resolution data. However, no matter which method is used to associate sparse measurement points with whole-field data, the model requires high-resolution flow-field snapshots as input for supervised learning during training. In actual experiments, whole-field snapshots are often difficult to obtain, and what is available is mostly time-course signals at single points, which severely limits the application of such models to high-resolution reconstruction of turbulence.
Disclosure of Invention
The invention provides a time-course-data-oriented high-spatial-resolution flow field reconstruction method in order to overcome the above technical problems.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a time-course data-oriented high spatial resolution flow field reconstruction method comprises the following steps:
step 1: acquiring a time-course sample signal in a flow field to be reconstructed and a time-course sample coordinate corresponding to the time-course sample signal to acquire a time-course sample signal set and a time-course sample coordinate set;
step 2: constructing a deep learning network model based on full convolution calculation;
step 3: acquiring an optimal deep learning network model according to the time-course sample signal set, the time-course sample coordinate set, and the deep learning network model based on full convolution calculation;
step 4: acquiring time-course signals of all measuring points of the flow field to be reconstructed according to the optimal deep learning network model, so as to reconstruct the flow field.
Further, in step 1, the method for acquiring the time-course sample signal set and the time-course sample coordinate set is as follows:
step 1.1: determining the range of a flow field to be reconstructed, and randomly selecting a plurality of sample measuring points in the flow field;
step 1.2: acquiring a time course sample signal of the sample measuring point and a time course sample coordinate corresponding to the time course sample signal;
step 1.3: forming a time-course sample signal set by the time-course sample signals; and forming a time course sample coordinate set corresponding to the time course sample signal set by the time course sample coordinates.
Further, the deep learning network model comprises: an input layer, a full connection layer, a convolution layer, an encoding layer, a deconvolution layer and an output layer;
the input layer comprises a coordinate input layer and a time-course input layer, into which the training time-course sample coordinate set and the training time-course sample signal set are respectively input;
the full connection layer is used for receiving the output data of the coordinate input layer and performing full connection calculation, one side of the full connection layer is connected with the coordinate input layer, and the other side of the full connection layer is connected with the coding layer;
the convolution layer is used for receiving the output data of the time-course input layer and carrying out convolution calculation; one side of the convolution layer is connected with the time-course input layer, and the other side is connected with the coding layer;
the coding layer is used for receiving the output data of the full-connection layer and the convolution layer to carry out coding operation, and the other side of the coding layer is connected with the deconvolution layer;
the deconvolution layer is used for receiving the output of the coding layer and carrying out deconvolution operation, and the deconvolution layer is connected with the output layer.
Further, in step 3, the method for obtaining the optimal deep learning network model includes:
step 3.1: acquiring a training set and a test set according to the time course sample signal set and the time course sample coordinate set;
step 3.2: inputting the training time course sample signal set in the training set to the time course input layer, inputting the training time course sample coordinate set in the training set to the coordinate input layer, training the deep learning network model, and obtaining the trained deep learning network model;
step 3.3: testing the trained deep learning network model through the test time course sample signal set and the test time course sample coordinate set:
if the output of the trained deep learning network model converges, the trained model is the optimal deep learning network model; otherwise, steps 3.1 to 3.3 are repeated.
Further, in step 3.3, a loss function for determining whether the output of the trained deep learning network model converges is as follows:
L = (1/(K·R)) Σ_{n=1}^{K} Σ_{s=1}^{R} ( ŷ_n(s) − y_n(s) )²   (1)

wherein ŷ_n(s) is the output data of the deep network model at the nth sample measuring point at time s, and y_n(s) is the real sample time-course signal of the nth sample measuring point at time s; K represents the number of samples in the sample set, n the sample number, s the time of the time-course signal in the sample, and R the length of the time-course signal.
Advantageous effects: the time-course-data-oriented high-spatial-resolution flow field reconstruction method selects a limited number of sample measuring points in the flow field to be reconstructed, acquires their one-dimensional time-course sample signals, and obtains the time-course signal of any measuring point in the flow field through the optimal deep learning network model. This greatly reduces the data-acquisition requirement and makes the method practically operable. The required input data volume is small; the full-convolution deep learning network model performs feature extraction and classification on the time-course sample signals while preserving the time-sequence information of the sample measuring points, so the reconstruction precision is high.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can obtain other drawings based on the drawings without inventive labor.
FIG. 1 is a flow chart of a time-course data-oriented high spatial resolution flow field reconstruction method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of arrangement of flow field time-course measuring points in the embodiment of the present invention;
FIG. 3 is a schematic diagram of a deep learning model according to the present invention;
FIG. 4 is an error curve of a model in an embodiment of the invention;
FIG. 5 shows the results of real samples and model reconstructed samples in an embodiment of the present invention;
FIG. 6 is a graph of known flow field information at a given instant in an embodiment of the present invention;
FIG. 7 illustrates a model reconstructed flow field information at a certain instant in an embodiment of the present invention;
fig. 8 shows real flow field information at a certain instant in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
The embodiment provides a time-course-data-oriented high spatial resolution flow field reconstruction method, as shown in fig. 1, including the following steps:
in step 1, the method for acquiring the time course sample signal set and the time course sample coordinate set includes the following steps:
step 1.1: determining the range of the flow field to be reconstructed according to the application scene, and randomly selecting a number of sample measuring points within it. Specifically, the range of the flow field to be reconstructed is set empirically by a person skilled in the art, for example as a certain region around an object of interest; choosing a reasonable range improves the subsequent prediction precision. The time-course sample coordinate corresponding to a time-course sample signal is a position in a relative coordinate system established with the flow field to be reconstructed as reference. As shown in fig. 2, the range of the flow field to be reconstructed includes measuring points upstream and downstream of the object, and the measuring points cover the region of primary interest;
step 1.2: acquiring a time course sample signal of the sample measuring point and a time course sample coordinate corresponding to the time course sample signal;
step 1.3: forming a time-course sample signal set by a plurality of time-course sample signals; forming a time course sample coordinate set corresponding to the time course sample signal set by the time course sample coordinates;
Specifically, in this embodiment any one of a wind-tunnel test, a water-tunnel test, or a numerical simulation is used, and the time-course sample signal of a sample measuring point is obtained there by a time-course sensor. A time-course sample signal is recorded at a certain sample measuring point together with its time-course sample coordinate; the sensor is then moved, the time-course sample signal of another measuring point is acquired, and its coordinate is recorded. Repeating these steps yields all time-course sample signals in the flow field to be reconstructed, forming the time-course sample signal set and, simultaneously, the corresponding time-course sample coordinate set. In this example, the arrangement of sample measuring points is shown in fig. 2; a total of 7200 sensors are arranged.
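The sampling procedure above can be sketched as follows. The domain, point count, and signal length are illustrative stand-ins (the embodiment uses 7200 measuring points), and random noise replaces real sensor readings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in dimensions: 7200 sample measuring points as in
# the embodiment, each with a 1-D time-course signal of length R
# recorded by a movable time-course sensor.
N_POINTS, R = 7200, 256

# Coordinates in a relative frame fixed to the flow field to be
# reconstructed (a unit square stands in for the real domain).
coords = rng.uniform(0.0, 1.0, size=(N_POINTS, 2))

# Placeholder for the measured signals; in practice these come from a
# wind-tunnel test, a water-tunnel test, or a numerical simulation.
signals = rng.standard_normal((N_POINTS, R))

# The two sets are paired row by row: signals[i] was recorded at coords[i].
print(coords.shape, signals.shape)
```

Moving one sensor point by point, as the embodiment describes, simply fills these two arrays one row at a time.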
step 2: constructing a deep learning network model based on full convolution calculation (FCN deep learning network model based on full convolution calculation);
specifically, in the deep learning network model based on the full convolution calculation in this embodiment, the time course sample signal set and the time course sample coordinate set corresponding to the time course sample signal set are put into the deep learning network model, so as to obtain the predicted time course signal. The deep learning network model has the function that the codes of the column number can be extracted according to the column number, and then the output of the deep learning network model is obtained.
The deep learning network model based on full convolution calculation comprises: an input layer, a full connection layer, a convolution layer, a coding layer, a deconvolution layer and an output layer;
the input layer comprises a coordinate input layer and a time-course input layer, into which the training time-course sample coordinate set and the training time-course sample signal set are respectively input;
the full connection layer is used for receiving the output data of the coordinate input layer and performing full connection calculation, one side of the full connection layer is connected with the coordinate input layer, and the other side of the full connection layer is connected with the coding layer;
the convolution layer is used for receiving the output data of the time course input layer and carrying out convolution calculation; one side of the convolution layer is connected with the time-course input layer, and the other side of the convolution layer is connected with the coding layer;
the coding layer is used for receiving the output data of the full-connection layer and the convolution layer to carry out coding operation, and the other side of the coding layer is connected with the deconvolution layer;
the deconvolution layer is used for receiving the output of the coding layer and carrying out deconvolution operation, and the deconvolution layer is connected with the output layer.
Specifically, as shown in fig. 3, the input layer comprises a coordinate input layer and a time-course input layer, through which the one-dimensional training time-course sample signal set and the corresponding training time-course sample coordinate set are input respectively. The output of the coordinate input layer passes through full connection layer 1 and then full connection layer 2, whose output is fed to the coding layer. In parallel, the training time-course sample signals from the time-course input layer pass through convolution layer 1 and then convolution layer 2, whose output is also fed to the coding layer. The coding layer thus receives the output data of full connection layer 2 and of convolution layer 2 simultaneously; after the coding operation, its output is processed by deconvolution layer 1 and then deconvolution layer 2, and the output layer finally emits the network output, namely the predicted time-course sample signal.
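A minimal PyTorch sketch of the Fig. 3 architecture. The layer widths and kernel sizes are illustrative, and since the text does not specify how the two branches meet at the coding layer, channel concatenation is assumed:

```python
import torch
import torch.nn as nn

class FlowFieldFCN(nn.Module):
    """Sketch: a coordinate branch (two full connection layers) and a
    signal branch (two convolution layers) merge in a coding layer,
    followed by two deconvolution layers and an output layer. All
    sizes are assumptions, not taken from the patent."""

    def __init__(self, signal_len: int = 256):
        super().__init__()
        assert signal_len % 4 == 0
        self.signal_len = signal_len
        # Coordinate branch: full connection layers 1 and 2.
        self.fc1 = nn.Linear(2, 64)
        self.fc2 = nn.Linear(64, 32 * (signal_len // 4))
        # Time-course branch: convolution layers 1 and 2 (stride 2 each).
        self.conv1 = nn.Conv1d(1, 16, kernel_size=4, stride=2, padding=1)
        self.conv2 = nn.Conv1d(16, 32, kernel_size=4, stride=2, padding=1)
        # Coding layer mixing both branches (channels concatenated).
        self.coding = nn.Conv1d(64, 32, kernel_size=3, padding=1)
        # Deconvolution layers 1 and 2 restore the original length.
        self.deconv1 = nn.ConvTranspose1d(32, 16, kernel_size=4, stride=2, padding=1)
        self.deconv2 = nn.ConvTranspose1d(16, 8, kernel_size=4, stride=2, padding=1)
        # Output layer producing the predicted time-course signal.
        self.out = nn.Conv1d(8, 1, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, xy: torch.Tensor, sig: torch.Tensor) -> torch.Tensor:
        b = xy.shape[0]
        c = self.act(self.fc2(self.act(self.fc1(xy))))        # (B, 32*L/4)
        c = c.view(b, 32, self.signal_len // 4)               # (B, 32, L/4)
        s = self.act(self.conv2(self.act(self.conv1(sig))))   # (B, 32, L/4)
        z = self.act(self.coding(torch.cat([c, s], dim=1)))   # (B, 32, L/4)
        y = self.act(self.deconv2(self.act(self.deconv1(z)))) # (B, 8, L)
        return self.out(y)                                    # (B, 1, L)

model = FlowFieldFCN(signal_len=256)
xy = torch.zeros(4, 2)        # four sample-point coordinates
sig = torch.zeros(4, 1, 256)  # their time-course sample signals
pred = model(xy, sig)
print(pred.shape)  # torch.Size([4, 1, 256])
```

The stride-2 convolutions halve the signal length twice and the two transposed convolutions restore it, so the output matches the input time-course length.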
specifically, the calculation methods of the full connection layer, the convolutional layer, the coding layer, and the deconvolution layer in the present embodiment are all conventional, and here, only the function of obtaining the output result of the output layer through the input data of the input layer is realized according to the network structure according to the present invention.
And step 3: training the deep learning network model according to the time course sample signal set and the time course sample coordinate set to obtain an optimal deep learning network model;
preferably, step 3.1: acquiring a training set and a test set according to the time course sample signal set and the time course sample coordinate set;
specifically, a test time course sample signal set and a training time course sample signal set are obtained through the time course sample signal set, and a test time course sample coordinate set corresponding to the test time course sample signal set and a training time course sample coordinate set corresponding to the training time course sample signal set are obtained through the time course sample coordinate set;
in the example, 50% of 7200 samples are randomly selected as a training time interval sample signal set and used as a time interval input layer variable of the model; taking a corresponding training time course sample coordinate set as a coordinate input layer variable of the model;
step 3.2: inputting the training time-course sample signal set to the time-course input layer and the training time-course sample coordinate set to the coordinate input layer, and training the deep learning network model to obtain the trained deep learning network model. Specifically, the relationship between the time-course sample coordinate set and the time-course sample signal set is learned, and the parameters representing this functional relationship are the parameters of the deep learning network model.
Step 3.3: testing the trained deep learning network model through the test time course sample signal set and the test time course sample coordinate set:
if the output of the trained deep learning network model converges, the trained model is the optimal deep learning network model; otherwise, steps 3.1 to 3.3 are repeated.
Specifically, the test time-course sample coordinate set in the test set is input to the trained deep learning network model, and the predicted time-course sample signals output by the model are compared with the real time-course signals in the test set to judge whether the model has converged;
preferably, in step 3.3, the loss function for determining whether the output of the trained deep learning network model converges is as follows:
L = (1/(K·R)) Σ_{n=1}^{K} Σ_{s=1}^{R} ( ŷ_n(s) − y_n(s) )²   (1)

wherein ŷ_n(s) is the output data of the deep network model at the nth sample measuring point at time s, and y_n(s) is the real sample time-course signal of the nth sample measuring point at time s; K represents the number of samples in the sample set, n the sample number, s the time of the time-course signal in the sample, and R the length of the time-course signal;
specifically, the loss function is a target calculated by a deep learning network model based on full convolution calculation, and the deep learning network model is converged through forward iteration and reverse iteration to obtain a trained deep learning network model; in the process, the deep learning network model can automatically extract the correlation characteristics between the time course sample signal set and the time course sample coordinate set, perform characteristic dimension reduction and reconstruction, and establish the relationship between the time course sample signal and the sample measuring point coordinates; when L is smaller than a set value, acquiring characteristics of the relation between time-course sample signal sets through a time-course sample coordinate set as characteristic parameters of a deep learning network model; reducing the difference between the reconstruction recognition result and the true value;
in this example, a total of 450 iterations were performed, with the results shown in FIG. 4. The loss function is small enough (1 e-4) to meet the precision requirement, so that the training of the deep learning network model is completed;
specifically, reverse iteration is the prior art, and specifically, an error of an existing model is continuously corrected to the front end of the model, so that a new model has better reconstruction accuracy and can be obtained continuously by repeating the process, and iteration needs to be repeated. Firstly, forward iteration is carried out to obtain the difference between a reconstruction result and a true value, and then reverse iteration is carried out to reduce the existing difference; iterations are repeated until the difference is sufficiently small. Generally 300 iterations are sufficient. The model comprises two parts, one part is shown in figure 3 and is equivalent to a frame; the other part is parameters in the framework, which are obtained by the iteration according to the specific example, namely the characteristic parameters of the deep learning network model. The iterative acquisition of these parameters is performed based on a set of data, i.e. the input of the model, and after fixing the parameters, the time course curve of the unknown measurement point is reconstructed by using the model shown in fig. 3 and combining the parameters.
Step 4: acquiring the time-course signals of all measuring points of the flow field to be reconstructed according to the optimal deep learning network model, so as to reconstruct the flow field and obtain a high-spatial-resolution flow field.
Specifically, the coordinates of all measuring points in the flow field to be reconstructed are input into the optimal deep learning network model, yielding the time-course signals of every measuring point. From these time-course signals, a flow field with high spatial resolution is obtained, realizing the reconstruction of the flow field to be reconstructed.
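Step 4 can be sketched as follows; the untrained stand-in model and the 100 x 100 query grid are hypothetical, standing in for the optimal model and the measuring points of interest:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the optimal trained model: any module mapping a
# measuring-point coordinate to a time-course signal of length R.
R = 32
model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, R))

# Coordinates of all measuring points of the flow field to be
# reconstructed: here a dense 100 x 100 grid over the domain.
xs = torch.linspace(0, 1, 100)
grid = torch.cartesian_prod(xs, xs)          # (10000, 2)

with torch.no_grad():
    recon = model(grid)                      # (10000, R) time-course signals

# Reshape to a spatial field at one instant s, e.g. for a cloud image.
s = 0
snapshot = recon[:, s].reshape(100, 100)
print(grid.shape, recon.shape, snapshot.shape)
```

Slicing the reconstructed signals at one instant, as in the last two lines, is exactly how the transient cloud images of figs. 6-8 are assembled.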
In this embodiment, flow field data at more positions in the flow field to be reconstructed, that is, time-course signals of all measurement points in the flow field to be reconstructed, can be obtained through a small number of time-course sample signals in the flow field to be reconstructed, so that a flow field of a high-resolution time-course signal is obtained, and the reconstruction of the flow field is realized.
In another embodiment of the invention, 100000 points are randomly generated in the flow field to be reconstructed and the flow field data are predicted at these points: the coordinates of the 100000 points are taken as the coordinate input layer of the model to obtain the model outputs at those positions, i.e. the flow-field time-course data predicted by the optimal deep learning network model;
in the embodiment, the results of the real data and the model prediction data of 6 measuring points are randomly selected, as shown in fig. 5, the loss between the prediction time range sample signal and the real time range sample signal output by the optimal deep learning network model at each measuring point position is calculated at the same time, the calculation method is the same as formula (1), the error thereof also reaches 1e-4 orders of magnitude, and the result shows that the model successfully predicts the flow field;
the values of 100000 data at the same time are taken, the known data is shown in fig. 6, the flow field transient cloud image after high resolution reconstruction is shown in fig. 7, and the real cloud image is shown in fig. 8, so that the method successfully performs high resolution reconstruction. The complex flow field in the embodiment can be reconstructed to obtain the flow field data with high spatial resolution according to a small amount of flow field time-course information by adopting the method, and the accuracy is high.
After step 4, the method further comprises:
Step 5: acquiring a time-course signal at any position in the flow from the reconstructed flow field, so as to analyse the flow field.
From the reconstructed flow field, a time-course signal can be obtained at any position where measuring points cannot be arranged, providing time-course data for subsequent flow-field analysis and control, for example aerodynamic signals of wing flow, water-flow signals around ships, or wake signals of wind turbines. Consider the flow field near an aircraft where the sensors are insufficient and only 7200 points are measured: 5000 of them are used for training to obtain the parameters of the deep learning network model, and the remaining 2200 are used for testing to select the optimal deep learning network model. Since the model was never trained on these 2200 points, good predictions there indicate a successful model. The successful model can then predict data at an arbitrarily large number of positions without measuring points, yielding the desired large data set. The method can be used for aerodynamic shape optimization of aircraft, flow-field detection and control, and the like.
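Querying the reconstructed field at a position where no sensor could be placed might look like this nearest-point lookup; all arrays are hypothetical stand-ins for the step-4 output:

```python
import numpy as np

rng = np.random.default_rng(3)

# Reconstructed field: hypothetical time-course signals predicted at a
# dense set of points by the optimal model in step 4.
recon_coords = rng.uniform(size=(10000, 2))
recon_signals = rng.standard_normal((10000, 64))

def signal_at(xy: np.ndarray) -> np.ndarray:
    """Return the reconstructed time-course signal at the reconstruction
    point nearest to xy, e.g. a position where no sensor was placed."""
    i = int(np.argmin(np.linalg.norm(recon_coords - xy, axis=1)))
    return recon_signals[i]

sig = signal_at(np.array([0.5, 0.5]))
print(sig.shape)  # (64,)
```

With a sufficiently dense reconstruction grid, nearest-point lookup suffices; interpolation between neighbouring points is a natural refinement.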
Beneficial effects:
(1) The method reconstructs the flow field from one-dimensional time-course signals, unlike traditional image processing methods that perform feature reconstruction on image data; the required input data volume is therefore small, and the constructed FCN deep learning network model based on full convolution calculation makes computation fast, realizing low-dimensional characterization and time-course reconstruction of the flow field to be reconstructed;
(2) The FCN deep learning network model based on full convolution calculation performs feature extraction and classification on the time-series data while retaining the temporal information of the samples, so the method achieves high identification accuracy;
(3) The FCN deep learning network model based on full convolution calculation can predict the time-course signal at any position within the flow field range, greatly improving the spatial resolution of the time-course signals, and does not need to solve the fluid dynamics equations, so computation is fast;
(4) The method has low dependence on the type of known data: it requires neither classification labels nor feature labels for the samples and can operate directly on measured data, which is convenient for engineering application.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (5)
1. A time-course data-oriented high spatial resolution flow field reconstruction method is characterized by comprising the following steps:
step 1: acquiring a time-course sample signal in a flow field to be reconstructed and time-course sample coordinates corresponding to the time-course sample signal to acquire a time-course sample signal set and a time-course sample coordinate set;
step 2: constructing a deep learning network model based on full convolution calculation;
step 3: acquiring an optimal deep learning network model according to the time-course sample signal set, the time-course sample coordinate set and the deep learning network model based on the full convolution calculation;
step 4: acquiring time-course signals of all measuring points of the flow field to be reconstructed according to the optimal deep learning network model, so as to reconstruct the flow field to be reconstructed.
2. The time-course data-oriented high spatial resolution flow field reconstruction method according to claim 1, wherein in the step 1, the method for acquiring the time-course sample signal set and the time-course sample coordinate set is as follows:
step 1.1: determining the range of a flow field to be reconstructed, and randomly selecting a plurality of sample measuring points in the flow field;
step 1.2: acquiring a time course sample signal of the sample measuring point and a time course sample coordinate corresponding to the time course sample signal;
step 1.3: forming a time-course sample signal set by the time-course sample signals; and forming a time course sample coordinate set corresponding to the time course sample signal set by using a plurality of time course sample coordinates.
3. The time-course data-oriented high spatial resolution flow field reconstruction method according to claim 1, wherein the deep learning network model comprises: an input layer, a full connection layer, a convolution layer, an encoding layer, a deconvolution layer and an output layer;
the input layer comprises a coordinate input layer and a time-course input layer, which are used for inputting the training time-course sample coordinate set and the training time-course sample signal set, respectively;
the full connection layer is used for receiving the output data of the coordinate input layer and performing full connection calculation, one side of the full connection layer is connected with the coordinate input layer, and the other side of the full connection layer is connected with the coding layer;
the convolution layer is used for receiving the output data of the time-course input layer and carrying out convolution calculation; one side of the convolution layer is connected with the time-course input layer, and the other side of the convolution layer is connected with the encoding layer;
the coding layer is used for receiving the output data of the full-connection layer and the convolution layer to carry out coding operation, and the other side of the coding layer is connected with the deconvolution layer;
the deconvolution layer is used for receiving the output of the coding layer and carrying out deconvolution operation, and the deconvolution layer is connected with the output layer.
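The two-branch data flow of claim 3 (coordinates through a full-connection layer, time-course signals through a convolution layer, the two merged in the encoding layer and expanded by the deconvolution layer) can be sketched as a minimal NumPy forward pass. All layer sizes, the ReLU activation, and the use of concatenation as the encoding operation are assumptions; the patent does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

def fully_connected(x, w, b):
    """Full-connection calculation for the coordinate branch (ReLU assumed)."""
    return np.maximum(w @ x + b, 0.0)

def conv1d(x, kernel):
    """'valid' 1-D convolution for the time-course branch."""
    return np.convolve(x, kernel, mode="valid")

def deconv1d(x, kernel, stride=2):
    """Minimal transposed convolution: zero-upsample by `stride`, then convolve."""
    up = np.zeros(len(x) * stride)
    up[::stride] = x
    return np.convolve(up, kernel, mode="full")

# --- hypothetical shapes ---
coord = rng.normal(size=2)     # (x, y) coordinate of one measuring point
signal = rng.normal(size=64)   # time-course sample signal, R = 64

# coordinate input layer -> full connection layer
fc_out = fully_connected(coord, rng.normal(size=(8, 2)), np.zeros(8))

# time-course input layer -> convolution layer
conv_out = conv1d(signal, rng.normal(size=5))        # length 64-5+1 = 60

# encoding layer: merge the two branches (concatenation assumed)
code = np.concatenate([fc_out, conv_out])            # length 68

# encoding layer -> deconvolution layer -> output layer
out = deconv1d(code, rng.normal(size=5), stride=2)
print(out.shape)  # (140,)
```

The design point is that the coordinate branch injects position information into the encoding, so the decoder can emit a position-dependent time-course signal.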
4. The time-course data-oriented high spatial resolution flow field reconstruction method according to claim 1, wherein in the step 3, the method for obtaining the optimal deep learning network model is as follows:
step 3.1: acquiring a training set and a test set according to the time course sample signal set and the time course sample coordinate set;
step 3.2: inputting the training time course sample signal set in the training set to the time course input layer, inputting the training time course sample coordinate set in the training set to the coordinate input layer, training the deep learning network model, and obtaining the trained deep learning network model;
step 3.3: testing the trained deep learning network model through the test time course sample signal set and the test time course sample coordinate set:
if the output of the trained deep learning network model converges, the trained deep learning network model is the optimal deep learning network model; otherwise, steps 3.1 to 3.3 are repeated.
5. The time-course data-oriented high spatial resolution flow field reconstruction method according to claim 4, wherein in the step 3.3, the loss function for judging whether the output of the trained deep learning network model converges is as follows:

$$\mathrm{Loss}=\frac{1}{KR}\sum_{n=1}^{K}\sum_{s=1}^{R}\left(\hat{u}_{n}^{s}-u_{n}^{s}\right)^{2} \tag{1}$$

wherein $\hat{u}_{n}^{s}$ is the output data of the deep network model at the n-th sample measuring point at time s, and $u_{n}^{s}$ is the real sample time-course signal of the n-th sample measuring point at time s; K represents the number of samples in the sample set, n represents the sample number, s represents the time of the time-course signal in the sample, and R represents the length of the time-course signal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211177018.XA CN115455838B (en) | 2022-09-26 | 2022-09-26 | High-spatial-resolution flow field reconstruction method for time-course data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115455838A true CN115455838A (en) | 2022-12-09 |
CN115455838B CN115455838B (en) | 2023-09-01 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116127844A (en) * | 2023-02-08 | 2023-05-16 | 大连海事大学 | Flow field time interval deep learning prediction method considering flow control equation constraint |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020093042A1 (en) * | 2018-11-02 | 2020-05-07 | Deep Lens, Inc. | Neural networks for biomedical image analysis |
CN109800516A (en) * | 2019-01-24 | 2019-05-24 | 电子科技大学 | A kind of porous material flow field model building method based on DCGAN |
CN110222828A (en) * | 2019-06-12 | 2019-09-10 | 西安交通大学 | A kind of Unsteady Flow method for quick predicting based on interacting depth neural network |
CN111027626A (en) * | 2019-12-11 | 2020-04-17 | 西安电子科技大学 | Flow field identification method based on deformable convolution network |
CN111476572A (en) * | 2020-04-09 | 2020-07-31 | 财付通支付科技有限公司 | Data processing method and device based on block chain, storage medium and equipment |
CN111932239A (en) * | 2020-09-18 | 2020-11-13 | 腾讯科技(深圳)有限公司 | Service processing method, device, node equipment and storage medium |
CN113822201A (en) * | 2021-09-24 | 2021-12-21 | 大连海事大学 | Deep learning method for underwater object shape recognition based on flow field velocity component time course |
CN113901927A (en) * | 2021-10-12 | 2022-01-07 | 大连海事大学 | Underwater object shape recognition method based on flow field pressure time course |
Non-Patent Citations (3)
Title |
---|
QINGLIANG ZHAN ET AL.: "FLUID FEATURE ANALYSIS BASED ON TIME HISTORY DEEP LEARNING", CHINESE JOURNAL OF THEORETICAL AND APPLIED MECHANICS, vol. 54, no. 3, pages 822 - 828 *
YONGFENG XING ET AL.: "An Encoder-Decoder Network Based FCN Architecture for Semantic Segmentation", WIRELESS COMMUNICATIONS AND MOBILE COMPUTING, vol. 2020, pages 1 - 9 *
ZHAN QINGLIANG ET AL.: "Characterization method of flow characteristics of complex flow field based on time-course deep learning", ACTA PHYSICA SINICA, pages 1 - 13 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||