CN115204530A - Oil reservoir prediction method based on Fourier neural operator and cyclic neural network - Google Patents

Oil reservoir prediction method based on Fourier neural operator and cyclic neural network

Info

Publication number
CN115204530A
Authority
CN
China
Prior art keywords
fourier
operator
neural
distribution
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211125454.2A
Other languages
Chinese (zh)
Other versions
CN115204530B (en)
Inventor
龚斌
黄虎
石欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongke Shuzhi Energy Technology Shenzhen Co ltd
Original Assignee
Zhongke Shuzhi Energy Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongke Shuzhi Energy Technology Shenzhen Co ltd filed Critical Zhongke Shuzhi Energy Technology Shenzhen Co ltd
Priority to CN202211125454.2A priority Critical patent/CN115204530B/en
Publication of CN115204530A publication Critical patent/CN115204530A/en
Application granted granted Critical
Publication of CN115204530B publication Critical patent/CN115204530B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/11 Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/14 Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A10/00 TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A10/40 Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Software Systems (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Operations Research (AREA)
  • Economics (AREA)
  • Databases & Information Systems (AREA)
  • Algebra (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Marketing (AREA)
  • Game Theory and Decision Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Entrepreneurship & Innovation (AREA)

Abstract

The embodiment of the invention discloses an oil reservoir prediction method based on a Fourier neural operator and a recurrent neural network, comprising the following steps: acquiring the current permeability distribution of a target area; extracting a first feature of the current permeability distribution through a trained encoder, wherein the encoder comprises a first convolutional layer, a plurality of first Fourier neural operators and a second convolutional layer which are sequentially connected; evolving the time dimension based on the first feature through a trained convolutional long short-term memory neural network to obtain second features at a plurality of future time points, wherein each second feature represents the reservoir distribution at the corresponding time point; and restoring each second feature to the reservoir distribution at the corresponding time point through a trained decoder, wherein the decoder comprises a third convolutional layer, a plurality of second Fourier neural operators and a fourth convolutional layer which are sequentially connected. The embodiment improves the prediction accuracy for large oil fields.

Description

Oil reservoir prediction method based on Fourier neural operator and cyclic neural network
Technical Field
The embodiment of the invention relates to the field of reservoir simulation, in particular to a reservoir prediction method based on a Fourier neural operator and a recurrent neural network.
Background
Oil reservoir prediction means predicting the future trend of reservoir change from the current reservoir data, providing effective guidance for oilfield development and reservoir exploitation. Most current reservoir predictions rely on commercial software or on deep-learning-based network models.
Commercial software suffers from slow computation and expensive licensing, among other problems. Existing deep-learning reservoir prediction models work well mainly for small reservoirs, where the reservoir scale and the number of wells are limited; moreover, existing models rely mainly on convolution for their calculations, so their accuracy in reservoir prediction is limited and their generalization is poor.
Disclosure of Invention
The embodiment of the invention provides an oil reservoir prediction method based on a Fourier neural operator and a recurrent neural network, which improves the reservoir prediction accuracy for large-scale oil fields.
In a first aspect, an embodiment of the present invention provides a reservoir prediction method based on a Fourier neural operator and a recurrent neural network, including:
acquiring current permeability distribution of a target area;
extracting a first feature of the current permeability distribution through a trained encoder, wherein the encoder comprises a first convolutional layer, a plurality of first Fourier neural operators and a second convolutional layer which are sequentially connected;
evolving the time dimension based on the first feature through a trained convolutional long short-term memory neural network to obtain second features at a plurality of future time points, wherein each second feature is used to represent the reservoir distribution feature at the corresponding time point;
and restoring each second feature to the reservoir distribution at the corresponding time point through a trained decoder, wherein the decoder comprises a third convolutional layer, a plurality of second Fourier neural operators and a fourth convolutional layer which are sequentially connected.
In a second aspect, an embodiment of the present invention provides an electronic device, including:
one or more processors;
a memory for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the above-described fourier neural operator and recurrent neural network-based reservoir prediction method.
In a third aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the above reservoir prediction method based on the Fourier neural operator and the recurrent neural network.
According to the embodiment of the invention, the reservoir governing equations are simulated by combining the Fourier neural operator with a neural network. Because the derivation of the Fourier neural operator follows the solution of partial differential equations, the method is more consistent with the numerical-solution process and has higher prediction accuracy; because fewer convolutions are used in the data space, computation is faster; in addition, pressure and saturation are predicted simultaneously rather than in two separate passes, which further improves prediction efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic diagram of a reservoir prediction model based on Fourier neural operators and a recurrent neural network according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of an encoder according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a decoder according to an embodiment of the present invention.
FIG. 4 is a flowchart of a reservoir prediction method based on Fourier neural operators and a recurrent neural network according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a working condition of an actual example according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The embodiment of the invention provides an oil reservoir prediction method based on the FNO (Fourier Neural Operator) and a recurrent neural network. To illustrate the method, the reservoir prediction model that implements it is described first. Fig. 1 is a schematic diagram of the reservoir prediction model provided in an embodiment of the present invention; the model simulates changes in the reservoir distribution under different permeability conditions, thereby simulating the influence of geological changes on the production process.
As shown in fig. 1, the model includes an encoder, a ConvLSTM (Convolutional Long Short-Term Memory) neural network and a decoder. The current permeability distribution m, representing the current geological condition, is input into the model; features are first extracted by the encoder, then evolved in time by the ConvLSTM network, and finally the pressure distribution and saturation distribution are reconstructed by the decoder.
Specifically, the encoder includes a first convolution layer, a plurality of first Fourier neural operators, and a second convolution layer connected in sequence, as shown in fig. 2.
ConvLSTM was developed from fully connected LSTM (FC-LSTM). In FC-LSTM, the output of a fully connected network is used as the input of the LSTM, so that the recurrent LSTM can represent the data well along the time series; however, a fully connected network is still weak at capturing the spatial features of the data, and ConvLSTM solves this problem by replacing the fully connected operations with convolutions. ConvLSTM has a chain structure similar to a standard recurrent neural network and controls information flow through gating units. Its gating unit has four interacting layers: an input gate, an output gate, a forget gate, and a tanh layer. The gates control how much information from the previous step is passed to the current time step and, at the same time, how much of the current information is passed on to the next time step. The forget gate decides which parts of the previous hidden state are carried to the current step; the input gate decides which new information is added to the current state; the tanh layer creates new candidate values that can be added to the current state; and the final output is determined by the output gate and the current state value. ConvLSTM and its variants can therefore extract spatial and temporal information efficiently. The standard gate equations are sketched below.
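For reference, the commonly used ConvLSTM gate equations (without peephole terms) are listed below; they are the standard formulation and are not reproduced verbatim from the patent. Here $*$ denotes convolution, $\circ$ the Hadamard product, and $\sigma$ the sigmoid function:

$$
\begin{aligned}
i_t &= \sigma\!\left(W_{xi} * X_t + W_{hi} * H_{t-1} + b_i\right)\\
f_t &= \sigma\!\left(W_{xf} * X_t + W_{hf} * H_{t-1} + b_f\right)\\
\tilde{C}_t &= \tanh\!\left(W_{xc} * X_t + W_{hc} * H_{t-1} + b_c\right)\\
C_t &= f_t \circ C_{t-1} + i_t \circ \tilde{C}_t\\
o_t &= \sigma\!\left(W_{xo} * X_t + W_{ho} * H_{t-1} + b_o\right)\\
H_t &= o_t \circ \tanh\!\left(C_t\right)
\end{aligned}
$$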
The decoder includes a third convolutional layer, a plurality of second Fourier neural operators, and a fourth convolutional layer connected in sequence, as shown in fig. 3. During decoding, the input of each second FNO layer is formed by concatenating the output of the previous layer with the output of the corresponding first FNO layer in the encoder. As shown in fig. 1, assume there are three first FNO layers and three second FNO layers, and the three first FNO layers output the features F1, F2 and F3. Then F3 is concatenated with the output of the ConvLSTM and input into the first second-FNO layer; F2 is concatenated with the output of the first second-FNO layer and input into the second second-FNO layer; and F1 is concatenated with the output of the second second-FNO layer and input into the third second-FNO layer. A wiring sketch follows.
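The following PyTorch sketch shows one way the encoder, ConvLSTM and decoder could be wired together with the skip connections described above, assuming three FNO layers on each side. The module names (fourier_down, fourier_up, conv_lstm), the channel sizes and the exact position of the concatenation are illustrative assumptions, not details taken from the patent.

```python
import torch
import torch.nn as nn

class ReservoirFNOLSTM(nn.Module):
    """Sketch of the encoder -> ConvLSTM -> decoder wiring of fig. 1.

    fourier_down and fourier_up are assumed to be lists of user-defined FNO
    layer modules (see the layer sketches later in this text), and conv_lstm
    a module returning a list of T features; only the wiring and the skip
    connections F3, F2, F1 are shown here.
    """

    def __init__(self, fourier_down, fourier_up, conv_lstm, channels=64, out_channels=2):
        super().__init__()
        self.conv_in = nn.Conv2d(1, channels, kernel_size=3, padding=1)      # first conv layer
        self.enc = nn.ModuleList(fourier_down)                               # three first FNO layers
        self.conv_fuse = nn.Conv2d(channels, channels // 2, 3, padding=1)    # second conv layer
        self.lstm = conv_lstm                                                # ConvLSTM over T steps
        self.conv_dec_in = nn.Conv2d(channels // 2, channels, 3, padding=1)  # third conv layer
        self.dec = nn.ModuleList(fourier_up)                                 # three second FNO layers
        self.conv_out = nn.Conv2d(channels, out_channels, 3, padding=1)      # fourth conv layer

    def forward(self, m, n_steps):
        x = self.conv_in(m)
        skips = []                                   # F1, F2, F3 from the encoder FNO layers
        for layer in self.enc:
            x = layer(x)
            skips.append(x)
        first_feature = self.conv_fuse(x)
        second_features = self.lstm(first_feature, n_steps)   # list of T second features

        outputs = []
        for feat in second_features:
            y = self.conv_dec_in(feat)
            # skip connections: F3 joins the ConvLSTM branch first, then F2, then F1
            for layer, skip in zip(self.dec, reversed(skips)):
                y = layer(torch.cat([y, skip], dim=1))
            outputs.append(self.conv_out(y))
        return torch.stack(outputs, dim=1)           # [M, T, out_channels, X, Y]
```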
The functions and arrangement of the modules will be described in detail in the following embodiments.
Based on the model schematic, fig. 4 is a flowchart of the oil reservoir prediction method based on a Fourier neural operator and a recurrent neural network provided by the embodiment of the invention, which is suitable for predicting the reservoir distribution under given geological conditions. The method is executed by an electronic device and specifically includes the following steps, as shown in fig. 4:
and S110, acquiring the current permeability distribution of the target area.
The target area refers to the geographical area to be studied, such as a certain oil field. The permeability distribution refers to the distribution of permeability at each location within the target area. As shown in fig. 5, a 191 × 313 × 7 3D practical example is used; the figure shows the permeability distribution and well site distribution along the X direction, where PERMX denotes permeability and the black dots denote well sites. The current permeability distribution refers to the data volume composed of the permeability distribution at the current time point.
More specifically, assume that the target area has X × Y two-dimensional geographic grids, where X and Y represent the number of grids in two perpendicular directions parallel to the ground; then the current permeability distribution is a three-dimensional data volume of dimension [X, Y, 1]. When N cases are predicted simultaneously, the current permeability distribution has dimension [N, X, Y, 1], where N is a natural number. In particular, X and Y are arbitrary natural numbers and are not limited by the dimensions of the model training samples.
And S120, extracting the first characteristic of the current permeability distribution through the trained encoder.
The main role of the encoder is to reduce the dimensionality of the input data and extract features. First, the current permeability distribution is input into the first convolution layer for feature extraction to obtain an initial feature. Optionally, the first convolution layer comprises 32 convolution kernels of size 3 × 3; after the current permeability distribution is calculated by the first convolution layer, the obtained initial feature has dimension [M, X, Y, 64].
Then, the initial features are input into the plurality of first Fourier neural operators, which gradually reduce the dimensionality in Fourier space and extract depth features. The iterative process of each first Fourier neural operator is:

$$v_{l+1} = \sigma\Big( W v_l + \mathcal{F}^{-1}\big( R \cdot \mathcal{F}(v_l) \big) \Big)$$

where $W$ and $R$ are trained weight parameters, $v_l$ denotes the input data of the current first Fourier neural operator and $l$ is its layer index, $\mathcal{F}(v_l)$ denotes the Fourier transform of $v_l$, $k$ denotes the index of waves of different frequencies, $j$ denotes the dimension index of $\mathcal{F}(v_l)$ and $d_v$ denotes its total number of dimensions, $\mathcal{F}^{-1}$ denotes the inverse Fourier transform, and $\sigma$ denotes the activation function. $W$ is implemented as 64 convolutions of size 1 × 1 with stride 2, and the dimension of each first Fourier neural operator's output feature is 1/2 of that of its input feature.
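To make this concrete, the following is a minimal PyTorch sketch of one possible down-sampling Fourier layer of this kind. It is an illustrative reading of the text, not the patent's actual implementation; the class name, mode count and activation are assumptions, only the positive low-frequency modes are kept, and even spatial sizes are assumed.

```python
import torch
import torch.nn as nn

class FourierDownLayer(nn.Module):
    """Sketch of one 'first Fourier neural operator' layer: spectral
    convolution over truncated modes plus a stride-2 1x1 convolution,
    so the output spatial size is half the input size."""

    def __init__(self, channels=64, modes=16):
        super().__init__()
        self.modes = modes                       # k_max: retained Fourier modes per dimension
        scale = 1.0 / (channels * channels)
        # R: complex weights over the truncated modes (channels, channels, modes, modes)
        self.R = nn.Parameter(
            scale * torch.randn(channels, channels, modes, modes, dtype=torch.cfloat))
        self.W = nn.Conv2d(channels, channels, kernel_size=1, stride=2)  # linear/down-sampling path
        self.act = nn.GELU()                     # activation sigma (choice is an assumption)

    def forward(self, v):
        b, c, h, w = v.shape                     # assumes even h and w
        h2, w2 = h // 2, w // 2
        vk = torch.fft.rfft2(v)                  # F(v_l)
        out_k = torch.zeros(b, c, h2, w2 // 2 + 1, dtype=torch.cfloat, device=v.device)
        m = self.modes                           # must not exceed the reduced grid size
        out_k[:, :, :m, :m] = torch.einsum("bixy,ioxy->boxy", vk[:, :, :m, :m], self.R)
        spectral = torch.fft.irfft2(out_k, s=(h2, w2))     # inverse FFT on the half-size grid
        return self.act(self.W(v) + spectral)    # sigma(W v + F^{-1}(R . F(v)))
```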
Specifically, the implementation of the Fourier neural operator is divided into three steps: the data are transformed into Fourier space by the Fourier transform; a truncated set of frequency bands is selected and computed in Fourier space; and the data are mapped back to the original space by the inverse Fourier transform. In this embodiment, when the inverse Fourier transform is applied, its output size is directly set to 1/2 of the original size, so that the features evolve in Fourier space while the dimensionality of the input data is reduced. The calculation of each Fourier neural operator is derived as follows. The differential equation to be solved is written as

$$(\mathcal{L}_a u)(x) = f(x), \quad x \in D \tag{1}$$

where $x$ denotes the input variable and $D$ its domain (since the input of each FNO layer is $v_l$, $x$ can be treated as a dummy variable), $\mathcal{L}_a$ denotes a linear differential operator, $a$ denotes a parameter, and $u$ denotes the unknown variable. Through equation (1), solving the differential equation is converted into applying a solver (Green's-function-like) kernel,

$$(\mathcal{K}_{\theta} v_l)(x) = \int_D \kappa_{\theta}(x, y)\, v_l(y)\, dy \tag{2}$$

where $\kappa_{\theta}$ is a neural-network kernel with parameters $\theta$ that must be learned from the data. If the kernel's dependence on anything other than the difference of its arguments is removed, i.e. $\kappa_{\theta}(x, y) = \kappa_{\theta}(x - y)$, the above equation becomes a convolution:

$$(\mathcal{K}_{\theta} v_l)(x) = \int_D \kappa_{\theta}(x - y)\, v_l(y)\, dy = (\kappa_{\theta} * v_l)(x) \tag{3}$$

Since a convolution can be computed as a multiplication after the Fourier transform, followed by an inverse transform back to the original space, equation (3) becomes

$$(\mathcal{K}_{\theta} v_l)(x) = \mathcal{F}^{-1}\big( \mathcal{F}(\kappa_{\theta}) \cdot \mathcal{F}(v_l) \big)(x) \tag{4}$$

where $\mathcal{F}$ denotes the Fourier transform. In fact, since $\kappa_{\theta}$ is a parameter to be learned, it is not learned in the original space and then Fourier transformed; instead, a convolution operator $R$ defined directly in Fourier space is used in place of $\mathcal{F}(\kappa_{\theta})$ in the above formula:

$$(\mathcal{K}_{\theta} v_l)(x) = \mathcal{F}^{-1}\big( R \cdot \mathcal{F}(v_l) \big)(x) \tag{5}$$

Equation (5) can be written mode by mode as equation (6):

$$\big( R \cdot \mathcal{F}(v_l) \big)(k)_i = \sum_{j=1}^{d_v} R(k)_{i,j}\, \mathcal{F}(v_l)(k)_j \tag{6}$$

Here $R(k)$ is a function of the frequency-domain variable $k$. A periodic function can be expanded in its Fourier series, and during computation a finite number of expansion terms is selected to approximate the whole, i.e. the series is truncated. In other words, a finite set of frequency waves $k$ is selected and optimized in the frequency domain. Assume the number of selected modes is

$$k_{\max} = \big|\{\, k \in \mathbb{Z}^d : |k_j| \le k_{\max,j},\ j = 1, \dots, d \,\}\big| \tag{7}$$

$R$ is parameterized directly with the truncated Fourier frequencies, so that $R$ finally becomes a ($k_{\max} \times d_v \times d_v$) tensor that is independent of $x$ ($k_{\max}$ and $d_v$ are hyper-parameters of the model that must be fixed when the model is constructed). Equation (6) finally becomes

$$\big( R \cdot \mathcal{F}(v_l) \big)(k)_i = \sum_{j=1}^{d_v} R(k)_{i,j}\, \mathcal{F}(v_l)(k)_j, \quad k = 1, \dots, k_{\max} \tag{8}$$

Since $x$ is a dummy variable, combining equation (8) with the linear term $W v_l$ gives

$$v_{l+1} = \sigma\Big( W v_l + \mathcal{F}^{-1}\big( R \cdot \mathcal{F}(v_l) \big) \Big)$$

which is the iterative process of the Fourier neural operator, in which $W$ and $R$ are weight parameters obtained by gradient-descent optimization. From the above derivation it can be seen that the calculation of the Fourier neural operator is consistent with the solution of partial differential equations, so that as an encoder it can achieve higher accuracy with fewer features.
It is worth mentioning that this embodiment performs dimensional down-sampling while evolving the features in Fourier space: $W$ consists of 64 convolution operators of size 1 × 1 with stride 2 (parameter size 64), and the space recovered by the inverse Fourier transform $\mathcal{F}^{-1}$ is set to 1/2 of the original size. Thus, after each first Fourier neural operator, the output becomes 1/2 of the original.
In one embodiment, the current permeability distribution is assumed to be [M, X, Y, 1], where M represents the number of cases and X and Y represent the number of grids of the target area in two perpendicular directions parallel to the ground; as shown in fig. 2, there are 3 first Fourier neural operators. The first convolution layer comprises 32 convolution kernels of size 3 × 3; after the current permeability distribution is calculated by the first convolution layer, the obtained initial feature has dimension [M, X, Y, 64]. After the initial feature passes through the first first Fourier neural operator, the output feature has dimension [M, X/2, Y/2, 64]; the parameter count of this operator is governed by the maximum Fourier series (number of retained modes) selected in it. After the output of the first operator passes through the second first Fourier neural operator, the output feature has dimension [M, X/4, Y/4, 64]; its parameter count is likewise governed by the maximum Fourier series selected in it. After the output of the second operator passes through the third first Fourier neural operator, the output depth feature has dimension [M, X/8, Y/8, 64]; its parameter count is governed by the maximum Fourier series selected in it. After the depth feature is obtained, it is input into the second convolution layer for feature fusion to obtain the first feature of the current permeability distribution. Optionally, the second convolution layer comprises 32 convolution kernels of size 3 × 3, so the number of parameters of the second convolution layer is 32 × 3 × 3.
S130, evolving the time dimension based on the first feature through the trained convolutional long short-term memory neural network to obtain second features at a plurality of future time points, each second feature representing the reservoir distribution at the corresponding time point.
The main role of the ConvLSTM is to evolve the time dimension starting from the first feature. Since the dimension-reduced first feature still contains the geospatial information, the ConvLSTM is used for feature extraction and yields T different features (T denotes the number of future time points to be calculated), which are used to recover the pressure and saturation. For ease of distinction and description, the feature output by the encoder is called the first feature and the features output by the ConvLSTM are called second features. A minimal sketch of the unrolling follows.
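The sketch below illustrates this unrolling with a standard single-cell ConvLSTM seeded with the first feature at every step; the cell definition and the seeding strategy are illustrative assumptions, not details from the patent.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Standard ConvLSTM cell (sketch): all four gates from one convolution."""

    def __init__(self, channels, hidden, k=3):
        super().__init__()
        self.hidden = hidden
        self.gates = nn.Conv2d(channels + hidden, 4 * hidden, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)   # forget + input gates
        h = torch.sigmoid(o) * torch.tanh(c)                          # output gate
        return h, c

def evolve(first_feature, cell, n_steps):
    """Unroll the ConvLSTM for n_steps future time points (illustrative)."""
    b, _, height, width = first_feature.shape
    h = torch.zeros(b, cell.hidden, height, width, device=first_feature.device)
    c = torch.zeros_like(h)
    second_features = []
    for _ in range(n_steps):
        h, c = cell(first_feature, h, c)   # the same first feature seeds every step
        second_features.append(h)          # one 'second feature' per future time point
    return second_features
```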
And S140, restoring each second characteristic into the oil reservoir distribution at the corresponding time point through the trained decoder.
The reservoir distribution refers to the distribution of reservoir data at each location within the target region, where the reservoir data at each location include at least one of: oil-phase pressure, oil-phase saturation, water-phase pressure and water-phase saturation. The reservoir distribution at a given time point is a three-dimensional data volume of dimension [X, Y, M], where M here denotes the number of reservoir-data channels and X and Y denote the geographic grid dimensions. When N cases are predicted simultaneously, the reservoir distribution is a four-dimensional data volume of size [N, X, Y, M].
The decoder mainly up-samples the second feature obtained from the ConvLSTM at each time point to obtain the final reservoir distribution. Optionally, each second feature is first input into the third convolution layer for feature extraction. As shown in fig. 3, the third convolution layer includes 64 convolution kernels of size 3 × 3 with stride 1 for feature extraction of the second-feature inputs, and its number of parameters is 64 × 3 × 3.
Then, the output features of the third convolutional layer are input into the plurality of second Fourier neural operators, which step by step increase the dimensionality and evolve the features in Fourier space. The iterative process of each second Fourier neural operator is as follows: interpolation up-sampling changes the dimension of the feature to 2 times that of the input, and the up-sampled feature is evolved with the Fourier transform and inverse Fourier transform. As shown in fig. 3, 3 second Fourier neural operators are used for feature dimension promotion and evolution. In each such operator, the input is first enlarged to 2 times its original size by interpolation up-sampling, and 64 convolution kernels of size 1 × 1 with stride 2 are then used for the convolution calculation; the number of parameters of this part is 64.
Accordingly, in the Fourier part, the space recovered by the inverse Fourier transform also becomes 2 times the original size, so after each second Fourier neural operator the overall output becomes 2 times the original. Continuing the example above, the permeability distribution is [M, X, Y, 1] and each second feature is [M, X/8, Y/8, 32], where M represents the number of cases. The third convolutional layer comprises 64 convolution kernels of size 3 × 3 with stride 1 and has 64 × 3 × 3 parameters; after each second feature passes through the third convolution layer, the output feature has dimension [M, X/8, Y/8, 64]. After the output of the third convolution layer passes through the first second Fourier neural operator, the output feature has dimension [M, X/4, Y/4, 64]; the parameter count of this operator is governed by the maximum Fourier series selected in it. After the output of the first operator passes through the second second Fourier neural operator, the output feature has dimension [M, X/2, Y/2, 64]; its parameter count is governed by the maximum Fourier series selected in it. After the output of the second operator passes through the third second Fourier neural operator, the output feature has dimension [M, X, Y, 64]; its parameter count is governed by the maximum Fourier series selected in it. Finally, the output features of the plurality of second Fourier neural operators are input into the fourth convolution layer for feature extraction to obtain the reservoir distribution at the corresponding time point. The fourth convolution layer comprises 2 convolution kernels of size 3 × 3 and has 2 × 3 × 3 parameters.
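As one possible reading of such an up-sampling layer, the following PyTorch sketch doubles the spatial size by interpolation and then applies a truncated spectral convolution plus a 1 × 1 convolution. The names are illustrative assumptions, and a stride-1 convolution is used here (the text mentions stride 2, which is simplified away so the output stays at twice the input size).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FourierUpLayer(nn.Module):
    """Sketch of one 'second Fourier neural operator' layer: interpolation
    up-sampling doubles the spatial size, then a truncated spectral
    convolution and a 1x1 convolution are applied."""

    def __init__(self, in_channels, out_channels=64, modes=16):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_channels * out_channels)
        self.R = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, modes, dtype=torch.cfloat))
        self.W = nn.Conv2d(in_channels, out_channels, kernel_size=1)  # stride 1: a simplification
        self.act = nn.GELU()

    def forward(self, v):
        v = F.interpolate(v, scale_factor=2, mode="bilinear", align_corners=False)
        b, _, h, w = v.shape
        vk = torch.fft.rfft2(v)
        out_k = torch.zeros(b, self.W.out_channels, h, w // 2 + 1,
                            dtype=torch.cfloat, device=v.device)
        m = self.modes
        out_k[:, :, :m, :m] = torch.einsum("bixy,ioxy->boxy", vk[:, :, :m, :m], self.R)
        spectral = torch.fft.irfft2(out_k, s=(h, w))     # inverse FFT on the doubled grid
        return self.act(self.W(v) + spectral)
```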
In addition, there are multiple Fourier neural operators in both the encoder and the decoder, so that down-sampling, feature extraction and up-sampling are performed step by step, which avoids excessive information loss.
In this embodiment, the reservoir governing equations are simulated by combining Fourier neural operators with a neural network. Because the derivation of the Fourier neural operator follows the solution of partial differential equations, the method is more consistent with the numerical-solution process and has higher prediction accuracy; because fewer convolutions are used in the data space, computation is faster; in addition, pressure and saturation are predicted simultaneously rather than in two separate passes, which further improves prediction efficiency. Relative to the simulation results of the full physical model, the average relative errors of the method in pressure and saturation prediction are 4.1% and 2.1%, one prediction takes 0.2 s, and the pressure and saturation results are obtained at once.
In particular, this embodiment uses Fourier transforms with different input and output dimensions in the encoder and decoder, realizing data up-sampling and down-sampling while evolving the features in Fourier space. This is particularly suitable for scenarios in which a given distribution of geological conditions (embodied as the permeability distribution) is the premise of the prediction: changes in geological conditions produce different permeability fields that need to be fitted, and merging feature extraction, up-sampling and down-sampling into the same operation reduces the amount of computation, facilitates fitting different distributions of geological conditions, and widens the range of application of the model.
Optionally, ResNet or DenseNet blocks are also included before and after the ConvLSTM to improve model performance. For example, a 3 × 3 convolution is applied directly to the output of the last Fourier layer, using 3 layers of ResNet or DenseNet.
On the basis of the above and following embodiments, this embodiment refines the training process of the entire model. Optionally, before the trained encoder extracts the first feature of the current permeability distribution, the method further includes the following steps.
First, a plurality of cases with different permeability distributions are randomly generated, the cases are simulated with the LandSim simulation software, and the pressure distribution, saturation distribution and well oil and water production are extracted. For example, on the basis of the 191 × 313 × 7 3D actual case shown in fig. 5, a plurality of samples with different permeabilities are randomly generated, and software simulation and data extraction are carried out.
The extracted data are then normalized and divided into a training set and a test set. Optionally, the data are normalized by the max-min method, and the normalized data are divided into training and test sets at a ratio of 8:2, as sketched below.
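A minimal sketch of the max-min normalization and the 8:2 split, assuming the extracted data are NumPy arrays; the function names are illustrative.

```python
import numpy as np

def min_max_normalize(x):
    """Scale an array to [0, 1] with the max-min method."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo + 1e-12), (lo, hi)

def split_train_test(samples, train_ratio=0.8, seed=0):
    """Randomly divide the normalized samples into training and test sets (8:2)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_train = int(train_ratio * len(samples))
    return [samples[i] for i in idx[:n_train]], [samples[i] for i in idx[n_train:]]
```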
Meanwhile, in order to better adjust the importance of each part of the output, this embodiment divides the total loss function into two parts, a pressure loss and a saturation loss: the pressure loss is calculated with a 1-norm and the saturation loss with a 2-norm, and the following loss function is constructed:

$$L_{s}=\frac{1}{N_t\,n_b\,T}\sum_{i=1}^{N_t}\sum_{t=1}^{T}\sum_{j=1}^{n_b}\big(s_{i,j,t}-\hat{s}_{i,j,t}\big)^{2}$$

$$L_{p}=\frac{1}{N_t\,n_b\,T}\sum_{i=1}^{N_t}\sum_{t=1}^{T}\sum_{j=1}^{n_b}\big|p_{i,j,t}-\hat{p}_{i,j,t}\big|$$

$$L=\lambda_{1}L_{p}+\lambda_{2}L_{s}$$

where $L_s$ denotes the saturation loss, $N_t$ denotes the number of samples, $n_b$ denotes the total number of grids per sample, $T$ denotes the number of future time points, $s_{i,j,t}$ and $\hat{s}_{i,j,t}$ denote the true and predicted saturation of grid $j$ of sample $i$ at time point $t$, $L_p$ denotes the pressure loss, $p_{i,j,t}$ and $\hat{p}_{i,j,t}$ denote the true and predicted pressure of grid $j$ of sample $i$ at time point $t$, $L$ denotes the total loss, and $\lambda_1$ and $\lambda_2$ are weights, which can be chosen as 4 and 1 respectively.
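A compact PyTorch sketch of this composite loss follows; the exact normalization and which of the two weights takes the value 4 are assumptions, since the corresponding formulas appear only as images in the original.

```python
import torch

def reservoir_loss(p_pred, p_true, s_pred, s_true, w_p=4.0, w_s=1.0):
    """Total loss = w_p * L1(pressure) + w_s * L2(saturation).

    Tensors are assumed to have shape [N, T, n_b] (samples, future time
    points, grid cells); assigning 4 to the pressure weight and 1 to the
    saturation weight is one reading of the text and may be swapped.
    """
    loss_p = (p_pred - p_true).abs().mean()     # 1-norm pressure loss
    loss_s = ((s_pred - s_true) ** 2).mean()    # 2-norm saturation loss
    return w_p * loss_p + w_s * loss_s
```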
Finally, the deep-learning model formed by the encoder, the convolutional long short-term memory neural network and the decoder is trained with the training samples and the loss function. Specifically, an ADAM optimizer is used, the number of epochs is 100, the initial learning rate is 0.0002, and the learning rate is decayed to 0.8 times its previous value every 10 epochs.
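A sketch of the corresponding optimizer configuration (Adam, initial learning rate 0.0002, decayed to 0.8 times every 10 epochs, 100 epochs); `model`, `train_loader` and `reservoir_loss` are placeholders assumed to be defined elsewhere.

```python
import torch

def train(model, train_loader, reservoir_loss, epochs=100):
    opt = torch.optim.Adam(model.parameters(), lr=2e-4)                    # initial lr 0.0002
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.8)  # x0.8 every 10 epochs
    for _ in range(epochs):
        for m, p_true, s_true in train_loader:
            p_pred, s_pred = model(m)      # model assumed to return pressure and saturation
            loss = reservoir_loss(p_pred, p_true, s_pred, s_true)
            opt.zero_grad()
            loss.backward()
            opt.step()
        sched.step()
    return model
```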
After training is finished, the model parameters are saved as corresponding model files. In the prediction stage, the model is loaded, the initial permeability field of the reservoir is taken as input, the distributions of pressure and saturation at a series of times are obtained through the model calculation, and the related well information is then computed.
In this embodiment, large-scale actual oilfield data are used as samples to train the reservoir prediction model based on the Fourier neural operator and the recurrent neural network, which improves model accuracy. The model uses fewer convolutions in the data space, so training is faster. Compared with other technical schemes, the model achieves better accuracy and higher speed.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, as shown in fig. 6, the electronic device includes a processor 50, a memory 51, an input device 52, and an output device 53; the number of processors 50 in the device may be one or more, and one processor 50 is taken as an example in fig. 6; the processor 50, the memory 51, the input device 52 and the output device 53 in the apparatus may be connected by a bus or other means, as exemplified by the bus connection in fig. 6.
The memory 51 is a computer-readable storage medium and can be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the reservoir prediction method based on the Fourier neural operator and the recurrent neural network in the embodiments of the present invention. The processor 50 executes the various functional applications and data processing of the device by running the software programs, instructions and modules stored in the memory 51, thereby implementing the above reservoir prediction method.
The memory 51 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 51 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 51 may further include memory located remotely from the processor 50, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 52 is operable to receive input numeric or character information and to generate key signal inputs relating to user settings and function controls of the apparatus. The output device 53 may include a display device such as a display screen.
The embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the reservoir prediction method based on the Fourier neural operator and the recurrent neural network of any of the above embodiments.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the technical solutions of the embodiments of the present invention.

Claims (10)

1. A reservoir prediction method based on Fourier neural operators and a recurrent neural network is characterized by comprising the following steps:
acquiring current permeability distribution of a target area;
extracting a first feature of the current permeability distribution through a trained encoder, wherein the encoder comprises a first convolutional layer, a plurality of first Fourier neural operators and a second convolutional layer which are sequentially connected;
evolving the time dimension based on the first feature through a trained convolutional long short-term memory neural network to obtain second features at a plurality of future time points, wherein each second feature is used to represent the reservoir distribution feature at the corresponding time point;
and restoring each second feature to the reservoir distribution at the corresponding time point through a trained decoder, wherein the decoder comprises a third convolutional layer, a plurality of second Fourier neural operators and a fourth convolutional layer which are sequentially connected.
2. The method of claim 1, wherein the reservoir profile comprises at least one of: oil phase pressure distribution, oil phase saturation distribution, water phase pressure distribution and water phase saturation distribution.
3. The method of claim 1, wherein the permeability distribution and the reservoir distribution refer to a two-dimensional geographic distribution, wherein the number of two-dimensional geographic grids is any number.
4. The method of claim 1, wherein extracting, by the trained encoder, the first feature of the current permeability distribution comprises:
inputting the current permeability distribution into the first convolution layer for feature extraction to obtain initial features;
inputting the initial features into the plurality of first Fourier neural operators, which gradually reduce the dimensionality in Fourier space and extract depth features, wherein the iterative process of each first Fourier neural operator is:

$$v_{l+1} = \sigma\Big( W v_l + \mathcal{F}^{-1}\big( R \cdot \mathcal{F}(v_l) \big) \Big)$$

wherein $W$ and $R$ are trained weight parameters, $v_l$ denotes the input data of the current first Fourier neural operator and $l$ is its layer index, $\mathcal{F}(v_l)$ denotes the Fourier transform of $v_l$, $k$ denotes the index of waves of different frequencies, $j$ denotes the dimension index of $\mathcal{F}(v_l)$ and $d_v$ denotes its total number of dimensions, $\mathcal{F}^{-1}$ denotes the inverse Fourier transform, and $\sigma$ denotes the activation function; $W$ is 64 convolutions of size 1 × 1 with stride 2, and the dimension of each first Fourier neural operator's output feature is 1/2 of that of its input feature;
and inputting the depth features into the second convolution layer for feature fusion to obtain the first features of the current permeability distribution.
5. The method of claim 4, wherein the current permeability distribution is a tensor of [M, X, Y, 1], M representing the number of cases, X and Y representing the number of grids of the target area in two perpendicular directions parallel to the ground, respectively; there are 3 of the first Fourier neural operators;
the first convolution layer comprises 32 convolution kernels of size 3 × 3; after the current permeability distribution is calculated by the first convolution layer, the obtained initial feature dimension is [M, X, Y, 64];
after the initial features pass through the first first Fourier neural operator, the dimension of the output features is [M, X/2, Y/2, 64], wherein the number of parameters of the first first Fourier neural operator is governed by the maximum Fourier series selected in it;
after the output features of the first first Fourier neural operator pass through the second first Fourier neural operator, the dimension of the output features is [M, X/4, Y/4, 64], wherein the number of parameters of the second first Fourier neural operator is governed by the maximum Fourier series selected in it;
after the output features of the second first Fourier neural operator pass through the third first Fourier neural operator, the output depth feature dimension is [M, X/8, Y/8, 64], wherein the number of parameters of the third first Fourier neural operator is governed by the maximum Fourier series selected in it;
the second convolution layer comprises 32 convolution kernels of size 3 × 3, and the number of parameters of the second convolution layer is 32 × 3 × 3.
6. The method of claim 1, wherein the restoring, by the trained decoder, each second feature to the reservoir distribution at the corresponding time point comprises:
inputting each second feature into the third convolutional layer for feature extraction;
inputting the output features of the third convolutional layer into the plurality of second fourier neural operators, and performing dimension lifting and feature evolution gradually in a fourier space, wherein an iterative process of each second fourier neural operator comprises: adopting interpolation up-sampling to change the dimension of the output characteristic into 2 times of the input characteristic, and adopting Fourier transform and inverse Fourier transform to evolve the up-sampled characteristic;
and inputting the output features of the plurality of second Fourier neural operators into the fourth convolution layer for feature extraction, so as to obtain the reservoir distribution at the corresponding time point.
7. The method of claim 6, wherein the permeability distribution is a tensor of [M, X, Y, 1], each second feature is a tensor of [M, X/8, Y/8, 32], M represents the number of cases, and X and Y represent the number of grids of the target area in two perpendicular directions parallel to the ground, respectively; there are 3 of the second Fourier neural operators;
the third convolutional layer comprises 64 convolution kernels of size 3 × 3 with stride 1, with a parameter number of 64 × 3 × 3; after each second feature passes through the third convolution layer, the dimension of the output features is [M, X/8, Y/8, 64];
after the output features of the third convolution layer pass through the first second Fourier neural operator, the dimension of the output features is [M, X/4, Y/4, 64], wherein the number of parameters of the first second Fourier neural operator is governed by the maximum Fourier series selected in it;
after the output features of the first second Fourier neural operator pass through the second second Fourier neural operator, the output feature dimension is [M, X/2, Y/2, 64], wherein the number of parameters of the second second Fourier neural operator is governed by the maximum Fourier series selected in it;
after the output features of the second second Fourier neural operator pass through the third second Fourier neural operator, the output feature dimension is [M, X, Y, 64], wherein the number of parameters of the third second Fourier neural operator is governed by the maximum Fourier series selected in it;
the fourth convolution layer comprises 2 convolution kernels of size 3 × 3, and the number of parameters is 2 × 3 × 3.
8. The method of claim 1, wherein prior to said extracting, by the trained encoder, the first feature of the current permeability distribution, further comprising:
randomly generating a plurality of examples with different permeability distributions, simulating the examples by using LandSim simulation software, and extracting pressure distribution, saturation distribution and oil and water production of a well;
normalizing the extracted data, and dividing the normalized data into a training set and a test set;
the following loss function was constructed:
Figure 596038DEST_PATH_IMAGE035
Figure 9702DEST_PATH_IMAGE036
Figure 627765DEST_PATH_IMAGE037
wherein,
Figure 570314DEST_PATH_IMAGE038
denotes the loss of saturation, N t Denotes the number of samples, n b The total number of grids per sample is represented,Tthe number of future points in time is indicated,
Figure 273827DEST_PATH_IMAGE039
representing the true saturation of grid j at time point t,
Figure 241915DEST_PATH_IMAGE040
representing the predicted saturation of grid j at time point t;
Figure 714484DEST_PATH_IMAGE041
it is indicated that the pressure loss is,
Figure 93513DEST_PATH_IMAGE042
representing the true pressure at time point t of grid j,
Figure 549902DEST_PATH_IMAGE043
represents the predicted pressure at time t for grid j;
Figure 39789DEST_PATH_IMAGE044
which represents the overall loss of the power,
Figure 632445DEST_PATH_IMAGE045
representing a weight;
and training a deep learning model formed by the encoder, the convolutional long short-term memory neural network and the decoder by using the samples in the training set and the test set together with the loss functions.
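For illustration only, the following is a minimal sketch of the loss functions of claim 8, assuming plain mean-squared errors averaged over the N_t samples, n_b grid cells and T future time points; the function name, tensor layout and default weight are assumptions.

import torch

def reservoir_loss(sat_pred, sat_true, p_pred, p_true, weight=1.0):
    # All tensors share the shape [N_t, n_b, T] (samples, grid cells, future time points).
    loss_sat = torch.mean((sat_true - sat_pred) ** 2)  # saturation loss L_sat
    loss_p = torch.mean((p_true - p_pred) ** 2)        # pressure loss L_p
    return loss_sat + weight * loss_p                  # overall loss L = L_sat + lambda * L_p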
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the oil reservoir prediction method based on the Fourier neural operator and the recurrent neural network according to any one of claims 1 to 8.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the oil reservoir prediction method based on the Fourier neural operator and the recurrent neural network according to any one of claims 1 to 8.
CN202211125454.2A 2022-09-16 2022-09-16 Oil reservoir prediction method based on Fourier neural operator and cyclic neural network Active CN115204530B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211125454.2A CN115204530B (en) 2022-09-16 2022-09-16 Oil reservoir prediction method based on Fourier neural operator and cyclic neural network

Publications (2)

Publication Number Publication Date
CN115204530A (en) 2022-10-18
CN115204530B (en) 2023-05-23

Family

ID=83572133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211125454.2A Active CN115204530B (en) 2022-09-16 2022-09-16 Oil reservoir prediction method based on Fourier neural operator and cyclic neural network

Country Status (1)

Country Link
CN (1) CN115204530B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101951595B1 (en) * 2018-05-18 2019-02-22 한양대학교 산학협력단 Vehicle trajectory prediction system and method based on modular recurrent neural network architecture
CN110717270A (en) * 2019-10-10 2020-01-21 南京特雷西能源科技有限公司 Oil and gas reservoir simulation method based on data
CN112541572A (en) * 2020-11-25 2021-03-23 中国石油大学(华东) Residual oil distribution prediction method based on convolutional encoder-decoder network
CN113052371A (en) * 2021-03-16 2021-06-29 中国石油大学(华东) Residual oil distribution prediction method and device based on deep convolutional neural network
CN114152977A (en) * 2020-09-07 2022-03-08 中国石油化工股份有限公司 Reservoir parameter prediction method and device based on geological feature constraint and storage medium
CN114282725A (en) * 2021-12-24 2022-04-05 山东大学 Construction of transient oil reservoir agent model based on deep learning and oil reservoir prediction method
CN114462262A (en) * 2021-12-07 2022-05-10 中国海洋石油集团有限公司 History fitting prediction method based on dual dimensionality of time and space
CN114492213A (en) * 2022-04-18 2022-05-13 中国石油大学(华东) Wavelet neural operator network model-based residual oil saturation and pressure prediction method
CN114492211A (en) * 2022-04-15 2022-05-13 中国石油大学(华东) Residual oil distribution prediction method based on autoregressive network model
CN114693005A (en) * 2022-05-31 2022-07-01 中国石油大学(华东) Three-dimensional underground oil reservoir dynamic prediction method based on convolution Fourier neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Wang Yu: "Research on Productivity Model and Application of Fractured Horizontal Wells in Tight Oil Reservoirs", China Master's Theses Full-text Database, Engineering Science and Technology I (Monthly) *
Xue Ting et al.: "A New Method for Determining Dynamic Reserves and Aquifer Energy of Water-Drive Gas Reservoirs", Petroleum Geology & Oilfield Development in Daqing *
Ma Chengjie: "Reservoir Production Prediction Method Based on Recurrent Neural Network", Inner Mongolia Petrochemical Industry *

Also Published As

Publication number Publication date
CN115204530B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
CN113052371B (en) Residual oil distribution prediction method and device based on deep convolutional neural network
CN111523713B (en) Method and device for predicting saturation distribution of residual oil in oil field
Sreekanth et al. Stochastic and robust multi-objective optimal management of pumping from coastal aquifers under parameter uncertainty
US20170338802A1 (en) Actually-measured marine environment data assimilation method based on sequence recursive filtering three-dimensional variation
CN111832227B (en) Shale gas saturation determination method, device and equipment based on deep learning
NO20230580A1 (en) Estimating reservoir production rates using machine learning models for wellbore operation control
CN115186936A (en) Optimal well pattern construction method for oil field based on GNN model
Kumar et al. Ensemble-based assimilation of nonlinearly related dynamic data in reservoir models exhibiting non-Gaussian characteristics
CN117076931A (en) Time sequence data prediction method and system based on conditional diffusion model
WO2022241137A1 (en) Physics-informed attention-based neural network
CN115204531B (en) Oil reservoir prediction method, equipment and medium based on Fourier neural operator
Razak et al. Embedding physical flow functions into deep learning predictive models for improved production forecasting
CN112949944A (en) Underground water level intelligent prediction method and system based on space-time characteristics
CN110486009B (en) Automatic parameter reverse solving method and system for infinite stratum
CN115604131B (en) Link flow prediction method, system, electronic device and medium
CN112444850B (en) Seismic data velocity modeling method, storage medium and computing device
CN115204530B (en) Oil reservoir prediction method based on Fourier neural operator and cyclic neural network
Samuel et al. Fast modelling of gas reservoir performance with proper orthogonal decomposition based autoencoder and radial basis function non-intrusive reduced order models
CN116842375A (en) Landslide displacement prediction method and system based on VMD and GA-Elman
CN116011338A (en) Full waveform inversion method based on self-encoder and deep neural network
CN115375867A (en) Method, system, equipment and medium for calculating geothermal resource quantity by using grid model
Al-Shamma et al. History matching of the Valhall field using a global optimization method and uncertainty assessment
JP7512090B2 (en) Earthquake motion evaluation model generation method, earthquake motion evaluation model generation device, earthquake motion evaluation method, and earthquake motion evaluation device
Zhao et al. Online generic diagnostic reservoir operation tools
CN115169761A (en) GNN and LSTM based complex well pattern oil field yield prediction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant