CN115204530B - Oil reservoir prediction method based on Fourier neural operator and cyclic neural network - Google Patents
- Publication number
- CN115204530B, CN202211125454.2A
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/11—Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/14—Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A10/00—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
- Y02A10/40—Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping
Abstract
The embodiment of the invention discloses an oil reservoir prediction method based on a Fourier neural operator and a recurrent neural network, which comprises the following steps: acquiring the current permeability distribution of a target area; extracting a first feature of the current permeability distribution through a trained encoder, wherein the encoder comprises a first convolution layer, a plurality of first Fourier neural operators and a second convolution layer which are sequentially connected; evolving a time dimension based on the first feature through a trained convolutional long short-term memory network to obtain second features at a plurality of future time points, wherein each second feature represents the reservoir distribution features of the corresponding time point; and recovering each second feature into the reservoir distribution of the corresponding time point through a trained decoder, wherein the decoder comprises a third convolution layer, a plurality of second Fourier neural operators and a fourth convolution layer which are sequentially connected. The embodiment improves the prediction precision for large-scale oil fields.
Description
Technical Field
The embodiment of the invention relates to the field of reservoir simulation, and in particular to an oil reservoir prediction method based on a Fourier neural operator and a recurrent neural network.
Background
Reservoir prediction refers to predicting future reservoir change trends from current reservoir data, providing effective guidance for oilfield development and reservoir exploitation. Current reservoir prediction mostly uses commercial software or deep-learning-based network models.
Commercial software suffers from low computation speed, expensive licensing and the like. Existing deep-learning-based reservoir prediction models are mainly effective for small reservoirs, with limited reservoir scale and number of wells; these models rely mainly on convolution for calculation, and have limited precision and low generalization in reservoir prediction.
Disclosure of Invention
The embodiment of the invention provides an oil reservoir prediction method based on a Fourier neural operator and a recurrent neural network, which improves the reservoir prediction precision for large-scale oil fields.
In a first aspect, an embodiment of the present invention provides a method for predicting an oil reservoir based on a fourier neural operator and a recurrent neural network, including:
acquiring the current permeability distribution of a target area;
extracting a first characteristic of the current permeability distribution through a trained encoder, wherein the encoder comprises a first convolution layer, a plurality of first Fourier neural operators and a second convolution layer which are sequentially connected;
evolving a time dimension based on the first characteristic through a trained convolutional long short-term memory network to obtain second characteristics at a plurality of future time points, wherein each second characteristic represents the reservoir distribution features of the corresponding time point;
and recovering each second characteristic into the oil reservoir distribution of the corresponding time point through a trained decoder, wherein the decoder comprises a third convolution layer, a plurality of second Fourier neural operators and a fourth convolution layer which are sequentially connected.
In a second aspect, an embodiment of the present invention provides an electronic device, including:
one or more processors;
a memory for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the fourier neural operator and recurrent neural network-based reservoir prediction method described above.
In a third aspect, an embodiment of the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described reservoir prediction method based on Fourier neural operators and a recurrent neural network.
The embodiment of the invention simulates the reservoir governing equation by combining the Fourier operator with a neural network; the derivation used in solving the Fourier operator follows the partial differential equation throughout, so the method better matches the numerical solution process and has higher prediction precision. Calculation is also faster because less convolution is used in the data space. In addition, the method predicts pressure and saturation simultaneously, without requiring two separate predictions, further improving prediction efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an oil reservoir prediction model based on a fourier neural operator and a recurrent neural network according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of an encoder according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a decoder according to an embodiment of the present invention.
Fig. 4 is a flowchart of a reservoir prediction method based on a fourier neural operator and a recurrent neural network according to an embodiment of the present invention.
FIG. 5 is a schematic diagram of a practical example of working conditions according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the invention, are within the scope of the invention.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should also be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
The embodiment of the invention provides an oil reservoir prediction method based on the FNO (Fourier Neural Operator) and a recurrent neural network. To illustrate the method, the reservoir prediction model implementing it is first described. FIG. 1 is a schematic diagram of the oil reservoir prediction model provided by an embodiment of the present invention, which is used to simulate the change in reservoir distribution under different permeability conditions, thereby simulating the effect of geological changes on the recovery process.
As shown in fig. 1, the model includes an encoder, a ConvLSTM (Convolutional Long Short-Term Memory) network, and a decoder. After the current permeability distribution m representing the current geological condition is input into the model, features are first extracted by the encoder, time is then evolved by the ConvLSTM network, and finally the pressure distribution and saturation distribution are reconstructed by the decoder.
Specifically, the encoder includes a first convolution layer, a plurality of first Fourier neural operators, and a second convolution layer, connected in sequence, as shown in fig. 2.
ConvLSTM was developed from the fully connected LSTM, in which the output of a fully connected network is the input of the LSTM; data passing through the fully connected network can thus be represented well in time sequence by the recurrent LSTM, but its ability to capture the spatial characteristics of the data is still insufficient, and ConvLSTM was created precisely to solve this problem. ConvLSTM has a chain structure similar to a standard recurrent neural network, while controlling information transfer through a gating cell. The ConvLSTM gating cell has four interacting layers: an input gate, an output gate, a forget gate, and a tanh layer. The gates control how much information from the last step is transferred to the current time step and determine how much current information is allocated to the next time step. The forget gate decides which parts of the hidden state of the previous step are transferred to the current step; the input gate decides which new information is added to the current state; the tanh layer creates new candidate values that can be added to the current state; the final output value is determined by the output gate together with the current state value. The ConvLSTM network and its variants can efficiently extract spatial and temporal information.
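The gating described above can be sketched in a few lines. This is a deliberately simplified single step: a real ConvLSTM applies spatial (e.g. 3×3) convolutions for the input and hidden transforms, whereas this sketch uses per-pixel (1×1-convolution-equivalent) weights `Wx` and `Wh` purely to keep the example short; all names are illustrative.

```python
import numpy as np

def convlstm_step(x, h, c, Wx, Wh, b):
    """One simplified ConvLSTM step with per-pixel (1x1) transforms.

    x, h, c: arrays of shape (H, W, C); Wx, Wh: (C, 4*C) weights; b: (4*C,).
    A real ConvLSTM uses spatial convolutions here; 1x1 is a simplification.
    """
    z = x @ Wx + h @ Wh + b                 # (H, W, 4*C): all gate pre-activations
    C = x.shape[-1]
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    i = sig(z[..., 0 * C:1 * C])            # input gate: how much new info to add
    f = sig(z[..., 1 * C:2 * C])            # forget gate: what part of c to keep
    o = sig(z[..., 2 * C:3 * C])            # output gate: what part of state to emit
    g = np.tanh(z[..., 3 * C:4 * C])        # tanh layer: candidate values
    c_new = f * c + i * g                   # updated cell state
    h_new = o * np.tanh(c_new)              # new hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
H, W, C = 4, 4, 8
x = rng.standard_normal((H, W, C))
h = np.zeros((H, W, C)); c = np.zeros((H, W, C))
Wx = rng.standard_normal((C, 4 * C)) * 0.1
Wh = rng.standard_normal((C, 4 * C)) * 0.1
b = np.zeros(4 * C)
h1, c1 = convlstm_step(x, h, c, Wx, Wh, b)
```

Iterating this step over T time points, feeding each output back in, is what produces the T second features described later.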
The decoder includes a third convolution layer, a plurality of second Fourier neural operators, and a fourth convolution layer, connected in sequence, as shown in fig. 3. During decoding, the output of the previous layer is spliced with the output of the corresponding first FNO layer in the encoder to form the input of each second FNO layer. As shown in fig. 1, assume the first FNO and the second FNO each have three layers, and the three first FNO layers output the features F1, F2 and F3 respectively. F3 is spliced with the output of ConvLSTM and input into the first second-FNO layer; F2 is spliced with the output of the first second-FNO layer and input into the second second-FNO layer; F1 is spliced with the output of the second second-FNO layer and input into the third second-FNO layer.
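The splicing pattern above is the channel-wise concatenation familiar from U-Net-style skip connections. The following shape sketch (all dimensions illustrative) shows how each second-FNO input is formed from the matching encoder feature and the previous layer's output:

```python
import numpy as np

# Feature shapes (batch, height, width, channels) mirroring the text:
# encoder outputs F1, F2, F3 at spatial scales X/2, X/4, X/8.
M, X, Y, C = 2, 32, 32, 64
F1 = np.zeros((M, X // 2, Y // 2, C))
F2 = np.zeros((M, X // 4, Y // 4, C))
F3 = np.zeros((M, X // 8, Y // 8, C))
lstm_out = np.zeros((M, X // 8, Y // 8, C))      # ConvLSTM output, coarsest scale

# Each second-FNO input = concatenation along the channel axis.
in1 = np.concatenate([F3, lstm_out], axis=-1)    # -> first second-FNO
out1 = np.zeros((M, X // 4, Y // 4, C))          # first second-FNO upsamples 2x
in2 = np.concatenate([F2, out1], axis=-1)        # -> second second-FNO
out2 = np.zeros((M, X // 2, Y // 2, C))          # second second-FNO upsamples 2x
in3 = np.concatenate([F1, out2], axis=-1)        # -> third second-FNO
```

The skip connections let the decoder recover spatial detail that would otherwise be lost in the coarsest feature.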
The function and arrangement of the modules will be described in detail in the following embodiments.
Based on the above model schematic, fig. 4 is a flowchart of the oil reservoir prediction method based on a Fourier neural operator and a recurrent neural network provided by the embodiment of the invention, which is suitable for predicting the reservoir distribution under a given geological condition. The method is executed by the electronic equipment, as shown in fig. 4, and specifically comprises the following steps:
s110, acquiring the current permeability distribution of the target area.
The target area refers to a geographical area to be studied, such as an oilfield. The permeability distribution refers to the distribution of permeability at each position within the target area. As shown in FIG. 5, a 191×31×7 3D practical example is provided, showing the permeability distribution along the X direction and the well position distribution. Here PERMX represents permeability and the black dots represent well locations. The current permeability profile refers to the data volume composed of the permeability distribution at the current point in time.
More specifically, assume the two-dimensional geographic grid of the target area is X×Y, where X and Y represent the number of grid cells in two perpendicular directions parallel to the ground; the current permeability profile is then a three-dimensional data volume of dimension [X, Y, 1]. When N examples are predicted at the same time, the current permeability distribution has dimension [N, X, Y, 1], where N is a natural number. In particular, X and Y are arbitrary natural numbers and are not limited by the dimensions of the model training samples.
S120, extracting the first characteristic of the current permeability distribution through a trained encoder.
The main function of the encoder is to reduce the dimension of the input data and extract features. First, the current permeability distribution is input into the first convolution layer for feature extraction, yielding initial features. Optionally, the first convolution layer includes 32 convolution kernels of size 3×3; the initial feature obtained after the current permeability distribution passes through the first convolution layer has dimension [M, X, Y, 64].
Then, the initial features are input into the plurality of first Fourier neural operators, which progressively reduce the dimension in Fourier space and extract depth features. The iterative process of each first Fourier neural operator is:

$$v_{t+1}(x)=\sigma\Big(Wv_t(x)+\mathcal{F}^{-1}\big(R_\phi\cdot\mathcal{F}(v_t)\big)(x)\Big)$$

$$\big(R_\phi\cdot\mathcal{F}(v_t)\big)_{k,l}=\sum_{j=1}^{d_v}R_{\phi,k,l,j}\,\mathcal{F}(v_t)_{k,j}$$

where W and R_φ are trained weight parameters, v_t denotes the input data of the current first Fourier neural operator, l is its dimension index, F(v_t) is the result of the Fourier transform applied to v_t, k indexes the different frequency waves, j is a dimension index of F(v_t), d_v is its total dimension, F⁻¹ is the inverse Fourier transform, σ is the activation function, and v_{t+1} is the output data of the current first Fourier neural operator. W is implemented as 64 convolutions of size 1×1 with stride 2, so the output feature of each first Fourier neural operator has 1/2 the spatial dimension of its input.
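The iteration just described can be sketched numerically. The following is a minimal spectral-convolution layer under stated simplifications: the spatial size is kept fixed (the patent's encoder additionally halves it via the stride-2 W path and a smaller inverse-transform size), only the lowest k_max×k_max modes are retained, and all weights are random stand-ins rather than trained parameters:

```python
import numpy as np

def fourier_layer(v, R, W, k_max):
    """One FNO-style layer sketch: v -> ReLU(W v + IFFT(R * FFT(v))).

    v: input features (H, W_sp, C). R: complex spectral weights
    (k_max, k_max, C, C). W: (C, C) pointwise ("1x1 convolution") weights.
    Modes beyond k_max are dropped: the truncation described in the text.
    """
    vf = np.fft.fft2(v, axes=(0, 1))                      # to Fourier space
    out_f = np.zeros_like(vf)
    # multiply the retained low-frequency modes by the learned weights R
    out_f[:k_max, :k_max] = np.einsum("xyi,xyio->xyo", vf[:k_max, :k_max], R)
    spectral = np.real(np.fft.ifft2(out_f, axes=(0, 1)))  # back to data space
    pointwise = v @ W                                     # the W v term
    return np.maximum(spectral + pointwise, 0.0)          # ReLU activation

rng = np.random.default_rng(0)
v = rng.standard_normal((16, 16, 8))
R = (rng.standard_normal((4, 4, 8, 8)) + 1j * rng.standard_normal((4, 4, 8, 8))) * 0.1
W = rng.standard_normal((8, 8)) * 0.1
out = fourier_layer(v, R, W, k_max=4)
```

Because R acts only on k_max×k_max modes, its parameter count scales with k_max² rather than with the spatial resolution, which is what makes the operator resolution-independent.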
Specifically, the implementation of the Fourier neural operator is divided into three steps: the data is transformed into Fourier space by the Fourier transform, a finite band of frequency modes is retained in Fourier space for calculation, and the result is mapped back into the original space by the inverse Fourier transform. When the inverse Fourier transform is applied, the output size is made 1/2 of the original directly, so the features can be evolved in Fourier space while the dimension of the input data is reduced. The calculation of each Fourier neural operator is derived as follows. A numerical solution is iterated to solve the differential equation

$$(\mathcal{L}_a u)(x) = f(x), \qquad x \in D \tag{1}$$

where x represents the input variable and D the domain of the variable (in this application, the input of each FNO layer is the feature v_t, so x can be regarded as a virtual variable), \(\mathcal{L}_a\) represents a linear differential operator, a a parameter, and u the unknown variable; equation (1) converts the solution of the differential equation into solving for the function u.

The iteration applies a kernel integral operator

$$(\mathcal{K}(a;\phi)\,v_t)(x) = \int_D \kappa_\phi\big(x, y, a(x), a(y)\big)\, v_t(y)\, dy \tag{2}$$

where \(\kappa_\phi\) is a kernel with parameter \(\phi\) that needs to be learned from the data. If the operator's dependence on a is removed and \(\kappa_\phi(x, y) = \kappa_\phi(x - y)\), the above equation becomes a convolution:

$$(\mathcal{K}(\phi)\,v_t)(x) = \int_D \kappa_\phi(x - y)\, v_t(y)\, dy \tag{3}$$

Since a convolution can be computed by a multiplication after a Fourier transform and then an inverse transform back to the original space, equation (3) evolves into

$$(\mathcal{K}(\phi)\,v_t)(x) = \mathcal{F}^{-1}\big(\mathcal{F}(\kappa_\phi) \cdot \mathcal{F}(v_t)\big)(x) \tag{4}$$

where \(\mathcal{F}\) represents the Fourier transform. In fact, since \(\kappa_\phi\) is the parameter to be learned, learning it in the original space and then applying the Fourier transform is unlike learning directly in Fourier space, so a convolution operator \(R_\phi\) defined directly in Fourier space is used in its place:

$$(\mathcal{K}(\phi)\,v_t)(x) = \mathcal{F}^{-1}\big(R_\phi \cdot \mathcal{F}(v_t)\big)(x) \tag{5}$$

For each frequency mode k, \(\mathcal{F}(v_t)(k) \in \mathbb{C}^{d_v}\) and \(R_\phi(k) \in \mathbb{C}^{d_v \times d_v}\). For a periodic function, the function is expanded as a Fourier series; in the calculation, the series is truncated and a finite number of expansion terms is selected to approximate the whole process. That is, a finite set of frequency waves is selected in the frequency domain for optimization; the number of retained modes \(k_{max}\) is a hyper-parameter of the model and must be determined at model construction. \(R_\phi\) is then parameterized directly with the truncated Fourier frequencies, finally becoming a \((k_{max} \times d_v \times d_v)\) tensor independent of x, and equation (5) becomes the mode-wise multiplication

$$\big(R_\phi \cdot \mathcal{F}(v_t)\big)_{k,l} = \sum_{j=1}^{d_v} R_{\phi,k,l,j}\, \mathcal{F}(v_t)_{k,j}, \qquad k = 1, \ldots, k_{max} \tag{6}$$

Since x is regarded as a virtual variable, the full update of one operator is

$$v_{t+1}(x) = \sigma\Big(W v_t(x) + \mathcal{F}^{-1}\big(R_\phi \cdot \mathcal{F}(v_t)\big)(x)\Big) \tag{7}$$

This is the iterative process of the Fourier neural operator, where W and \(R_\phi\) are weight parameters obtained by gradient-descent optimization. As the above derivation shows, the calculation of the Fourier neural operator is more consistent with the solution process of the partial differential equation, so higher precision is achieved on fewer features when it is used as an encoder.
It should be noted that this embodiment performs dimensional downsampling while evolving in Fourier space. W consists of 64 convolution operators of size 1×1 with stride 2, contributing 64 parameters, and the spatial size recovered by the inverse Fourier transform becomes 1/2 of the original. Thus, after each Fourier operator, the output becomes 1/2 of the original size.
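The "inverse transform onto a half-size grid" idea can be illustrated on a single-channel field: keep only the low-frequency block of the spectrum and inverse-transform it onto a grid half the size. This sketch is illustrative only (the patent's operators additionally apply the learned weights R per channel before the inverse transform):

```python
import numpy as np

def fft_downsample_half(v):
    """Halve the spatial size of v (H, W) by cropping the centered
    spectrum to its low-frequency block and inverse-transforming."""
    H, W = v.shape
    h2, w2 = H // 2, W // 2
    vf = np.fft.fftshift(np.fft.fft2(v))          # center the low frequencies
    top, left = (H - h2) // 2, (W - w2) // 2
    vf_lo = vf[top:top + h2, left:left + w2]      # keep low-frequency block
    # rescale so amplitudes are preserved on the smaller grid
    out = np.fft.ifft2(np.fft.ifftshift(vf_lo)) * (h2 * w2) / (H * W)
    return np.real(out)

v = np.ones((8, 8))            # a constant field is exactly preserved
out = fft_downsample_half(v)   # shape (4, 4), still all ones
```

Unlike strided convolution, this downsampling is an ideal low-pass filter: it discards high-frequency content exactly rather than aliasing it.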
In one embodiment, assume the current permeability profile is a tensor of dimension [M, X, Y, 1], where M represents the number of examples and X and Y represent the numbers of grid cells in two perpendicular directions of the target area parallel to the ground; as shown in fig. 2, there are 3 first Fourier neural operators. The first convolution layer includes 32 convolution kernels of size 3×3; the initial feature obtained after the current permeability distribution passes the first convolution layer has dimension [M, X, Y, 64]. After the initial feature passes the first first-Fourier-neural-operator, the output feature dimension is [M, X/2, Y/2, 64], and the number of parameters of this operator is 64×64×k_max1×k_max1 + 64, where k_max1 is the maximum Fourier series (number of retained frequency modes) selected for the first operator. After the output of the first operator passes the second first-Fourier-neural-operator, the output feature dimension is [M, X/4, Y/4, 64], with 64×64×k_max2×k_max2 + 64 parameters, where k_max2 is the maximum Fourier series selected for the second operator. After the output of the second operator passes the third first-Fourier-neural-operator, the output depth feature dimension is [M, X/8, Y/8, 64], with 64×64×k_max3×k_max3 + 64 parameters, where k_max3 is the maximum Fourier series selected for the third operator.
After the depth feature is obtained, it is input into the second convolution layer for feature fusion to obtain the first feature of the current permeability distribution. Optionally, the second convolution layer includes 32 convolution kernels of size 3×3, so the number of parameters of the second convolution layer is 32×3×3.
S130, evolving a time dimension based on the first feature through the trained convolutional long short-term memory network to obtain second features at a plurality of future time points, wherein each second feature represents the reservoir distribution features of the corresponding time point.
The main role of ConvLSTM is to evolve the time dimension based on the first feature. Because the dimension-reduced first feature still contains geospatial information, ConvLSTM is adopted for feature extraction, yielding T second features (T represents the number of future time points to be calculated) used for pressure and saturation recovery. For ease of distinction and description, the feature output by the encoder is called the first feature, and the features output by ConvLSTM are called the second features.
And S140, recovering each second characteristic into the oil reservoir distribution of the corresponding time point through the trained decoder.
Reservoir distribution refers to the distribution of reservoir data composition at various locations within the target area. Wherein the reservoir data at each location includes at least one of the following channels: oil phase pressure, oil phase saturation, water phase pressure, and water phase saturation. The reservoir distribution at a certain point in time is a three-dimensional data volume with dimensions [ X, Y, M ], wherein M represents the number of channels of reservoir data, and X and Y represent the dimensions of the geographic grid respectively. When N computing examples are predicted at the same time, the current reservoir distribution is a four-dimensional data volume with the size of [ N, X, Y, M ].
The main function of the decoder is to up-sample the second feature of each time point obtained by ConvLSTM, yielding the final reservoir distribution. Optionally, each second feature is first input into the third convolution layer for feature extraction. As shown in fig. 3, the third convolution layer includes 64 convolution kernels of size 3×3 with stride 1 for extracting features from each second-feature input; the number of parameters of the third convolution layer is 64×3×3.
Then, the output features of the third convolution layer are input into the plurality of second Fourier neural operators, which progressively raise the dimension and evolve the features in Fourier space. The iterative process of each second Fourier neural operator is: interpolation up-sampling changes the output feature dimension to 2 times that of the input feature, and the up-sampled feature is evolved using the Fourier transform and inverse Fourier transform. As shown in fig. 3, 3 second Fourier operators are used for feature dimension raising and evolution. In each Fourier operator, the input is first changed to 2 times its original size by interpolation up-sampling, and convolution is then computed with 64 convolution kernels of size 1×1 and stride 2; the number of parameters of this part is 64.
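The interpolation up-sampling step can be illustrated with the simplest scheme, nearest-neighbour repetition (the patent does not specify the interpolation kernel, so this is one plausible choice):

```python
import numpy as np

def upsample2x_nearest(v):
    """Nearest-neighbour 2x spatial up-sampling of (H, W, C) features,
    standing in for the decoder's 'interpolation up-sampling' step."""
    return np.repeat(np.repeat(v, 2, axis=0), 2, axis=1)

v = np.arange(8.0).reshape(2, 2, 2)   # tiny 2x2 feature map, 2 channels
u = upsample2x_nearest(v)             # 4x4 feature map, same channels
```

Each input pixel is copied into a 2×2 block; smoother variants (bilinear, bicubic) would interpolate between neighbours instead.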
Accordingly, in the Fourier part, the spatial size recovered by the inverse Fourier transform also becomes 2 times the original, so after each second Fourier operator the overall output becomes 2 times the original size. Still taking the above embodiment as an example, the permeability profile is a tensor of dimension [M, X, Y, 1], and each second feature is [M, X/8, Y/8, 32], where M represents the number of examples. The third convolution layer includes 64 convolution kernels of size 3×3 with stride 1, with 64×3×3 parameters; after each second feature passes through the third convolution layer, the output feature dimension is [M, X/8, Y/8, 64]. After the output of the third convolution layer passes the first second-Fourier-neural-operator, the output feature dimension is [M, X/4, Y/4, 64], with 64×64×k_max4×k_max4 + 64 parameters, where k_max4 is the maximum Fourier series selected for this operator. After the output of the first second-Fourier-operator passes the second second-Fourier-operator, the output feature dimension is [M, X/2, Y/2, 64], with 64×64×k_max5×k_max5 + 64 parameters, where k_max5 is its maximum Fourier series. After the output of the second second-Fourier-operator passes the third second-Fourier-operator, the output feature dimension is [M, X, Y, 64], with 64×64×k_max6×k_max6 + 64 parameters, where k_max6 is its maximum Fourier series.
Finally, the output features of the plurality of second Fourier neural operators are input into the fourth convolution layer for feature extraction, yielding the reservoir distribution at the corresponding time point. The fourth convolution layer includes 2 convolution kernels of size 3×3, so its number of parameters is 2×3×3.
In addition, a plurality of Fourier nerve operators in the encoder and the decoder are arranged, so that the gradual progress of downsampling, feature extraction and upsampling is realized, and excessive information loss is avoided.
In this embodiment, the reservoir governing equation is simulated by combining the Fourier operator with a neural network; the derivation used in solving the Fourier operator follows the partial differential equation throughout, so the method better matches the numerical solution process and has higher prediction precision. Calculation is also faster because less convolution is used in the data space. In addition, the method predicts pressure and saturation simultaneously, without requiring two separate predictions, further improving prediction efficiency. Based on simulation with the full physical model, the average relative errors of the method in pressure and saturation prediction are 4.1% and 2.1% respectively; one prediction takes 0.2 s and yields the pressure and saturation results simultaneously.
In particular, the present embodiment implements up-sampling and down-sampling of the data while evolving the features in Fourier space, using Fourier transforms with different input and output dimensions in the encoder and decoder. The method is particularly suitable for scenes that take a given geological condition distribution (embodied as a permeability distribution) as the prediction premise: because geological conditions vary, the permeability field must be adjusted by terrain fitting, so fusing feature extraction with up-sampling and down-sampling into the same operation reduces the amount of calculation, facilitates fitting different geological condition distributions, and broadens the application range of the model.
Optionally, resNet or DenseNet is also included before and after ConvLSTM to improve model performance. For example, calculations are performed using a 3x3 convolution, using a 3 layer ReseNet or DenseNet, directly based on the final layer output of the Fourier.
On the basis of the above embodiment and the following embodiment, the present embodiment refines the training process of the entire model. Optionally, before the extracting the first feature of the current permeability distribution by the trained encoder, the method further includes the following steps:
firstly, randomly generating a plurality of calculation examples of different permeability distribution, adopting LandSim simulation software to simulate the calculation examples, and extracting pressure distribution, saturation distribution and oil and water yield of the well. As shown at 191 in figure 53137, randomly generating a plurality of calculation examples with different permeabilities based on the 3D actual calculation examples, and performing software simulation and data extraction.
The extracted data is then normalized and divided into training and test sets. Optionally, the data is normalized with the max-min method, shuffled, and split 8:2 into training and test sets.
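The normalization and split just described can be sketched directly (array shapes and value ranges are illustrative, not taken from the patent's data):

```python
import numpy as np

def minmax_normalize(a):
    """Scale an array to [0, 1] with the max-min method described above."""
    lo, hi = a.min(), a.max()
    return (a - lo) / (hi - lo)

rng = np.random.default_rng(0)
data = rng.uniform(10.0, 500.0, size=(100, 8, 8))   # e.g. permeability samples
norm = minmax_normalize(data)

# shuffle, then split 8:2 into training and test sets
idx = rng.permutation(len(norm))
split = int(0.8 * len(norm))
train, test = norm[idx[:split]], norm[idx[split:]]
```

Note that in practice the min and max would be computed on the training set only and reused for the test set, to avoid leaking test statistics into training.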
Meanwhile, in order to better adjust the importance of each part of the output, this embodiment splits the total loss function into a pressure loss and a saturation loss, computing the pressure loss with a 1-norm and the saturation loss with a 2-norm, and constructs the following loss function:

$$L_s=\frac{1}{N_t\,n_b\,T}\sum_{i=1}^{N_t}\sum_{t=1}^{T}\sum_{j=1}^{n_b}\big(S_{j,t}^{i}-\hat{S}_{j,t}^{i}\big)^2,\qquad L_p=\frac{1}{N_t\,n_b\,T}\sum_{i=1}^{N_t}\sum_{t=1}^{T}\sum_{j=1}^{n_b}\big|P_{j,t}^{i}-\hat{P}_{j,t}^{i}\big|,$$

$$E_{AT}=\alpha L_s+\beta L_p,$$

wherein L_s represents the saturation loss, N_t the number of samples, n_b the total number of grids per sample, and T the number of future time points; S_{j,t}^{i} represents the true saturation and \hat{S}_{j,t}^{i} the predicted saturation of grid j of sample i at time point t; L_p represents the pressure loss, P_{j,t}^{i} the true pressure and \hat{P}_{j,t}^{i} the predicted pressure of grid j of sample i at time point t; E_{AT} represents the overall loss, and α and β represent weights, which may be chosen as 4 and 1, respectively.
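A direct transcription of the loss described above (a 2-norm saturation term plus a 1-norm pressure term, weighted 4 and 1); the function and variable names are illustrative, and simple means over all samples, grids and time points are assumed.

```python
import numpy as np

def reservoir_loss(s_pred, s_true, p_pred, p_true, alpha=4.0, beta=1.0):
    """E_AT = alpha * L_s + beta * L_p:
    a 2-norm (mean squared) saturation term and a 1-norm (mean absolute)
    pressure term, averaged over samples, grids and time points."""
    l_s = np.mean((s_pred - s_true) ** 2)     # saturation loss, 2-norm
    l_p = np.mean(np.abs(p_pred - p_true))    # pressure loss, 1-norm
    return alpha * l_s + beta * l_p
```

With unit errors everywhere the total is alpha + beta, which makes the weighting easy to sanity-check.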
Finally, a deep learning model formed by the encoder, the convolutional long short-term memory neural network and the decoder is trained with the training samples and the loss function. Specifically, an ADAM optimizer is used for 100 epochs, the initial learning rate is 0.0002, and the learning rate is multiplied by 0.8 every 10 epochs.
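The stated schedule (initial learning rate 0.0002, multiplied by 0.8 every 10 epochs) is a step-decay schedule; in a PyTorch training loop it would correspond to `torch.optim.Adam` combined with `torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.8)`. A framework-free sketch of the schedule itself:

```python
def learning_rate(epoch, base_lr=2e-4, gamma=0.8, step=10):
    """Step-decay schedule: the lr is multiplied by gamma every `step` epochs."""
    return base_lr * gamma ** (epoch // step)

# learning rate per epoch over the 100-epoch run described above
schedule = [learning_rate(e) for e in range(100)]
```

By the final decade of training the learning rate has decayed to base_lr × 0.8⁹, roughly 13% of its initial value.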
After training, the model parameters are saved to a corresponding model file. In the prediction stage, the model is first loaded; the initial permeability field of the reservoir is then taken as input, the series of pressure and saturation distributions over time is computed by the model, and the relevant well information is further derived.
In this embodiment, large-scale actual oil-field data are used as samples to train the reservoir prediction model based on the Fourier neural operator and the recurrent neural network, which improves the model accuracy. The model performs fewer convolutions in the data space, so it trains faster. Compared with other technical schemes, the model achieves both better precision and higher speed.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 6, the device includes a processor 50, a memory 51, an input device 52 and an output device 53; the number of processors 50 in the device may be one or more, and one processor 50 is taken as an example in fig. 6; the processor 50, the memory 51, the input device 52 and the output device 53 in the device may be connected by a bus or in another way, connection by a bus being taken as an example in fig. 6.
The memory 51, as a computer readable storage medium, is used to store software programs, computer executable programs and modules, such as the program instructions/modules corresponding to the reservoir prediction method based on the Fourier neural operator and the recurrent neural network in the embodiment of the present invention. By running the software programs, instructions and modules stored in the memory 51, the processor 50 executes the various functional applications and data processing of the device, that is, implements the reservoir prediction method based on the Fourier neural operator and the recurrent neural network described above.
The memory 51 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for functions; the storage data area may store data created according to the use of the terminal, etc. In addition, memory 51 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 51 may further include memory located remotely from processor 50, which may be connected to the device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input means 52 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the device. The output means 53 may comprise a display device such as a display screen.
The embodiment of the invention also provides a computer readable storage medium on which a computer program is stored; when executed by a processor, the program implements the reservoir prediction method based on the Fourier neural operator and the recurrent neural network of any embodiment.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the essence of the corresponding technical solutions from the technical solutions of the embodiments of the present invention.
Claims (5)
1. An oil reservoir prediction method based on Fourier neural operators and a recurrent neural network, characterized in that it is applied to reservoir prediction for a large-scale oil field and comprises the following steps:
acquiring the current permeability distribution of a target area, wherein the permeability distribution refers to a two-dimensional geographical distribution and the number of two-dimensional geographical grids may be any number;
extracting a first feature of the current permeability distribution through a trained encoder, wherein the encoder comprises a first convolution layer, a plurality of first Fourier neural operators and a second convolution layer which are sequentially connected; specifically, inputting the current permeability distribution into the first convolution layer for feature extraction to obtain initial features; inputting the initial features into the first Fourier neural operators, which perform step-by-step dimension reduction and depth feature extraction in Fourier space; specifically, the implementation of a Fourier neural operator is divided into three steps: transforming the data into Fourier space with a Fourier transform, computing on a truncated band of Fourier modes, and mapping the data back into the original space with an inverse Fourier transform; the iterative process of each first Fourier neural operator is:

$$v_{t+1}=\sigma\Big(Wv_t+F^{-1}\big(\textstyle\sum_{j=1}^{d_v}R_{k,l,j}\,F(v_t)_{k,j}\big)\Big),$$

wherein W and R_{k,l,j} are all trained weight parameters, v_t represents the input data of the current first Fourier neural operator, l represents the dimension index of v_t, F(v_t) represents the Fourier transform of v_t, k represents the index of the different frequency modes, j represents the dimension index of F(v_t), d_v represents the total number of dimensions of F(v_t), F^{-1} represents the inverse Fourier transform, σ represents the activation function, and v_{t+1} represents the output data of the current first Fourier neural operator; W consists of 64 1×1 convolutions with stride 2, so the dimension of the feature output by each first Fourier neural operator is 1/2 of that of its input; inputting the depth features into the second convolution layer for feature fusion to obtain the first feature of the current permeability distribution; it can be seen that the computation of the Fourier neural operator closely follows the solution process of the partial differential equation, so the encoder can achieve higher precision with fewer features;
evolving the time dimension on the basis of the first feature through a trained convolutional long short-term memory neural network to obtain second features at a plurality of future time points, wherein each second feature is used to represent the reservoir distribution characteristics at the corresponding time point; because the first feature after dimension reduction still contains geospatial information, ConvLSTM is used for feature extraction, obtaining T different features for the pressure and saturation recovery, where T represents the number of future time points to be computed;
restoring each second feature into the reservoir distribution at the corresponding time point through a trained decoder, wherein the reservoir distribution refers to a two-dimensional geographical distribution and the number of two-dimensional geographical grids may be any number; the reservoir distribution includes at least one of the following channels: oil-phase pressure distribution, oil-phase saturation distribution, water-phase pressure distribution, and water-phase saturation distribution; the decoder is used to up-sample the second feature obtained from the ConvLSTM at each time point to obtain the final reservoir distribution; the decoder comprises a third convolution layer, a plurality of second Fourier neural operators and a fourth convolution layer which are sequentially connected; specifically, inputting each second feature into the third convolution layer for feature extraction; inputting the output features of the third convolution layer into the plurality of second Fourier neural operators, which perform step-by-step dimension raising and feature evolution in Fourier space, wherein the iterative process of each second Fourier neural operator comprises: changing the dimension of the feature to 2 times that of the input by interpolation up-sampling, and evolving the up-sampled feature with a Fourier transform and an inverse Fourier transform; inputting the output features of the plurality of second Fourier neural operators into the fourth convolution layer for feature extraction to obtain the reservoir distribution at the corresponding time point; in the Fourier part of the decoder, F^{-1} restores a space whose size is 2 times that of the original;
because in the encoder the output becomes 1/2 of its original size after each first Fourier operator, and correspondingly in the decoder the output becomes 2 times its original size after each second Fourier operator, using Fourier transforms with different input and output sizes in the encoder and the decoder realizes up-sampling and down-sampling of the data at the same time as the features are evolved in Fourier space; the method is suited to scenarios in which a given geological-condition distribution, namely the permeability distribution, is the premise of the prediction: because geological conditions change, the permeability fields differ and must be adjusted by terrain fitting; merging feature extraction with the up-sampling and down-sampling into one operation therefore reduces the amount of computation, facilitates fitting different geological-condition distributions, and widens the range of application of the model;
wherein, before the first feature of the current permeability distribution is extracted by the trained encoder, the method further comprises: training a reservoir prediction model based on the Fourier neural operator and the recurrent neural network with large-scale actual oil-field data as samples, the model being used to simulate reservoir distribution changes under different permeability conditions and thereby the influence of geological changes on the production process; specifically, randomly generating a plurality of examples with different permeability distributions, simulating the examples with the LandSim simulation software, and extracting the pressure distribution, the saturation distribution and the oil and water production of the wells; normalizing the extracted data and dividing it into a training set and a test set; and constructing the following loss function:
$$L_s=\frac{1}{N_t\,n_b\,T}\sum_{i=1}^{N_t}\sum_{t=1}^{T}\sum_{j=1}^{n_b}\big(S_{j,t}^{i}-\hat{S}_{j,t}^{i}\big)^2,\qquad L_p=\frac{1}{N_t\,n_b\,T}\sum_{i=1}^{N_t}\sum_{t=1}^{T}\sum_{j=1}^{n_b}\big|P_{j,t}^{i}-\hat{P}_{j,t}^{i}\big|,\qquad E_{AT}=\alpha L_s+\beta L_p,$$

wherein L_s represents the saturation loss, N_t represents the number of samples, n_b represents the total number of grids per sample, and T represents the number of future time points; S_{j,t}^{i} represents the true saturation and \hat{S}_{j,t}^{i} the predicted saturation of grid j of sample i at time point t; L_p represents the pressure loss, P_{j,t}^{i} represents the true pressure and \hat{P}_{j,t}^{i} the predicted pressure of grid j of sample i at time point t; E_{AT} represents the overall loss, and α and β represent weights;
training a deep learning model formed by the encoder, the convolutional long short-term memory neural network and the decoder with the training samples of the training set and the loss function;
the method simulates the reservoir governing equation by combining the Fourier operator with the neural network, and the derivation of the Fourier-operator solution follows the partial differential equation throughout, so the method better matches the numerical solution process and achieves higher prediction precision.
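The time evolution in claim 1 — a ConvLSTM unrolled from the first feature to produce T second features — can be sketched as below. For brevity the gate convolutions are reduced to 1×1 (per-pixel) linear maps, so each gate is a pointwise function of [x, h]; a real ConvLSTM would use 3×3 kernels. All names and weight shapes are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_rollout(x0, params, T):
    """Unroll a minimal ConvLSTM-style cell for T future steps and
    return the T hidden states (the "second features")."""
    Wi, Wf, Wo, Wg = params               # each maps 2C -> C channels
    H, W_, C = x0.shape
    h = np.zeros((H, W_, C))
    c = np.zeros((H, W_, C))
    x = x0
    outputs = []
    for _ in range(T):
        z = np.concatenate([x, h], axis=-1)     # (H, W_, 2C)
        i = sigmoid(z @ Wi)                     # input gate
        f = sigmoid(z @ Wf)                     # forget gate
        o = sigmoid(z @ Wo)                     # output gate
        g = np.tanh(z @ Wg)                     # candidate state
        c = f * c + i * g                       # cell-state update
        h = o * np.tanh(c)                      # hidden state keeps (H, W_) layout
        outputs.append(h)
        x = h                                   # feed the state forward in time
    return outputs

C = 4
rng = np.random.default_rng(0)
params = [rng.standard_normal((2 * C, C)) * 0.1 for _ in range(4)]
feats = convlstm_rollout(rng.standard_normal((8, 8, C)), params, T=5)
```

Note how every hidden state retains the spatial layout of the input feature, which is why each of the T outputs can be decoded back into a pressure or saturation map.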
2. The method according to claim 1, wherein the current permeability distribution is a tensor of [M, X, Y, 1], M representing the number of samples, X and Y respectively representing the number of grids of the target area in two perpendicular directions parallel to the ground; and the number of first Fourier neural operators is 3;

the first convolution layer comprises 32 3×3 convolutions, and the dimension of the initial feature obtained after the current permeability distribution is computed by the first convolution layer is [M, X, Y, 64];

after the initial feature passes through the first of the first Fourier neural operators, the dimension of the output feature is [M, X/2, Y/2, 64], wherein the number of parameters of this operator is 64×M×k_{1max}×k_{1max}+64, k_{1max} representing the maximum Fourier series selected in it, with k_{1max}=20;

after the output feature of the first operator passes through the second of the first Fourier neural operators, the dimension of the output feature is [M, X/4, Y/4, 64], wherein the number of parameters of this operator is 64×M×k_{2max}×k_{2max}+64, k_{2max} representing the maximum Fourier series selected in it, with k_{2max}=30;

after the output feature of the second operator passes through the third of the first Fourier neural operators, the dimension of the output depth feature is [M, X/8, Y/8, 64], wherein the number of parameters of this operator is 64×M×k_{3max}×k_{3max}+64, k_{3max} representing the maximum Fourier series selected in it, with k_{3max}=12;

the second convolution layer comprises 32 3×3 convolutions, and the number of parameters of the second convolution layer is 32×3×3.
3. The method according to claim 1, wherein the permeability distribution is a tensor of [M, X, Y, 1] and each second feature is a tensor of [M, X/8, Y/8, 32], M representing the number of samples, X and Y respectively representing the number of grids of the target area in two perpendicular directions parallel to the ground; and there are 3 second Fourier neural operators;

the third convolution layer comprises 64 3×3 convolution kernels with stride 1 and has 64×3×3 parameters, and the dimension of the output feature after each second feature passes through the third convolution layer is [M, X/8, Y/8, 64];

after the output feature of the third convolution layer passes through the first of the second Fourier neural operators, the dimension of the output feature is [M, X/4, Y/4, 64], wherein the number of parameters of this operator is 64×M×k_{4max}×k_{4max}+64, k_{4max} representing the maximum Fourier series selected in it, with k_{4max}=10;

after the output feature of the first operator passes through the second of the second Fourier neural operators, the dimension of the output feature is [M, X/2, Y/2, 64], wherein the number of parameters of this operator is 64×M×k_{5max}×k_{5max}+64, k_{5max} representing the maximum Fourier series selected in it, with k_{5max}=30;

after the output feature of the second operator passes through the third of the second Fourier neural operators, the dimension of the output feature is [M, X, Y, 64], wherein the number of parameters of this operator is 64×M×k_{6max}×k_{6max}+64, k_{6max} representing the maximum Fourier series selected in it, with k_{6max}=12;

the fourth convolution layer comprises 2 3×3 convolutions and has 2×3×3 parameters.
4. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the reservoir prediction method based on Fourier neural operators and a recurrent neural network of any one of claims 1-3.
5. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the reservoir prediction method based on Fourier neural operators and a recurrent neural network of any one of claims 1-3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211125454.2A CN115204530B (en) | 2022-09-16 | 2022-09-16 | Oil reservoir prediction method based on Fourier neural operator and cyclic neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115204530A CN115204530A (en) | 2022-10-18 |
CN115204530B true CN115204530B (en) | 2023-05-23 |
Family
ID=83572133
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211125454.2A Active CN115204530B (en) | 2022-09-16 | 2022-09-16 | Oil reservoir prediction method based on Fourier neural operator and cyclic neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115204530B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114282725A (en) * | 2021-12-24 | 2022-04-05 | 山东大学 | Construction of transient oil reservoir agent model based on deep learning and oil reservoir prediction method |
CN114492213A (en) * | 2022-04-18 | 2022-05-13 | 中国石油大学(华东) | Wavelet neural operator network model-based residual oil saturation and pressure prediction method |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101951595B1 (en) * | 2018-05-18 | 2019-02-22 | 한양대학교 산학협력단 | Vehicle trajectory prediction system and method based on modular recurrent neural network architecture |
CN110717270B (en) * | 2019-10-10 | 2022-10-28 | 特雷西能源科技(杭州)有限公司 | Oil and gas reservoir simulation method based on data |
CN114152977B (en) * | 2020-09-07 | 2023-01-10 | 中国石油化工股份有限公司 | Reservoir parameter prediction method and device based on geological feature constraint and storage medium |
CN112541572B (en) * | 2020-11-25 | 2022-08-12 | 中国石油大学(华东) | Residual oil distribution prediction method based on convolutional encoder-decoder network |
CN113052371B (en) * | 2021-03-16 | 2022-05-31 | 中国石油大学(华东) | Residual oil distribution prediction method and device based on deep convolutional neural network |
CN114462262A (en) * | 2021-12-07 | 2022-05-10 | 中国海洋石油集团有限公司 | History fitting prediction method based on dual dimensionality of time and space |
CN114492211B (en) * | 2022-04-15 | 2022-07-12 | 中国石油大学(华东) | Residual oil distribution prediction method based on autoregressive network model |
CN114693005B (en) * | 2022-05-31 | 2022-08-26 | 中国石油大学(华东) | Three-dimensional underground oil reservoir dynamic prediction method based on convolution Fourier neural network |
Also Published As
Publication number | Publication date |
---|---|
CN115204530A (en) | 2022-10-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | 
SE01 | Entry into force of request for substantive examination | | 
GR01 | Patent grant | | 