WO2010088516A2 - System and method for predicting fluid flow in subterranean reservoirs - Google Patents


Info

Publication number
WO2010088516A2
Authority
WO
WIPO (PCT)
Prior art keywords
reservoir
data
enkf
gaussian
models
Prior art date
Application number
PCT/US2010/022584
Other languages
French (fr)
Other versions
WO2010088516A3 (en
Inventor
Pallav Sarma
Wen Hsiung Chen
Original Assignee
Chevron U.S.A. Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chevron U.S.A. Inc. filed Critical Chevron U.S.A. Inc.
Priority to EP10736475.4A priority Critical patent/EP2391831A4/en
Priority to CA2750926A priority patent/CA2750926A1/en
Priority to BRPI1006973A priority patent/BRPI1006973A2/en
Priority to AU2010208105A priority patent/AU2010208105B2/en
Publication of WO2010088516A2 publication Critical patent/WO2010088516A2/en
Publication of WO2010088516A3 publication Critical patent/WO2010088516A3/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V20/00Geomodelling in general
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/30Circuit design
    • G06F30/32Circuit design at the digital level
    • G06F30/33Design verification, e.g. functional simulation or model checking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00Details relating to CAD techniques
    • G06F2111/10Numerical modelling

Definitions

  • the present invention relates generally to a system and method for predicting fluid flow within subterranean reservoirs, and more particularly, to a system and method for utilizing kernel-based ensemble Kalman filters for updating reservoir models corresponding to reservoirs having non-Gaussian random field and non-Gaussian production data characteristics.
  • the EnKF has been recently applied and improved upon by many researchers in the petroleum industry. It was introduced to the petroleum industry by Naevdal, G., Johnsen, L.M., Aanonsen, S.I., Vefring, E. H., Reservoir Monitoring and Continuous Model Updating Using Ensemble Kalman Filter, SPE paper 84372 presented at the SPE Annual Technical Conference and Exhibition, Denver, CO, 2003. Naevdal, G., Mannseth, T., Vefring, E.
  • Kernel methods have recently generated significant interest in the machine learning community (Scholkopf, B., Smola, A.J., Learning with Kernels, MIT Press, Cambridge, MA, 2002), and enable efficient nonlinear generalizations of linear algorithms.
  • Well known examples of the application of kernel methods to linear algorithms to create nonlinear generalizations are support vector machines, kernel-based clustering, and kernel principal component analysis (Scholkopf, B., Smola, A.J., Learning with Kernels, MIT Press, Cambridge, MA, 2002). See also Sarma, P., Durlofsky, L.
  • a system for predicting fluid flow in a subterranean reservoir having non-Gaussian characteristics.
  • the system includes a computer processor, a computer readable program code for accessing a set of models representing the reservoir, and one or more data sources in communication with and/or accessible to the computer processor for collecting reservoir field data for a predetermined duration of time.
  • the system further includes a reservoir model update program code, executable by the computer processor, for receiving the reservoir field data and for using the field data to update the set of models at a predetermined time such that data from the updated set of models is consistent with the field data.
  • the non-Gaussian characteristics of the reservoir in the updated set of models are preserved, thereby maximizing accuracy of reservoir prediction data to be generated by the updated set of models.
  • kernel methods are used to create a nonlinear generalization of the EnKF capable of representing non-Gaussian random fields characterized by multi-point geostatistics.
  • by deriving the EnKF in a high-dimensional feature space implicitly defined using kernels, both the Kalman gain and update equations are nonlinearized, thus providing a completely general nonlinear set of EnKF equations, the nonlinearity being controlled by the kernel.
  • the feature space and associated kernel are chosen such that it is more appropriate to apply the EnKF in this space rather than in the input space.
  • one such class of kernels is high-order polynomial kernels, with which multi-point statistics, and therefore the geological realism of the updated random fields, can be preserved.
  • the present method is applied to two non-limiting example cases where permeability is updated using production data, and is shown to better reproduce complex geology compared to the standard EnKF, while providing a reasonable match to production data.
  • FIG. 1a is a schematic diagram showing an implementation of the reservoir prediction system of the present invention.
  • FIG. 1b is a flow diagram showing an example of a computer-implemented method for reservoir prediction in accordance with the present invention.
  • FIG. 2a shows a reference or "true" permeability field (left) for Example 1 and its histogram (FIG. 2b);
  • FIG. 2c shows a channel training image used for Example 1
  • FIGS. 3a-3d show four realizations from an exemplary initial ensemble of realizations;
  • FIGS. 4a-4d show four realizations of a final ensemble (corresponding to the initial ensemble of FIGS. 3a-3d) obtained with the standard EnKF;
  • FIGS. 5a-5d show four realizations of a final ensemble (corresponding to the initial ensemble of FIGS. 3a-3d) obtained with the EnKF with order 5 polynomial kernel;
  • FIGS. 6a-6d show four realizations of a final ensemble (corresponding to the initial ensemble of FIGS. 3a-3d) obtained with the EnKF with order 7 polynomial kernel;
  • FIGS. 7a-7d show four realizations of a final ensemble (corresponding to the initial ensemble of FIGS. 3a-3d) obtained with the EnKF with order 9 polynomial kernel;
  • FIGS. 8a-8d show typical marginal distributions of a final ensemble (corresponding to the initial ensemble of FIGS. 3a-3d) obtained with polynomial kernels of order 1 (Fig. 8a), order 5 (Fig. 8b), order 7 (Fig. 8c) and order 9 (Fig. 8d);
  • FIGS. 9a-9d show oil (91, 93, 95, 97) and water (92, 94, 96, 98) production rates for four wells for an initial ensemble, and that of the true permeability field (O's for oil rate, X's for water rate);
  • FIGS. 10a-10d show oil (101, 103, 105, 107) and water (102, 104, 106, 108) production rates for four wells for a final ensemble obtained with the standard EnKF, and that of the true permeability field (O's for oil rate, X's for water rate);
  • FIGS. 11a-11d show oil (111, 113, 115, 117) and water (112, 114, 116, 118) production rates for four wells for a final ensemble obtained with the EnKF of order 5, and that of the true permeability field (O's for oil rate, X's for water rate);
  • FIGS. 12a-12d show oil (121, 123, 125, 127) and water (122, 124, 126, 128) production rates for four wells for a final ensemble obtained with the EnKF of order 7, and that of the true permeability field (O's for oil rate, X's for water rate);
  • FIGS. 13a-13d show oil (131, 133, 135, 137) and water (132, 134, 136, 138) production rates for four wells for a final ensemble obtained with the EnKF of order 9, and that of a true permeability field (O's for oil rate, X's for water rate);
  • FIG. 14a shows a reference or "true" permeability field (left) for Example 2 and its histogram (FIG. 14b);
  • FIG. 15 shows a "Donut" training image used for Example 2.
  • FIGS. 16a-16d show four realizations of another exemplary initial ensemble
  • FIGS. 17a-17d show four realizations of a final ensemble (corresponding to the initial ensemble of FIGS. 16a-16d) obtained with the standard EnKF;
  • FIGS. 18a-18d show four realizations of a final ensemble (corresponding to the initial ensemble of FIGS. 16a-16d) obtained with the EnKF with order 5 polynomial kernel;
  • FIGS. 19a-19d show four realizations of a final ensemble (corresponding to the initial ensemble of FIGS. 16a-16d) obtained with the EnKF with order 7 polynomial kernel;
  • FIG. 20a is a histogram of a true permeability field, and FIGS. 20b-20d show typical marginal distributions of a final ensemble (corresponding to the initial ensemble of FIGS. 16a-16d) obtained with polynomial kernels of order 1 (FIG. 20b), order 5 (FIG. 20c) and order 7 (FIG. 20d);
  • FIGS. 21a-21d show oil (211, 213, 215, 217) and water (212, 214, 216, 218) production rates for four wells for an initial ensemble, and that of a true permeability field (O's for oil rate, X's for water rate);
  • FIGS. 22a-22d show oil (221, 223, 225, 227) and water (222, 224, 226, 228) production rates for four wells for a final ensemble obtained with the standard EnKF, and that of a true permeability field (O's for oil rate, X's for water rate);
  • FIGS. 23a-23d show oil (231, 233, 235, 237) and water (232, 234, 236, 238) production rates for four wells for a final ensemble obtained with the EnKF of order 5, and that of a true permeability field (O's for oil rate, X's for water rate); and
  • FIGS. 24a-24d show oil (241, 243, 245, 247) and water (242, 244, 246, 248) production rates for four wells for a final ensemble obtained with the EnKF of order 7, and that of a true permeability field (O's for oil rate, X's for water rate).
  • the present invention may be described and implemented in the general context of instructions to be executed by a computer.
  • Such computer-executable instructions may include programs, routines, objects, components, data structures, and computer software technologies that can be used to perform particular tasks and process abstract data types.
  • Software implementations of the present invention may be coded in different languages for application in a variety of computing platforms and environments. It will be appreciated that the scope and underlying principles of the present invention are not limited to any particular computer software technology.
  • the present invention may be practiced using any one or combination of computer processing system configurations, including but not limited to single and multi-processor systems, hand-held devices, programmable consumer electronics, mini-computers, mainframe computers, and the like.
  • the invention may also be practiced in distributed computing environments where tasks are performed by servers or other processing devices that are linked through one or more data communications networks.
  • program modules may be located in both local and remote computer storage media including memory storage devices.
  • an article of manufacture for use with a computer processor such as a CD, prerecorded disk or other equivalent devices, could include a computer program storage medium and program means recorded thereon for directing the computer processor to facilitate the implementation and practice of the present invention.
  • Such devices and articles of manufacture also fall within the spirit and scope of the present invention.
  • the invention can be implemented in numerous ways, including for example as a system (including a computer processing system), a method (including a computer implemented method), an apparatus, a computer readable medium, a computer program product, a graphical user interface, a web portal, or a data structure tangibly fixed in a computer readable memory.
  • FIG. 1a shows an embodiment of system 10 for predicting fluid flow in a subterranean reservoir having non-Gaussian characteristics.
  • non-Gaussian characteristics or features refer to permeability, porosity and other reservoir characteristics or features having nonlinear, non-Gaussian geospatial distributions.
  • the system includes one or more data sources 12, which can include electronically accessible or operator-entered data, for providing the initial model ensemble and reservoir field data, including, for example, oil production, water production, gas production and seismic data.
  • the initial model ensemble includes initial reservoir characteristics, at a time t0, derived by operators and/or computer modeling based on initial observations, knowledge or data gathered about the reservoir.
  • the initial model ensemble may take into account initial estimates of permeability, porosity and other characteristics used to create simulation models of subterranean reservoirs.
  • one or more initial model computer processors 20 having computer readable program code and one or more sensors 18 can be provided in lieu of, or in addition to, data source 12.
  • System 10 further includes a model update processor 14, which can be or physically reside as part of processor 20, that is used for updating the initial model ensemble at a predetermined or user defined time t1.
  • the model update processor 14 includes model update program code for receiving the reservoir field data and initial model ensemble data from data source(s) 12, sensors 18 and/or initial model processor(s) 20.
  • the model update program code of processor 14 updates the initial model ensemble at time t1 so as to preserve the non-Gaussian characteristics of the reservoir in the updated set of models, which enables enhanced accuracy and reliability of reservoir prediction data.
  • the model update program code of processor 14 is programmed to implement steps 24, 26 and 28 shown in FIG. 1b.
  • Updating of the initial model ensemble includes using model update code utilizing a generalized ensemble Kalman filter having higher order kernels.
  • "Higher order" refers to any order greater than 1, which can be selected by the user.
  • the kernels are polynomial kernels.
  • the generalized ensemble Kalman filter includes a gain function adapted for reservoirs having non-Gaussian random field characteristics, as represented for example by Equation 19 shown below.
  • the gain function can further be adapted for reservoirs having non-Gaussian production data characteristics, as shown for example by Equation 34 described below.
  • the generalized ensemble Kalman filter also includes an update model adapted for reservoirs having non-Gaussian random field characteristics, as shown by Equation 24.
  • the update model can also be adapted for reservoirs having non- Gaussian production data characteristics as shown by Equation 38 described below.
  • system 10 includes display/forecasting/optimization processor(s) 16 having image realization program code executable by processor(s) 16.
  • processor 16 can be the same or part of one or more of processors 14 and 20.
  • Processor(s) 16, via the image realization program code, transform the prediction data generated by the updated set of models into image data representations of the reservoir. The data representations are then communicated to image display means for displaying the image representations of the reservoir. Output from the updated model ensemble can also be used for reservoir forecasting and optimization.
  • the EnKF is a temporally sequential data assimilation method, and at each assimilation time, the following steps are performed: a forecast step (evaluation of the dynamic system), followed by a data assimilation step (calculation of Kalman gain), and then by an update of the state variables of the EnKF (Kalman update).
  • the state variables of the EnKF usually consist of the following types of variables: static variables (such as permeability, porosity), dynamic variables (such as pressure, saturation), and production data (such as bottom hole pressures, production and injection rates).
  • y_j ∈ R^(N_s) is the j-th ensemble member
  • m are the static variables
  • x are the dynamic variables
  • d ∈ R^(N_d) are the production data to be assimilated.
  • the number of ensemble members is M.
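The state-vector layout described above can be sketched in a few lines of numpy. The grid size and well count below are illustrative stand-ins, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
N_grid, M = 45 * 45, 100           # grid cells and ensemble size (example values)

# Static variables m (e.g. log-permeability), dynamic variables x
# (e.g. pressure and saturation), and production data d for each member.
m = rng.normal(size=(N_grid, M))
x = rng.normal(size=(2 * N_grid, M))
d = rng.normal(size=(16, M))       # e.g. rates/BHPs at 16 hypothetical wells

# Each column y_j = [m_j; x_j; d_j] is one ensemble member (Equation 1).
Y = np.vstack([m, x, d])
print(Y.shape)                     # (6091, 100)
```

Stacking the ensemble members as columns makes the covariance and update steps below single matrix products.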
  • an initial ensemble of the static and dynamic variables has to be created.
  • geostatistical techniques are used to create the ensemble of static variables (permeability field etc.) corresponding to prior knowledge of the geology and hard data.
  • the dynamic variables are usually considered known without uncertainty, primarily because the uncertainty of the dynamic variables such as pressure and saturation at the initial time is smaller compared to that of the static variables, as the reservoir is generally in dynamic equilibrium before start of production. In any case, if the initial dynamic variables are considered uncertain, it can be easily reflected through the initial ensemble. There is usually no production data available initially.
  • the forecast step can be performed, wherein the dynamic system (reservoir simulation model) is evaluated to the next assimilation time using the current estimates of the static and dynamic variables.
  • the simulator has to be run once for each ensemble member (M times). This provides the forecasted dynamic variables and production data at the next assimilation step, and the step can be written compactly as: (Equation 2)
  • y_j^f is the forecasted state vector obtained after the forecast step
  • y_j^u is the updated state vector obtained after the Kalman update
  • the subscript k stands for the assimilation time, and f denotes the reservoir simulation equations. The assimilation time subscript k will be removed from the discussion below for simplicity, and will only be shown when necessary.
  • the data assimilation step is performed, wherein the Kalman gain is calculated using the static and dynamic variables and production data obtained from the forecast step, and is given as:
  • K_g is known as the Kalman gain matrix, and is of size N_s × N_d
  • C_y^f is the covariance matrix of y^f
  • H = [0 | I] is a matrix that extracts the production data d from the state vector y, where 0 is a null matrix and I is an identity matrix
  • C_e is the error covariance matrix of the production data d, which is usually assumed to be diagonal (that is, data measurement errors are independent of each other).
  • the final step of the EnKF is to update the state variables using the actual observations, which is given as:
  • d_o are the observed production data, and random perturbations corresponding to C_e are added to d_o to create the ensemble of observed data, d_o,j.
  • the updated state vector y" thus calculated is then used to evaluate the next forecast step, and the process repeated for the next assimilation time.
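The forecast-covariance, Kalman-gain, and perturbed-observation update steps above can be sketched as follows. The dimensions, synthetic ensemble, and observation values are illustrative placeholders, not data from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)
N_s, N_d, M = 200, 8, 50                  # state size, data size, ensemble size

Yf = rng.normal(size=(N_s, M))            # forecasted state vectors (columns)
H = np.hstack([np.zeros((N_d, N_s - N_d)), np.eye(N_d)])  # d = H @ y
Ce = 0.01 * np.eye(N_d)                   # diagonal measurement-error covariance

# Ensemble estimate of the forecast covariance C_y^f.
dY = Yf - Yf.mean(axis=1, keepdims=True)
Cy = dY @ dY.T / (M - 1)

# Kalman gain and update, with one perturbed copy of the observations
# per ensemble member.
Kg = Cy @ H.T @ np.linalg.inv(H @ Cy @ H.T + Ce)
d_obs = rng.normal(size=N_d)                          # stand-in observed data
D = d_obs[:, None] + 0.1 * rng.normal(size=(N_d, M))  # perturbed-obs ensemble
Yu = Yf + Kg @ (D - H @ Yf)               # each column is an updated member
```

Note that each updated column of `Yu` is a linear combination of the forecasted columns, which is exactly the limitation of the standard EnKF that the kernel formulation addresses.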
  • As seen from Equation 6, the updated state vectors are linear combinations of the forecasted state vectors. Further, because the EnKF only uses the covariance of the forecasted state vectors, it is technically appropriate only for multi-Gaussian random fields. In other words, the EnKF is only able to preserve two-point statistics or the covariance of random fields.
  • the EnKF can be generalized to handle non-Gaussian random fields and multi-point statistics using kernel methods.
  • kernel methods have generated a lot of interest (Scholkopf, B., Smola, A., Muller, K., Nonlinear Component Analysis as a Kernel Eigenvalue Problem, Neural Computation, 10, 1299-1319, 1998 and Scholkopf, B., Smola, A.J., Learning with Kernels, MIT Press, Cambridge, MA, 2002).
  • the basic idea is to map the data in the input space R^(N_y) to a so-called feature space F through a nonlinear map φ, and then apply a linear algorithm in the feature space.
  • the feature space F is chosen such that it is more appropriate to apply the linear algorithm in this space rather than in the input space R^(N_y).
  • This approach can be applied to any linear algorithm that can be expressed solely in terms of dot products without the explicit use of the variables themselves; thus kernel methods allow the construction of elegant nonlinear generalizations of linear algorithms (Scholkopf, B., Smola, A., Muller, K., Nonlinear Component Analysis as a Kernel Eigenvalue Problem, Neural Computation, 10, 1299-1319, 1998 and Scholkopf, B., Smola, A.J., Learning with Kernels, MIT Press, Cambridge, MA, 2002). By replacing the dot product in the feature space with an appropriate kernel function, efficiency similar to the linear algorithm can be achieved.
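As a concrete illustration of the kernel trick described above, the order-2 polynomial kernel on R^2 evaluates the dot product of an explicit 3-dimensional feature map without ever forming it. The feature map shown is the standard textbook choice for this kernel, not one taken from the patent:

```python
import numpy as np

# For k(a, b) = (a . b)^2 on R^2, an implicit feature map is
# phi(a) = (a1^2, sqrt(2)*a1*a2, a2^2) in R^3.
def phi(a):
    return np.array([a[0] ** 2, np.sqrt(2) * a[0] * a[1], a[1] ** 2])

a, b = np.array([1.0, 2.0]), np.array([3.0, -1.0])

# The kernel computes the feature-space dot product directly from the
# input-space vectors.
assert np.isclose(np.dot(a, b) ** 2, np.dot(phi(a), phi(b)))
```

For higher orders and dimensions the feature space grows combinatorially, which is why algorithms written purely in terms of dot products remain efficient while explicit mappings would not.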
  • F is called the feature space, and it could have an arbitrarily large dimensionality N_F (Scholkopf, B., Smola, A., Muller, K., Nonlinear Component Analysis as a Kernel Eigenvalue Problem, Neural Computation, 10, 1299-1319, 1998; Sarma, P., Durlofsky, L.J., Aziz, K., Kernel Principal Component Analysis for an Efficient, Differentiable Parameterization of Multipoint Geostatistics, Mathematical Geosciences, 40, 3-32, 2008).
  • the definition will become clearer when space F is associated with a kernel function below. Because the state vector y consists of different kinds of variables (permeability, porosity, saturation, etc.) with possibly different random fields, it may not be appropriate to have a single mapping φ that operates on y as a whole. Thus we will consider the following mapping:
  • K_g = C_Y^f H^T (H C_Y^f H^T + C_e)^(-1) (Equation 10)
  • H = [0 | I], as before, now operating on the state vectors in the feature space F.
  • because N_F could possibly be very large (Scholkopf, B., Smola, A.J., Learning with Kernels, MIT Press, Cambridge, MA, 2002; Sarma, P., Durlofsky, L.J., Aziz, K., Kernel Principal Component Analysis for an Efficient, Differentiable Parameterization of Multipoint Geostatistics, Mathematical Geosciences, 40, 3-32, 2008), it may not be practically possible to calculate K_g or C_Y^f directly.
  • K_g can be written as:
  • Equation 10 can be equivalently written as:
  • The left-hand side of Equation 14 can be compactly written as:
  • Equation 16
  • A only involves the production data d and its covariances, and because d is not mapped to any feature space but is in the original input space, A can be calculated very efficiently.
  • The Kalman update equation in feature space F is similar to the standard Kalman update equation given by Equation 6, and can be written as:
  • Equation 20 can be written as:
  • Equation 22 can be simplified as:
  • the updated state vector Y^u in the feature space is a linear combination of the φ maps of the forecasted state vectors in the feature space, φ(y_j^f).
  • the kernel function ψ(y_i, y_j) calculates the dot product in space F directly from the elements of the input space R^(N_y), and can therefore be calculated very efficiently. That is, the right-hand side of Equation 26 does not directly involve the mapping φ(y). Every kernel function satisfying Mercer's theorem is uniquely associated with a mapping φ (Scholkopf, B., Smola, A., Muller, K., Nonlinear Component Analysis as a Kernel Eigenvalue Problem, Neural Computation, 10, 1299-1319, 1998; Scholkopf, B., Smola, A.J., Learning with Kernels, MIT Press, Cambridge, MA, 2002). In accordance with Equation 8, if we have different feature spaces for the static and dynamic variables and production data, it can easily be seen that the kernel function can be written as:
  • ψ(y_i, y_j) = ψ_m(m_i, m_j) + ψ_x(x_i, x_j) + ψ_d(d_i, d_j) (Equation 27)
  • ψ_m, ψ_x and ψ_d are the kernel functions corresponding to φ_m, φ_x and φ_d.
  • Equation 29 can be thought of as equivalent to the Kalman update equation, as its solution provides the updated state vectors y^u in the input space. It is clear from Equation 29 that, depending on the nature of the kernel function, y^u could be a linear or nonlinear combination of the forecasted state vectors y^f. Further, this is clearly a generalization of the EnKF, because by choosing different kernel functions, different versions of the EnKF can be obtained.
  • in Equation 8, m, x, and d have separate mappings to their respective feature spaces, and the combined kernel is given by Equation 27.
  • for a polynomial kernel of order 1, the feature space F is the same as the input space R^(N_y), and as a result, the kernel formulation of the EnKF reduces to the standard EnKF.
  • For the polynomial kernel defined in Equation 30, the Kalman update equation defined in Equation 29 can be written as:
  • One approach to solve Equation 31 for y" efficiently is to apply a fixed- point iteration method (Scholkopf, B., Smola, A., Muller, K., Nonlinear Component Analysis as a Kernel Eigenvalue Problem, Neural Computation, 10, 1299-1319, 1998), wherein the iteration scheme is given as:
  • other kernel functions, such as RBF, exponential, and sigmoid kernels, and any other kernel functions satisfying Mercer's theorem, could be used and are within the scope of the claimed invention.
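A fixed-point pre-image scheme of the kind referenced above can be sketched for a plain polynomial kernel ψ(a, b) = (a·b)^q: setting the gradient of ||φ(y) − Σ_j b_j φ(y_j^f)||² to zero and rearranging yields the iteration below. This is an illustrative scheme under those assumptions, not the patent's exact Equation 32; the data and coefficients b are hypothetical:

```python
import numpy as np

def preimage(Yf, b, q, iters=50):
    """Fixed-point search for y with phi(y) ~ sum_j b_j * phi(y_j^f),
    for the polynomial kernel psi(a, c) = (a . c)**q (illustrative)."""
    y = Yf @ b                                 # initial guess: linear combination
    for _ in range(iters):
        w = b * (Yf.T @ y) ** (q - 1)          # per-member kernel-derivative weights
        y = (Yf @ w) / (y @ y) ** (q - 1)      # stationarity condition
    return y

rng = np.random.default_rng(2)
Yf = rng.normal(size=(10, 5))                  # 5 forecasted members in R^10
b = np.array([0.3, 0.2, 0.1, 0.25, 0.15])      # hypothetical update coefficients

# With q = 1 the feature space equals the input space, so the pre-image is
# exactly the linear combination, i.e. the standard EnKF update.
assert np.allclose(preimage(Yf, b, q=1), Yf @ b)
```

For q > 1 the iteration is not guaranteed to converge from every starting point, which is consistent with the text's use of the forecasted ensemble as the search subspace.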
  • the production data d was assumed to be Gaussian, and was therefore not mapped to any feature space.
  • the above derivations can be extended to account for non-Gaussian characteristics of the production data d by mapping it to an appropriate feature space through the mapping φ_d(d).
  • An approach similar to above can be applied to arrive at the equivalent of the Kalman gain equation for non-Gaussian d, and therefore, without delving into details of the derivation, the final equivalent of the Kalman gain equation is given as:
  • the M × M centered kernel matrix of the forecasted production data is defined as:
  • 1 = [1, 1, ..., 1]^T is an M × 1 vector
  • Ψ_d = [ψ_d(d, d_1), ..., ψ_d(d, d_M)]^T is also an M × 1 vector
  • with components Ψ_d(d_j) = ψ_d(d, d_j).
  • in Equation 34 it is assumed that the error covariance matrix of φ_d(d) is diagonal with variance σ². Equation 34 can be considered the equivalent of the standard Kalman gain equation for non-Gaussian production data.
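The centering of the kernel matrix of the forecasted production data can be sketched with the usual Gram-matrix centering identity; the order-3 polynomial kernel and random data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N_d = 20, 6
D = rng.normal(size=(N_d, M))            # forecasted production data (columns)

K = (D.T @ D) ** 3                       # M x M Gram matrix, order-3 kernel
one = np.full((M, M), 1.0 / M)           # the (1/M) * ones matrix

# Centering the implicit feature vectors about their mean:
# K_c = (I - one) K (I - one) = K - one K - K one + one K one.
Kc = K - one @ K - K @ one + one @ K @ one

# Row and column sums of a centered kernel matrix vanish.
assert np.allclose(Kc.sum(axis=0), 0.0)
```

Centering in feature space this way never requires forming the feature vectors themselves, only the M × M Gram matrix.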
  • the simulation model for this example represents a simple 2D horizontal square reservoir covering an area of 450 × 450 m² with a thickness of 10 m, and is modeled by a 45 × 45 × 1 horizontal 2D grid.
  • the fluid system is essentially an incompressible two-phase unit mobility oil-water system, with zero connate water saturation and zero residual oil saturation.
  • the reservoir is initially saturated with oil at a constant pressure of 5800 psi at the top.
  • the reference or "true" permeability field is that of a fluvial channelized reservoir shown in FIGS. 2a and 2b, and is obtained using the training image in FIG. 2c with the multipoint geostatistical software snesim (Strebelle, S., Conditional Simulation of Complex Geological Structures using Multiple-point Statistics, Mathematical Geology, 34, 1-22, 2002).
  • FIG. 2a high permeability sand is depicted by region 32 and the low permeability background is depicted by region 34.
  • the sand and background are assumed to be homogeneous and isotropic with permeability of the sand being 10 D and the background permeability being 500 mD.
  • FIG. 2b shows the binary histogram of the permeability field.
  • Such a binary and discontinuous permeability field is purposefully chosen because the discontinuous nature of such random fields is quite difficult to preserve with continuous linear algorithms like the EnKF, and thus can be considered an effective case to demonstrate the applicability of the KEnKF.
  • the reservoir has 8 injectors and 8 producers placed in a line drive pattern as shown in FIG. 2a, where the black circles (O's) represent producers and black crosses (X's) represent injectors.
  • the model is run for 1900 days with the true permeability field, with the injectors under water rate control at 100 bbd and the producers under bottom hole pressure (BHP) control at 5789 psi. This provides the true observed data, which consists of the injector BHPs and producer oil and water production rates. Gaussian noise with standard deviations of 1 psi and 1 bbd is added to the injector BHPs and production rates, respectively, to obtain the ensemble of observed data.
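Building the ensemble of observed data by perturbing the true observations, as described above, can be sketched as follows. The well counts (8 injector BHPs plus oil and water rates for 8 producers) and noise levels (1 psi, 1 bbd) come from the text; the "true" values themselves are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(4)
M = 100                                   # ensemble size

bhp_true = np.full(8, 5795.0)             # placeholder injector BHPs (psi)
rates_true = np.full(16, 60.0)            # placeholder oil/water rates (bbd)

# One independently perturbed copy of the observations per ensemble member.
bhp_ens = bhp_true[:, None] + rng.normal(0.0, 1.0, size=(8, M))
rates_ens = rates_true[:, None] + rng.normal(0.0, 1.0, size=(16, M))

d_obs = np.vstack([bhp_ens, rates_ens])   # ensemble of observed data
print(d_obs.shape)                        # (24, 100)
```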
  • An initial ensemble of 100 permeability fields is obtained using snesim with the same training image as in FIG. 2c, and four of these realizations are shown in FIGS. 3a-3d. High permeability sand is depicted by region 40, and the low permeability background is depicted by region 42.
  • the information conveyed by the training image can be thought of as our prior geological knowledge of the reservoir, that is, we assume that we know that there are channels in the reservoir (say from outcrop data), and by integrating this knowledge with the observed dynamic data, we are trying to obtain a posterior estimate of the locations and sinuosity of the channels, conditioned to both our prior knowledge and observations.
  • FIGS. 4-7 show four updated realizations each of the final ensemble obtained with polynomial kernels of order 1 (that is, the standard EnKF), order 5, order 7 and order 9 respectively.
  • FIGS. 9a-9d show the oil (91, 93, 95, 97) and water (92, 94, 96, 98) production rates for four of the eight producers for the initial ensemble, and also that of the reference permeability field (black circles (O's) for oil production rate, black crosses (X's) for water production rate).
  • FIGS. 10-13 illustrate the match to the production data obtained with kernels of order 1, 5, 7 and 9 respectively.
  • FIGS. 10a-10d show that all the members of the final ensemble almost exactly match the observed production data. However, as the order of the kernel increases (FIGS. 11-13), the match deteriorates, but is still reasonable except for one of the wells for the order 9 kernel (FIG. 13).
  • a possible explanation for the deterioration in match could be that, since this data assimilation problem is a non-unique, ill-posed problem with possibly many solutions, as the order of the kernel is increased, the subspace of the original input space R^(N_y) over which the KEnKF searches for solutions becomes smaller and smaller, and it thus becomes more difficult to find realizations that provide a reasonable degree of match to the observations. In other words, as the order of the kernel is increased, the dimension of the feature space increases, and thus the same number of realizations (determined by the ensemble size) spans a smaller subspace of the feature space.
  • the reservoir model for this example is the same as in the last example.
  • the only difference is the reference permeability field, shown in FIG. 14a, which is obtained using the training image shown in FIG. 15 with the geostatistical software filtersim (Zhang, T., Multiple Point Geostatistics: Filter-based Local Training Pattern Classification for Spatial Pattern Simulation, PhD Thesis, Stanford University, Stanford, CA, 2006).
  • the permeability field is characterized by high permeability circular inclusions ("donuts") 142 immersed in a low permeability background (144 regions).
  • the model is run with the true permeability field for 950 days in this case to obtain the observed data.
  • an initial ensemble of 100 permeability fields is obtained using filtersim with the same training image as in FIG. 15, and four of these realizations are shown in FIGS. 16a-16d.
  • FIGS. 17-19 show four updated realizations each of the final ensemble obtained with polynomial kernels of order 1, order 5, and order 7 respectively.
  • FIGS. 20a-20d show the marginal distributions, and we see again that with the standard EnKF, the marginal distribution is Gaussian as expected (FIG. 20b), while with increasing kernel order, the marginal distribution becomes a better approximation of the original binary distribution (FIG. 20a).
  • FIGS. 21a-21d show the oil and water production rates for four of the eight producers for the initial ensemble as in the first example, and also that of the reference permeability field.
  • FIGS. 22-24 illustrate the match to the production data obtained with kernels of order 1, 5, and 7 respectively. In contrast to the first example, in this case, all kernels obtain a very good match to the production data. However, the realizations obtained with the 7th order kernel (FIG. 19) are clearly much better estimates of the true realization than those obtained with the standard EnKF (FIG. 17). This exemplifies the fact that if reasonable matches are obtained with different order kernels, the ensemble obtained with higher order kernels may provide better estimates of the random field being estimated.


Abstract

A reservoir prediction system and method are provided that use a generalized EnKF using kernels, capable of representing non-Gaussian random fields characterized by multi-point geostatistics. The main drawback of the standard EnKF is that the Kalman update essentially results in a linear combination of the forecasted ensemble, and the EnKF only uses the covariance and cross-covariance between the random fields (to be updated) and observations, thereby only preserving two-point statistics. Kernel methods allow the creation of nonlinear generalizations of linear algorithms that can be exclusively written in terms of dot products. By deriving the EnKF in a high-dimensional feature space implicitly defined using kernels, both the Kalman gain and update equations are nonlinearized, thus providing a completely general nonlinear set of EnKF equations, the nonlinearity being controlled by the kernel. By choosing high order polynomial kernels, multi-point statistics and therefore geological realism of the updated random fields can be preserved. The method is applied to two non-limiting examples where permeability is updated using production data as observations, and is shown to better reproduce complex geology compared to the standard EnKF, while providing a reasonable match to the production data.

Description

SYSTEM AND METHOD FOR PREDICTING FLUID FLOW IN SUBTERRANEAN
RESERVOIRS
This application claims priority to U.S. Serial No. 61/148,800, filed January 30, 2009, which is incorporated herein by reference in its entirety.
FIELD OF THE INVENTION
[0001] The present invention relates generally to a system and method for predicting fluid flow within subterranean reservoirs, and more particularly, to a system and method for utilizing kernel-based ensemble Kalman filters for updating reservoir models corresponding to reservoirs having non-Gaussian random field and non-Gaussian production data characteristics.
BACKGROUND OF THE INVENTION
[0002] Integrating various kinds of static and dynamic data during the reservoir modeling and simulation process has been shown to reduce the uncertainty of the simulation models, thereby improving the predictive capacity of such models, which in turn can lead to better reservoir management decisions. In this regard, the ensemble Kalman filter (EnKF) has recently generated significant attention as a promising method for conditioning reservoir simulation models to dynamic production data. Further, a recent emphasis on uncertainty quantification, closed-loop reservoir optimization and real-time monitoring has made the EnKF even more valuable, as the EnKF is particularly suited for continuous model updating, and provides an ensemble of models that can be used to approximate the posterior distribution of any output of the simulation model.
[0003] The EnKF has been recently applied and improved upon by many researchers in the petroleum industry. It was introduced to the petroleum industry by Naevdal, G., Johnsen, L.M., Aanonsen, S.I., Vefring, E. H., Reservoir Monitoring and Continuous Model Updating Using Ensemble Kalman Filter, SPE paper 84372 presented at the SPE Annual Technical Conference and Exhibition, Denver, CO, 2003. Naevdal, G., Mannseth, T., Vefring, E. H., Near-Well Reservoir Monitoring Through Ensemble Kalman Filter, paper SPE 75235 presented at the SPE/DOE Improved Oil Recovery Symposium, Tulsa, OK, 2002, wherein the EnKF was used to update static parameters in near-wellbore simulation models, and later also used to update permeability, pressure and saturation fields of a 2D three phase simulation model. Since then, others have modified and improved the EnKF including, Gu, Y., Oliver, D. S., History Matching of the PUNQ-S3 Reservoir Model Using the Ensemble Kalman Filter, SPE Journal, 10, 217-224, 2005, Wen, X.-H., Chen, W.H., Real-time Reservoir Model Updating Using Ensemble Kalman Filter, paper SPE 92991 presented at the SPE Reservoir Simulation Symposium, Houston, TX, 2005, Li, G., Reynolds, A.C., An Iterative Ensemble Kalman Filter for Data Assimilation, paper SPE 109808 presented at the SPE Annual Technical Conference and Exhibition, Anaheim, CA, 2007, Skjervheim, J.A., Evensen, G., Aanonsen, S. L, Ruud, B.O., Johansen, T.A., Incorporating 4D Seismic Data in Reservoir Simulation Models Using Ensemble Kalman Filter, SPE Journal, 12, 282-292, 2007, etc.
[0004] It is known that a key limitation of the EnKF is that it is technically appropriate only for random fields (e.g., permeability) characterized by two-point geostatistics (multi-Gaussian random fields). Application of the EnKF to complex non-Gaussian geological models such as channel systems leads to modification of these models towards Gaussianity. As a result, although a good agreement to the observed production data may be obtained from the updated models, the predictive capacity of such models may be questionable. The main reason behind this limitation is that the updated ensemble obtained using the EnKF is a linear combination of the forecasted ensemble, and the EnKF only uses the covariance and cross-covariance between the random fields (to be updated) and observations, thereby only preserving two-point statistics.
[0005] Kernel methods have recently generated significant interest in the machine learning community (Scholkopf, B., Smola, A.J., Learning with Kernels, MIT Press, Cambridge, MA, 2002), and enable efficient nonlinear generalizations of linear algorithms. Well known examples of the application of kernel methods to linear algorithms to create nonlinear generalizations are support vector machines, kernel-based clustering, and kernel principal component analysis (Scholkopf, B., Smola, A.J., Learning with Kernels, MIT Press, Cambridge, MA, 2002). See also Sarma, P., Durlofsky, L. J., Aziz, K., Kernel Principal Component Analysis for an Efficient, Differentiable Parameterization of Multipoint Geostatistics, Mathematical Geosciences, 40, 3-32, 2008, which describes a method that utilizes kernel PCA to parameterize non-Gaussian random fields, which could then be used with gradient-based optimization methods for efficient history matching while preserving geological realism.
SUMMARY OF THE INVENTION
[0006] A system is provided for predicting fluid flow in a subterranean reservoir having non-Gaussian characteristics. The system includes a computer processor, a computer readable program code for accessing a set of models representing the reservoir, and one or more data sources in communication with and/or accessible to the computer processor for collecting reservoir field data for a predetermined duration of time. The system further includes a reservoir model update program code, executable by the computer processor, for receiving the reservoir field data and for using the field data to update the set of models at a predetermined time such that data from the updated set of models is consistent with the field data. [0007] In accordance with the present invention, the non-Gaussian characteristics of the reservoir in the updated set of models are preserved, thereby maximizing accuracy of reservoir prediction data to be generated by the updated set of models.
[0008] In accordance with another aspect of the present invention, kernel methods are used to create a nonlinear generalization of the EnKF capable of representing non-Gaussian random fields characterized by multi-point geostatistics. By deriving the EnKF in a high-dimensional feature space implicitly defined using kernels, both the Kalman gain and update equations are nonlinearized, thus providing a completely general nonlinear set of EnKF equations, the nonlinearity being controlled by the kernel. The feature space and associated kernel are chosen such that it is more appropriate to apply the EnKF in this space rather than in the input space. In accordance with a non-limiting embodiment, one such class of kernels is high order polynomial kernels, with which multi-point statistics and therefore geological realism of the updated random fields can be preserved. The present method is applied to two non-limiting example cases where permeability is updated using production data, and is shown to better reproduce complex geology compared to the standard EnKF, while providing a reasonable match to production data.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] A detailed description of the present invention is made with reference to specific embodiments thereof that are illustrated in the appended drawings. The drawings depict only typical embodiments of the invention and therefore are not to be considered to be limiting of its scope.
[0010] FIG. 1a is a schematic diagram showing an implementation of the reservoir prediction system of the present invention; and [0011] FIG. 1b is a flow diagram showing an example of a computer-implemented method for reservoir prediction in accordance with the present invention;
[0012] FIG. 2a shows a reference or "true" permeability field (left) for Example 1 and its histogram (FIG. 2b);
[0013] FIG. 2c shows a channel training image used for Example 1;
[0014] FIGS. 3a-3d show four realizations from an exemplary initial ensemble or realizations;
[0015] FIGS. 4a-4d show four realizations of a final ensemble (corresponding to the initial ensemble of FIGS. 3a-3d) obtained with the standard EnKF;
[0016] FIGS. 5a-5d show four realizations of a final ensemble (corresponding to the initial ensemble of FIGS. 3a-3d) obtained with the EnKF with order 5 polynomial kernel;
[0017] FIGS. 6a-6d show four realizations of a final ensemble (corresponding to the initial ensemble of FIGS. 3a-3d) obtained with the EnKF with order 7 polynomial kernel;
[0018] FIGS. 7a-7d show four realizations of a final ensemble (corresponding to the initial ensemble of FIGS. 3a-3d) obtained with the EnKF with order 9 polynomial kernel;
[0019] FIGS. 8a-8d show typical marginal distributions of a final ensemble (corresponding to the initial ensemble of FIGS. 3a-3d) obtained with polynomial kernels of order 1 (FIG. 8a), order 5 (FIG. 8b), order 7 (FIG. 8c) and order 9 (FIG. 8d);
[0020] FIGS. 9a-9d show oil (91, 93, 95, 97) and water (92, 94, 96, 98) production rates for four wells for an initial ensemble, and that of the true permeability field (O's for oil rate, X's for water rate); [0021] FIGS. 10a-10d show oil (101, 103, 105, 107) and water (102, 104, 106, 108) production rates for four wells for a final ensemble obtained with the standard EnKF, and that of the true permeability field (O's for oil rate, X's for water rate);
[0022] FIGS. 11a-11d show oil (111, 113, 115, 117) and water (112, 114, 116, 118) production rates for four wells for a final ensemble obtained with the EnKF of order 5, and that of the true permeability field (O's for oil rate, X's for water rate);
[0023] FIGS. 12a-12d show oil (121, 123, 125, 127) and water (122, 124, 126, 128) production rates for four wells for a final ensemble obtained with the EnKF of order 7, and that of the true permeability field (O's for oil rate, X's for water rate);
[0024] FIGS. 13a-13d show oil (131, 133, 135, 137) and water (132, 134, 136, 138) production rates for four wells for a final ensemble obtained with the EnKF of order 9, and that of a true permeability field (O's for oil rate, X's for water rate);
[0025] FIG. 14a shows a reference or "true" permeability field (left) for Example 2 and its histogram (FIG. 14b);
[0026] FIG. 15 shows a "Donut" training image used for Example 2;
[0027] FIGS. 16a-16d show four realizations of another exemplary initial ensemble;
[0028] FIGS. 17a-17d show four realizations of a final ensemble (corresponding to the initial ensemble of FIGS. 16a-16d) obtained with the standard EnKF;
[0029] FIGS. 18a-18d show four realizations of a final ensemble (corresponding to the initial ensemble of FIGS. 16a-16d) obtained with the EnKF with order 5 polynomial kernel; [0030] FIGS. 19a-19d show four realizations of a final ensemble (corresponding to the initial ensemble of FIGS. 16a-16d) obtained with the EnKF with order 7 polynomial kernel;
[0031] FIGS. 20a-20d show a histogram of a true permeability field (FIG. 20a), and typical marginal distributions of a final ensemble (corresponding to the initial ensemble of FIGS. 16a-16d) obtained with polynomial kernels of order 1 (FIG. 20b), order 5 (FIG. 20c) and order 7 (FIG. 20d);
[0032] FIGS. 21a-21d show oil (211, 213, 215, 217) and water (212, 214, 216, 218) production rates for four wells for an initial ensemble, and that of a true permeability field (O's for oil rate, X's for water rate);
[0033] FIGS. 22a-22d show oil (221, 223, 225, 227) and water (222, 224, 226, 228) production rates for four wells for a final ensemble obtained with the standard EnKF, and that of a true permeability field (O's for oil rate, X's for water rate);
[0034] FIGS. 23a-23d show oil (231, 233, 235, 237) and water (232, 234, 236, 238) production rates for four wells for a final ensemble obtained with the EnKF of order 5, and that of a true permeability field (O's for oil rate, X's for water rate); and
[0035] FIGS. 24a-24d show oil (241, 243, 245, 247) and water (242, 244, 246, 248) production rates for four wells for a final ensemble obtained with the EnKF of order 7, and that of a true permeability field (O's for oil rate, X's for water rate).
DETAILED DESCRIPTION
[0036] The present invention may be described and implemented in the general context of instructions to be executed by a computer. Such computer-executable instructions may include programs, routines, objects, components, data structures, and computer software technologies that can be used to perform particular tasks and process abstract data types. Software implementations of the present invention may be coded in different languages for application in a variety of computing platforms and environments. It will be appreciated that the scope and underlying principles of the present invention are not limited to any particular computer software technology.
[0037] Moreover, those skilled in the art will appreciate that the present invention may be practiced using any one or combination of computer processing system configurations, including but not limited to single and multi-processor systems, hand-held devices, programmable consumer electronics, mini-computers, mainframe computers, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by servers or other processing devices that are linked through one or more data communications networks. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
[0038] Also, an article of manufacture for use with a computer processor, such as a CD, prerecorded disk or other equivalent devices, could include a computer program storage medium and program means recorded thereon for directing the computer processor to facilitate the implementation and practice of the present invention. Such devices and articles of manufacture also fall within the spirit and scope of the present invention.
[0039] Referring now to the drawings, embodiments of the present invention will be described. The invention can be implemented in numerous ways, including for example as a system (including a computer processing system), a method (including a computer implemented method), an apparatus, a computer readable medium, a computer program product, a graphical user interface, a web portal, or a data structure tangibly fixed in a computer readable memory. Several embodiments of the present invention are discussed below. The appended drawings illustrate only typical embodiments of the present invention and therefore are not to be considered limiting of its scope and breadth.
[0040] FIG. 1a shows an embodiment of system 10 for predicting fluid flow in a subterranean reservoir having non-Gaussian characteristics. For purposes of the invention described herein, "non-Gaussian characteristics" or features refer to permeability, porosity and other reservoir characteristics or features having non-linear, non-Gaussian geospatial distribution. The system includes one or more data sources 12, which can include electronically accessible or operator-entered data, for providing an initial model ensemble and reservoir field data, including, for example, oil production, water production, gas production and seismic data. The initial model ensemble includes initial reservoir characteristics, at a time t₀, derived by operators and/or computer modeling based on initial observations, knowledge or data gathered about the reservoir. The initial model ensemble may take into account initial estimates of permeability, porosity and other characteristics used to create simulation models of subterranean reservoirs. Optionally, one or more initial model computer processors 20 having computer readable program code and one or more sensors 18 can be provided in lieu of or in addition to data source 12.
[0041] System 10 further includes a model update processor 14, which can be or physically reside as part of processor 20, that is used for updating the initial model ensemble at a predetermined or user defined time t₁. The model update processor 14 includes model update program code for receiving the reservoir field data and initial model ensemble data from data source(s) 12, sensors 18 and/or initial model processor(s) 20. At time t₁, the model update program code of processor 14 updates the initial model ensemble so as to preserve the non-Gaussian characteristics of the reservoir in the updated set of models, which enables enhanced accuracy and reliability of reservoir prediction data. [0042] In accordance with a preferred embodiment of the present invention, the model update program code of processor 14 is programmed to implement steps 24, 26 and 28 shown in FIG. 1b. Updating of the initial model ensemble includes using model update code utilizing a generalized ensemble Kalman filter having higher order kernels. "Higher order" refers to any order greater than 1, which can be selected by the user. In a preferred embodiment, the kernels are polynomial kernels. The generalized ensemble Kalman filter includes a gain function adapted for reservoirs having non-Gaussian random field characteristics, as represented for example by Equation 19 shown below. The gain function can further be adapted for reservoirs having non-Gaussian production data characteristics, as shown for example by Equation 34 described below.
[0043] The generalized ensemble Kalman filter also includes an update model adapted for reservoirs having non-Gaussian random field characteristics, as shown by Equation 24. As with the gain function, the update model can also be adapted for reservoirs having non- Gaussian production data characteristics as shown by Equation 38 described below.
[0044] Referring again to FIG. 1a, system 10 includes display/forecasting/optimization processor(s) 16 having image realization program code, executable by processor(s) 16. Again, processor 16 can be the same as or part of one or more of processors 14 and 20. Processor(s) 16, via the image realization program code, transform the prediction data generated by the updated set of models into image data representations of the reservoir. The data representations are then communicated to image display means for displaying the image representations of the reservoir. Output from the updated model ensemble can also be used for reservoir forecasting and optimization.

The Ensemble Kalman Filter
[0045] This section provides a brief description of the ensemble Kalman filter. More details can be found in Naevdal, G., Johnsen, L.M., Aanonsen, S.I., Vefring, E.H., Reservoir Monitoring and Continuous Model Updating Using Ensemble Kalman Filter, SPE paper 84372 presented at the SPE Annual Technical Conference and Exhibition, Denver, CO, 2003 and Wen, X.-H., Chen, W.H., Real-time Reservoir Model Updating Using Ensemble Kalman Filter, paper SPE 92991 presented at the SPE Reservoir Simulation Symposium, Houston, TX, 2005. The EnKF is a temporally sequential data assimilation method, and at each assimilation time, the following steps are performed: a forecast step (evaluation of the dynamic system), followed by a data assimilation step (calculation of Kalman gain), and then by an update of the state variables of the EnKF (Kalman update).
[0046] The state variables of the EnKF usually consist of the following types of variables: static variables (such as permeability, porosity), dynamic variables (such as pressure, saturation), and production data (such as bottom hole pressures, production and injection rates). Thus, the state vector of the EnKF at a given assimilation time can be written as:
$$\mathbf{y}_j = \begin{bmatrix} \mathbf{m} \\ \mathbf{x} \\ \mathbf{d} \end{bmatrix}_j \in \mathbb{R}^{N_s};\quad j = 1,\dots,M \qquad \text{(Equation 1)}$$
[0047] Here, $\mathbf{y}_j \in \mathbb{R}^{N_s}$ is the jth ensemble member, m are the static variables, x are the dynamic variables, and $\mathbf{d} \in \mathbb{R}^{N_d}$ are the production data to be assimilated. The number of ensemble members is M. Before the EnKF can be applied, an initial ensemble of the static and dynamic variables has to be created. Usually, geostatistical techniques are used to create the ensemble of static variables (permeability field etc.) corresponding to prior knowledge of the geology and hard data. However, the dynamic variables are usually considered known without uncertainty, primarily because the uncertainty of the dynamic variables such as pressure and saturation at the initial time is smaller compared to that of the static variables, as the reservoir is generally in dynamic equilibrium before start of production. In any case, if the initial dynamic variables are considered uncertain, it can be easily reflected through the initial ensemble. There is usually no production data available initially.
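The composition of the state vector in Equation 1 is simply a stacking of the three variable types; a minimal numpy sketch follows, where all sizes and names are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Illustrative sizes only (assumptions for this sketch, not values from the patent).
M = 100                        # ensemble size
n_m, n_x, N_d = 30, 50, 8      # static variables, dynamic variables, production data

rng = np.random.default_rng(0)

def build_state_vector(m, x, d):
    """Stack static variables m, dynamic variables x and production data d
    into one EnKF state vector y, as in Equation 1."""
    return np.concatenate([m, x, d])

# One state vector per ensemble member, stored as the columns of an N_s x M array.
ensemble = np.column_stack([
    build_state_vector(rng.normal(size=n_m),
                       rng.normal(size=n_x),
                       rng.normal(size=N_d))
    for _ in range(M)
])
```

Each column is one ensemble member of length N_s = n_m + n_x + N_d.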
[0048] Once the initial ensemble is created, the forecast step can be performed, wherein the dynamic system (reservoir simulation model) is evaluated to the next assimilation time using the current estimates of the static and dynamic variables. The simulator has to be run once for each ensemble member (M times). This provides the forecasted dynamic variables and production data at the next assimilation step, and the step can be written compactly as:
$$\mathbf{y}_{k,j}^f = f\!\left(\mathbf{y}_{k-1,j}^u\right);\quad j = 1,\dots,M \qquad \text{(Equation 2)}$$
[0049] Here $\mathbf{y}_j^f$ is the forecasted state vector obtained after the forecast step, $\mathbf{y}_j^u$ is the updated state vector obtained after the Kalman update, and subscript k stands for the assimilation time. $f$ depicts the reservoir simulation equations. The assimilation time subscript k will be removed from discussions below for simplicity, and will only be shown when necessary.
[0050] After the forecasted state vector is obtained, the data assimilation step is performed, wherein, the Kalman gain is calculated using the static and dynamic variables and production data obtained from the forecast step, and is given as:
$$\mathbf{K}_g = \mathbf{C}_y^f\mathbf{H}^T\left(\mathbf{H}\mathbf{C}_y^f\mathbf{H}^T + \mathbf{C}_e\right)^{-1} \qquad \text{(Equation 3)}$$

[0051] Here, $\mathbf{K}_g$ is known as the Kalman gain matrix, and is of size $N_s \times N_d$, and $\mathbf{C}_y^f$ is the covariance matrix of $\mathbf{y}^f$. Matrix $\mathbf{H} = [\mathbf{0}\,|\,\mathbf{I}]$, where $\mathbf{0}$ is a $N_d \times (N_s - N_d)$ matrix of zeros, and $\mathbf{I}$ is a $N_d \times N_d$ identity matrix. $\mathbf{C}_e$ is the error covariance matrix of the production data $\mathbf{d}$, which is usually assumed to be diagonal (that is, data measurement errors are independent of each other). From the above, it is clear that $\mathbf{C}_y^f\mathbf{H}^T$ is the covariance matrix of the full forecasted state vector $\mathbf{y}^f$ with the forecasted production data $\mathbf{d}$, and $\mathbf{H}\mathbf{C}_y^f\mathbf{H}^T$ is the covariance matrix of the forecasted production data $\mathbf{d}$ with itself. That is, defining the centered vectors $\bar{\mathbf{y}}^f = \mathbf{y}^f - \overline{\mathbf{y}^f}$ and $\bar{\mathbf{d}} = \mathbf{d} - \overline{\mathbf{d}}$, we have:

$$\mathbf{C}_y^f\mathbf{H}^T = \frac{1}{M}\sum_{j=1}^{M} \bar{\mathbf{y}}_j^f\,\bar{\mathbf{d}}_j^T \qquad \text{(Equation 4)}$$

$$\mathbf{H}\mathbf{C}_y^f\mathbf{H}^T = \frac{1}{M}\sum_{j=1}^{M} \bar{\mathbf{d}}_j\,\bar{\mathbf{d}}_j^T \qquad \text{(Equation 5)}$$
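Equations 4 and 5 estimate the needed covariances directly from the centered ensemble; a minimal numpy sketch follows, with purely illustrative sizes (not values from the patent):

```python
import numpy as np

rng = np.random.default_rng(1)
N_s, N_d, M = 50, 4, 20           # illustrative sizes only

Yf = rng.normal(size=(N_s, M))    # forecasted state vectors, one per column
D = Yf[-N_d:, :]                  # forecasted production data d = H y^f (last N_d entries)

# Center each member about the ensemble mean.
Yc = Yf - Yf.mean(axis=1, keepdims=True)
Dc = D - D.mean(axis=1, keepdims=True)

Cy_Ht = Yc @ Dc.T / M             # Equation 4: covariance of state with production data
H_Cy_Ht = Dc @ Dc.T / M           # Equation 5: covariance of production data with itself
```

Note that the last N_d rows of Equation 4's matrix reproduce Equation 5, consistent with the definition of H.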
[0052] The final step of the EnKF is to update the state variables using the actual observations, which is given as:
$$\mathbf{y}_j^u = \mathbf{y}_j^f + \mathbf{K}_g\left(\mathbf{d}_{o,j} - \mathbf{d}_j\right) \qquad \text{(Equation 6)}$$
[0053] Here, $\mathbf{d}_o$ are the observed production data, and random perturbations corresponding to $\mathbf{C}_e$ are added to $\mathbf{d}_o$ to create the ensemble of observed data, $\mathbf{d}_{o,j}$. The updated state vector $\mathbf{y}^u$ thus calculated is then used to evaluate the next forecast step, and the process repeated for the next assimilation time.
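The full assimilation step (Equations 3 and 6), including the perturbation of the observations with noise drawn from C_e, can be sketched as follows; the sizes and noise level are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(2)
N_s, N_d, M = 50, 4, 20                  # illustrative sizes only

Yf = rng.normal(size=(N_s, M))           # forecasted ensemble, one member per column
D = Yf[-N_d:, :]                         # forecasted production data
d_obs = rng.normal(size=N_d)             # observed production data d_o
C_e = 0.01 * np.eye(N_d)                 # diagonal measurement-error covariance

Yc = Yf - Yf.mean(axis=1, keepdims=True)
Dc = D - D.mean(axis=1, keepdims=True)

# Kalman gain, Equation 3.
Kg = (Yc @ Dc.T / M) @ np.linalg.inv(Dc @ Dc.T / M + C_e)

# Perturb the observations with noise drawn from C_e, then update (Equation 6).
noise = rng.multivariate_normal(np.zeros(N_d), C_e, size=M).T
D_obs = d_obs[:, None] + noise
Yu = Yf + Kg @ (D_obs - D)
```

The updated ensemble Yu then seeds the next forecast step.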
[0054] It is clear from Equation 6 that the updated state vectors are linear combinations of the forecasted state vectors. Further, because the EnKF only uses the covariance of the forecasted state vectors, it is technically appropriate only for multi-Gaussian random fields. In other words, the EnKF is only able to preserve two-point statistics or covariance of random fields. However, just preserving two-point statistics is not appropriate if the goal is to capture realistic geology with complex patterns of continuity (such as channels) that is characterized by multipoint statistics or non-Gaussian random fields (Sarma, P., Durlofsky, L., Aziz, K., A New Approach to Automatic History Matching using Kernel PCA, paper SPE 106176 presented at the SPE Reservoir Simulation Symposium, Houston, TX, 2006; Sarma, P., Durlofsky, L.J., Aziz, K., Kernel Principal Component Analysis for an Efficient, Differentiable Parameterization of Multipoint Geostatistics, Mathematical Geosciences, 40, 3-32, 2008).
[0055] Generalized EnKF using Kernels
[0056] The EnKF can be generalized to handle non-Gaussian random fields and multi-point statistics using kernel methods. In recent years, kernel methods have generated a lot of interest (Scholkopf, B., Smola, A., Muller, K., Nonlinear Component Analysis as a Kernel Eigenvalue Problem, Neural Computation, 10, 1299-1319, 1998 and Scholkopf, B., Smola, A.J., Learning with Kernels, MIT Press, Cambridge, MA, 2002). The basic idea is to map the data in the input space $\mathbb{R}^{N_s}$ to a so-called feature space F through a nonlinear map $\phi$, and then apply a linear algorithm in the feature space. The feature space F is chosen such that it is more appropriate to apply the linear algorithm in this space rather than in the input space $\mathbb{R}^{N_s}$. This approach can be applied to any linear algorithm that can be expressed solely in terms of dot products without the explicit use of the variables themselves; thus kernel methods allow the construction of elegant nonlinear generalizations of linear algorithms (Scholkopf, B., Smola, A., Muller, K., Nonlinear Component Analysis as a Kernel Eigenvalue Problem, Neural Computation, 10, 1299-1319, 1998 and Scholkopf, B., Smola, A.J., Learning with Kernels, MIT Press, Cambridge, MA, 2002). By replacing the dot product in the feature space with an appropriate kernel function, efficiency similar to the linear algorithm can be achieved. Well known examples of the application of kernel methods to linear algorithms to create nonlinear generalizations are support vector machines, kernel-based clustering, and kernel principal component analysis (Scholkopf, B., Smola, A., Muller, K., Nonlinear Component Analysis as a Kernel Eigenvalue Problem, Neural Computation, 10, 1299-1319, 1998 and Scholkopf, B., Smola, A.J., Learning with Kernels, MIT Press, Cambridge, MA, 2002).
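The kernel trick just described can be illustrated with a small polynomial kernel: the kernel value equals a dot product in a feature space that is never built explicitly. The feature map phi below is written out only for the 2-D, order-2 case so the identity can be checked; this is an illustrative sketch, not code from the patent:

```python
import numpy as np

def poly_kernel(y1, y2, q):
    """Order-q polynomial kernel: k(y1, y2) = (y1 . y2)**q.  This equals the
    dot product phi(y1) . phi(y2) in a high-dimensional feature space that is
    never constructed explicitly."""
    return np.dot(y1, y2) ** q

# For 2-D inputs and q = 2 the feature map can be written out explicitly as
# phi(y) = (y1^2, sqrt(2) y1 y2, y2^2), so the identity can be verified directly.
phi = lambda y: np.array([y[0]**2, np.sqrt(2.0)*y[0]*y[1], y[1]**2])

y1, y2 = np.array([1.0, 2.0]), np.array([3.0, 4.0])
assert np.isclose(poly_kernel(y1, y2, q=2), phi(y1) @ phi(y2))  # both equal 121
```

For higher q and higher-dimensional inputs the feature space grows combinatorially, which is exactly why the kernel evaluation on the left-hand side is preferred.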
[0057] Consider the nonlinear mapping $\phi$ that maps the input space $\mathbb{R}^{N_s}$ (the state space of the EnKF) to another space F. That is:

$$\phi : \mathbb{R}^{N_s} \to F;\quad \mathbf{Y} = \phi(\mathbf{y});\quad \mathbf{y} \in \mathbb{R}^{N_s},\ \mathbf{Y} \in F \qquad \text{(Equation 7)}$$
[0058] F is called the feature space, and it could have an arbitrarily large dimensionality $N_F$ (Scholkopf, B., Smola, A., Muller, K., Nonlinear Component Analysis as a Kernel Eigenvalue Problem, Neural Computation, 10, 1299-1319, 1998; Sarma, P., Durlofsky, L.J., Aziz, K., Kernel Principal Component Analysis for an Efficient, Differentiable Parameterization of Multipoint Geostatistics, Mathematical Geosciences, 40, 3-32, 2008). The definition will become clearer when the space F is associated with a kernel function below. Because the state vector y consists of different kinds of variables (permeability, porosity, saturation etc.) with possibly different random fields, it may not be appropriate to have a single mapping $\phi$ that operates on y as a whole. We will thus consider the following mapping:
$$\phi(\mathbf{y}) = \begin{bmatrix} \phi_m(\mathbf{m}) \\ \phi_x(\mathbf{x}) \\ \phi_d(\mathbf{d}) \end{bmatrix} \qquad \text{(Equation 8)}$$

[0059] For simplicity, the initial derivation of the kernel-based EnKF (KEnKF) will be carried out assuming that the production data is not mapped to any feature space, that is, $\phi_d(\mathbf{d}) = \mathbf{d}$ (implying that production data is assumed multi-Gaussian). This will be later extended to accommodate non-Gaussian production data.
Calculation of Kalman Gain in Feature Space
[0060] Defining the centered map $\bar{\phi}(\mathbf{y}_j) = \phi(\mathbf{y}_j) - \bar{\phi},\ j = 1,\dots,M$, that is, $\bar{\phi}(\mathbf{y}_j)$ are the centered counterparts of $\phi(\mathbf{y}_j)$ about their mean $\bar{\phi}$ in F, the covariance matrix in the feature space F of size $N_F \times N_F$ can be written as:

$$\mathbf{C}_Y^f = \frac{1}{M}\sum_{j=1}^{M} \bar{\phi}(\mathbf{y}_j)\,\bar{\phi}(\mathbf{y}_j)^T \qquad \text{(Equation 9)}$$
[0061] If the EnKF is applied in the feature space F instead of the original input space $\mathbb{R}^{N_s}$, then similar to Equation 3, the Kalman gain equation in the feature space F can be written as:

$$\mathbf{K}_g = \mathbf{C}_Y^f\mathbf{H}^T\left(\mathbf{H}\mathbf{C}_Y^f\mathbf{H}^T + \mathbf{C}_e\right)^{-1} \qquad \text{(Equation 10)}$$
[0062] Here again, $\mathbf{H} = [\mathbf{0}\,|\,\mathbf{I}]$, but $\mathbf{0}$ is now a $N_d \times (N_F - N_d)$ matrix of zeros; $\mathbf{I}$ and $\mathbf{C}_e$ are the same as before, as $\mathbf{d}$ is not mapped to a feature space, and $\mathbf{K}_g$ is of size $N_F \times N_d$. Because $N_F$ could possibly be very large (Scholkopf, B., Smola, A.J., Learning with Kernels, MIT Press, Cambridge, MA, 2002; Sarma, P., Durlofsky, L.J., Aziz, K., Kernel Principal Component Analysis for an Efficient, Differentiable Parameterization of Multipoint Geostatistics, Mathematical Geosciences, 40, 3-32, 2008), it may not be practically possible to calculate $\mathbf{K}_g$ or $\mathbf{C}_Y^f$. The solution to this problem, which leads to the kernel-based EnKF equations, is to realize from Equation 9 that $\mathbf{C}_Y^f$ can only have a maximum rank of M, as each column of $\mathbf{C}_Y^f$ is a linear combination of the $\bar{\phi}(\mathbf{y}_j),\ j = 1,\dots,M$ vectors. Therefore $\mathbf{K}_g$ also has a maximum rank of M.
[0063] Thus, if we write $\mathbf{K}_g = \left[\mathbf{k}_1, \dots, \mathbf{k}_i, \dots, \mathbf{k}_{N_d}\right]$, where $\mathbf{k}_i$ is a column of $\mathbf{K}_g$, then the above argument implies that for a given column $\mathbf{k}_i$, there exist coefficients $a_{ij},\ j = 1,\dots,M$ such that:

$$\mathbf{k}_i = \sum_{j=1}^{M} a_{ij}\,\bar{\phi}(\mathbf{y}_j) \qquad \text{(Equation 11)}$$
[0064] This can be compactly written as:

[0065] $$\mathbf{k}_i = \mathbf{\Phi}\mathbf{a}_i;\quad \text{where } \mathbf{\Phi} = \left[\bar{\phi}(\mathbf{y}_1), \dots, \bar{\phi}(\mathbf{y}_M)\right] \text{ and } \mathbf{a}_i = \left[a_{i1}, \dots, a_{iM}\right]^T \qquad \text{(Equation 12)}$$
[0066] Therefore, $\mathbf{K}_g$ can be written as:

[0067] $$\mathbf{K}_g = \mathbf{\Phi}\mathbf{A};\quad \text{where } \mathbf{A} = \left[\mathbf{a}_1, \dots, \mathbf{a}_{N_d}\right] \qquad \text{(Equation 13)}$$
[0068] From Equation 11, since $\mathbf{k}_i$ lies in the span of $\bar{\phi}(\mathbf{y}_j),\ j = 1,\dots,M$, and using Equation 13, Equation 10 can be equivalently written as:

[0069] $$\bar{\phi}(\mathbf{y}_i)^T\,\mathbf{\Phi}\mathbf{A}\left(\mathbf{H}\mathbf{C}_Y^f\mathbf{H}^T + \mathbf{C}_e\right) = \bar{\phi}(\mathbf{y}_i)^T\,\mathbf{C}_Y^f\mathbf{H}^T \quad \forall i = 1,\dots,M \qquad \text{(Equation 14)}$$
[0070] The left-hand side of Equation 14 can be compactly written as:

[0071] $$\Phi^T\Phi A\left(H\,C_Y^f H^T + C_e\right) \quad \text{(Equation 15)}$$
[0072] Since we have $C_Y^f H^T = \frac{1}{M}\sum_{j=1}^{M}\tilde{\phi}(y_j)\,\tilde{d}_j^T$ (where $\tilde{d}_j = d_j - \bar{d}$ is centered as before), the right-hand side of Equation 14 becomes:

[0073] $$\frac{1}{M}\,\Phi^T\sum_{j=1}^{M}\tilde{\phi}(y_j)\,\tilde{d}_j^T \quad \text{(Equation 16)}$$
[0074] Defining $D = \left[\tilde{d}_1,\dots,\tilde{d}_M\right]$, Equation 16 can be compactly written as:

[0075] $$\frac{1}{M}\,\Phi^T\Phi\, D^T \quad \text{(Equation 17)}$$
[0076] From Equations 14, 15 and 17, we obtain the following:

$$\Phi^T\Phi A\left(H\,C_Y^f H^T + C_e\right) = \frac{1}{M}\,\Phi^T\Phi\, D^T \quad \text{(Equation 18)}$$
[0077] Since $\Phi^T\Phi$ is a non-zero positive definite matrix, it can be eliminated from both sides. Defining $C_d^f = H\,C_Y^f H^T$ as the covariance matrix, of size $N_d \times N_d$, of the forecasted production data d (again, d is not mapped to any feature space), we finally obtain:

[0078] $$A = \frac{1}{M}\,D^T\left(C_d^f + C_e\right)^{-1} \quad \text{(Equation 19)}$$
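Equation 19 involves only the forecasted production data and its $N_d \times N_d$ covariance, so A is cheap to form even when the feature space is huge. A minimal NumPy sketch of this step (function and array names are illustrative, not the patent's):

```python
import numpy as np

def kalman_coefficients(d_fcst, C_e):
    """A = (1/M) D^T (C_d^f + C_e)^-1  (Equation 19).

    d_fcst : (N_d, M) forecasted production data, one column per ensemble member
    C_e    : (N_d, N_d) measurement-error covariance
    Returns A of shape (M, N_d).
    """
    M = d_fcst.shape[1]
    D = d_fcst - d_fcst.mean(axis=1, keepdims=True)   # centered data, columns d~_j
    C_d = (D @ D.T) / M                               # forecast data covariance C_d^f
    return (D.T / M) @ np.linalg.inv(C_d + C_e)

rng = np.random.default_rng(1)
A = kalman_coefficients(rng.normal(size=(4, 30)), 0.01 * np.eye(4))
print(A.shape)  # (30, 4)
```

Note that all matrices handled here are at most M × max(M, N_d); nothing of size $N_F$ ever appears.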
[0079] This equation can be thought of as an equivalent of the Kalman gain equation, because after A is calculated, the Kalman gain can be calculated using Equation 13. However, as will be shown below, explicit calculation of the Kalman gain $K_g$ is not necessary, and the Kalman update can be performed directly using A. Because the calculation of A involves only the production data d and its covariances, and because d is not mapped to any feature space but remains in the original input space, A can be calculated very efficiently. Calculating the Kalman gain $K_g$, by contrast, requires calculating the maps $\tilde{\phi}(y_j),\, j = 1,\dots,M$; as mentioned earlier, because the dimension $N_F$ of $\phi(y)$ could be very large, it may be computationally intractable to calculate $\phi(y)$.

Kalman Update as a Pre-Image Problem
[0080] The Kalman update equation in feature space F is similar to the standard Kalman update equation given by Equation 6, and can be written as:

[0081] $$Y_j^u = \phi(y_j^f) + K_g\left(d_{o,j} - d_j\right) \quad \forall j = 1,\dots,M \quad \text{(Equation 20)}$$
[0082] Using Equation 13, Equation 20 can be written as:

[0083] $$Y_j^u = \phi(y_j^f) + \Phi A\left(d_{o,j} - d_j\right) \quad \forall j = 1,\dots,M \quad \text{(Equation 21)}$$
[0084] Defining $z_j = A\left(d_{o,j} - d_j\right)$, which is a vector of length M, Equation 21 can be written as:

[0085] $$Y_j^u = \phi(y_j^f) + \sum_{i=1}^{M} z_{ij}\left(\phi(y_i^f) - \bar{\phi}\right) \quad \forall j = 1,\dots,M \quad \text{(Equation 22)}$$
[0086] Furthermore, defining $\omega_j = \sum_{i=1}^{M} z_{ij}$, Equation 22 can be simplified as:

$$Y_j^u = \sum_{i=1}^{M} b_{ij}\,\phi(y_i^f);\qquad b_{ij} = \delta_{ij} + z_{ij} - \frac{\omega_j}{M} \quad \text{(Equation 23)}$$
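Given the coefficient matrix A of Equation 19, the combination weights $b_{ij}$ of Equation 23 follow directly from $z_j = A(d_{o,j} - d_j)$. A hedged NumPy sketch (names are illustrative); note that each column of weights sums to one, so the update in feature space is an affine combination of the forecasted maps:

```python
import numpy as np

def update_weights(A, d_obs, d_fcst):
    """b_ij = delta_ij + z_ij - omega_j / M  (Equations 22-23).

    A      : (M, N_d) coefficient matrix from Equation 19
    d_obs  : (N_d, M) perturbed observations d_o,j
    d_fcst : (N_d, M) forecasted data d_j
    Returns (M, M); column j holds the weights b_ij defining Y_j^u.
    """
    M = A.shape[0]
    Z = A @ (d_obs - d_fcst)              # column j is z_j, length M
    omega = Z.sum(axis=0)                 # omega_j = sum_i z_ij
    return np.eye(M) + Z - omega[None, :] / M

rng = np.random.default_rng(2)
B_w = update_weights(0.1 * rng.normal(size=(30, 4)),
                     rng.normal(size=(4, 30)), rng.normal(size=(4, 30)))
# sum_i b_ij = 1 + omega_j - M * (omega_j / M) = 1 for every j
print(np.allclose(B_w.sum(axis=0), 1.0))  # True
```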
[0087] We observe that, similar to the standard EnKF, the updated state vector $Y_j^u$ in the feature space is a linear combination of the $\phi$ maps of the forecasted state vectors, $\phi(y_i^f)$. We are, however, not interested in the updated state vector $Y_j^u$ in F, but in the updated state vector $y_j^u$ in the original input space. In order to obtain an updated state vector $y_j^u$ in the original input space that corresponds to $Y_j^u$ in F, an inverse $\phi$ map of $Y_j^u$ is required, that is, $y_j^u = \phi^{-1}(Y_j^u)$. This is known as the pre-image problem (Scholkopf, B., Smola, A., Muller, K., Nonlinear Component Analysis as a Kernel Eigenvalue Problem, Neural Computation, 10, 1299-1319, 1998; Kwok, J.T., Tsang, I.W., The Pre-Image Problem with Kernel Methods, IEEE Transactions in Neural Networks, 15, 1517-1525, 2004). However, again due to the very large dimensionality of the feature space F, it may not be possible to calculate this pre-image; further, such a pre-image may not even exist, or, if it exists, it may be non-unique (Scholkopf et al., 1998; Kwok and Tsang, 2004). These issues can be addressed by solving a minimization problem, in which a vector y is sought such that the least-squares error between $\phi(y)$ and $Y_j^u$ is minimized (Scholkopf et al., 1998; Kwok and Tsang, 2004):
$$y^* = \arg\min_{y}\ \rho(y) = \left\|\phi(y) - Y_j^u\right\|^2 = \phi(y)\cdot\phi(y) - 2\,Y_j^u\cdot\phi(y) + Y_j^u\cdot Y_j^u \quad \text{(Equation 24)}$$
[0088] Note that M such minimization problems must be solved to obtain the M updated state vectors $y_j^u,\, j = 1,\dots,M$. From Equations 23 and 24, we obtain:

$$\rho(y) = \phi(y)\cdot\phi(y) - 2\sum_{i=1}^{M} b_{ij}\,\phi(y_i^f)\cdot\phi(y) + \Omega \quad \forall j = 1,\dots,M \quad \text{(Equation 25)}$$

[0089] Here Ω represents terms independent of y. A key observation is that calculating $\rho(y)$ requires only dot products of vectors in the feature space F; the explicit calculation of the map $\phi(y)$ is not required. This is extremely important because, as discussed earlier, it may be computationally intractable to calculate $\phi(y)$ due to its very large dimension. Since only the dot products in F are required to calculate $\rho(y)$, and not $\phi(y)$ itself, $\rho(y)$ can be calculated very efficiently with what is known as a kernel function (Scholkopf et al., 1998; Scholkopf, B., Smola, A.J., Learning with Kernels, MIT Press, Cambridge, MA, 2002):

$$\phi(y_i)\cdot\phi(y_j) = \psi(y_i, y_j) \quad \text{(Equation 26)}$$
[0090] The kernel function $\psi(y_i, y_j)$ calculates the dot product in the space F directly from the elements of the input space and can therefore be calculated very efficiently. That is, the right-hand side of Equation 26 does not directly involve the mapping $\phi(y)$. Every kernel function satisfying Mercer's theorem is uniquely associated with a mapping φ (Scholkopf et al., 1998; Scholkopf and Smola, 2002). In accordance with Equation 8, if we have different feature spaces for the static variables, dynamic variables and production data, it can easily be seen that the kernel function can be written as:

[0091] $$\psi(y_i, y_j) = \psi_m(m_i, m_j) + \psi_x(x_i, x_j) + \psi_d(d_i, d_j) \quad \text{(Equation 27)}$$

[0092] Here $\psi_m$, $\psi_x$ and $\psi_d$ are the kernel functions corresponding to $\phi_m$, $\phi_x$ and $\phi_d$. Note that since we assume Gaussian production data, $\psi_d(d_i, d_j) = d_i \cdot d_j$. From Equations 25 and 26, we have:
[0093] $$\rho(y) = \psi(y, y) - 2\sum_{i=1}^{M} b_{ij}\,\psi(y_i^f, y) + \Omega \quad \forall j = 1,\dots,M \quad \text{(Equation 28)}$$
[0094] The minimum of the objective function $\rho(y)$ can be obtained by setting its gradient to zero, resulting in the following equation:

[0095] $$\frac{\partial \psi(y, y)}{\partial y} - 2\sum_{i=1}^{M} b_{ij}\,\frac{\partial \psi(y_i^f, y)}{\partial y} = 0 \quad \forall j = 1,\dots,M \quad \text{(Equation 29)}$$
[0096] This equation can be thought of as equivalent to the Kalman update equation, as its solution provides the updated state vectors $y_j^u$ in the input space. It is clear from Equation 29 that, depending on the nature of the kernel function, $y_j^u$ could be a linear or nonlinear combination of the forecasted state vectors $y_i^f$. Further, this is clearly a generalization of the EnKF, because by choosing different kernel functions, different versions of the EnKF can be obtained.
Preserving Multi-point Statistics using Polynomial Kernels
[0097] The above section develops the Kalman update equation in terms of kernel functions without specifying the form of the kernel function itself. Although various kinds of kernel functions are available in the literature (Scholkopf, B., Smola, A.J., Learning with Kernels, MIT Press, Cambridge, MA, 2002), the kernel function of interest in this work is the polynomial kernel, defined as (Sarma, P., Durlofsky, L.J., Aziz, K., Kernel Principal Component Analysis for an Efficient, Differentiable Parameterization of Multipoint Geostatistics, Mathematical Geosciences, 40, 3-32, 2008):

$$\phi(y_i)\cdot\phi(y_j) = \psi(y_i, y_j) = \sum_{k=1}^{q}\left(y_i \cdot y_j\right)^k \quad \text{(Equation 30)}$$
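Equation 30 can be evaluated for an entire ensemble at once via the Gram matrix of pairwise dot products; for q = 1 it reduces to the plain dot product, which is why the kernel formulation then collapses to the standard EnKF. A sketch (names are illustrative):

```python
import numpy as np

def poly_kernel(Y1, Y2, q):
    """psi(y_i, y_j) = sum_{k=1}^{q} (y_i . y_j)^k  (Equation 30).

    Y1 : (N, M1) and Y2 : (N, M2), columns are state vectors.
    Returns the (M1, M2) kernel (Gram) matrix.
    """
    G = Y1.T @ Y2                          # pairwise dot products y_i . y_j
    return sum(G ** k for k in range(1, q + 1))

rng = np.random.default_rng(3)
Y = rng.normal(size=(10, 5))
# q = 1: plain dot products, i.e. the standard-EnKF (two-point) covariances
print(np.allclose(poly_kernel(Y, Y, 1), Y.T @ Y))  # True
```

Raising q injects the higher-order (multi-point) moments into the implicit feature-space covariance without ever forming that space.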
[0098] Note that, for simplicity, the above equation and the derivation below assume a single kernel function for the entire state vector y. If, as in Equation 8, m, x, and d have separate feature spaces and associated kernel functions, the derivation below can easily be extended to account for this using Equation 27.
[0099] For the kernel function defined by Equation 30, it can be shown that for q = 1 the feature space F is the same as the input space, and as a result the kernel formulation of the EnKF reduces to the standard EnKF. Therefore, for q = 1, the covariance matrix $C_Y^f$ is the standard covariance matrix used in the usual EnKF, and thus includes second-order moments, or two-point statistics, of the random field of which the $y_j^f$ are members. For q > 1, the covariance matrix $C_Y^f$ in the feature space F includes moments of up to order 2q of the random field of which the $y_j^f$ are members; thus, if the key features of the $y_j^f$ are represented by these higher-order moments (or multi-point statistics), it becomes possible to preserve them during the Kalman update.
[00100] For the polynomial kernel defined in Equation 30, the Kalman update equation defined in Equation 29 can be written as:

[00101] $$y_j^u \sum_{k=1}^{q} k\left(y_j^u \cdot y_j^u\right)^{k-1} = \sum_{i=1}^{M} b_{ij}\sum_{k=1}^{q} k\left(y_i^f \cdot y_j^u\right)^{k-1} y_i^f \quad \forall j = 1,\dots,M \quad \text{(Equation 31)}$$

[00102] This is equivalent to the Kalman update equation for this particular kernel function. Note that for q > 1, Equation 31 gives the updated state vector $y_j^u$ as a nonlinear combination of the $y_i^f,\; i = 1,\dots,M$. One approach to solve Equation 31 for $y_j^u$ efficiently is to apply a fixed-point iteration method (Scholkopf et al., 1998), wherein the iteration scheme is given as:
[00103] $$y_j^{u,(n+1)} = \frac{\displaystyle\sum_{i=1}^{M} b_{ij}\sum_{k=1}^{q} k\left(y_i^f \cdot y_j^{u,(n)}\right)^{k-1} y_i^f}{\displaystyle\sum_{k=1}^{q} k\left(y_j^{u,(n)} \cdot y_j^{u,(n)}\right)^{k-1}} \quad \text{(Equation 32)}$$
[00104] Unfortunately, this iteration scheme is quite unstable and does not converge for the above kernel, although it is quite stable for other kernels such as the Gaussian kernel (Scholkopf et al., 1998). This problem can be eliminated by applying an approach similar to the successive over-relaxation used for the solution of systems of linear equations (Aziz, K., Settari, A., Petroleum Reservoir Simulation, Chapman and Hall, 1979):

[00105] $$y_j^{u,(n+1)} = \frac{1}{\theta + 1}\left[\theta\, y_j^{u,(n)} + \frac{\sum_{i=1}^{M} b_{ij}\sum_{k=1}^{q} k\left(y_i^f \cdot y_j^{u,(n)}\right)^{k-1} y_i^f}{\sum_{k=1}^{q} k\left(y_j^{u,(n)} \cdot y_j^{u,(n)}\right)^{k-1}}\right] \quad \text{(Equation 33)}$$
[00106] Here, θ is known as a relaxation factor. Equation 33 is used for updating the state vectors in the examples below, and θ = 6 seemed to work quite well for all polynomial kernels applied in the examples. Note that a larger θ gives more weight to the last iterate $y_j^{u,(n)}$ and is more stable, but takes more iterations to converge. In terms of efficiency, because of its iterative nature, Equation 33 is more expensive to solve than the standard Kalman update equation; however, because the forecast step requires running the reservoir simulator, which is much more expensive, overall efficiency is not much affected.
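The relaxed fixed-point solve of Equations 31-33 can be sketched as follows. This reflects our reading of the reconstructed iteration (averaging the last iterate, weighted by θ, with the Equation 32 update is an assumption consistent with the text's description of θ), not the patent's verbatim implementation; names are illustrative:

```python
import numpy as np

def kernel_update(Y_f, B_w, q, theta=6.0, iters=200):
    """Relaxed fixed-point solve of Equation 31 (Equations 32-33 as read here).

    Y_f   : (N, M) forecasted state vectors (columns)
    B_w   : (M, M) update weights b_ij, column j for member j
    theta : relaxation factor (larger = more stable, slower convergence)
    """
    Y_u = Y_f.copy()                                  # start from the forecast
    ks = np.arange(1, q + 1)
    for _ in range(iters):
        for j in range(B_w.shape[1]):
            y = Y_u[:, j]
            G = Y_f.T @ y                             # y_i^f . y for all i
            # numerator: sum_i b_ij sum_k k (y_i^f . y)^(k-1) y_i^f
            num = Y_f @ (B_w[:, j] * (ks[None, :] * G[:, None] ** (ks - 1)).sum(axis=1))
            # denominator: sum_k k (y . y)^(k-1)
            den = (ks * (y @ y) ** (ks - 1)).sum()
            Y_u[:, j] = (theta * y + num / den) / (theta + 1.0)  # Equation 33
    return Y_u

# Sanity check: for q = 1 the fixed point is the linear EnKF update
# y_j^u = sum_i b_ij y_i^f, which the relaxed iteration converges to.
rng = np.random.default_rng(4)
Y_f = rng.normal(size=(6, 8))
B_w = np.eye(8) + 0.05 * rng.normal(size=(8, 8))
print(np.allclose(kernel_update(Y_f, B_w, q=1), Y_f @ B_w, atol=1e-6))  # True
```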
[00107] Note that while the preferred embodiment of the invention described herein applies polynomial kernels, the scope of the present invention is not limited to this type of kernel function. Other kernel functions, such as RBF, exponential, and sigmoid kernels, and other kernel functions satisfying Mercer's theorem, could be used and are within the scope of the claimed invention.
Extension to Non-Gaussian Production Data
[00108] In the previous derivations, the production data d was assumed to be Gaussian and was therefore not mapped to any feature space. In the event that this assumption is not appropriate, the above derivations can be extended to account for non-Gaussian characteristics of the production data d by mapping it to an appropriate feature space through the mapping $\phi_d(d)$. An approach similar to the above can be applied to arrive at the equivalent of the Kalman gain equation for non-Gaussian d; therefore, without delving into the details of the derivation, the final equivalent of the Kalman gain equation is given as:
$$B = \left(\tilde{\Psi} + M\sigma^2 I\right)^{-1} \quad \text{(Equation 34)}$$
[00109] Here, $\tilde{\Psi}$ is the M × M centered kernel matrix of the forecasted production data, that is:

$$\tilde{\Psi}:\ \tilde{\Psi}_{ij} = \tilde{\phi}_d(d_i)\cdot\tilde{\phi}_d(d_j) = \tilde{\psi}(d_i, d_j);\qquad i = 1,\dots,M;\ j = 1,\dots,M \quad \text{(Equation 35)}$$

[00110] The relationship between the centered and non-centered kernel matrices is given as (Kwok, J.T., Tsang, I.W., The Pre-Image Problem with Kernel Methods, IEEE Transactions in Neural Networks, 15, 1517-1525, 2004):
[00111] $$\tilde{\psi}(d_i, d_j) = \psi(d_i, d_j) - \frac{1}{M}\mathbf{1}^T\psi_{d_i} - \frac{1}{M}\mathbf{1}^T\psi_{d_j} + \frac{1}{M^2}\mathbf{1}^T\Psi\,\mathbf{1} \quad \text{(Equation 36)}$$
[00112] Here, $\mathbf{1} = [1, 1, \dots, 1]^T$ is an M × 1 vector; $\psi_d = \left[\psi(d, d_1), \dots, \psi(d, d_M)\right]^T$ is also an M × 1 vector; and Ψ is the M × M non-centered kernel matrix, defined as $\Psi:\ \Psi_{ij} = \phi_d(d_i)\cdot\phi_d(d_j) = \psi(d_i, d_j)$.
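The centering of Equation 36 can be applied to the whole kernel matrix at once, and it is easy to verify against explicitly centered features. A sketch (names are illustrative):

```python
import numpy as np

def center_kernel(K):
    """Centered kernel matrix per Equation 36, applied matrix-wide:
    K~ = (I - J) K (I - J), where J is the M x M matrix with entries 1/M.
    Equivalent to centering the underlying feature vectors about their mean.
    """
    M = K.shape[0]
    J = np.full((M, M), 1.0 / M)
    return K - J @ K - K @ J + J @ K @ J

# Centering the Gram matrix equals the Gram matrix of centered features.
rng = np.random.default_rng(5)
X = rng.normal(size=(7, 12))                  # 12 feature vectors phi_d(d_j), columns
Xc = X - X.mean(axis=1, keepdims=True)
print(np.allclose(center_kernel(X.T @ X), Xc.T @ Xc))  # True
```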
[00113] In the above derivation of Equation 34, it is assumed that the error covariance matrix of $\phi_d(d)$ is diagonal with variance $\sigma^2$. Equation 34 can be considered equivalent to the Kalman gain equation because, after the coefficient matrix B is calculated using Equation 34, the Kalman gain matrix in feature space F can be calculated as:

[00114] $$K_g = \Phi B\,\tilde{\Phi}_d^T;\qquad \tilde{\Phi}_d = \left[\tilde{\phi}_d(d_1),\dots,\tilde{\phi}_d(d_M)\right] \quad \text{(Equation 37)}$$
[00115] However, as before, it is neither necessary nor possible to calculate this Kalman gain matrix, and the updated state vectors can be obtained directly using B within a pre-image problem.
[00116] The Kalman update equation in feature space F in this case can be written as:

[00117] $$Y_j^u = \phi(y_j^f) + K_g\left(\phi_d(d_{o,j}) - \phi_d(d_j)\right) \quad \forall j = 1,\dots,M \quad \text{(Equation 38)}$$

[00118] Using Equation 37, it can be shown that Equation 38 reduces to exactly the same form as Equation 22, but with $z_j$ defined as:

[00119] $$z_j = B^T\left(\psi_{d_{o,j}} - \mathbf{1}\Lambda_{o,j} - \psi_{d_j} + \mathbf{1}\Lambda_j\right);\qquad \Lambda_d = \frac{1}{M}\sum_{i=1}^{M}\psi(d_i, d) \quad \text{(Equation 39)}$$
[00120] Once $z_j$ is calculated, the rest of the pre-image problem can be solved in exactly the same way as described above to obtain the updated state vectors $y_j^u$.
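The non-Gaussian-data branch (Equations 34-39) can be sketched end to end. A Gaussian kernel on d is used purely for illustration (the derivation holds for any Mercer kernel $\psi_d$), and all names and parameter values are ours:

```python
import numpy as np

def gauss_kernel(D1, D2, sigma_k=1.0):
    """psi(d_i, d_j) = exp(-||d_i - d_j||^2 / (2 sigma_k^2)); an illustrative
    choice of Mercer kernel for the production data."""
    sq = ((D1[:, :, None] - D2[:, None, :]) ** 2).sum(axis=0)
    return np.exp(-sq / (2.0 * sigma_k ** 2))

def nongauss_z(d_fcst, d_obs, sigma2=1e-2):
    """z_j per Equations 34-39: B = (Psi~ + M sigma^2 I)^-1 applied to the
    centered kernel vectors of observed vs. forecasted data. Columns of the
    result are the z_j fed into the pre-image problem of Equation 22."""
    M = d_fcst.shape[1]
    Psi = gauss_kernel(d_fcst, d_fcst)                 # non-centered kernel matrix
    J = np.full((M, M), 1.0 / M)
    Psi_c = Psi - J @ Psi - Psi @ J + J @ Psi @ J      # Equation 36, matrix form
    B = np.linalg.inv(Psi_c + M * sigma2 * np.eye(M))  # Equation 34
    psi_o = gauss_kernel(d_fcst, d_obs)                # column j is psi_{d_o,j}
    lam_o = psi_o.mean(axis=0)                         # Lambda_{o,j}
    lam_f = Psi.mean(axis=0)                           # Lambda_j
    return B.T @ (psi_o - lam_o[None, :] - Psi + lam_f[None, :])  # Equation 39

rng = np.random.default_rng(6)
D_f = rng.normal(size=(3, 10))
D_o = D_f + 0.1 * rng.normal(size=(3, 10))
Z = nongauss_z(D_f, D_o)
print(Z.shape)  # (10, 10)
```

As a sanity check, if the observations equal the forecasts, every $z_j$ vanishes and the update leaves the ensemble unchanged.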
Results using Kernel-based EnKF
[00121] In this section we apply the kernel-based EnKF to estimate the permeability fields of two synthetic waterflooded reservoirs and demonstrate the ability of the KEnKF to preserve complex geological structures while providing reasonable matches to the production data.
Example 1: Channel Sand Model
[00122] The simulation model for this example represents a simple 2D horizontal square reservoir covering an area of 450x450 m2 with a thickness of 10 m, and is modeled by a 45x45x1 horizontal 2D grid. The fluid system is essentially an incompressible two-phase unit mobility oil-water system, with zero connate water saturation and zero residual oil saturation. The reservoir is initially saturated with oil at a constant pressure of 5800 psi at the top.
[00123] The reference or "true" permeability field is that of a fluvial channelized reservoir shown in FIGS. 2a and 2b, and is obtained using the training image in FIG. 2c with the multipoint geostatistical software snesim (Strebelle, S., Conditional Simulation of Complex Geological Structures using Multiple-point Statistics, Mathematical Geology, 34, 1-22, 2002). In FIG. 2a, high permeability sand is depicted by region 32 and the low permeability background is depicted by region 34. The sand and background are assumed to be homogeneous and isotropic with permeability of the sand being 10 D and the background permeability being 500 mD. FIG. 2b shows the binary histogram of the permeability field.
Such a binary and discontinuous permeability field is purposefully chosen because the discontinuous nature of such random fields is quite difficult to preserve with continuous linear algorithms like the EnKF, and thus can be considered an effective case to demonstrate the applicability of the KEnKF.
[00124] The reservoir has 8 injectors and 8 producers placed in a line-drive pattern as shown in FIG. 2a, where the black circles (o's) represent producers and the black crosses (x's) represent injectors. The model is run for 1900 days with the true permeability field, with the injectors under water-rate control at 100 bbd and the producers under bottom-hole pressure (BHP) control at 5789 psi. This provides the true observed data, which consists of the injector BHPs and the producer oil and water production rates. Gaussian noise with standard deviations of 1 psi and 1 bbd is added to the injector BHPs and production rates, respectively, to obtain the ensemble of observed data.
[00125] An initial ensemble of 100 permeability fields is obtained using snesim with the same training image as in FIG. 2c, and four of these realizations are shown in FIGS. 3a-3d. High permeability sand is depicted by region 40, and the low permeability background is depicted by region 42. The information conveyed by the training image can be thought of as our prior geological knowledge of the reservoir; that is, we assume we know that there are channels in the reservoir (say, from outcrop data), and by integrating this knowledge with the observed dynamic data, we are trying to obtain a posterior estimate of the locations and sinuosity of the channels, conditioned to both our prior knowledge and the observations.
[00126] The KEnKF is applied to this ensemble over 10 assimilation steps of 190 days each. FIGS. 4-7 show four updated realizations each of the final ensemble obtained with polynomial kernels of order 1 (that is, the standard EnKF), order 5, order 7 and order 9, respectively. We observe that although the standard EnKF is able to produce realizations that have some channel-like structure, with longer correlations in the direction of the channels, the realizations clearly look quite Gaussian, and this is also verified by the Gaussian marginal distribution of one of the realizations as seen in FIG. 8a. However, as the order of the kernel is increased, the channel structure clearly becomes more and more visible, with the order 7 kernel (FIG. 8c) producing realizations that show two channel-like structures at approximately the right locations. The bimodal marginal distribution of these realizations, as seen in FIG. 8c, is also closer to the original binary distribution. With a kernel of order 9 (FIG. 8d), the converged realizations become almost binary, and the two channels are obtained at approximately the right locations. The marginal distribution as seen in FIG. 8d is very close to the original binary distribution, but with the mode at 1 somewhat shifted to about 0.9. Another point to note is that for the standard EnKF, the permeabilities of the final ensembles range from below -1 to above +1, while the original range is [0, 1]. With the kernels of order 7 and 9, the ranges are much closer to the original range. We also observe that for the standard EnKF there is some variability among the converged realizations, but as the order of the kernel is increased, the variability reduces, and for the 7th and 9th order kernels all members of the ensemble converge to almost the same permeability field. This is clearly an issue; the reason for it is currently not understood, and it will be investigated in the future.
[00127] FIGS. 9a-9d show the oil (91, 93, 95, 97) and water (92, 94, 96, 98) production rates for four of the eight producers for the initial ensemble, and also those of the reference permeability field (black circles (o's) for oil production rate, black crosses (x's) for water production rate). We observe that there is significant variation in the rates, implying that the initial uncertainty is significant. FIGS. 10-13 illustrate the match to the production data obtained with kernels of order 1, 5, 7 and 9, respectively. FIGS. 10a-10d show that all members of the final ensemble almost exactly match the observed production data. However, as the order of the kernel increases (FIGS. 11-13), the match deteriorates, but it is still reasonable except for one of the wells for the order 9 kernel (FIG. 13). A possible explanation for the deterioration in match is that, since this data assimilation problem is a non-unique, ill-posed problem with possibly many solutions, as the order of the kernel is increased, the subspace of the original input space over which the KEnKF searches for solutions becomes smaller and smaller, and it thus becomes more difficult to find realizations that provide a reasonable degree of match to the observations. In other words, as the order of the kernel is increased, the dimension of the feature space increases, and thus the same number of realizations (determined by the ensemble size) spans a smaller subspace of the feature space. Thus, if the true permeability field or similar fields do not lie in this subspace, which could be the case if the initial realizations do not belong to exactly the same random field as the true permeability field, a good match may not be obtained. Therefore, if a good match is not being obtained for a particular kernel order with a given ensemble size, increasing the ensemble size may lead to a better match.
This problem, in a way, could actually be considered a benefit, because if a good match is not being obtained with a high order kernel, it could possibly indicate that the prior geological model from which the initial ensemble is drawn may not be correct. In any case, further investigation of this issue is required before a compelling conclusion can be drawn.
[00128] However, based on these results, an important conclusion is that, by controlling the kernel order, a balance between the degree of match to the production data and the geological realism of the final ensemble can be obtained. Further, if the final ensemble obtained with a high order kernel provides an acceptable degree of match, such an ensemble should inspire more confidence than another ensemble providing the same degree of match but obtained with a lower order kernel.

Example 2: "Donut" Sand Model
[00129] The reservoir model for this example is the same as in the last example. The only difference is the reference permeability field, shown in FIG. 14a, which is obtained using the training image shown in FIG. 15 with the geostatistical software filtersim (Zhang, T., Multiple Point Geostatistics: Filter-based Local Training Pattern Classification for Spatial Pattern Simulation, PhD Thesis, Stanford University, Stanford, CA, 2006). The permeability field is characterized by high permeability circular inclusions ("donuts") 142 immersed in a low permeability background (regions 144). The model is run with the true permeability field for 950 days in this case to obtain the observed data. As in the last example, an initial ensemble of 100 permeability fields is obtained using filtersim with the same training image as in FIG. 15, and four of these realizations are shown in FIGS. 16a-16d.
[00130] The KEnKF is applied to this ensemble over 5 assimilation steps of 190 days each. FIGS. 17-19 show four updated realizations each of the final ensemble obtained with polynomial kernels of order 1, order 5, and order 7, respectively. As before, we observe that as the kernel order is increased, the donut structure becomes more and more apparent; in this case, with the 5th and 7th order kernels, the donuts are obtained very clearly at almost the exact locations as in the true realization. FIG. 20 shows the marginal distributions: with the standard EnKF, the marginal distribution is Gaussian as expected (FIG. 20b), and with increasing kernel order, the marginal distribution becomes a better approximation to the original binary distribution (FIG. 20a).
[00131] FIGS. 21a-21d show the oil and water production rates for four of the eight producers for the initial ensemble, as in the first example, and also those of the reference permeability field. FIGS. 22-24 illustrate the match to the production data obtained with kernels of order 1, 5, and 7, respectively. In contrast to the first example, in this case all kernels obtain a very good match to the production data. However, the realizations obtained with the 7th order kernel (FIG. 19) are clearly much better estimates of the true realization than those obtained with the standard EnKF (FIG. 17). This exemplifies the fact that if reasonable matches are obtained with different order kernels, the ensemble obtained with the higher order kernel may provide better estimates of the random field being estimated.
[00132] An efficient nonlinear generalization of the EnKF using kernels has been demonstrated, which is capable of representing complex geological models characterized by multi-point geostatistics. By deriving the EnKF in a high-dimensional feature space implicitly defined using kernels, both the Kalman gain and update equations are nonlinearized, thus providing a completely general nonlinear set of EnKF equations, the nonlinearity being controlled by the kernel. By choosing high order polynomial kernels, multi-point statistics and therefore geological realism of the updated random fields can be preserved. If a polynomial kernel of order one is used, the KEnKF reduces to the standard EnKF. The efficiency of the KEnKF is similar to the standard EnKF irrespective of the kernel order.
[00133] The applicability of the approach was demonstrated through two synthetic examples, where the permeability field was updated with the KEnKF using polynomial kernels of different orders. Results indicate that as the order of the kernel is increased, key geological features are better retained by the updated ensemble, while providing a reasonable degree of match to the production data. By controlling the kernel order, a balance between the degree of match to the production data and the geological realism of the final ensemble can be obtained. Further, if the final ensemble obtained with a high order kernel provides an acceptable degree of match, such an ensemble is seen to be more geologically realistic than another ensemble providing the same degree of match but obtained with a lower order kernel. [00134] Other embodiments of the present invention and its individual components will become readily apparent to those skilled in the art from the foregoing detailed description. As will be realized, the invention is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the spirit and the scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive. It is therefore not intended that the invention be limited except as indicated by the appended claims.

Claims

WHAT IS CLAIMED IS:
1. A system for predicting fluid flow in a subterranean reservoir having non-Gaussian characteristics, the system comprising:
a computer processor having computer readable program code for accessing a set of models representing the reservoir;
a data source accessible to the computer processor for collecting reservoir field data for a predetermined duration of time;
model update program code, executable by the computer processor, for receiving the reservoir field data and for using the field data to update the set of models at a predetermined time such that data from the updated set of models is consistent with the field data and such that the non-Gaussian characteristics of the reservoir in the updated set of models are preserved, thereby maximizing accuracy of reservoir prediction data to be generated by the updated set of models.
2. The system of claim 1, wherein the model update code includes a generalized ensemble Kalman filter represented by kernel functions.
3. The system of claim 2, wherein the generalized ensemble Kalman filter includes a gain function adapted for reservoirs having non-Gaussian random field characteristics.
4. The system of claim 2, wherein the generalized ensemble Kalman filter includes a gain function adapted for reservoirs having non-Gaussian production data characteristics.
5. The system of claim 2, wherein the generalized ensemble Kalman filter includes an update model adapted for reservoirs having non-Gaussian random field characteristics.
6. The system of claim 2, wherein the generalized ensemble Kalman filter includes an update model adapted for reservoirs having non-Gaussian production data characteristics.
7. The system of claim 1, further comprising image realization means for transforming the prediction data generated by the updated set of models into image data representations of the reservoir.
8. The system of claim 1, further comprising image display means, in communication with the image realization means, for displaying the image representations of the reservoir.
9. A computer-implemented method for predicting fluid flow in a subterranean reservoir having non-Gaussian features, the method comprising:
accessing, via a computer processor, a set of models representing the reservoir;
collecting reservoir field data for a predetermined duration of time;
communicating the reservoir field data to model update program code executable by the computer processor; and updating the set of models, via the model update program, at a predetermined time such that data from the updated set of models is consistent with the field data and such that the non-Gaussian characteristics of the reservoir in the updated set of models are preserved, thereby maximizing accuracy of reservoir prediction data to be generated by the updated set of models.
10. The computer-implemented method of claim 9, wherein the updating step comprises using a generalized ensemble Kalman filter represented by kernel functions.
11. The computer-implemented method of claim 9, wherein the updating step comprises using a generalized ensemble Kalman filter having a gain function adapted for reservoirs having non-Gaussian random field characteristics.
12. The computer-implemented method of claim 9, wherein the updating step comprises using an ensemble Kalman filter having a gain function adapted for reservoirs having non-Gaussian production data characteristics.
13. The computer-implemented method of claim 9, wherein the updating step comprises using a generalized ensemble Kalman filter having an update model adapted for reservoirs having non-Gaussian random field characteristics.
14. The computer-implemented method of claim 9, wherein the updating step comprises using a generalized ensemble Kalman filter having an update model adapted for reservoirs having non-Gaussian production data characteristics.
15. The computer-implemented method of claim 9, further comprising transforming, via an image realization code executable by the computer processor, the prediction data generated by the updated set of models into image data representations of the reservoir.
PCT/US2010/022584 2009-01-30 2010-01-29 System and method for predicting fluid flow in subterranean reservoirs WO2010088516A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14880009P 2009-01-30 2009-01-30
US61/148,800 2009-01-30

Publications (2)

Publication Number Publication Date
WO2010088516A2 true WO2010088516A2 (en) 2010-08-05
WO2010088516A3 WO2010088516A3 (en) 2010-11-25

Family

ID=42396366

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2010/022584 WO2010088516A2 (en) 2009-01-30 2010-01-29 System and method for predicting fluid flow in subterranean reservoirs

Country Status (6)

Country Link
US (1) US8972231B2 (en)
EP (1) EP2391831A4 (en)
AU (1) AU2010208105B2 (en)
BR (1) BRPI1006973A2 (en)
CA (1) CA2750926A1 (en)
WO (1) WO2010088516A2 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8972231B2 (en) 2009-01-30 2015-03-03 Chevron U.S.A. Inc. System and method for predicting fluid flow in subterranean reservoirs
US20100313633A1 (en) * 2009-06-11 2010-12-16 Schlumberger Technology Corporation Estimating effective permeabilities
US8594818B2 (en) * 2010-09-21 2013-11-26 Production Monitoring As Production monitoring system and method
US8942966B2 (en) * 2010-10-20 2015-01-27 Conocophillips Company Method for parameterizing and morphing stochastic reservoir models
WO2012060821A1 (en) * 2010-11-02 2012-05-10 Landmark Graphics Corporation Systems and methods for generating updates of geological models
US8972232B2 (en) 2011-02-17 2015-03-03 Chevron U.S.A. Inc. System and method for modeling a subterranean reservoir
US9164193B2 (en) 2012-06-11 2015-10-20 Chevron U.S.A. Inc. System and method for optimizing the number of conditioning data in multiple point statistics simulation
US9031822B2 (en) * 2012-06-15 2015-05-12 Chevron U.S.A. Inc. System and method for use in simulating a subterranean reservoir
US10670753B2 (en) 2014-03-03 2020-06-02 Saudi Arabian Oil Company History matching of time-lapse crosswell data using ensemble kalman filtering
US10233727B2 (en) 2014-07-30 2019-03-19 International Business Machines Corporation Induced control excitation for enhanced reservoir flow characterization
US10578768B2 (en) 2014-08-15 2020-03-03 International Business Machines Corporation Virtual sensing for adjoint based incorporation of supplementary data sources in inversion
KR101625660B1 (en) * 2015-11-20 2016-05-31 한국지질자원연구원 Method for making secondary data using observed data in geostatistics
CA3090956C (en) * 2018-05-15 2023-04-25 Landmark Graphics Corporation Petroleum reservoir behavior prediction using a proxy flow model
US11914671B2 (en) 2018-10-01 2024-02-27 International Business Machines Corporation Performing uncertainty quantification analysis with efficient two dimensional random fields
US11125905B2 (en) * 2019-05-03 2021-09-21 Saudi Arabian Oil Company Methods for automated history matching utilizing muon tomography
US11719856B2 (en) 2019-10-29 2023-08-08 Saudi Arabian Oil Company Determination of hydrocarbon production rates for an unconventional hydrocarbon reservoir
US12049820B2 (en) 2021-05-24 2024-07-30 Saudi Arabian Oil Company Estimated ultimate recovery forecasting in unconventional reservoirs based on flow capacity
CN113485307B (en) * 2021-08-02 2022-06-17 天津大学 Gas-liquid two-phase flow state monitoring method based on multi-mode dynamic nuclear analysis

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4969130A (en) * 1989-09-29 1990-11-06 Scientific Software Intercomp, Inc. System for monitoring the changes in fluid content of a petroleum reservoir
US5992519A (en) * 1997-09-29 1999-11-30 Schlumberger Technology Corporation Real time monitoring and control of downhole reservoirs
US20020016703A1 (en) * 2000-07-10 2002-02-07 Claire Barroux Modelling method allowing to predict as a function of time the detailed composition of fluids produced by an underground reservoir under production
KR20050037342A (en) * 2003-10-17 2005-04-21 Sumitomo Rubber Industries, Ltd. Method of simulating viscoelastic material
EP1975879A2 (en) * 2007-03-27 2008-10-01 Mitsubishi Electric Corporation Computer implemented method for tracking object in sequence of frames of video

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02286899A (en) * 1989-04-28 1990-11-27 Masahiro Inoue Stall predicting and preventing device for turbo machine
WO2002086277A2 (en) * 2001-04-24 2002-10-31 Exxonmobil Upstream Research Company Method for enhancing production allocation in an integrated reservoir and surface flow system
US20070271077A1 (en) * 2002-11-15 2007-11-22 Kosmala Alexandre G Optimizing Well System Models
US7584081B2 (en) * 2005-11-21 2009-09-01 Chevron U.S.A. Inc. Method, system and apparatus for real-time reservoir model updating using ensemble kalman filter
US7620534B2 (en) * 2006-04-28 2009-11-17 Saudi Aramco Sound enabling computerized system for real time reservoir model calibration using field surveillance data
FR2920816B1 (en) * 2007-09-06 2010-02-26 Inst Francais Du Petrole METHOD FOR UPDATING A GEOLOGICAL MODEL USING DYNAMIC DATA AND WELL TESTS
WO2009079570A2 (en) * 2007-12-17 2009-06-25 Landmark Graphics Corporation, A Halliburton Company Systems and methods for optimization of real time production operations
US8972231B2 (en) 2009-01-30 2015-03-03 Chevron U.S.A. Inc. System and method for predicting fluid flow in subterranean reservoirs
US8612156B2 (en) * 2010-03-05 2013-12-17 Vialogy Llc Active noise injection computations for improved predictability in oil and gas reservoir discovery and characterization
US8972232B2 (en) * 2011-02-17 2015-03-03 Chevron U.S.A. Inc. System and method for modeling a subterranean reservoir
US9031822B2 (en) * 2012-06-15 2015-05-12 Chevron U.S.A. Inc. System and method for use in simulating a subterranean reservoir

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2970732A1 (en) * 2011-01-20 2012-07-27 IFP Energies Nouvelles METHOD FOR UPDATING A REAL-TIME RESERVOIR MODEL FROM DYNAMIC DATA WHILE MAINTAINING ITS CONSISTENCY WITH STATIC OBSERVATIONS
US8942967B2 (en) 2011-01-20 2015-01-27 IFP Energies Nouvelles Method for real-time reservoir model updating from dynamic data while keeping the coherence thereof with static observations
EP2479590A3 (en) * 2011-01-20 2017-05-10 IFP Energies nouvelles Method for updating a reservoir model in real time using dynamic data while maintaining the consistency of same with static observations

Also Published As

Publication number Publication date
AU2010208105B2 (en) 2015-01-22
CA2750926A1 (en) 2010-08-05
EP2391831A4 (en) 2017-05-10
EP2391831A2 (en) 2011-12-07
WO2010088516A3 (en) 2010-11-25
US8972231B2 (en) 2015-03-03
BRPI1006973A2 (en) 2019-09-24
US20100198570A1 (en) 2010-08-05
AU2010208105A1 (en) 2011-08-18

Similar Documents

Publication Publication Date Title
US8972231B2 (en) System and method for predicting fluid flow in subterranean reservoirs
US8972232B2 (en) System and method for modeling a subterranean reservoir
Aanonsen et al. The ensemble Kalman filter in reservoir engineering—a review
Sarma et al. Generalization of the ensemble Kalman filter using kernels for non-gaussian random fields
US9031822B2 (en) System and method for use in simulating a subterranean reservoir
CA2630384A1 (en) Method, system and apparatus for real-time reservoir model updating using ensemble kalman filter
Arnold et al. Uncertainty quantification in reservoir prediction: Part 1—Model realism in history matching using geological prior definitions
Devegowda et al. Efficient and robust reservoir model updating using ensemble Kalman filter with sensitivity-based covariance localization
Kang et al. Characterization of three-dimensional channel reservoirs using ensemble Kalman filter assisted by principal component analysis
Chen et al. Integration of principal-component-analysis and streamline information for the history matching of channelized reservoirs
Kumar et al. Ensemble-based assimilation of nonlinearly related dynamic data in reservoir models exhibiting non-Gaussian characteristics
Scheidt et al. A multi-resolution workflow to generate high-resolution models constrained to dynamic data
Razak et al. History matching with generative adversarial networks
Korjani et al. Reservoir characterization using fuzzy kriging and deep learning neural networks
Zakirov et al. Optimal control of field development in a closed loop
Chang et al. Jointly updating the mean size and spatial distribution of facies in reservoir history matching
US8942966B2 (en) Method for parameterizing and morphing stochastic reservoir models
Zhang et al. History matching using a hierarchical stochastic model with the ensemble Kalman filter: a field case study
Ma et al. Conditioning multiple-point geostatistical facies simulation on nonlinear flow data using pilot points method
Grujić Subsurface modeling with functional data
Caers et al. Integration of engineering and geological uncertainty for reservoir performance prediction using a distance-based approach
Jang et al. Stochastic optimization for global minimization and geostatistical calibration
Ahmadi et al. A sensitivity study of FILTERSIM algorithm when applied to DFN modeling
Gervais et al. Identifying influence areas with connectivity analysis–application to the local perturbation of heterogeneity distribution for history matching
Watanabe et al. A hybrid ensemble Kalman filter with coarse scale constraint for nonlinear dynamics

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 10736475
Country of ref document: EP
Kind code of ref document: A2

WWE Wipo information: entry into national phase
Ref document number: 2750926
Country of ref document: CA

WWE Wipo information: entry into national phase
Ref document number: 2010736475
Country of ref document: EP

NENP Non-entry into the national phase
Ref country code: DE

ENP Entry into the national phase
Ref document number: 2010208105
Country of ref document: AU
Date of ref document: 20100129
Kind code of ref document: A

REG Reference to national code
Ref country code: BR
Ref legal event code: B01A
Ref document number: PI1006973
Country of ref document: BR

ENP Entry into the national phase
Ref document number: PI1006973
Country of ref document: BR
Kind code of ref document: A2
Effective date: 20110728