CN116844054A - Reservoir model multi-scale fine characterization method based on a concurrent generative adversarial network - Google Patents

Reservoir model multi-scale fine characterization method based on a concurrent generative adversarial network

Info

Publication number
CN116844054A
Authority
CN
China
Prior art keywords
training
stage
scale
network
generator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310891927.8A
Other languages
Chinese (zh)
Inventor
刘刚
范文遥
陈麒玉
崔哲思
陈根深
吴雪超
王思璇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Geosciences filed Critical China University of Geosciences
Priority to CN202310891927.8A
Publication of CN116844054A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0475 Generative networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/094 Adversarial learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a multi-scale fine characterization method for reservoir models based on a concurrent generative adversarial network. The original training image is given a multi-scale representation through a pyramid structure, and the representations at different scales maintain a consistent spatial distribution pattern. With a fixed receptive field, global information and local information can be extracted from the small-scale and large-scale images respectively. Meanwhile, a concurrent training strategy is applied in which parameter inheritance between adjacent stages replaces random initialization, ensuring that the parameters of the network model are trained sufficiently and effectively. Finally, the Wasserstein distance is used to measure the discrepancy between the generated and real distributions, a gradient penalty strategy keeps the gradient of the loss function within a bounded range to stabilize training, a reconstruction loss restores the detail information of the image, and the coupling of the two loss functions improves the overall simulation performance of the network.

Description

Reservoir model multi-scale fine characterization method based on a concurrent generative adversarial network
Technical Field
The invention relates to the field of three-dimensional geological modeling and geographic information systems, in particular to a multi-scale fine characterization method for reservoir models based on a concurrent generative adversarial network.
Background
The fine construction of reservoir models and their visualization have long been a research focus in geoscience information. They play an important role in describing subsurface structural characteristics, supporting decisions on resource exploration targets and characterizing the regional geological background, thereby assisting geological experts in quantitatively evaluating the migration, accumulation and distribution of subsurface resources. As exploration deepens, reservoir units with complex spatial structures become difficult to model: they exhibit pronounced spatial heterogeneity, anisotropy and complex connectivity, observation data and expert knowledge are limited, and the corresponding expression of prior geological knowledge carries considerable uncertainty. Numerical simulation methods, built on stochastic function theory and equal-probability Monte Carlo sampling, have been widely applied in reservoir modeling. Although they can quantify the uncertainty of a reservoir model, their stationarity assumption and CPU-intensive computation make them inefficient for large reservoir units with obvious non-stationarity and complex spatial structure. Strategies such as local angle rotation and coordinate affinity-ratio adjustment can alleviate the difficulty of reconstructing reservoir models under non-stationary conditions, but when channel geometries are complex and varied, such algorithms require a large number of parameters describing angle and affinity-ratio variation, making practical operation cumbersome without guaranteeing the accuracy of the simulation results. Because reservoir modeling can be regarded as a generative task, modeling methods based on the generative adversarial network (GAN) can effectively overcome the limitations of numerical simulation, and the development of its basic theory and of its various improved neural network models provides technical support for three-dimensional stochastic simulation and fine characterization of complex geological structures. Prior research shows that GANs and their variants have been applied successfully in porous medium reconstruction, hydrogeological simulation, reservoir history matching and its parameterization, among other fields.
The basic idea of a GAN is to build a generator and a discriminator and achieve random reconstruction of images through the adversarial learning between the two. Specifically, the generator samples random noise from a Gaussian distribution and fits the higher-order statistical distribution characteristics of the training data set to produce fake samples that "confuse" the discriminator; the discriminator performs a classification task, outputting a scalar that gives the probability that the input comes from the generated samples (labeled False) or the real samples (labeled True). After many training iterations, a "Nash equilibrium" is reached between generator and discriminator: the generator can produce samples similar to the training data set, while the discriminator can no longer tell whether its input is real or fake. At this point the network parameters of both are saved, and various simulation results can be obtained with a single input of specific data, completing the subsequent generation targets.
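For illustration, a minimal sketch of one adversarial training step in PyTorch follows. All names here are assumptions for exposition, and the standard cross-entropy GAN loss is shown; the invention itself uses the Wasserstein loss described later.

```python
import torch
import torch.nn as nn

def gan_step(G, D, real, opt_G, opt_D, noise_dim=100):
    """One adversarial iteration with the standard GAN loss (D outputs logits)."""
    bce = nn.BCEWithLogitsLoss()
    b = real.size(0)
    z = torch.randn(b, noise_dim)            # noise sampled from a Gaussian

    # Discriminator update: push real samples toward True(1), fakes toward False(0).
    opt_D.zero_grad()
    d_loss = bce(D(real), torch.ones(b, 1)) + \
             bce(D(G(z).detach()), torch.zeros(b, 1))
    d_loss.backward()
    opt_D.step()

    # Generator update: try to make the discriminator label fakes as True.
    opt_G.zero_grad()
    g_loss = bce(D(G(z)), torch.ones(b, 1))
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```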
GAN-based reservoir modeling methods generally fall into two broad classes, unconditional simulation and conditional simulation; the latter is stochastic simulation under observation-data constraints, and its results usually carry more geological meaning than the former. However, GANs also have drawbacks. For example, when the real distribution and the generated distribution differ greatly, the corresponding JS divergence between them saturates, so the discriminator cannot provide a reasonable difference measure and the gradient vanishes. Although the -log D alternative loss can alleviate weak gradients early in training, the asymmetric KL divergence it introduces penalizes the discriminator inconsistently, making GAN training unstable and leaving the diversity of the generated results unguaranteed. The Wasserstein distance, by contrast, is smooth and can reasonably measure the difference between two distributions, and can therefore reasonably guide the training of a GAN. A reasonable measurement index is thus needed to evaluate the difference between the generated and real distributions, so as to ensure both the simulation performance and the computational efficiency of the GAN.
In the reservoir modeling field, the expression of prior geological knowledge is limited, while GAN training requires the support of a large training data set to complete the corresponding generation task. Although a training data set can be produced by model segmentation, segmentation easily destroys the association between global and local features in the training image, and a fixed receptive field may prevent the GAN's reconstruction process from accounting for both, leading to poor simulation performance. A multi-scale characterization method based on a pyramid structure, however, can overcome this data-set limitation. Its principle is that, by downsampling the original image to different sizes, a similar spatial distribution pattern is maintained across all images. In this way, the GAN can extract the corresponding feature information from training images at different scales, realizing multi-stage, multi-scale random generation tasks.
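As a concrete illustration of this principle, a short sketch for building such a pyramid in PyTorch follows (names are assumptions; for categorical facies codes, nearest-neighbor resampling may be preferable to the bilinear/trilinear modes shown):

```python
import torch.nn.functional as F

def build_pyramid(image, sizes):
    """Resample one training image to every size in `sizes` (coarse to fine),
    so that all levels keep a similar spatial distribution pattern."""
    mode = 'bilinear' if image.dim() == 4 else 'trilinear'  # NCHW vs NCDHW input
    return [F.interpolate(image, size=s, mode=mode, align_corners=False)
            for s in sizes]
```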
In addition, in a multi-stage generation task based on a pyramid structure, multiple generators and discriminators must be established in a training queue, and adjacent generators are connected by upsampling to convert the image scale. If the generator of each stage is trained from random initialization, the network parameters of adjacent stages lose their close association, and training each generator alone at a fixed learning rate cannot sufficiently optimize all network model parameters. Moreover, under the chain structure of the generation sequence, the simulation result of a previous stage determines the reconstruction performance of subsequent stages. Because the training images of two adjacent stages share similar spatial distribution patterns, it can be inferred that the parameters of adjacent generators also remain correlated. Random initialization can therefore be replaced by parameter inheritance, ensuring that the training task of the next stage optimizes and adjusts the parameters on the basis of the previous stage and that the simulation results are authentic and reliable.
In summary, a conventional GAN is prone to vanishing gradients and mode collapse during training, essentially because of the asymmetry of the KL divergence. Measuring the loss with the Wasserstein distance, which has a smoothing characteristic, is therefore an important premise for stable network training. At the same time, because the expression of prior knowledge of the reservoir model is limited, GAN training lacks a supporting data set, and ensuring that the GAN is adequately trained with limited data is a challenge. Furthermore, training each stage's network separately from random initialization may leave the network parameters ineffectively trained. A strategy is therefore needed that trains multiple groups of networks simultaneously and replaces random initialization with parameter inheritance, so as to improve training efficiency. In view of these problems and challenges of GANs in complex three-dimensional geological reconstruction, it is necessary to propose a generative adversarial network model based on a single training image and a concurrent training strategy to achieve automatic reconstruction of reservoir models.
Disclosure of Invention
To overcome the limitations of GANs in the reservoir modeling field, the invention provides a multi-scale fine characterization method for reservoir models based on a concurrent generative adversarial network. In this method, a pyramid structure is introduced to represent the original training image at multiple scales, so that the training images at different scales all keep similar spatial distribution patterns. With a fixed receptive field, global and local information are captured from the small-scale and large-scale training images respectively; the training image at the small scale can be regarded as a soft data constraint, and the subsequent reconstruction process restores detail information on the basis of the small-scale reconstruction. Meanwhile, to avoid insufficient training of network parameters caused by random initialization, after the training of one stage is completed the parameter values of its generator are passed to the adjacent next stage, and a maximum concurrency is set for the generators in the training queue, so that several generators participate in training at each stage and the network parameters reach a good value range. In addition, for the optimization process of each stage, the Wasserstein distance and a gradient penalty term are introduced as the difference measure so that the gradient satisfies the 1-Lipschitz constraint, ensuring the training stability of each stage as well as the correlation between the generated samples and the original training image. The method can thus achieve multi-scale random reconstruction when the expression of the prior reservoir model is limited, providing a new technical means and theoretical support for the stochastic simulation and fine characterization of complex reservoir models.
The technical scheme adopted by the invention is as follows: a multi-scale fine characterization method for reservoir models based on a concurrent generative adversarial network, comprising the following steps:
s1: establishing a priori reservoir model of a research area as an original training image, and extracting corresponding attribute information;
s2: defining a value range of the super parameter, and acquiring the size of an original training image;
s3: inputting an original training image into a pyramid model, obtaining a multi-scale characterization result, and storing the multi-scale characterization result into a training image queue;
s4: creating a generator network, carrying out random initialization on parameters in the generator network only in a first training stage, and adding the initialized generator into a generator training queue;
s5: traversing the training image queue, extracting the training images sequentially from the smallest scale, and training stage by stage until the traversal is finished; as training deepens, a concurrent training mode is adopted, an adversarial learning process is established through the discriminator, and the network parameters of multiple generators are optimized until convergence;
s6: when the generators of all stages complete training, the multi-scale reconstruction result is output based on the network parameters saved in different stages by inputting given data.
Further, the method further comprises the following steps:
s7: evaluating the reconstruction performance of the multi-scale reconstruction result (spatial variability, connectivity, structural similarity, facies attribute proportion, etc.); if the performance threshold condition is met, saving all network parameters, and inputting specific data to realize multi-scale automatic reconstruction of the reservoir model; otherwise, returning to step S2, readjusting the parameters and the network structure, and retraining the network model.
Further, in step S1, the prior reservoir model includes a two-dimensional model and a three-dimensional model.
Further, for the two-dimensional model, the step of inputting the original training image into the pyramid model to obtain the multi-scale characterization result includes:
acquiring the size of the training image, where the size corresponding to the two-dimensional model is y_n = (L, W), with L and W denoting length and width;

with n training stages, the size y_i of the training image for each individual training stage satisfies:

$$y_i = y_n \times m^{\frac{n-1}{\log n}\,\log(n-i)+1}, \qquad i = 1, \dots, n-1.$$

For the two-dimensional model, m satisfies:

$$m = \left(\frac{\min(L_1, W_1)}{\min(L\,\delta,\; W\,\delta)}\right)^{\frac{1}{n-1}}, \qquad \delta = \min\!\left(\frac{\max(L_s, W_s)}{\max(L, W)},\; 1\right),$$

where L_1 and W_1 denote the length and width of the first-stage training image, i.e. the minimum size; L_s and W_s denote the size of the original training image, i.e. the maximum size; min(·) and max(·) take the minimum and maximum of the values inside the brackets; and m and δ are scaling factors.
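The size schedule above can be computed directly; the following sketch implements the two-dimensional formulas literally (natural logarithms and the helper name are assumptions; the three-dimensional case extends it with H):

```python
import math

def stage_sizes_2d(L, W, L1, W1, n, Ls=None, Ws=None):
    """Sizes y_1..y_n from y_i = y_n * m**(((n-1)/log n)*log(n-i) + 1)."""
    Ls = Ls if Ls is not None else L
    Ws = Ws if Ws is not None else W
    d = min(max(Ls, Ws) / max(L, W), 1.0)                    # delta
    m = (min(L1, W1) / min(L * d, W * d)) ** (1.0 / (n - 1))
    sizes = []
    for i in range(1, n):
        e = ((n - 1) / math.log(n)) * math.log(n - i) + 1
        sizes.append((round(L * m ** e), round(W * m ** e)))
    sizes.append((L, W))                                     # stage n: full size
    return sizes
```

For example, stage_sizes_2d(250, 250, 26, 26, 7) yields a monotonically growing sequence of sizes ending at 250×250.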
Further, for the three-dimensional model, the step of inputting the original training image into the pyramid model to obtain the multi-scale characterization result includes:
acquiring the size of the training image, where the size corresponding to the three-dimensional model is y_n = (L, W, H), with L and W denoting length and width and H the height of the training image;

with n training stages, the size y_i of the training image for each individual training stage satisfies:

$$y_i = y_n \times m^{\frac{n-1}{\log n}\,\log(n-i)+1}, \qquad i = 1, \dots, n-1.$$

For the three-dimensional model, m can be defined as:

$$m = \left(\frac{\min(L_1, W_1, H_1)}{\min(L\,\delta,\; W\,\delta,\; H\,\delta)}\right)^{\frac{1}{n-1}}, \qquad \delta = \min\!\left(\frac{\max(L_s, W_s, H_s)}{\max(L, W, H)},\; 1\right),$$

where L_1, W_1 and H_1 denote the length, width and height of the first-stage training image, i.e. the minimum size; L_s, W_s and H_s denote the length, width and height of the original training image, i.e. the maximum size; min(·) and max(·) take the minimum and maximum of the values inside the brackets; and m and δ are scaling factors.
Further, in step S2, the defined hyper-parameter value ranges include: the total number of training stages train_stages, the maximum concurrency con_num, the number of training epochs per stage, the noise weight, the learning rate, the reconstruction-loss weight coefficient α, the gradient-penalty weight coefficient β, and the like.
Further, in step S5, the training process of the generator network for each individual stage includes the following steps:
s51: initializing a discriminator, carrying out parameter random initialization if the phase is the first phase, otherwise loading the network parameters of the discriminator of the previous phase, and carrying out a discriminating process on the basis;
s52: defining a noise sequence, wherein no noise is added in the training process of the first stage; starting from the second training stage, randomly sampling from Gaussian distribution, adding the sampling to a noise sequence, and participating in a random reconstruction process of a corresponding stage;
s53: defining two optimizers to participate in the parameter optimization of each training stage's network; from the second stage onward, the network parameters trained in the previous stage are passed directly to the adjacent next stage. If the number of generators participating in training in the generator training queue is less than con_num, the learning rate decreases progressively from the newest stage backwards while the generators train simultaneously; the learning rate can be written as

$$\theta_i = \theta_n \cdot \lambda^{\,n-i}, \qquad i = 1, 2, \dots, n-1,$$

where θ_n is the learning rate of the n-th stage and λ (0 < λ < 1) is the corresponding scaling factor. If the length L_seq of the training queue is greater than con_num, the (L_seq - con_num)-th generator in the queue is popped and its parameters kept fixed, and the subsequent generators are trained in the same way (see the code sketch after step S56 below);
s54: the generator and the discriminator of each stage calculate the corresponding loss function and adopt the same optimization mode;
s55: performing back propagation and gradient updating operation on the result of the loss function calculation, so as to update the network parameters of the generator and the discriminator in the current training stage;
s56: after the training of the stage is completed, inputting given data to obtain a simulation result in the stage, upsampling the simulation result, combining the simulation result with noise with a certain weight, and taking the result as the input of a generator of the next stage.
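The queue management of step S53 and the handoff of step S56 can be sketched as follows (a PyTorch sketch with illustrative names; no concrete generator architecture is assumed):

```python
import copy
import torch
import torch.nn.functional as F

def advance_stage(gen_queue, theta_n, lam=0.1, con_num=3):
    """S53: the new stage inherits the previous generator's parameters;
    only the newest `con_num` generators stay trainable, with learning
    rates decayed as theta_i = theta_n * lam**(n - i)."""
    gen_queue.append(copy.deepcopy(gen_queue[-1]))   # parameter inheritance
    for g in gen_queue[:-con_num]:                   # pop: freeze older stages
        for p in g.parameters():
            p.requires_grad_(False)
    n = len(gen_queue)
    return [torch.optim.Adam(g.parameters(), lr=theta_n * lam ** (n - i))
            for i, g in enumerate(gen_queue, start=1) if i > n - con_num]

def next_stage_input(prev_out, next_size, noise_weight=0.1):
    """S56: upsample the previous stage's simulation result and combine it
    with Gaussian noise of a certain weight as the next generator's input."""
    up = F.interpolate(prev_out, size=next_size, mode='bilinear',
                       align_corners=False)
    return up + noise_weight * torch.randn_like(up)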
Further, in step S54, the loss function of each stage is computed in the same way, with the calculation formula:

$$\min_{G_i}\max_{D_i}\; \mathcal{L}(G_i, D_i) = \mathcal{L}_{adv}(G_i, D_i) + \alpha\,\mathcal{L}_{rec}(G_i),$$

where L_adv denotes the adversarial loss, L_rec the reconstruction loss, G_i and D_i the network parameters of the stage-i generator and discriminator respectively, and α a weight factor. The adversarial loss L_adv is defined as follows:

$$\mathcal{L}_{adv}(G_i, D_i) = \mathbb{E}_{\tilde{x}_i \sim P_g}\big[D(\tilde{x}_i)\big] - \mathbb{E}_{x_i \sim P_r}\big[D(x_i)\big] + \beta\,\mathbb{E}_{\hat{x}_i \sim P_{\hat{x}}}\Big[\big(\lVert \nabla_{\hat{x}_i} D(\hat{x}_i) \rVert_2 - 1\big)^2\Big],$$

where x_i denotes the training image input at stage i, x̃_i the simulation result of the corresponding stage, x̃_{i-1}↑ the result obtained by upsampling from stage i-1, D the discrimination result of the discriminator, G the generator result, x̂_i a sample selected in proportion between the real and generated samples, ∇_{x̂_i}D(x̂_i) the gradient at the mixed sample, P_g, P_r and P_x̂ the distributions of the generated data, the real data and the sampled data respectively, ∇ the gradient operator, β the gradient-penalty coefficient, and E_* the mathematical expectation over the corresponding data distribution;

the reconstruction loss L_rec is measured with the L2 norm and can be defined as:

$$\mathcal{L}_{rec}(G_i) = \big\lVert G_i(\tilde{x}_{i-1}^{\uparrow}) - x_i \big\rVert_2^2.$$
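A sketch of these two loss terms in PyTorch follows (illustrative names; D is the stage discriminator, `fake` the generator output for the upsampled previous-stage input, and `real` the stage's training image):

```python
import torch

def d_loss_wgan_gp(D, fake, real, beta=0.1):
    """Critic loss: Wasserstein estimate plus gradient penalty, keeping the
    critic's gradient near the 1-Lipschitz constraint."""
    eps = torch.rand(real.size(0), *[1] * (real.dim() - 1), device=real.device)
    x_hat = (eps * real + (1 - eps) * fake.detach()).requires_grad_(True)
    grads = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    gp = ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
    return D(fake.detach()).mean() - D(real).mean() + beta * gp

def g_loss(D, fake, target, alpha=10.0):
    """Generator loss: adversarial term plus the alpha-weighted L2
    reconstruction of the stage's training image."""
    rec = torch.mean((fake - target) ** 2)
    return -D(fake).mean() + alpha * rec
```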
further, network parameters of an independent training stage are optimized and adjusted by establishing a joint loss function, the difference between the generated distribution and the real distribution is estimated by using Wasserstein distance, and a gradient penalty strategy is adopted, so that the gradient of the loss function meets 1-Lipschitz constraint.
Based on the above technical scheme, the invention has the following beneficial effects:
(1) The invention uses a pyramid structure to downsample the prior reservoir model, obtaining a corresponding multi-scale characterization result in which the training images of different scales all keep a consistent spatial distribution pattern. Feature extraction starts from the small-scale training image: with a fixed receptive field, more global information can be extracted from the small-scale training image, while more detail information can be captured in the large-scale training image. This multi-scale representation supports the whole GAN training process and solves the problem that a GAN cannot be trained effectively with limited data.
(2) The concurrent training strategy adopted in the invention allows the parameters of the network model to be trained effectively. In the multi-scale characterization result, the training images of two adjacent stages are correlated, and the two stages are connected in series by upsampling; it can be inferred from this that the network model parameters of adjacent stages are also correlated. The network model parameters of the previous stage can therefore be passed directly to the adjacent next stage through parameter inheritance, and by setting a maximum concurrency and dynamically adjusting the learning rate, different numbers of generators at different stages are trained simultaneously, so that the network parameters are trained sufficiently.
(3) In the invention, a joint loss function is established for each individual training stage to optimize the network parameters. The joint loss function mainly comprises an adversarial loss and a reconstruction loss. For the adversarial loss, the Wasserstein distance measures the difference between the generated and real distributions, and a gradient penalty strategy is introduced so that the gradient of the loss function satisfies the 1-Lipschitz constraint, making training more stable and avoiding gradient explosion or mode collapse. For the reconstruction loss, the L2 norm is used; as the scale increases, the restoration of detail information improves markedly.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
fig. 1 is a general technical route of the present invention.
FIG. 2 is a multi-scale characterization result of the convolution pyramid structure of the present invention.
Fig. 3 is a basic structure of the concurrent generation of the countermeasure network in the present invention, including two parts of a generator and a discriminator. The generator is a generating sequence composed of a plurality of sub-networks, and the discriminator discriminates the multi-scale reconstruction result.
FIG. 4 shows various random simulation results of the non-stationary fan delta in the two-dimensional case of example 1 of the present invention.
Fig. 5 shows the quantitative evaluation of the corresponding two-dimensional reconstruction in embodiment 1 of the present invention. Fig. 5(a) shows the spatial variability curves between different simulation results and the reference model; fig. 5(b) the connectivity curves between different simulation results and the reference model; fig. 5(c) the spatial structural similarity, visualized by MDS in a two-dimensional rectangular coordinate system.
Fig. 6 shows multiple random simulation results of the non-stationary, multi-attribute Poyang Lake delta in the three-dimensional case of example 2 of the present invention.
Fig. 7 shows the quantitative evaluation of the corresponding three-dimensional reconstruction in embodiment 2 of the present invention. Fig. 7(a) shows the spatial variability curves between different simulation results and the reference model; fig. 7(b) the spatial structural similarity, visualized by MDS in a two-dimensional rectangular coordinate system; figs. 7(c) and 7(d) the connectivity curves of the two sedimentary facies, mouth bar and distributary channel, between different simulation results and the reference model.
Detailed Description
For a clearer understanding of technical features, objects and effects of the present invention, a detailed description of embodiments of the present invention will be made with reference to the accompanying drawings.
To overcome the defects and shortcomings of numerical simulation methods in the random reconstruction of non-stationary reservoir models, the invention provides a multi-scale fine characterization method for reservoir models based on a concurrent generative adversarial network. The original training image is given a multi-scale representation through a pyramid structure, and the representations at different scales maintain a consistent spatial distribution pattern. With a fixed receptive field, global information and local information can be extracted from the small-scale and large-scale images respectively. Meanwhile, a concurrent training strategy is applied in which parameter inheritance between adjacent stages replaces random initialization, ensuring that the parameters of the network model are trained sufficiently and effectively. Finally, the Wasserstein distance measures the discrepancy between the generated and real distributions, a gradient penalty strategy keeps the gradient of the loss function within a bounded range to stabilize training, a reconstruction loss restores the detail information of the image, and the coupling of the two loss functions improves the overall simulation performance of the network.
Referring to fig. 1 and 3, a multi-scale fine characterization method for reservoir models based on a concurrent generative adversarial network comprises the following steps:
s1: establishing a priori reservoir model of a research area as an original training image, and extracting corresponding attribute information;
s2: defining a value range of the super parameter, and acquiring the size of an original training image;
s3: inputting an original training image into a pyramid model, obtaining a multi-scale characterization result, and storing the multi-scale characterization result into a training image queue;
s4: creating a generator network, carrying out random initialization on parameters in the generator network only in a first training stage, and adding the initialized generator into a generator training queue;
s5: traversing the training image queue, extracting the training images sequentially from the smallest scale, and training stage by stage until the traversal is finished; as training deepens, a concurrent training mode is adopted, an adversarial learning process is established through the discriminator, and the network parameters of multiple generators are optimized until convergence;
s6: when the generators of all stages complete training, the multi-scale reconstruction result is output based on the network parameters saved in different stages by inputting given data.
S7: evaluating the reconstruction performance of the multi-scale reconstruction result (spatial variability, connectivity, structural similarity, facies attribute proportion, etc.); if the performance threshold condition is met, saving all network parameters, and inputting specific data to realize multi-scale automatic reconstruction of the reservoir model; otherwise, returning to step S2, readjusting the parameters and the network structure, and retraining the network model.
The specific implementation process of the step S1 is as follows:
combining the existing geological data and observation data, establishing a priori reservoir model of a corresponding research area, extracting corresponding attribute information, and mapping different types of attribute information into a regular two-dimensional grid or a regular three-dimensional grid through relation mapping;
in step S2, a super parameter value range is defined, which specifically includes: total training phase train_stations, maximum concurrency number con num The number of single-stage training epcchs, noise weight, learning rate, gradient penalty term weight coefficient alpha, reconstruction loss weight coefficient beta and the like. The values of these super parameters are critical to the reconstruction performance of the network model.
In step S3, the original training image is input into the pyramid structure, as shown in fig. 2, and the multi-scale characterization result is stored in the training image queue. For this step, the multi-scale characterization can be obtained through the following steps S31 to S34:

S31: acquiring the size of the training image, where the corresponding sizes in the two- and three-dimensional cases are y_n = (L, W) and y_n = (L, W, H), with L and W denoting length and width and H the height of the training image.

S32: assuming train_stages = n, for each individual training stage the size of its corresponding training image can be defined as:

$$y_i = y_n \times m^{\frac{n-1}{\log n}\,\log(n-i)+1}, \qquad i = 1, \dots, n-1.$$

S33: for the two-dimensional case, m can be defined as:

$$m = \left(\frac{\min(L_1, W_1)}{\min(L\,\delta,\; W\,\delta)}\right)^{\frac{1}{n-1}}, \qquad \delta = \min\!\left(\frac{\max(L_s, W_s)}{\max(L, W)},\; 1\right).$$

S34: for the three-dimensional case, m can be defined as:

$$m = \left(\frac{\min(L_1, W_1, H_1)}{\min(L\,\delta,\; W\,\delta,\; H\,\delta)}\right)^{\frac{1}{n-1}}, \qquad \delta = \min\!\left(\frac{\max(L_s, W_s, H_s)}{\max(L, W, H)},\; 1\right).$$

Here L_1, W_1 and H_1 denote the length, width and height of the first-stage training image, i.e. the minimum size; L_s, W_s and H_s denote the length, width and height of the original training image, i.e. the maximum size; min(·) and max(·) take the minimum and maximum of the values inside the brackets; and m and δ are scaling factors.
In step S5, for each training process of the single-stage network model, steps S51 to S56 may be performed:
s51: initializing a discriminator, carrying out parameter random initialization if the phase is the first phase, otherwise loading the network parameters of the discriminator of the previous phase, and carrying out a discriminating process on the basis;
s52: defining a noise sequence, wherein no noise is added in the training process of the first stage; starting from the second training stage, randomly sampling from Gaussian distribution, adding the sampling to a noise sequence, and participating in a random reconstruction process of a corresponding stage;
s53: defining two optimizers to participate in the parameter optimization of each training stage's network; from the second stage onward, the network parameters trained in the previous stage are passed directly to the adjacent next stage. If the number of generators participating in training in the generator training queue is less than con_num, the learning rate decreases progressively from the newest stage backwards while the generators train simultaneously; the learning rate can be written as

$$\theta_i = \theta_n \cdot \lambda^{\,n-i}, \qquad i = 1, 2, \dots, n-1,$$

where θ_n is the learning rate of the n-th stage and λ (0 < λ < 1) is the corresponding scaling factor. If the length L_seq of the training queue is greater than con_num, the (L_seq - con_num)-th generator in the queue is popped and its parameters kept fixed, and the subsequent generators are trained in the same way;
s54: the generator and the discriminator of each stage calculate the corresponding loss function and adopt the same optimization mode;
s55: performing back propagation and gradient updating operation on the result of the loss function calculation, so as to update the network parameters of the generator and the discriminator in the current training stage;
s56: after the training of the stage is completed, inputting given data to obtain a simulation result in the stage, upsampling the simulation result, combining the simulation result with noise with a certain weight, and taking the result as the input of a generator of the next stage.
S57: steps S51 to S56 are repeated until the traversal in the training image queue is completed.
When the generators of all stages complete training, simulation results at different scales can be output by inputting given data, and the reconstruction results are evaluated in terms of spatial variability, connectivity, structural similarity, facies attribute proportion and the like. If the variogram and connectivity curves of a simulation result are close to those of the reference model, with the fluctuation band between them not exceeding 1, all network model parameters are saved, and multi-scale automatic reconstruction of the reservoir model is realized by inputting specific data. If the reconstruction performance is poor, return to step S2, readjust the parameters and the network structure, and retrain the network model.
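One simple way to compute the variogram comparison mentioned here is sketched below (NumPy, with illustrative names; an experimental variogram along one axis of a gridded model — the connectivity and MDS analyses would be computed analogously with dedicated geostatistics tools):

```python
import numpy as np

def experimental_variogram(field, max_lag=30, axis=0):
    """gamma(h) = 0.5 * mean((z(x) - z(x + h))**2) for lags h = 1..max_lag
    along one axis of a gridded property model."""
    n = field.shape[axis]
    gammas = []
    for h in range(1, max_lag + 1):
        a = np.take(field, np.arange(n - h), axis=axis)
        b = np.take(field, np.arange(h, n), axis=axis)
        gammas.append(0.5 * np.mean((a - b) ** 2))
    return np.array(gammas)
```

Curves computed this way for each simulation can then be overlaid on the reference model's curve to check that the fluctuation band stays within the stated threshold.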
Example 1:
For the automatic reconstruction process in the two-dimensional case, example 1 selects a non-stationary fan delta as the two-dimensional prior reservoir model. First, the original model is input into the pyramid structure. The training-image size of the first stage is defined as 26×26 (unit: pixel), with total training stages train_stages = 7, initial learning rate θ_1 = 0.0002, learning-rate scaling factor λ = 0.1, maximum concurrency con_num = 3, gradient-penalty weight coefficient β = 0.1, reconstruction-loss weight coefficient α = 10, and 600 training epochs per stage. Network parameters are optimized with the Adam optimizer under its default parameter settings. With this parameter setting, training starts from the smallest training image; in each individual training stage the generator is loaded into the training queue, and the concurrent training strategy is adopted for parameter optimization. As the training stage increases, if the training-queue length L_seq is greater than con_num, the (L_seq - con_num)-th generator in the queue is popped and its parameters kept fixed while the subsequent generators continue training. When training is completed, the network model parameters are saved. By inputting random noise, different random simulation results are obtained, as shown in fig. 4. For further quantitative evaluation, 20 simulation results are generated at random; their variogram curves, connectivity curves and multidimensional scaling (MDS) results against the reference model are shown in fig. 5: fig. 5(a) shows the spatial variability curves between different simulation results and the reference model; fig. 5(b) the connectivity curves; and fig. 5(c) the spatial structural similarity, visualized by MDS in a two-dimensional rectangular coordinate system. As can be seen from fig. 5, the simulation results maintain high similarity to the reference model in spatial variability, attribute connectivity and spatial distribution pattern.
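Collected into one place, the hyper-parameters of this example are as follows (an illustrative configuration dictionary; the key names are assumptions, the values are those stated above):

```python
config = dict(
    first_stage_size=(26, 26),   # pixels
    train_stages=7,
    theta_1=2e-4,                # initial learning rate
    lam=0.1,                     # learning-rate scaling factor
    con_num=3,                   # maximum concurrency
    beta=0.1,                    # gradient-penalty weight
    alpha=10.0,                  # reconstruction-loss weight
    epochs_per_stage=600,
    optimizer='Adam',            # default parameter settings
)
```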
Example 2:
For the automatic reconstruction process in the three-dimensional case, example 2 selects the Poyang Lake fan delta as the three-dimensional prior reservoir model. First, the original model is input into the pyramid structure. The training-image size of the first stage is defined as 15×15×12 (unit: voxel), with total training stages train_stages = 5, initial learning rate θ_1 = 0.0001, learning-rate scaling factor λ = 0.1, maximum concurrency con_num = 3, gradient-penalty weight coefficient β = 0.1, reconstruction-loss weight coefficient α = 10, and 400 training epochs per stage. Network parameters are optimized with the Adam optimizer under its default parameter settings. With this parameter setting, training starts from the smallest training image; in each individual training stage the generator is loaded into the training queue, and the concurrent training strategy is adopted for parameter optimization. As the training stage increases, if the training-queue length L_seq is greater than con_num, the (L_seq - con_num)-th generator in the queue is popped and its parameters kept fixed while the subsequent generators continue training. When training is completed, the network model parameters are saved. By inputting random noise, different random simulation results are obtained, as shown in fig. 6. For further quantitative evaluation, 20 simulation results are generated at random; their variogram curves, connectivity curves and multidimensional scaling results against the reference model are shown in fig. 7: fig. 7(a) shows the spatial variability curves between different simulation results and the reference model; fig. 7(b) the spatial structural similarity, visualized by MDS in a two-dimensional rectangular coordinate system; and figs. 7(c) and 7(d) the connectivity curves of the two sedimentary facies, mouth bar and distributary channel, between the different simulation results and the reference model. As can be seen from fig. 7, the simulation results maintain high similarity to the reference model in spatial variability, attribute connectivity and spatial distribution pattern.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely descriptive and do not represent the merits of the embodiments. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The terms first, second, third, etc. do not denote any order; they are used merely as labels.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (9)

1. A multi-scale fine characterization method for reservoir models based on a concurrent generative adversarial network, comprising the following steps:
s1: establishing a priori reservoir model of a research area as an original training image, and extracting corresponding attribute information;
s2: defining a value range of the super parameter, and acquiring the size of an original training image;
s3: inputting an original training image into a pyramid model, obtaining a multi-scale characterization result, and storing the multi-scale characterization result into a training image queue;
s4: creating a generator network, carrying out random initialization on parameters in the generator network only in a first training stage, and adding the initialized generator into a generator training queue;
s5: traversing the training image queue, extracting the training images sequentially from the smallest scale, and training stage by stage until the traversal is finished; as training deepens, a concurrent training mode is adopted, an adversarial learning process is established through the discriminator, and the network parameters of multiple generators are optimized until convergence;
s6: when the generators of all stages complete training, the multi-scale reconstruction result is output based on the network parameters saved in different stages by inputting given data.
2. The reservoir model multi-scale fine characterization method of claim 1, further comprising:
s7: performing reconstruction performance evaluation on the multi-scale reconstruction result, if the performance threshold condition is met, storing all network parameters, and inputting specific data to realize multi-scale automatic reconstruction of the reservoir model; otherwise, returning to the step S2, readjusting the parameters and the network structure, and retraining the network model.
3. The method of multi-scale fine characterization of a reservoir model according to claim 1, wherein in step S1, the prior reservoir model comprises a two-dimensional model and a three-dimensional model.
4. A reservoir model multi-scale fine characterization method according to claim 3, wherein for the two-dimensional model, the step of inputting the original training image into a pyramid model to obtain a multi-scale characterization result comprises:
acquiring the size of the training image, where the size corresponding to the two-dimensional model is y_n = (L, W), with L and W denoting length and width;

with n training stages, the size y_i of the training image for each individual training stage satisfies:

$$y_i = y_n \times m^{\frac{n-1}{\log n}\,\log(n-i)+1}, \qquad i = 1, \dots, n-1.$$

For the two-dimensional model, m satisfies:

$$m = \left(\frac{\min(L_1, W_1)}{\min(L\,\delta,\; W\,\delta)}\right)^{\frac{1}{n-1}}, \qquad \delta = \min\!\left(\frac{\max(L_s, W_s)}{\max(L, W)},\; 1\right),$$

where L_1 and W_1 denote the length and width of the first-stage training image, i.e. the minimum size; L_s and W_s denote the size of the original training image, i.e. the maximum size; min(·) and max(·) take the minimum and maximum of the values inside the brackets; and m and δ are scaling factors.
5. A reservoir model multi-scale fine characterization method according to claim 3, wherein for the three-dimensional model, the step of inputting the original training image into a pyramid model to obtain a multi-scale characterization result comprises:
acquiring the size of the training image, where the size corresponding to the three-dimensional model is y_n = (L, W, H), with L and W denoting length and width and H the height of the training image;

with n training stages, the size y_i of the training image for each individual training stage satisfies:

$$y_i = y_n \times m^{\frac{n-1}{\log n}\,\log(n-i)+1}, \qquad i = 1, \dots, n-1.$$

For the three-dimensional model, m satisfies:

$$m = \left(\frac{\min(L_1, W_1, H_1)}{\min(L\,\delta,\; W\,\delta,\; H\,\delta)}\right)^{\frac{1}{n-1}}, \qquad \delta = \min\!\left(\frac{\max(L_s, W_s, H_s)}{\max(L, W, H)},\; 1\right),$$

where L_1, W_1 and H_1 denote the length, width and height of the first-stage training image, i.e. the minimum size; L_s, W_s and H_s denote the length, width and height of the original training image, i.e. the maximum size; min(·) and max(·) take the minimum and maximum of the values inside the brackets; and m and δ are scaling factors.
6. The method for multi-scale fine characterization of a reservoir model according to claim 1, wherein in step S2, the defined hyper-parameter value ranges comprise: the total number of training stages train_stages, the maximum concurrency con_num, the number of training epochs per stage, the noise weight, the learning rate, the reconstruction-loss weight coefficient α, and the gradient-penalty weight coefficient β.
7. The method of multi-scale fine characterization of a reservoir model according to claim 1, characterized in that in step S5, the training process for each individual phase generator network comprises the steps of:
s51: initializing a discriminator, carrying out parameter random initialization if the phase is the first phase, otherwise loading the network parameters of the discriminator of the previous phase, and carrying out a discriminating process on the basis;
s52: defining a noise sequence, wherein no noise is added in the training process of the first stage; starting from the second training stage, randomly sampling from Gaussian distribution, adding the sampling to a noise sequence, and participating in a random reconstruction process of a corresponding stage;
s53: defining two optimizers to participate in the parameter optimization of each training stage's network; from the second stage onward, the network parameters trained in the previous stage are passed directly to the adjacent next stage. If the number of generators participating in training in the generator training queue is less than con_num, the learning rate decreases progressively from the newest stage backwards while the generators train simultaneously; the learning rate is written as

$$\theta_i = \theta_n \cdot \lambda^{\,n-i}, \qquad i = 1, 2, \dots, n-1,$$

where θ_n is the learning rate of the n-th stage and λ (0 < λ < 1) is the corresponding scaling factor. If the length L_seq of the training queue is greater than con_num, the (L_seq - con_num)-th generator in the queue is popped and its parameters kept fixed, and the subsequent generators are trained in the same way;
s54: the generator and the discriminator of each stage calculate the corresponding loss function and adopt the same optimization mode;
s55: performing back propagation and gradient updating operation on the result of the loss function calculation, so as to update the network parameters of the generator and the discriminator in the current training stage;
s56: after the training of the stage is completed, inputting given data to obtain a simulation result in the stage, upsampling the simulation result, combining the simulation result with noise with a certain weight, and taking the result as the input of a generator of the next stage.
8. The multi-scale fine characterization method of reservoir models according to claim 6, wherein in step S54, the loss function of each stage is computed in the same way, with the calculation formula:

$$\min_{G_i}\max_{D_i}\; \mathcal{L}(G_i, D_i) = \mathcal{L}_{adv}(G_i, D_i) + \alpha\,\mathcal{L}_{rec}(G_i),$$

where L_adv denotes the adversarial loss, L_rec the reconstruction loss, G_i and D_i the network parameters of the stage-i generator and discriminator respectively, and α a weight factor; the adversarial loss L_adv is defined as follows:

$$\mathcal{L}_{adv}(G_i, D_i) = \mathbb{E}_{\tilde{x}_i \sim P_g}\big[D(\tilde{x}_i)\big] - \mathbb{E}_{x_i \sim P_r}\big[D(x_i)\big] + \beta\,\mathbb{E}_{\hat{x}_i \sim P_{\hat{x}}}\Big[\big(\lVert \nabla_{\hat{x}_i} D(\hat{x}_i) \rVert_2 - 1\big)^2\Big],$$

where x_i denotes the training image input at stage i, x̃_i the simulation result of the corresponding stage, x̃_{i-1}↑ the result obtained by upsampling from stage i-1, D the discrimination result of the discriminator, G the generator result, x̂_i a sample selected in proportion between the real and generated samples, ∇_{x̂_i}D(x̂_i) the gradient at the mixed sample, P_g, P_r and P_x̂ the distributions of the generated data, the real data and the sampled data respectively, ∇ the gradient operator, β the gradient-penalty coefficient, and E_* the mathematical expectation over the corresponding data distribution;

the reconstruction loss L_rec is measured with the L2 norm and defined as:

$$\mathcal{L}_{rec}(G_i) = \big\lVert G_i(\tilde{x}_{i-1}^{\uparrow}) - x_i \big\rVert_2^2.$$
9. the reservoir model multi-scale fine characterization method of claim 1, wherein: network parameters of an independent training stage are optimized and adjusted by establishing a joint loss function, the difference between the generated distribution and the real distribution is estimated by using Wasserstein distance, and a gradient penalty strategy is adopted, so that the gradient of the loss function meets the 1-Lipschitz constraint.
CN202310891927.8A 2023-07-19 2023-07-19 Reservoir model multi-scale fine characterization method based on a concurrent generative adversarial network Pending CN116844054A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310891927.8A CN116844054A (en) Reservoir model multi-scale fine characterization method based on a concurrent generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310891927.8A CN116844054A (en) Reservoir model multi-scale fine characterization method based on a concurrent generative adversarial network

Publications (1)

Publication Number Publication Date
CN116844054A true CN116844054A (en) 2023-10-03

Family

ID=88170644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310891927.8A Pending CN116844054A (en) 2023-07-19 2023-07-19 Reservoir model multi-scale fine characterization method based on concurrency generation countermeasure network

Country Status (1)

Country Link
CN (1) CN116844054A (en)


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination