CN113419278B - Well-seismic joint multi-target simultaneous inversion method based on state space model and support vector regression - Google Patents

Well-seismic joint multi-target simultaneous inversion method based on state space model and support vector regression

Info

Publication number: CN113419278B
Application number: CN202110687122.2A
Authority: CN (China)
Prior art keywords: state, data, matrix, random variable, sample
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN113419278A
Inventor: He Wenyuan (何文渊)
Current assignee: Petrochina Co Ltd; Daqing Oilfield Co Ltd
Original assignee: Petrochina Co Ltd; Daqing Oilfield Co Ltd
Application filed 2021-06-21 by Petrochina Co Ltd and Daqing Oilfield Co Ltd
Priority to CN202110687122.2A
Publication of CN113419278A (application publication)
Publication of CN113419278B (grant)

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V1/00Seismology; Seismic or acoustic prospecting or detecting
    • G01V1/28Processing seismic data, e.g. for interpretation or for event detection
    • G01V1/282Application of seismic models, synthetic seismograms


Abstract

The invention relates to a well-seismic joint multi-target simultaneous inversion method based on a state space model and support vector regression, comprising the following steps: based on the measured data of a sample oil well, a state space model and a state transition matrix are established by a discretization method; a state residual random variable probability density function is then established; an observation function is established based on a support vector regression method, followed by an observation error random variable probability density function; and finally multi-target simultaneous inversion is carried out based on the state space model. The method greatly reduces the difficulty of establishing the state space model, saves manpower, material resources and time, and solves the problems of long cycle time, high cost and low production efficiency of conventional inversion methods.

Description

Well-seismic joint multi-target simultaneous inversion method based on state space model and support vector regression
Technical field:
The invention relates to the field of seismic data inversion and interpretation, and in particular to a well-seismic joint multi-target simultaneous inversion method.
Background art:
The stochastic inversion method based on logging constraints is currently a mainstream seismic inversion method, and researchers and engineers often compare deterministic inversion with stochastic inversion to highlight the characteristics of the stochastic approach. Deterministic inversion assumes that the wave impedance is a definite value in space and is usually solved with a minimization criterion based on a convolution model to obtain a smooth estimate of the wave impedance. Because seismic data are band-limited, the main limitation of deterministic inversion is that the result lacks both low-frequency and high-frequency components; the low-frequency component is usually compensated by prestack time/depth migration velocity spectrum data or low-frequency logging data, and the high-frequency component is mainly compensated by logging data. Constrained sparse spike inversion and model-based inversion are the two most common deterministic inversion methods. Stochastic seismic inversion assumes that the wave impedance is a random variable in space that can be represented by a probability distribution, and each sampling of that distribution yields one inversion realization. Stochastic seismic inversion fuses seismic data, well logging data and geostatistical information into a posterior probability distribution of the subsurface wave impedance, samples this posterior distribution with a Markov chain Monte Carlo method, and studies its properties by jointly analyzing many sampling results. Thus the deterministic inversion method focuses on obtaining the single most probable model of this distribution, while stochastic inversion studies the properties of the whole probability distribution, so the two are complementary. Compared with deterministic inversion, stochastic inversion has the following advantages: first, it comprehensively uses logging information and geostatistical information, and the inversion process does not apply the local smoothing used in deterministic inversion, so more detailed information can be extracted from the seismic data; second, stochastic inversion starts from the well points, so the inversion result agrees very well with the wells; third, the uncertainty of the inversion result can be evaluated quantitatively by jointly analyzing many realizations; finally, the latest stochastic inversion methods are built on the Bayes formula, so multi-scale information can be conveniently fused.
Extensive practical experience shows that the lateral variation of deterministic inversion results is natural, but the vertical resolution is low, individual sand bodies match drilling results poorly, and actual production requirements cannot be met. The vertical resolution of stochastic inversion based on logging constraints is greatly improved, and the boundaries of thin sandstones are clearer and basically consistent with the sandstones interpreted from drilling, because the initial wave impedance model contains high-frequency logging information beyond the seismic band that is preserved in the inversion result, and this high-frequency information corresponds to the response of thin layers. However, because thin layers are usually small in extent and change laterally quickly, enough wells are needed to guarantee the reliability of the interpolation; stochastic inversion based on logging constraints therefore places higher requirements on the number of wells and their spatial distribution, and is more suitable for reservoir seismic description in the development and production stages.
The state space model is a dynamic time-series model, which can be expressed as equations (1) and (2):

$X_t = f_t(X_{t-1}) + Q_t$    (1)

$Y_t = o_t(X_t) + R_t$    (2)

where $X_t = f_t(X_{t-1}) + Q_t$ is a random process representing the evolution over time of a state that cannot be observed directly within the model; $X_t$ is the random variable at time t, representing the state at time t; $X_{t-1}$ is the random variable at time t-1, representing the state at time t-1; $f_t(X_{t-1})$ is the state transition function at time t, which may be arbitrary and is most commonly a Markov chain state transition matrix; and $Q_t$ is the state noise random variable at time t. $Y_t = o_t(X_t) + R_t$ is the state observation at time t, representing a measurement of the state that is not directly known within the model at time t; $o_t(X_t)$ is the state observation function at time t, and $R_t$ is a random variable representing the measurement error. If time is discrete, i.e. t only takes non-negative integer values, and $X_t$, $Q_t$, $Y_t$, $R_t$ are all Gaussian random variables and $o_t(X_t)$ is a linear function, the model is a discrete-time linear Gaussian system, which is equivalent to Gaussian process regression in the machine learning field. For convenience of description, let $\theta = (X_t, Q_t, Y_t, R_t, f_t, o_t)$ denote the parameters of the model. Based on the state space model, any inversion job can be expressed as: given the model parameters $\theta$ and a batch of observation data $y = \{y_{t-n+1}, y_{t-n+2}, \ldots, y_{t-1}, y_t\}$, solve for the state sequence $x = \{x_{t-n+1}, x_{t-n+2}, \ldots, x_{t-1}, x_t\}$ of the model, where n is the length of the batch of observation data y. Under the Bayesian framework, the posterior probability density function of the random variable $X_t$ can be established as $p(X_t \mid y, \theta) = c \cdot p(y \mid X_t, \theta) \cdot p(X_t \mid \theta)$; once the posterior probability density function $p(X_t \mid y, \theta)$ is determined, the joint posterior probability density function of the random sequence $X = \{X_{t-n+1}, X_{t-n+2}, \ldots, X_{t-1}, X_t\}$ is established as:

$p(X \mid Y, \theta) = p(X_{t-n+1}, X_{t-n+2}, \ldots, X_{t-1}, X_t \mid Y, \theta) = p(X_{t-n+1} \mid Y, \theta) \cdot p(X_{t-n+2} \mid X_{t-n+1}, Y, \theta) \cdot p(X_{t-n+3} \mid X_{t-n+2}, Y, \theta) \cdots p(X_t \mid X_{t-1}, Y, \theta).$

The state sequence $x = \{x_{t-n+1}, x_{t-n+2}, \ldots, x_{t-1}, x_t\}$ can be solved by two methods. One is maximum a posteriori, i.e.

$\hat{x} = \arg\max_x p(X = x \mid Y, \theta);$

the other is Bayesian inference, i.e. solving for the expectation of X:

$x = \{x_{t-n+1}, x_{t-n+2}, \ldots, x_{t-1}, x_t\} = E_{p(X \mid Y, \theta)}(X).$
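To make the state space model and state estimation concrete, the following minimal Python sketch (an illustration added for this write-up, not part of the patent disclosure; all matrices, names and values are assumed) simulates a discrete-time linear Gaussian system $X_t = F X_{t-1} + Q_t$, $Y_t = O X_t + R_t$ and estimates the state sequence with a Kalman filter, the classical sequential solution for exactly this linear Gaussian case:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) model matrices: X_t = F X_{t-1} + Q_t, Y_t = O X_t + R_t
F = np.array([[0.9, 0.1], [0.0, 0.8]])      # state transition
O = np.array([[1.0, 0.0]])                  # observation matrix
Q = 0.05 * np.eye(2)                        # state noise covariance
R = np.array([[0.1]])                       # observation noise covariance

# Simulate n steps of the hidden state and its observations
n, x = 100, np.zeros(2)
xs, ys = [], []
for _ in range(n):
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)
    y = O @ x + rng.multivariate_normal(np.zeros(1), R)
    xs.append(x)
    ys.append(y)

# Kalman filter: sequential Bayesian state estimation for the linear Gaussian case
m, P = np.zeros(2), np.eye(2)
est = []
for y in ys:
    m, P = F @ m, F @ P @ F.T + Q                      # predict
    S = O @ P @ O.T + R                                # innovation covariance
    K = P @ O.T @ np.linalg.inv(S)                     # Kalman gain
    m = m + K @ (y - O @ m)                            # update mean
    P = (np.eye(2) - K @ O) @ P                        # update covariance
    est.append(m)

print("state estimation RMSE:", np.sqrt(np.mean((np.array(est) - np.array(xs)) ** 2)))
```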
At present, state estimation methods built on state space models are mature, and classical methods and their variants have been developed, such as hidden Markov chain decoding, Kalman filtering and particle filtering. For complex systems, however, building the state space model itself, i.e. specifying the model parameters $\theta = (X_t, Q_t, Y_t, R_t, f_t, o_t)$, is difficult: from the atomic scale to celestial bodies, in both the natural sciences and the social sciences, establishing and perfecting a state space model may be a core objective of scientific research and requires a great deal of manpower and material resources. Machine learning is a multi-disciplinary field covering probability theory, statistics, approximation theory and complex algorithms; it uses the computer as a tool, is dedicated to simulating the way humans learn, and organizes existing knowledge structures to effectively improve learning efficiency. Using machine learning methods, the difficulty of establishing the state space model can be greatly reduced, saving manpower, material resources and time. At present most oil fields are in the middle and late stages of exploitation and have accumulated a large amount of seismic, well logging, statistical prediction and other data; mining more information from this ever-growing mass of data for decision-making and insight can greatly shorten the cycle from seismic data acquisition, processing and interpretation to well location planning, improve decision precision, save development cost and improve production efficiency.
Summary of the invention:
The invention solves the above problems through the following technical scheme: a well-seismic joint multi-target simultaneous inversion method based on a state space model and support vector regression, comprising the following steps:
The first step: establishing a state space model by a discretization method based on the measured data of a sample oil well, the measured data of the sample oil well comprising the longitudinal wave velocity, transverse wave velocity and density of measured sample curves;
The second step: establishing a state transition matrix, wherein the measured data of the sample oil well are first discretized using the discretization transform;
The third step: establishing a state residual random variable probability density function;
The fourth step: establishing an observation function based on a support vector regression method;
The fifth step: establishing an observation error random variable probability density function;
The sixth step: performing multi-target simultaneous inversion based on the state space model.
Further, in the first step, the discretization method is a K-Means discretization method. Specifically, let $\{x_t\}$ be the sample data; before discretizing the sample data $\{x_t\}$ with K-Means clustering, the continuous attribute values of each dimension of $\{x_t\}$ are normalized to zero mean and unit variance. Denote the K-Means discretization of sample data by S(). Let $X_t = f_t(X_{t-1}) + Q_t$, where $X_t \in R^n$ is the model state, R is the set of real numbers, and n is the dimension of the state $X_t$. To set up the state space, the continuous state $X_t$ is discretized: let $S_t \in \{1, 2, 3, \ldots, k\}$, k an integer greater than 1, with $S_t = S(X_t)$ and $X_t = S^{-1}(S_t) + Q_t$, where S() is the K-Means discretization of the sample data and $S^{-1}()$ is the inverse discretization transform. Assuming there is sufficient sample data $\{x_t\}$, $X_t = f_t(X_{t-1}) + Q_t$ can be rewritten as $S_{t-1} = S(X_{t-1})$, $S_t = F_t(S_{t-1})$, $X_t = S^{-1}(S_t) + Q_t$. $\{S_t\}$ is regarded as a stationary Markov chain random process, and $F_t()$ is the state transition matrix of $\{S_t\}$; since it is a time-independent matrix, $F_t()$ can be written as F(). After the state data X of the sample data is discretized by K-Means, the state space model can be restated as equations (3), (4), (5) and (6):

$S_t = S(X_t)$    (3)

$S_t = F_t(S_{t-1})$    (4)

$X_t = S^{-1}(S_t) + Q_t$    (5)

$Y_t = o_t(X_t) + R_t$    (6)
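As an illustration of what the discretization transform S() and its inverse $S^{-1}()$ could look like in practice (a sketch with assumed data and our own function names, not code from the patent), S() can be taken as the K-Means label assignment and $S^{-1}()$ as the mapping from a label back to its cluster center:

```python
import numpy as np
from sklearn.cluster import KMeans

# Assumed sample state data {x_t}: rows are time samples, columns are the n state dimensions
x = np.random.default_rng(1).normal(size=(500, 3))
x = (x - x.mean(axis=0)) / x.std(axis=0)          # normalize each dimension to zero mean, unit variance

k = 8                                             # number of discrete states (200 in the embodiment)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(x)

def S(xt):
    """Discretization transform: continuous state -> discrete state label."""
    return km.predict(np.atleast_2d(xt))[0]

def S_inv(st):
    """Inverse discretization transform: discrete state label -> representative continuous state."""
    return km.cluster_centers_[st]

s = np.array([S(row) for row in x])               # discrete state sequence {s_t}
q = x - km.cluster_centers_[s]                    # residuals x_t - S^{-1}(S(x_t)), used later for Q_t
```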
Further, in the first step, the longitudinal wave velocity, transverse wave velocity and density of the formation can be inverted simultaneously, so n = 3 and $X_t \in R^3$, $X_t = (x_t^1, x_t^2, x_t^3)^T$, where $x_t^1$ is the longitudinal wave velocity, $x_t^2$ is the transverse wave velocity, and $x_t^3$ is the density.
Further, in the second step, the state transition matrix is established as follows. Assuming there is sufficient sample data $\{x_t\}$, the samples are first discretized with the discretization transform S(), i.e. $\{s_t\} = S(\{x_t\})$, where $\{s_t\}$ is the discrete state sequence of the sample data. According to equation (3), the continuous state $X_t$ is discretized to obtain the state space $\{1, 2, 3, \ldots, k\}$; $s_t \in \{1, 2, 3, \ldots, k\}$ is a state of this state space, k is an integer greater than 1, and F() is a k-th order square matrix whose element $F_{ij}$ represents the probability of a transition from state i to state j in a Markov chain. Taking $\{s_t\}$ as the statistical sample for the Markov chain state transition probability matrix F, each element of F is counted as $F_{ij} = P(s_t = j \mid s_{t-1} = i)$, where $P(s_t = j \mid s_{t-1} = i)$ denotes the probability that the t-th state is j given that the (t-1)-th state is i.
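A minimal sketch of counting the transition probability matrix F from a discrete state sequence $\{s_t\}$ follows (function and variable names are ours; the uniform fallback for unvisited states is an assumption, not prescribed by the patent):

```python
import numpy as np

def transition_matrix(s, k):
    """Estimate F_ij = P(s_t = j | s_{t-1} = i) by counting transitions in the sequence s."""
    F = np.zeros((k, k))
    for i, j in zip(s[:-1], s[1:]):
        F[i, j] += 1
    row_sums = F.sum(axis=1, keepdims=True)
    # Normalize each row to probabilities; rows never visited fall back to a uniform distribution
    return np.divide(F, row_sums, out=np.full_like(F, 1.0 / k), where=row_sums > 0)

# Example: s could be the discrete state sequence produced by S() in the previous sketch
s = np.array([0, 1, 1, 2, 0, 1, 2, 2, 0])
F = transition_matrix(s, k=3)
print(F)                     # each row sums to 1
```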
Further, in the third step, the state residual random variable probability density function is established as follows. From $X_t = S^{-1}(F_t(S_{t-1})) + Q_t$ one obtains $Q_t = X_t - S^{-1}(F_t(S_{t-1}))$, where $Q_t$ is the state residual random variable. Since $S_{t-1}$ is a discrete random variable, $S_t = F_t(S_{t-1})$ is also a discrete random variable, and $S^{-1}(S_t)$ is the expected value of the continuous random variable $X_t$ corresponding to the state $S_t$. $X_t$ is regarded as a k-dimensional Gaussian distribution, i.e. $X_t \sim N(S^{-1}(S_t), \varepsilon^2)$, where $\varepsilon^2$, the covariance matrix of $X_t$, is a k-th order square matrix. Since $Q_t = X_t - S^{-1}(S_t)$, $Q_t$ also follows a k-dimensional Gaussian distribution, which gives the state residual random variable probability density function:

$Q_t \sim N(0, \varepsilon^2)$    (7)

where 0 is its mean and $\varepsilon^2$ is its covariance matrix, the same as the covariance matrix of $X_t$. From sufficient sample data $\{x_t\}$ and the state transition matrix F(), the discretization transform S() and the inverse discretization transform $S^{-1}()$ established in the first and second steps, the data residual sequence $\{q_t\}$ is computed, where $q_t = x_t - S^{-1}(s_t)$ and $s_t = S(x_t)$, i.e. $q_t = x_t - S^{-1}(S(x_t))$, that is $\{q_t\} = \{x_t - S^{-1}(S(x_t))\}$; $\varepsilon^2$ can then conveniently be inferred statistically from $\{q_t\}$.
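Continuing the sketches above, $\varepsilon^2$ could be estimated from the residual sequence $\{q_t\}$ roughly as follows (an illustrative assumption; here the covariance is simply taken across the components of the residual samples):

```python
import numpy as np

def residual_covariance(q):
    """Estimate the covariance of the state residual Q_t ~ N(0, eps^2) from residual samples."""
    q = np.asarray(q)                       # shape (T, n): one residual vector q_t per row
    return np.cov(q, rowvar=False)          # covariance across the residual components

# Example with the residuals computed in the discretization sketch above:
# eps2 = residual_covariance(q)
# print(q.mean(axis=0))                     # should be close to the assumed zero mean
```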
Further, in the fourth step, the observation function is established based on the support vector regression method. Specifically, according to the observation function of the state space model, $Y_t = o_t(X_t) + R_t$, where $o_t()$ represents the state observation function. $\{x_t\}$ is the state data sequence in the sample and $\{y_t\}$ is the corresponding observation data sequence; $\{x_t\}$ and $\{y_t\}$ form the state-observation pair sequence $\{(x_t, y_t)\}$, and SVR is used to learn $o_t()$ from $\{(x_t, y_t)\}$. The learning process is as follows: a dimension-raising treatment is applied to $X_t$, i.e. the original n-variable degree-1 random variable is raised to an n-variable degree-3 random variable. For example, if $X_t$ is a 2-dimensional random variable, i.e. $X_t = (x^1, x^2)^T$, the raised random variable is $\tilde{X}_t = (x^1, x^2, (x^1)^2, x^1 x^2, (x^2)^2, (x^1)^3, (x^1)^2 x^2, x^1 (x^2)^2, (x^2)^3)^T$, a 9-dimensional vector. Suppose $Y_t$ is a 4-dimensional random variable; then $Y_t = o_t(\tilde{X}_t) + R_t$ and $R_t$ is also a 4-dimensional vector. Here $o_t()$ is regarded as a linear transformation, i.e. a matrix of 4 rows and 9 columns, denoted W and called the observation matrix, so that $Y_t = W\tilde{X}_t + R_t$. The sample data $\{(x_t, y_t)\}$ can therefore be expressed as $y_t \approx W\tilde{x}_t$, and the SVR objective function is established from the sample data, with a data-fitting term for $y_t - W\tilde{x}_t$ and a regularization term in which $\lvert W \rvert$ is the sum of the norms of the row vectors of the matrix W and c is a regularization factor; W is obtained by the SVR learning method.
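The dimension raising and the SVR learning of W can be sketched as follows for the 2-input, 4-output example in the text; the use of scikit-learn's PolynomialFeatures and LinearSVR, and the synthetic data, are our illustrative assumptions rather than the patent's prescribed implementation:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import LinearSVR

rng = np.random.default_rng(2)
x = rng.normal(size=(400, 2))                       # state samples {x_t}, 2-dimensional as in the text

# Degree-3 polynomial expansion without the constant term: 9 features for 2 inputs
poly = PolynomialFeatures(degree=3, include_bias=False)
x_up = poly.fit_transform(x)                        # the raised variable, shape (400, 9)

W_true = rng.normal(size=(4, 9))                    # assumed "true" observation matrix
y = x_up @ W_true.T + 0.01 * rng.normal(size=(400, 4))   # 4-dimensional observations {y_t}

# One LinearSVR per observation component; its coefficient vector is one row of W
W = np.vstack([LinearSVR(C=10.0, epsilon=0.0, max_iter=50000, fit_intercept=False)
               .fit(x_up, y[:, j]).coef_ for j in range(4)])
print("W shape:", W.shape)                          # (4, 9): the observation matrix
print("max |W - W_true|:", np.abs(W - W_true).max())
```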
Further, in the fifth step, the observation error random variable probability density function is established as follows. From $Y_t = W\tilde{X}_t + R_t$ one obtains $R_t = Y_t - W\tilde{X}_t$, where $R_t$ is the observation residual random variable and $W\tilde{X}_t$ is the expected value of the observation random variable $Y_t$ corresponding to the state $\tilde{X}_t$. $Y_t$ is regarded as an m-dimensional Gaussian distribution, i.e. $Y_t \sim N(W\tilde{X}_t, \mu^2)$, where $\mu^2$, the covariance matrix of $Y_t$, is an m-th order square matrix, giving the observation error random variable probability density function:

$R_t \sim N(0, \mu^2)$    (8)

where 0 is its mean and $\mu^2$ is its covariance matrix, the same as the covariance matrix of $Y_t$. From sufficient sample data $\{y_t\}$ and the state transition matrix F(), the discretization transform S(), the inverse discretization transform $S^{-1}()$ and the observation matrix W established in the first to fourth steps, the observation data residual sequence $\{r_t\}$ is computed, where $r_t = y_t - W\tilde{x}_t$ and $\tilde{x}_t$ is the data obtained by raising the dimension of the sample state data $x_t$; $\mu^2$ is then inferred statistically from $\{r_t\}$.
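A brief sketch of inferring $\mu^2$ from the observation residuals $r_t = y_t - W\tilde{x}_t$ (illustrative function names and synthetic shapes assumed):

```python
import numpy as np

def observation_residual_covariance(y, x_up, W):
    """Estimate mu^2, the covariance of the observation residual R_t = Y_t - W * x_up_t."""
    r = y - x_up @ W.T                 # observation residual sequence {r_t}
    return np.cov(r, rowvar=False)

# Example with tiny synthetic arrays (assumed shapes: T samples, 9 raised features, 4 observations)
rng = np.random.default_rng(3)
x_up = rng.normal(size=(200, 9))
W = rng.normal(size=(4, 9))
y = x_up @ W.T + rng.normal(scale=0.1, size=(200, 4))
print(observation_residual_covariance(y, x_up, W))   # approximately 0.01 * identity
```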
Further, multi-target simultaneous inversion is performed based on the state space model. Specifically, the state space model is: $X_t = f_t(X_{t-1}) + Q_t$, $Y_t = o_t(X_t) + R_t$, where $X_t$, $Q_t$, $Y_t$, $R_t$ are random vectors obeying Gaussian distributions and $o_t()$ is taken as a linear transformation matrix. Using the maximum a posteriori method under the Bayesian framework, the posterior probability density function of the random variable $X_t$ is established as $p(X_t \mid y, \theta) = c \cdot p(y \mid X_t, \theta) \cdot p(X_t \mid \theta)$; once the posterior probability density function $p(X_t \mid Y, \theta)$ is determined, the joint posterior probability density function of the random sequence $X = \{X_1, X_2, \ldots, X_{t-1}, X_t\}$ is established as: $p(X \mid Y, \theta) = p(X_1, X_2, \ldots, X_{t-1}, X_t \mid Y, \theta) = p(X_1 \mid Y, \theta) \cdot p(X_2 \mid X_1, Y, \theta) \cdot p(X_3 \mid X_2, Y, \theta) \cdots p(X_t \mid X_{t-1}, Y, \theta)$. The state sequence is solved by maximum a posteriori, i.e.

$\hat{x} = \arg\max_X p(X \mid Y, \theta).$

Since $X = \{X_1, X_2, \ldots, X_{t-1}, X_t\}$ is a Markov chain, $p(X \mid Y, \theta) = p(X_1 \mid Y_1, \theta)\, p(X_2 \mid X_1, Y_2, \theta) \cdots p(X_t \mid X_{t-1}, Y_t, \theta)$, where $p(X_k \mid X_{k-1}, Y_k, \theta) \propto p(X_k \mid X_{k-1}, \theta)\, p(Y_k \mid X_k, \theta)$, so that $p(X \mid Y, \theta) \propto p(X_1 \mid \theta)\, p(Y_1 \mid X_1, \theta)\, p(X_2 \mid X_1, \theta)\, p(Y_2 \mid X_2, \theta) \cdots p(X_t \mid X_{t-1}, \theta)\, p(Y_t \mid X_t, \theta)$. Because maximizing $p(X \mid Y, \theta)$ is equivalent to minimizing its negative logarithm, the maximum a posteriori solution is obtained by minimizing the objective

$J(X) = \sum_k \left[ J_{x,k}(X) + J_{y,k}(X) \right],$

where $J_{x,k}(X) = \{X_k - S^{-1}(F(S(X_{k-1})))\}^T (\varepsilon^2)^{-1} \{X_k - S^{-1}(F(S(X_{k-1})))\}$ and $J_{y,k}(X) = \{W\tilde{X}_k - Y_k\}^T (\mu^2)^{-1} \{W\tilde{X}_k - Y_k\}$. The minimizer $\hat{x} = \arg\min_X J(X)$ is obtained with a convex optimization algorithm.
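To make the final minimization concrete, the sketch below (ours, not the patent's implementation) minimizes a quadratic objective of the same form as $J(X) = \sum_k [J_{x,k}(X) + J_{y,k}(X)]$ for a toy chain; the piecewise state prediction $S^{-1}(F(S(X_{k-1})))$ is replaced by a fixed linear prediction $A X_{k-1}$ so the toy problem stays smooth and convex, and scipy.optimize.minimize is an assumed solver choice:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, T = 2, 30                                  # state dimension and sequence length
A = np.array([[0.9, 0.1], [0.0, 0.8]])        # stand-in for the state prediction S^-1(F(S(.)))
W = rng.normal(size=(3, n))                   # observation matrix (3 observations per step)
eps2_inv = np.eye(n)                          # inverse state residual covariance
mu2_inv = np.eye(3)                           # inverse observation residual covariance

# Simulate a "true" state sequence and its observations
X_true = np.zeros((T, n))
for k in range(1, T):
    X_true[k] = A @ X_true[k - 1] + 0.1 * rng.normal(size=n)
Y = X_true @ W.T + 0.1 * rng.normal(size=(T, 3))

def J(flat_X):
    """Sum of state-prediction and observation misfit terms, J_{x,k} + J_{y,k}."""
    X = flat_X.reshape(T, n)
    cost = 0.0
    for k in range(1, T):
        dx = X[k] - A @ X[k - 1]              # state term J_{x,k}
        cost += dx @ eps2_inv @ dx
    for k in range(T):
        dy = W @ X[k] - Y[k]                  # observation term J_{y,k}
        cost += dy @ mu2_inv @ dy
    return cost

res = minimize(J, np.zeros(T * n), method="L-BFGS-B")
X_map = res.x.reshape(T, n)                   # maximum a posteriori state sequence
print("mean abs error of MAP estimate:", np.abs(X_map - X_true).mean())
```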
In addition, the method also includes a process of collecting and organizing the observation data and sample data used for inversion. The data are divided into observation data and state data, and sample data are data pairs containing both observation data and state data. The sample data are used to train and improve the inversion algorithm; the observation data serve as the input of the inversion algorithm; the state data are the data not yet available, serve as the target of the inversion, and may have a certain relationship with the observation data. For convenience of explanation, the state data are denoted X and the observation data Y. The sample data are pairs of X and Y. X and Y are 2-dimensional arrays: each column represents a time-series curve and each row represents a sampling point of the whole time-series curve; X and Y have the same number of rows, but their numbers of columns may differ, with Y having at least as many columns as X. Each row of X and Y represents a feature vector whose values are continuous variables within a range of values. If both time-domain and depth-domain data exist, time-depth calibration must be done in advance; in addition, standardization and normalization must be performed. The state data X and the observation data Y correspond to X and Y in equations (1) and (2), respectively.
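A short sketch of this data organization (array names, shapes and the standardization helper are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
n_samples = 1000                                # sampling points along the time-calibrated curves
X = rng.normal(size=(n_samples, 3))             # state data: e.g. Vp, Vs, density columns
Y = rng.normal(size=(n_samples, 3))             # observation data: e.g. three angle-stack columns

assert X.shape[0] == Y.shape[0]                 # same number of rows (sampling points)
assert Y.shape[1] >= X.shape[1]                 # Y has at least as many columns as X

def standardize(a):
    """Column-wise standardization to zero mean and unit variance."""
    return (a - a.mean(axis=0)) / a.std(axis=0)

X, Y = standardize(X), standardize(Y)
sample_pairs = list(zip(X, Y))                  # sample data: paired (x_t, y_t) feature vectors
```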
The well-seismic joint multi-target simultaneous inversion method based on a state space model and support vector regression of the invention has the following characteristics: with the state space model as the link, the model parameters are established with machine learning methods, seismic inversion and geostatistical methods are combined, and several related target parameters are inverted and predicted simultaneously, so that all target parameters satisfy the seismic and geological constraint conditions and the contradictions among related target parameters are reduced to the greatest extent. Lithology inversion and physical property prediction are thus coordinated and mutually verified, useful information is mined from massive seismic, well logging, statistical prediction and other data, the inversion and prediction cycle is greatly shortened, decision precision is improved, development cost is saved and production efficiency is improved.
In addition, compared with the background art, the invention has the following beneficial effects: applying the well-seismic joint multi-target simultaneous inversion method based on the state space model and support vector regression to seismic inversion greatly reduces the difficulty of establishing the state space model and saves manpower, material resources and time.
Description of the drawings:
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a graph of measured sample curves, wherein FIG. 2(a) is longitudinal wave velocity, FIG. 2(b) is transverse wave velocity, and FIG. 2(c) is density;
FIG. 3 is a diagram of AVO forward results;
FIG. 4 compares the results of well-seismic joint multi-target simultaneous AVO inversion based on the state space model and support vector regression at a sample well with the measured sample curves, wherein FIG. 4(a) is the longitudinal wave velocity, FIG. 4(b) is the transverse wave velocity, and FIG. 4(c) is the density;
FIG. 5 is a cross-sectional view of a line of incident angles superimposed, wherein FIG. 5(a) is 5-10 degrees, FIG. 5(b) is 10-20 degrees, and FIG. 5(c) is 20-35 degrees;
FIG. 6 shows the result of well-seismic joint multi-target simultaneous AVO inversion of one line based on the state space model and support vector regression, where FIG. 6(a) is the longitudinal wave velocity, FIG. 6(b) is the transverse wave velocity, FIG. 6(c) is the density, and FIG. 6(d) is the ratio of longitudinal to transverse wave velocity.
Detailed description of embodiments:
the invention is further illustrated by the following examples in conjunction with the accompanying drawings:
To make the purpose, technical scheme and advantages of the invention clearer, the embodiment of the invention is described in further detail below with reference to the accompanying drawings, taking a three-dimensional actual seismic work area of the Songliao Basin in the Daqing exploration area as an example.
As shown in fig. 1, a well-seismic joint multi-objective simultaneous inversion method based on state space model and support vector regression includes the following steps:
the work area has a small number of sample wells, each well has actually measured sample curve longitudinal wave velocity, transverse wave velocity and density, as shown in fig. 2, and fig. 3 is an AVO forward result of the sample curve shown in fig. 2; the work area also has a seismic data volume with incident angles of 5-10 degrees, 10-20 degrees and 20-35 degrees, for example, as shown in fig. 5, the longitudinal wave velocity, the transverse wave velocity and the density volume data of the work area are inverted when the target is inverted. It is assumed that the data has been calibrated to the time domain.
The first step: establishing the state space model based on the K-Means discretization method.
To establish all the state space models needed for inversion, the measured sample-curve longitudinal wave velocity, transverse wave velocity and density of the sample wells are first discretized with the K-Means discretization method. Let $\{x_t\}$ be the sample data, where $x_t \in R^3$, $x_t^1$ is the longitudinal wave velocity, $x_t^2$ is the transverse wave velocity, and $x_t^3$ is the density. The number of K-Means discretization targets is set to 200 classes, and these 200 classes serve as the state space of the state space model, identified by the 200 natural numbers from 1 to 200.
The process of K-Means discretization of the sample data $\{x_t\}$ is as follows. 1) Because the units of the attributes in the data set may differ, before clustering the continuous attribute values of each dimension of $\{x_t\}$ are normalized to zero mean and unit variance. 2) At initialization, each object is regarded as one cluster; the distances between clusters are computed, and the two clusters with the minimum distance are merged to generate a new cluster; for example, when the distance between cluster a and cluster b is minimal, the two are merged into cluster ab. 3) The distances between the merged cluster and the remaining clusters are computed, and the next round of merging is carried out. When each cluster contains only one object, the distance between two clusters is defined as the Euclidean distance between the objects. After two clusters b and c are merged, their distance to another cluster a is given by the formula $d_{a(bc)} = d_{(bc)a} = \alpha_b d_{ab} + \alpha_c d_{ac} + \beta d_{bc} + \gamma \lvert d_{ab} - d_{ac} \rvert$, where $\alpha_b$, $\alpha_c$ and $\beta$ are combination coefficients and $\gamma$ is 0. During merging, all the clusters constitute a partition of the domain of discourse U, and elements belonging to the same cluster are considered indistinguishable. Merging of clusters therefore continues until the inconsistency of the data produced by a merge would be higher than that of the original data, at which point the clustering process is complete. In the second stage, the clustering results are sorted and combined, where r is the number of generated clusters and K is a cluster produced by the clustering; the set of projections of the elements of this cluster onto attribute $A_j$ defines, according to the objects' values on attribute $A_j$, the dividing interval of the cluster $K_j$.
The dividing interval defined for a given cluster K may intersect the dividing intervals defined for other clusters; in that case the intervals contained in other intervals are removed, and the remaining intervals are sorted so that they do not intersect one another. The post-processing procedure merges adjacent intervals to reduce the number of clusters: the class entropy of all adjacent intervals on all attributes is computed, and the two adjacent intervals with the minimum class entropy are merged, until the inconsistency of the data exceeds a certain threshold. The K-Means discretization of the sample data is denoted S(). Let $X_t = f_t(X_{t-1}) + Q_t$, where $X_t \in R^n$ is the model state, R is the set of real numbers, and n is the dimension of the state $X_t$. Because the longitudinal wave velocity, transverse wave velocity and density of the formation must be inverted simultaneously, n = 3 and $X_t \in R^3$, $X_t = (x_t^1, x_t^2, x_t^3)^T$, where $x_t^1$ is the longitudinal wave velocity, $x_t^2$ is the transverse wave velocity and $x_t^3$ is the density. To set up the state space, the continuous state $X_t$ is discretized: let $S_t \in \{1, 2, 3, \ldots, k\}$, k = 200, with $S_t = S(X_t)$ and $X_t = S^{-1}(S_t) + Q_t$, where S() is the K-Means discretization of the sample data, called the discretization transform, and $S^{-1}()$ is the inverse discretization transform. Assuming the sample data $\{x_t\}$ is sufficient, $X_t = f_t(X_{t-1}) + Q_t$ can be rewritten as $S_{t-1} = S(X_{t-1})$, $S_t = F_t(S_{t-1})$, $X_t = S^{-1}(S_t) + Q_t$. $\{S_t\}$ is regarded as a stationary Markov chain random process, and $F_t()$ is the state transition matrix of $\{S_t\}$; since it is a time-independent square matrix with 200 rows and 200 columns, $F_t()$ can be written as F().
The second step: establishing the state transition matrix.
The state transition matrix F() is a Markov chain state transition probability matrix. Assuming the sample data $\{x_t\}$ is sufficient, the samples are first discretized with the discretization transform S(), i.e. $\{s_t\} = S(\{x_t\})$, where $\{s_t\}$ is the discrete state sequence of the sample data; it can be viewed as sufficient statistical data for the Markov chain state transition probability matrix F(), so F() can be counted directly from $\{s_t\}$. F() is a 200th-order square matrix, each element $F_{ij}$ of which represents the probability of a transition from state i to state j in the Markov chain.
The third step: establishing the state residual random variable probability density function.
From $X_t = S^{-1}(F_t(S_{t-1})) + Q_t$ one obtains $Q_t = X_t - S^{-1}(F_t(S_{t-1}))$, where $Q_t$ is the state residual random variable. Since $S_{t-1}$ is a discrete random variable, $S_t = F_t(S_{t-1})$ is also a discrete random variable, and $S^{-1}(S_t)$ is the expected value of the continuous random variable $X_t$ corresponding to the state $S_t$. $X_t$ is regarded as a k-dimensional Gaussian distribution, i.e. $X_t \sim N(S^{-1}(S_t), \varepsilon^2)$, where $\varepsilon^2$, the covariance matrix of $X_t$, is a k-th order square matrix. Since $Q_t = X_t - S^{-1}(S_t)$, $Q_t$ also follows a k-dimensional Gaussian distribution, i.e. $Q_t \sim N(0, \varepsilon^2)$, where 0 is its mean and $\varepsilon^2$ is its covariance matrix, the same as the covariance matrix of $X_t$. From sufficient sample data $\{x_t\}$ and the state transition matrix F(), the discretization transform S() and the inverse discretization transform $S^{-1}()$ established in the first and second steps, the data residual sequence $\{q_t\}$ is computed, where $q_t = x_t - S^{-1}(s_t)$, $s_t = S(x_t)$, i.e. $q_t = x_t - S^{-1}(S(x_t))$, that is $\{q_t\} = \{x_t - S^{-1}(S(x_t))\}$. Because $\{q_t\}$ constitutes sufficient sample data and $Q_t \sim N(0, \varepsilon^2)$, $\varepsilon^2$ can conveniently be inferred statistically from $\{q_t\}$.
The fourth step: establishing the observation function based on the support vector regression method.
Support vector regression (SVR) is the application of the SVM (support vector machine) to regression problems. According to the observation function of the state space model, $Y_t = o_t(X_t) + R_t$, where $o_t()$ represents the state observation function. $\{x_t\}$ is the state data sequence in the sample, where $x_t \in R^3$, $x_t^1$ is the longitudinal wave velocity, $x_t^2$ is the transverse wave velocity and $x_t^3$ is the density; $\{y_t\}$ is the corresponding observation data sequence, where $y_t \in R^3$, $y_t^1$ is the impedance data of the 5-10 degree incidence-angle stack beside the sample well corresponding to the sample, $y_t^2$ is the impedance data of the 10-20 degree incidence-angle stack beside the sample well, and $y_t^3$ is the impedance data of the 20-35 degree incidence-angle stack beside the sample well. $\{x_t\}$ and $\{y_t\}$ form the state-observation pair sequence $\{(x_t, y_t)\}$, and SVR is used to learn $o_t()$ from $\{(x_t, y_t)\}$. The learning process is as follows: 1) a dimension-raising treatment is applied to $X_t$, i.e. the original 3-variable degree-1 random variable is raised to a 3-variable degree-3 random variable, denoted $\tilde{X}_t$, which is a 20-dimensional vector; $Y_t$ is a 3-dimensional random variable, so $Y_t = o_t(\tilde{X}_t) + R_t$ and $R_t$ is also a 3-dimensional vector. 2) In the method of the invention $o_t()$ is regarded as a linear transformation, i.e. a matrix of 3 rows and 20 columns, denoted W and called the observation matrix, so that $Y_t = W\tilde{X}_t + R_t$. The sample data $\{(x_t, y_t)\}$ can therefore be expressed as $y_t \approx W\tilde{x}_t$, and the SVR objective function is established from the sample data, with a data-fitting term for $y_t - W\tilde{x}_t$ and a regularization term in which $\lvert W \rvert$ is the sum of the norms of the row vectors of the matrix W and c is a regularization factor; W is obtained by SVR learning.
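For the concrete settings of this embodiment (3 state curves, 3 angle stacks), the dimension raising and the 3-row, 20-column observation matrix can be sketched as below; including the constant term in the degree-3 expansion is our reading of how the 20 dimensions arise, and the library choices and synthetic data are assumptions:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import LinearSVR

rng = np.random.default_rng(6)
x = rng.normal(size=(300, 3))                       # {x_t}: Vp, Vs, density samples (synthetic here)
y = rng.normal(size=(300, 3))                       # {y_t}: 5-10, 10-20, 20-35 degree stack values

poly = PolynomialFeatures(degree=3, include_bias=True)
x_up = poly.fit_transform(x)
print(x_up.shape)                                   # (300, 20): the 20-dimensional raised variable

# One LinearSVR per angle stack; the stacked coefficients give the 3 x 20 observation matrix W
W = np.vstack([LinearSVR(C=1.0, max_iter=50000, fit_intercept=False)
               .fit(x_up, y[:, j]).coef_ for j in range(3)])
print(W.shape)                                      # (3, 20)
```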
The fifth step: establishing the observation error random variable probability density function.
From $Y_t = W\tilde{X}_t + R_t$ one obtains $R_t = Y_t - W\tilde{X}_t$, where $R_t$ is the observation residual random variable and $W\tilde{X}_t$ is the expected value of the observation random variable $Y_t$ corresponding to the state $\tilde{X}_t$. $Y_t$ is regarded as a 3-dimensional Gaussian distribution, i.e. $Y_t \sim N(W\tilde{X}_t, \mu^2)$, where $\mu^2$, the covariance matrix of $Y_t$, is a 3rd-order square matrix. Since $R_t = Y_t - W\tilde{X}_t$, $R_t$ also follows a 3-dimensional Gaussian distribution, i.e. $R_t \sim N(0, \mu^2)$, where 0 is its mean and $\mu^2$ is its covariance matrix, the same as the covariance matrix of $Y_t$. From sufficient sample data $\{y_t\}$ and the state transition matrix F(), the discretization transform S(), the inverse discretization transform $S^{-1}()$ and the observation matrix W established in the first to fourth steps, the observation data residual sequence $\{r_t\}$ is computed, where $r_t = y_t - W\tilde{x}_t$ and $\tilde{x}_t$ is the data obtained by raising the dimension of the sample state data $x_t$. Because $\{r_t\}$ constitutes sufficient sample data and $R_t \sim N(0, \mu^2)$, $\mu^2$ can conveniently be inferred statistically from $\{r_t\}$.
The sixth step: performing multi-target simultaneous inversion based on the state space model.
The state space model is: $X_t = f_t(X_{t-1}) + Q_t$, $Y_t = o_t(X_t) + R_t$. In the method of the invention, $X_t$, $Q_t$, $Y_t$, $R_t$ are treated as random vectors obeying Gaussian distributions and $o_t()$ is regarded as a linear transformation matrix, so the model is a linear Gaussian state model. As has been proved previously, when making predictive statistical inference for a linear Gaussian state model, the maximum a posteriori method and Bayesian statistical inference are equivalent; for simplicity, the maximum a posteriori method is adopted here. Under the Bayesian framework, the posterior probability density function of the random variable $X_t$ can be established as $p(X_t \mid y, \theta) = c \cdot p(y \mid X_t, \theta) \cdot p(X_t \mid \theta)$; once the posterior probability density function $p(X_t \mid Y, \theta)$ is determined, the joint posterior probability density function of the random sequence $X = \{X_1, X_2, \ldots, X_{t-1}, X_t\}$ is established as: $p(X \mid Y, \theta) = p(X_1, X_2, \ldots, X_{t-1}, X_t \mid Y, \theta) = p(X_1 \mid Y, \theta) \cdot p(X_2 \mid X_1, Y, \theta) \cdot p(X_3 \mid X_2, Y, \theta) \cdots p(X_t \mid X_{t-1}, Y, \theta)$. The state sequence is solved by maximum a posteriori, i.e.

$\hat{x} = \arg\max_X p(X \mid Y, \theta).$

Since $X = \{X_1, X_2, \ldots, X_{t-1}, X_t\}$ is a Markov chain, $p(X \mid Y, \theta) = p(X_1 \mid Y_1, \theta)\, p(X_2 \mid X_1, Y_2, \theta) \cdots p(X_t \mid X_{t-1}, Y_t, \theta)$, where $p(X_k \mid X_{k-1}, Y_k, \theta) \propto p(X_k \mid X_{k-1}, \theta)\, p(Y_k \mid X_k, \theta)$, so that $p(X \mid Y, \theta) \propto p(X_1 \mid \theta)\, p(Y_1 \mid X_1, \theta)\, p(X_2 \mid X_1, \theta)\, p(Y_2 \mid X_2, \theta) \cdots p(X_t \mid X_{t-1}, \theta)\, p(Y_t \mid X_t, \theta)$. Because maximizing $p(X \mid Y, \theta)$ is equivalent to minimizing its negative logarithm, the maximum a posteriori solution is obtained by minimizing the objective

$J(X) = \sum_k \left[ J_{x,k}(X) + J_{y,k}(X) \right],$

where $J_{x,k}(X) = \{X_k - S^{-1}(F(S(X_{k-1})))\}^T (\varepsilon^2)^{-1} \{X_k - S^{-1}(F(S(X_{k-1})))\}$ and $J_{y,k}(X) = \{W\tilde{X}_k - Y_k\}^T (\mu^2)^{-1} \{W\tilde{X}_k - Y_k\}$. This is an unconstrained convex problem, and $\hat{x} = \arg\min_X J(X)$ is solved with a convex optimization algorithm.
FIG. 4 is a comparative display of the result of inverting the AVO data of a sample well with the method of the invention against the measured sample curves; FIG. 5 is a section of one line of incidence-angle stacks, where FIG. 5(a) is 5-10 degrees, FIG. 5(b) is 10-20 degrees and FIG. 5(c) is 20-35 degrees; and FIG. 6 is the result of inverting the AVO data of one line with the method of the invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes may be made to the embodiment of the present invention by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A well-seismic joint multi-target simultaneous inversion method based on a state space model and support vector regression, comprising the following steps:
The first step: establishing a state space model by a discretization method based on the measured data of a sample oil well, the measured data of the sample oil well comprising the longitudinal wave velocity, transverse wave velocity and density of measured sample curves;
The second step: establishing a state transition matrix, wherein the measured data of the sample oil well are first discretized using the discretization transform;
The third step: establishing a state residual random variable probability density function;
The fourth step: establishing an observation function based on a support vector regression method;
The fifth step: establishing an observation error random variable probability density function;
The sixth step: performing multi-target simultaneous inversion based on the state space model.
2. The method according to claim 1, characterized in that in the first step the discretization method is a K-Means discretization method, in particular: given the time-series representation $\{x_t\}$ of the state data X of the sample data, before discretizing the sample data $\{x_t\}$ with K-Means clustering, the continuous attribute values of each dimension of $\{x_t\}$ are normalized to zero mean and unit variance; the K-Means discretization of sample data is denoted S(); let $X_t = f_t(X_{t-1}) + Q_t$, where $X_t \in R^n$ is the model state, R is the set of real numbers, and n is the dimension of the state $X_t$; to set up the state space, the continuous state $X_t$ is discretized: let $S_t \in \{1, 2, 3, \ldots, k\}$, k an integer greater than 1, with $S_t = S(X_t)$ and $X_t = S^{-1}(S_t) + Q_t$, where S() is the K-Means discretization of the sample data and $S^{-1}()$ is the inverse discretization transform; assuming there is sufficient sample data $\{x_t\}$, $X_t = f_t(X_{t-1}) + Q_t$ can be rewritten as $S_{t-1} = S(X_{t-1})$, $S_t = F_t(S_{t-1})$, $X_t = S^{-1}(S_t) + Q_t$; $\{S_t\}$ is regarded as a stationary Markov chain random process, and $F_t()$ is the state transition matrix of $\{S_t\}$; since it is a time-independent matrix, $F_t()$ can be written as F(); after the state data X of the sample data is discretized by K-Means, the state space model can be restated as equations (3), (4), (5) and (6):

$S_t = S(X_t)$    (3)

$S_t = F_t(S_{t-1})$    (4)

$X_t = S^{-1}(S_t) + Q_t$    (5)

$Y_t = o_t(X_t) + R_t$    (6)

wherein $o_t()$ is a matrix of 4 rows and 9 columns, denoted W and called the observation matrix.
3. The method of claim 2, wherein in the first step the compressional wave velocity, shear wave velocity and density of the formation are simultaneously inverted, where n = 3 and $X_t \in R^3$, $X_t = (x_t^1, x_t^2, x_t^3)^T$, wherein $x_t^1$ is the compressional wave velocity, $x_t^2$ is the shear wave velocity and $x_t^3$ is the density.
4. The method according to claim 2, wherein in the second step the state transition matrix is established as follows: assuming sufficient sample data $\{x_t\}$, the samples are first discretized with the discretization transform S(), i.e. $\{s_t\} = S(\{x_t\})$, where $\{s_t\}$ is the discrete state sequence of the sample data; according to equation (3) the continuous state $X_t$ is discretized to obtain the state space $\{1, 2, 3, \ldots, k\}$, $s_t \in \{1, 2, 3, \ldots, k\}$ is a state of the state space, k is an integer greater than 1, and F() is a k-th order square matrix whose element $F_{ij}$ represents the probability of a transition from state i to state j in a Markov chain; taking $\{s_t\}$ as the statistical sample for the Markov chain state transition probability matrix F, each element of F is counted as $F_{ij} = P(s_t = j \mid s_{t-1} = i)$, where $P(s_t = j \mid s_{t-1} = i)$ denotes the probability that the t-th state is j given that the (t-1)-th state is i.
5. The method according to claim 2, wherein in the third step the state residual random variable probability density function is established as follows: from $X_t = S^{-1}(F_t(S_{t-1})) + Q_t$ one obtains $Q_t = X_t - S^{-1}(F_t(S_{t-1}))$, where $Q_t$ is the state residual random variable; since $S_{t-1}$ is a discrete random variable, $S_t = F_t(S_{t-1})$ is also a discrete random variable, and $S^{-1}(S_t)$ is the expected value of the continuous random variable $X_t$ corresponding to the state $S_t$; $X_t$ is regarded as a k-dimensional Gaussian distribution, i.e. $X_t \sim N(S^{-1}(S_t), \varepsilon^2)$, where $\varepsilon^2$, the covariance matrix of $X_t$, is a k-th order square matrix; since $Q_t = X_t - S^{-1}(S_t)$, $Q_t$ also follows a k-dimensional Gaussian distribution, which gives the state residual random variable probability density function:

$Q_t \sim N(0, \varepsilon^2)$    (7)

where 0 is its mean and $\varepsilon^2$ is its covariance matrix, the same as the covariance matrix of $X_t$; from sufficient sample data $\{x_t\}$ and the state transition matrix F(), the discretization transform S() and the inverse discretization transform $S^{-1}()$ established in the first and second steps, the data residual sequence $\{q_t\}$ is computed, where $q_t = x_t - S^{-1}(s_t)$ and $s_t = S(x_t)$, i.e. $q_t = x_t - S^{-1}(S(x_t))$, that is $\{q_t\} = \{x_t - S^{-1}(S(x_t))\}$; $\varepsilon^2$ is then conveniently inferred statistically from $\{q_t\}$.
6. The method according to claim 2, wherein in the fourth step the observation function is established based on the support vector regression method, in particular: according to the observation function of the state space model, $Y_t = o_t(X_t) + R_t$, where $o_t()$ represents the state observation function; $\{x_t\}$ is the state data sequence in the sample and $\{y_t\}$ is the corresponding observation data sequence; $\{x_t\}$ and $\{y_t\}$ form the state-observation pair sequence $\{(x_t, y_t)\}$, and SVR is used to learn $o_t()$ from $\{(x_t, y_t)\}$; the learning process is as follows: a dimension-raising treatment is applied to $X_t$, i.e. the original n-variable degree-1 random variable is raised to an n-variable degree-3 random variable; for example, if $X_t$ is a 2-dimensional random variable, i.e. $X_t = (x^1, x^2)^T$, the raised random variable $\tilde{X}_t = (x^1, x^2, (x^1)^2, x^1 x^2, (x^2)^2, (x^1)^3, (x^1)^2 x^2, x^1 (x^2)^2, (x^2)^3)^T$ is a 9-dimensional vector; suppose $Y_t$ is a 4-dimensional random variable; then $Y_t = o_t(\tilde{X}_t) + R_t$ and $R_t$ is also a 4-dimensional vector; here $o_t()$ is regarded as a linear transformation, i.e. a matrix of 4 rows and 9 columns, denoted W and called the observation matrix, so that $Y_t = W\tilde{X}_t + R_t$; the sample data $\{(x_t, y_t)\}$ can therefore be expressed as $y_t \approx W\tilde{x}_t$, and the SVR objective function is established from the sample data, with a data-fitting term for $y_t - W\tilde{x}_t$ and a regularization term in which $\lvert W \rvert$ is the sum of the norms of the row vectors of the matrix W and c is a regularization factor; W is obtained by the SVR learning method.
7. The method according to claim 6, wherein in the fifth step the observation error random variable probability density function is established as follows: from $Y_t = W\tilde{X}_t + R_t$ one obtains $R_t = Y_t - W\tilde{X}_t$, where $R_t$ is the observation residual random variable and $W\tilde{X}_t$ is the expected value of the observation random variable $Y_t$ corresponding to the state $\tilde{X}_t$; $Y_t$ is regarded as an m-dimensional Gaussian distribution, i.e. $Y_t \sim N(W\tilde{X}_t, \mu^2)$, where $\mu^2$, the covariance matrix of $Y_t$, is an m-th order square matrix, giving the observation error random variable probability density function:

$R_t \sim N(0, \mu^2)$    (8)

where 0 is its mean and $\mu^2$ is its covariance matrix, the same as the covariance matrix of $Y_t$; from sufficient sample data $\{y_t\}$ and the state transition matrix F(), the discretization transform S(), the inverse discretization transform $S^{-1}()$ and the observation matrix W established in the first to fourth steps, the observation data residual sequence $\{r_t\}$ is computed, where $r_t = y_t - W\tilde{x}_t$ and $\tilde{x}_t$ is the data obtained by raising the dimension of the sample state data $x_t$; $\mu^2$ is then inferred statistically from $\{r_t\}$.
8. The method according to claim 1, wherein the multi-target simultaneous inversion is performed based on the state space model, in particular: the state space model is $X_t = f_t(X_{t-1}) + Q_t$, $Y_t = o_t(X_t) + R_t$, where $X_t$, $Q_t$, $Y_t$, $R_t$ are random vectors obeying Gaussian distributions and $o_t()$ is taken as a linear transformation matrix; using the maximum a posteriori method under the Bayesian framework, the posterior probability density function of the random variable $X_t$ is established as $p(X_t \mid y, \theta) = c \cdot p(y \mid X_t, \theta) \cdot p(X_t \mid \theta)$; once the posterior probability density function $p(X_t \mid Y, \theta)$ is determined, the joint posterior probability density function of the random sequence $X = \{X_1, X_2, \ldots, X_{t-1}, X_t\}$ is established as:

$p(X \mid Y, \theta) = p(X_1, X_2, \ldots, X_{t-1}, X_t \mid Y, \theta) = p(X_1 \mid Y, \theta) \cdot p(X_2 \mid X_1, Y, \theta) \cdot p(X_3 \mid X_2, Y, \theta) \cdots p(X_t \mid X_{t-1}, Y, \theta);$

the state sequence is solved by maximum a posteriori, i.e. $\hat{x} = \arg\max_X p(X \mid Y, \theta)$; since $X = \{X_1, X_2, \ldots, X_{t-1}, X_t\}$ is a Markov chain, $p(X \mid Y, \theta) = p(X_1 \mid Y_1, \theta)\, p(X_2 \mid X_1, Y_2, \theta) \cdots p(X_t \mid X_{t-1}, Y_t, \theta)$, where $p(X_k \mid X_{k-1}, Y_k, \theta) \propto p(X_k \mid X_{k-1}, \theta)\, p(Y_k \mid X_k, \theta)$, so that $p(X \mid Y, \theta) \propto p(X_1 \mid \theta)\, p(Y_1 \mid X_1, \theta)\, p(X_2 \mid X_1, \theta)\, p(Y_2 \mid X_2, \theta) \cdots p(X_t \mid X_{t-1}, \theta)\, p(Y_t \mid X_t, \theta)$; because maximizing $p(X \mid Y, \theta)$ is equivalent to minimizing its negative logarithm, the maximum a posteriori solution is obtained by minimizing the objective

$J(X) = \sum_k \left[ J_{x,k}(X) + J_{y,k}(X) \right],$

where $J_{x,k}(X) = \{X_k - S^{-1}(F(S(X_{k-1})))\}^T (\varepsilon^2)^{-1} \{X_k - S^{-1}(F(S(X_{k-1})))\}$ and $J_{y,k}(X) = \{W\tilde{X}_k - Y_k\}^T (\mu^2)^{-1} \{W\tilde{X}_k - Y_k\}$; $\hat{x} = \arg\min_X J(X)$ is obtained with a convex optimization algorithm.

Priority Applications (1)

CN202110687122.2A — priority date 2021-06-21 — filing date 2021-06-21 — Well-seismic joint multi-target simultaneous inversion method based on state space model and support vector regression


Publications (2)

CN113419278A — published 2021-09-21
CN113419278B — granted 2022-03-08

Family

ID: 77789598
Family application: CN202110687122.2A (filed 2021-06-21, granted as CN113419278B, Active)
Country status: CN — CN113419278B

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113900140B (en) * 2021-09-30 2022-08-23 中国石油大学(北京) Seismic data optimization method and device based on space-time combination


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112347823B (en) * 2019-08-09 2024-05-03 中国石油天然气股份有限公司 Deposition phase boundary identification method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109407173A (en) * 2018-09-29 2019-03-01 核工业北京地质研究院 Lithology fining and automatic identification method based on Logging Curves
CN112529031A (en) * 2020-07-28 2021-03-19 新汶矿业集团有限责任公司 Microseismic signal clustering method and device based on improved K-means
CN111985093A (en) * 2020-08-03 2020-11-24 哈尔滨工程大学 Adaptive unscented Kalman filtering state estimation method with noise estimator
CN112162316A (en) * 2020-09-28 2021-01-01 北京中恒利华石油技术研究所 High-resolution well-seismic fusion prestack inversion method driven by AVO waveform data
CN112230283A (en) * 2020-10-12 2021-01-15 北京中恒利华石油技术研究所 Seismic porosity prediction method based on logging curve support vector machine modeling
CN112070073A (en) * 2020-11-12 2020-12-11 北京中恒利华石油技术研究所 Logging curve abnormity discrimination method based on Markov chain transition probability matrix eigenvalue classification and support vector machine

Also Published As

Publication number Publication date
CN113419278A (en) 2021-09-21


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant