CN117192604A - Full waveform inversion method for multi-scale deep learning optimization

Full waveform inversion method for multi-scale deep learning optimization

Info

Publication number
CN117192604A
CN117192604A CN202311183250.9A CN202311183250A
Authority
CN
China
Prior art keywords
inversion
neural network
deep learning
gradient
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311183250.9A
Other languages
Chinese (zh)
Inventor
方金伟
章俊
刘盛东
李乾坤
杨彩
曹海涛
孙超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Deep Earth Science And Engineering Yunlong Lake Laboratory
Original Assignee
Deep Earth Science And Engineering Yunlong Lake Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Deep Earth Science And Engineering Yunlong Lake Laboratory filed Critical Deep Earth Science And Engineering Yunlong Lake Laboratory
Priority to CN202311183250.9A
Publication of CN117192604A
Legal status: Pending


Abstract

The invention discloses a full waveform inversion method for multi-scale deep learning optimization. A deep neural network represents the parameters to be inverted, a conventional high-performance full waveform inversion scheme based on a convolution-type objective function computes the gradients, and the gradients of the inversion parameters are fed into a deep convolutional neural network model to realize multi-scale inversion. At the same time, a scale-by-scale characterization of the inversion parameters by the deep learning optimization method ensures that the deep convolutional neural network model represents inversion parameter information at several scales simultaneously. In addition, the invention introduces a first-order variable-density velocity-stress equation for the forward wavefield, which offers high simulation accuracy, adaptively accounts for density variation in the subsurface medium, provides direct physical information on both velocity and density, and realizes simultaneous modelling of velocity and density, ultimately improving the efficiency, accuracy and stability of the whole full waveform inversion.

Description

Full waveform inversion method for multi-scale deep learning optimization
Technical Field
The invention relates to a petrophysical parameter imaging method for subsurface media, in particular to a full waveform inversion method for multi-scale deep learning optimization, and belongs to the intersection of deep learning and seismic-exploration velocity-modelling technology.
Background
Full waveform inversion is a hotspot in the field of seismic exploration and plays an important role in high-precision exploration of oil and gas resources. Full waveform inversion techniques based on classical numerical solutions have matured over the past few decades, with significant advances in both computational efficiency and computational accuracy. In recent years, with the rapid development of artificial intelligence technology, the cross-fusion of deep learning and full waveform inversion has become a development trend.
The application of deep learning to full waveform inversion mainly falls into three categories: 1. data-driven full waveform inversion, which requires training on large sample datasets; 2. full waveform inversion with deep learning optimization, which uses deep learning optimizers and the automatic differentiation of the network to solve the exact wave-equation inversion problem; 3. wave-equation-constrained methods, which mainly take the wave equation as a constraint term and realize simulation and inversion of the equation by training a network on spatially discretized wavefield values. Among these directions, the second has the potential to introduce classical full waveform inversion techniques and optimize them within a deep learning framework to realize high-precision inversion.
Effectively characterizing the inversion parameter model with a suitable network architecture is one way for deep learning optimization to reach high-precision inversion. Because full waveform inversion reconstructs a wide band of wavenumbers, when the initial model lacks rich middle and low wavenumber information the deep learning optimization strategy alone cannot compensate, and the inversion result is of poor quality and insufficient resolution. Although classical full waveform inversion offers multi-scale inversion strategies, these are difficult to integrate effectively into a multi-scale objective function under the conventional formulation, so the multi-scale strategy cannot be used directly; at the same time, when a deep network is used to characterize the inversion parameter model, how to make the network effectively represent inversion parameter information at several scales simultaneously is also a question that deserves careful consideration. In addition, deep-learning-based full waveform inversion cannot at present model velocity and rock density simultaneously, which complicates the inversion process and requires storing a large number of network parameters during velocity inversion; as a result, the efficiency and stability of the whole full waveform inversion still need to be improved.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a full waveform inversion method for multi-scale deep learning optimization: a deep neural network represents the parameters to be inverted, a conventional high-performance full waveform inversion scheme based on a convolution-type objective function computes the gradients, and the gradients of the inversion parameters are fed into the deep network under training to realize multi-scale inversion; at the same time, a scale-by-scale characterization of the inversion parameters by the deep neural network ensures that the network represents inversion parameter information at several scales simultaneously; in addition, by introducing a first-order variable-density velocity-stress equation, velocity and density are modelled simultaneously, which simplifies the inversion process and improves the efficiency and stability of the whole full waveform inversion.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows: a full waveform inversion method for multi-scale deep learning optimization comprises the following specific steps:
A. firstly determining a region needing inversion, and then acquiring observation data by adopting an observation system;
B. constructing a forward wavefield equation, wherein the forward wavefield equation is a two-dimensional first-order variable-density acoustic wave equation;
C. constructing a convolution-type objective function, wherein the convolution-type objective function has good robustness;
D. determining a back-propagated wavefield equation of the convolution-type objective function based on the adjoint theory, according to the forward wavefield equation of step B and the convolution-type objective function of step C;
E. constructing gradients of the model parameters in the convolution-type objective function by using the forward wavefield of step B and the back-propagated wavefield of step D, wherein the gradients consist of a velocity gradient and a density gradient;
F. establishing a deep convolutional neural network model, wherein the deep convolutional neural network model comprises velocity model parameters and density model parameters and is used for mapping a random feature variable to a plurality of physical parameters;
G. inputting the gradients of the model parameters from step E into the gradient of the last layer of the deep convolutional neural network model of step F;
H. performing model parameter optimization on the deep convolutional neural network model of step G with a deep learning optimization method, and back-propagating the gradients of the model parameters into corrections of the network parameters by training the deep convolutional neural network model to realize full waveform inversion, finally obtaining an inverted velocity result and an inverted density result;
I. setting a plurality of dominant-frequency parameters, obtaining, within the frequency band of each dominant-frequency parameter, the gradient corresponding to the current dominant frequency by using the convolution-type objective function, and repeating steps E to H for the gradient of each dominant frequency, thereby realizing full waveform inversion of features at different scales.
Further, the two-dimensional first-order variable density acoustic wave equation in the step B specifically includes:
$$\frac{\partial \psi}{\partial t}=\mathbf{A}\psi+s,\qquad \mathbf{A}=\begin{bmatrix}0 & \rho v^{2}\partial_{x} & \rho v^{2}\partial_{z}\\ \tfrac{1}{\rho}\partial_{x} & 0 & 0\\ \tfrac{1}{\rho}\partial_{z} & 0 & 0\end{bmatrix}$$
where $\psi=[\sigma,\mathbf{v}]^{T}$ is the forward wavefield variable, $\sigma$ is the particle vibration stress of the forward wavefield, $\mathbf{v}=[v_{x},v_{z}]$ denotes the particle velocities along the x and z directions, $s$ is the source term, $\partial/\partial t$ denotes the partial derivative with respect to time, $T$ denotes the matrix transpose, $\mathbf{A}$ is the coupling matrix, $\rho$ is the density of the medium, $v$ is the velocity of the medium, and $\partial_{x}$ and $\partial_{z}$ denote the partial derivatives with respect to x and z.
Further, the convolution objective function in the step C specifically includes:
$$J(\mathbf{m})=\frac{1}{2}\sum_{x_{r}}\int\left[d(x_{r},t)\ast v(x_{ref},t;\mathbf{m})-v(x_{r},t;\mathbf{m})\ast d(x_{ref},t)\right]^{2}dt$$
where $d$ is the observed data, $v$ is the synthetic wavefield data, $x_{r}$ and $x_{ref}$ denote the receiver (detection point) position and the reference-trace position respectively, $\ast$ denotes the convolution operation, the space coordinate is $\mathbf{x}=[x,z]$, and the model parameter is $\mathbf{m}=[v,\rho]^{T}$.
Further, the back-propagated wavefield equation in step D is specifically:
$$-\frac{\partial \tilde{\psi}}{\partial t}=\mathbf{A}^{\dagger}\tilde{\psi}+\tilde{s}$$
where the back-propagated wavefield is $\tilde{\psi}=[\tilde{\sigma},\tilde{\mathbf{v}}]^{T}$, $\tilde{\sigma}$ is the particle vibration stress of the back-propagated wavefield, $\tilde{v}_{x}$ and $\tilde{v}_{z}$ denote the particle vibration velocities along the x and z directions, $\mathbf{A}^{\dagger}$ is the adjoint of the coupling matrix $\mathbf{A}$, $\tilde{s}$ is the back-propagated (adjoint) source term constructed from the observed data and the residual of the convolution-type objective function, $\ast$ denotes the convolution operation, and $\otimes$ denotes the cross-correlation operation.
Further, the gradient of the model parameter in the step E specifically includes:
the velocity gradient ∂J/∂v and the density gradient ∂J/∂ρ, both accumulated from the forward wavefield of step B and the back-propagated wavefield of step D.
Further, the deep convolutional neural network model in the step F specifically includes:
$$\gamma=N_{\theta}^{\gamma}(\lambda),\qquad N_{\theta}^{\gamma}=g_{L}\circ g_{L-1}\circ\cdots\circ g_{1},\qquad g_{l}(\cdot)=a_{l}\!\left(W_{l}\,\cdot\,+b_{l}\right)$$
where the model parameter $\gamma=v$ or $\rho$, $N_{\theta}^{\gamma}$ is the deep convolutional neural network model representing the model parameter $\gamma$, $\theta=\{W_{l},b_{l}\}_{l=1}^{L}$ denotes the network parameters, $L$ is the total number of layers of the neural network, $W_{l}$ and $b_{l}$ are the weights and biases of the convolutional or fully-connected layers, $a_{l}$ is the activation of layer $l$, the subscript $l\in[1,L]$, $l\in\mathbb{Z}$, and $\lambda$ is the random feature variable input to the network.
Further, the step G specifically includes: assigning the gradients of the model parameters from step E to the gradient of the last layer of the network, and specifying the initial velocity, the initial density and the gradients, with the specific formula:
$$v=v_{ini}+N_{\theta}^{v}(\lambda),\qquad \rho=\rho_{ini}+N_{\theta}^{\rho}(\lambda),\qquad \frac{\partial J}{\partial N_{\theta}^{v}(\lambda)}\leftarrow\frac{\partial J}{\partial v},\qquad \frac{\partial J}{\partial N_{\theta}^{\rho}(\lambda)}\leftarrow\frac{\partial J}{\partial \rho}$$
where $v_{ini}$ and $\rho_{ini}$ denote the initial velocity and density, respectively.
Further, the specific process of training the deep convolutional neural network model to realize full waveform inversion in step H is as follows: first, the observation data are taken as the random feature variable λ and the deep convolutional neural network model is used to represent the model parameter γ; the externally computed gradient ∂J/∂γ of the model parameter is passed in and an optimization iteration is performed with the deep learning optimization method, correcting the parameters θ_γ of the deep convolutional neural network model and thereby representing an updated model parameter γ; the gradient ∂J/∂γ of the model parameter is then obtained again by high-performance calculation, and the network parameters θ_γ are optimized and corrected once more; this is repeated until the number of iterations reaches the set number or the objective function value meets the accuracy requirement, and the characterization result output by the deep convolutional network model is the final inverted model parameter γ.
Compared with the prior art, the invention has the following advantages:
(1) The method uses a deep neural network to represent the parameters to be inverted, uses a conventional high-performance full waveform inversion scheme based on a convolution-type objective function to compute the gradients, and feeds the gradients of the inversion parameters into the deep convolutional neural network model to realize multi-scale inversion; at the same time, the scale-by-scale characterization of the inversion parameters by the deep learning optimization method ensures that the deep convolutional neural network model represents inversion parameter information at several scales simultaneously, so that a multi-scale inversion capability is obtained, the fundamental low-frequency dependence of full waveform inversion can be resolved or alleviated, and the accuracy of parameter modelling is significantly improved.
(2) In the inversions at different scales, the model parameter variation is characterized independently within the frequency band corresponding to each scale, which avoids the difficulty of accurately capturing the features of certain scales when several frequency bands are learned consecutively, guarantees multi-scale feature learning under deep learning optimization, and effectively improves the inversion accuracy.
(3) The invention uses a first-order variable-density velocity-stress equation for the forward wavefield, then constructs a convolution-type objective function and determines the corresponding back-propagated wavefield, and finally uses deep learning optimization to realize full waveform inversion based on the convolution objective function; multi-scale inversion is executed under the first-order variable-density velocity-stress equation, realizing simultaneous modelling of velocity and density.
Drawings
FIG. 1 is a schematic diagram of a deep learning optimized network architecture of the present invention;
FIG. 2 is a schematic flow chart of the deep learning optimization inversion algorithm of the present invention;
FIG. 3 shows the true model parameters used in the present invention;
wherein (a) and (b) are true velocity and density, respectively;
FIG. 4 shows the initial model parameters of the present invention;
wherein (a) and (b) are initial velocity and density, respectively;
FIG. 5 is a single-scale inversion result of the deep learning optimization of the present invention;
wherein (a) and (b) are the single-scale velocity and density inversion results, respectively, of the deep learning optimization;
FIG. 6 is a multi-scale inversion result of the deep learning optimization of the present invention;
wherein (a) and (b) are the multi-scale velocity and density inversion results of the deep learning optimization, respectively;
FIG. 7 is the inversion result of a multi-scale deep learning optimization in accordance with the present invention;
wherein (a) and (b) are the speed and density inversion results, respectively, of the multi-scale deep learning optimization.
Detailed Description
The present invention will be further described below.
As shown in fig. 1 and 2, the specific steps of the present invention are:
A. firstly determining a region needing inversion, and then acquiring observation data by adopting an observation system;
B. constructing a forward wavefield equation, wherein the forward wavefield equation is a two-dimensional first-order variable-density acoustic wave equation, and the specific formula is as follows:
$$\frac{\partial \psi}{\partial t}=\mathbf{A}\psi+s,\qquad \mathbf{A}=\begin{bmatrix}0 & \rho v^{2}\partial_{x} & \rho v^{2}\partial_{z}\\ \tfrac{1}{\rho}\partial_{x} & 0 & 0\\ \tfrac{1}{\rho}\partial_{z} & 0 & 0\end{bmatrix}$$
where $\psi=[\sigma,\mathbf{v}]^{T}$ is the forward wavefield variable, $\sigma$ is the particle vibration stress of the forward wavefield, $\mathbf{v}=[v_{x},v_{z}]$ denotes the particle velocities along the x and z directions, $s$ is the source term, $\partial/\partial t$ denotes the partial derivative with respect to time, $T$ denotes the matrix transpose, $\mathbf{A}$ is the coupling matrix, $\rho$ is the density of the medium, $v$ is the velocity of the medium, and $\partial_{x}$ and $\partial_{z}$ denote the partial derivatives with respect to x and z;
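For illustration of the forward modelling in step B, the following minimal NumPy sketch advances the stress and particle-velocity fields by one explicit time step. The regular grid, simple central differences and the absence of absorbing boundaries are assumptions made here for brevity; this is not the high-performance scheme used by the invention.

import numpy as np

def forward_step(sigma, vx, vz, vel, rho, dt, dx, dz, src=0.0):
    # One explicit time step of the 2-D first-order variable-density acoustic
    # system: stress sigma and particle velocities vx, vz on a regular grid.
    kappa = rho * vel ** 2                                   # bulk modulus rho*v^2
    dvx_dx = (np.roll(vx, -1, axis=1) - np.roll(vx, 1, axis=1)) / (2.0 * dx)
    dvz_dz = (np.roll(vz, -1, axis=0) - np.roll(vz, 1, axis=0)) / (2.0 * dz)
    sigma = sigma + dt * (kappa * (dvx_dx + dvz_dz) + src)   # stress update + source
    dsig_dx = (np.roll(sigma, -1, axis=1) - np.roll(sigma, 1, axis=1)) / (2.0 * dx)
    dsig_dz = (np.roll(sigma, -1, axis=0) - np.roll(sigma, 1, axis=0)) / (2.0 * dz)
    vx = vx + dt * dsig_dx / rho                             # particle-velocity updates
    vz = vz + dt * dsig_dz / rho
    return sigma, vx, vz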
C. constructing a convolution objective function, wherein the convolution objective function has good robustness; the convolution objective function is specifically:
$$J(\mathbf{m})=\frac{1}{2}\sum_{x_{r}}\int\left[d(x_{r},t)\ast v(x_{ref},t;\mathbf{m})-v(x_{r},t;\mathbf{m})\ast d(x_{ref},t)\right]^{2}dt$$
where $d$ is the observed data, $v$ is the synthetic wavefield data, $x_{r}$ and $x_{ref}$ denote the receiver (detection point) position and the reference-trace position respectively, $\ast$ denotes the convolution operation, the space coordinate is $\mathbf{x}=[x,z]$, and the model parameter is $\mathbf{m}=[v,\rho]^{T}$;
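The convolution-type objective can be evaluated by convolving every synthetic trace with the observed reference trace and every observed trace with the synthetic reference trace. The sketch below does this with zero-padded FFT convolution; the array layout (receivers × time samples) and the index of the reference trace are assumptions for illustration.

import numpy as np

def convolution_objective(d_obs, d_syn, ref_idx=0):
    # Convolution-type misfit: d(x_r)*v(x_ref) - v(x_r)*d(x_ref), summed over
    # receivers; d_obs and d_syn are arrays of shape (n_receivers, n_t).
    n_t = d_obs.shape[1]
    n_fft = 2 * n_t                               # zero padding for linear convolution
    D = np.fft.rfft(d_obs, n_fft, axis=1)
    V = np.fft.rfft(d_syn, n_fft, axis=1)
    resid = np.fft.irfft(D * V[ref_idx] - V * D[ref_idx], n_fft, axis=1)
    return 0.5 * np.sum(resid ** 2), resid        # misfit value and residual traces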
D. determining a back-propagated wavefield equation of the convolution-type objective function based on the adjoint theory, according to the forward wavefield equation of step B and the convolution-type objective function of step C, wherein the specific formula is as follows:
$$-\frac{\partial \tilde{\psi}}{\partial t}=\mathbf{A}^{\dagger}\tilde{\psi}+\tilde{s}$$
where the back-propagated wavefield is $\tilde{\psi}=[\tilde{\sigma},\tilde{\mathbf{v}}]^{T}$, $\tilde{\sigma}$ is the particle vibration stress of the back-propagated wavefield, $\tilde{v}_{x}$ and $\tilde{v}_{z}$ denote the particle vibration velocities along the x and z directions, $\mathbf{A}^{\dagger}$ is the adjoint of the coupling matrix $\mathbf{A}$, $\tilde{s}$ is the back-propagated (adjoint) source term constructed from the observed data and the residual of the convolution-type objective function, $\ast$ denotes the convolution operation, and $\otimes$ denotes the cross-correlation operation;
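The back-propagated wavefield is driven by an adjoint source built from the residual of the convolution-type objective. One common construction, assumed here because the exact expression of the original figure is not reproduced, cross-correlates the residual with the observed reference trace, which in the frequency domain is a multiplication by the complex conjugate:

import numpy as np

def adjoint_source(d_obs, resid, ref_idx=0):
    # Illustrative adjoint source: cross-correlation of the convolution residual
    # with the observed reference trace; resid comes from convolution_objective().
    n_t = d_obs.shape[1]
    n_fft = 2 * n_t
    R = np.fft.rfft(resid, n_fft, axis=1)
    D_ref = np.fft.rfft(d_obs[ref_idx], n_fft)
    adj = np.fft.irfft(R * np.conj(D_ref), n_fft, axis=1)   # conjugate = cross-correlation
    return adj[:, :n_t]                                     # truncate back to trace length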
E. constructing gradients of the model parameters in the convolution-type objective function by using the forward wavefield of step B and the back-propagated wavefield of step D, wherein the gradients consist of the velocity gradient ∂J/∂v and the density gradient ∂J/∂ρ, both accumulated from the forward and back-propagated wavefields;
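In the adjoint-state setting both gradients are accumulated, time step by time step, as zero-lag cross-correlations between the forward wavefield of step B and the back-propagated wavefield of step D. The sketch below shows only this accumulation pattern; the scaling factors 2/(ρv³) and 1/(ρ²v²) follow a common variable-density acoustic parametrisation and are assumptions, not the exact expressions of the invention.

def accumulate_gradients(grad_v, grad_rho, sig_adj, dsig_fwd_dt,
                         vx_adj, vz_adj, dvx_fwd_dt, dvz_fwd_dt,
                         vel, rho, dt):
    # Zero-lag cross-correlation accumulation at one time step; *_adj are the
    # back-propagated fields, d*_fwd_dt are time derivatives of the forward fields.
    grad_v = grad_v + dt * (-2.0 / (rho * vel ** 3)) * sig_adj * dsig_fwd_dt
    grad_rho = grad_rho + dt * ((-1.0 / (rho ** 2 * vel ** 2)) * sig_adj * dsig_fwd_dt
                                + vx_adj * dvx_fwd_dt + vz_adj * dvz_fwd_dt)
    return grad_v, grad_rho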
F. establishing a deep convolutional neural network model, wherein the deep convolutional neural network model comprises velocity model parameters and density model parameters and is used for mapping a random feature variable to a plurality of physical parameters, specifically:
$$\gamma=N_{\theta}^{\gamma}(\lambda),\qquad N_{\theta}^{\gamma}=g_{L}\circ g_{L-1}\circ\cdots\circ g_{1},\qquad g_{l}(\cdot)=a_{l}\!\left(W_{l}\,\cdot\,+b_{l}\right)$$
where the model parameter $\gamma=v$ or $\rho$, $N_{\theta}^{\gamma}$ is the deep convolutional neural network model representing the model parameter $\gamma$, $\theta=\{W_{l},b_{l}\}_{l=1}^{L}$ denotes the network parameters, $L$ is the total number of layers of the neural network, $W_{l}$ and $b_{l}$ are the weights and biases of the convolutional or fully-connected layers, $a_{l}$ is the activation of layer $l$, the subscript $l\in[1,L]$, $l\in\mathbb{Z}$, and $\lambda$ is the random feature variable input to the network;
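Step F can be realised with any convolutional network that maps a random feature tensor to a model-sized image. The PyTorch sketch below is a minimal, assumption-level example with a two-channel output (velocity and density perturbations); the layer count, channel width and activation are illustrative and are not the architecture of FIG. 1.

import torch
import torch.nn as nn

class ModelGenerator(nn.Module):
    # CNN mapping the random feature variable lambda to velocity/density perturbations.
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, hidden, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(hidden, 2, 3, padding=1),   # channel 0: dv, channel 1: drho
        )

    def forward(self, lam):
        out = self.net(lam)                       # lam: (1, 1, nz, nx) random tensor
        return out[:, 0], out[:, 1]

# usage (grid size is illustrative)
net = ModelGenerator()
lam = torch.randn(1, 1, 101, 301)
dv, drho = net(lam)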
G. assigning the gradients of the model parameters from step E to the gradient of the last layer of the network, and specifying the initial velocity, the initial density and the gradients, with the specific formula:
$$v=v_{ini}+N_{\theta}^{v}(\lambda),\qquad \rho=\rho_{ini}+N_{\theta}^{\rho}(\lambda),\qquad \frac{\partial J}{\partial N_{\theta}^{v}(\lambda)}\leftarrow\frac{\partial J}{\partial v},\qquad \frac{\partial J}{\partial N_{\theta}^{\rho}(\lambda)}\leftarrow\frac{\partial J}{\partial \rho}$$
where $v_{ini}$ and $\rho_{ini}$ denote the initial velocity and density, respectively;
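In an automatic-differentiation framework, step G amounts to adding the initial model to the network output and attaching the externally computed FWI gradients to those output tensors so that they back-propagate into the network weights. A minimal PyTorch sketch, with placeholder names, could look as follows; the shapes of v_ini, rho_ini and the gradients are assumed to match the network output.

import torch

def apply_external_gradient(net, lam, v_ini, rho_ini, grad_v, grad_rho):
    # Add the initial model to the network output (step G) and back-propagate the
    # externally computed gradients dJ/dv and dJ/drho into the network parameters.
    dv, drho = net(lam)
    v_model = v_ini + dv
    rho_model = rho_ini + drho
    torch.autograd.backward(
        [v_model, rho_model],
        [torch.as_tensor(grad_v, dtype=v_model.dtype),
         torch.as_tensor(grad_rho, dtype=rho_model.dtype)])
    return v_model.detach(), rho_model.detach()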
H. performing model parameter optimization on the deep convolutional neural network model of step G with a deep learning optimization method, the deep learning optimization method being one of RMSprop, Adagrad, ASGD and Adam, and back-propagating the gradients of the model parameters into corrections of the network parameters by training the deep convolutional neural network model to realize full waveform inversion, specifically: first, the observation data are taken as the random feature variable λ and the deep convolutional neural network model is used to represent the model parameter γ; the externally computed gradient ∂J/∂γ of the model parameter is passed in and an optimization iteration is performed with the deep learning optimization method, correcting the parameters θ_γ of the deep convolutional neural network model and thereby representing an updated model parameter γ; the gradient ∂J/∂γ is then obtained again by high-performance calculation, and the network parameters θ_γ are optimized and corrected once more; this is repeated until the number of iterations reaches the set number or the objective function value meets the accuracy requirement, and the characterization result output by the deep convolutional network model is the final inverted model parameter γ (i.e., the inversion result, including the velocity inversion and the density inversion).
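A minimal training loop for step H, assuming Adam as the deep learning optimization method and a user-supplied compute_fwi_gradient callback that runs the forward/adjoint modelling of steps B to E externally and returns dJ/dv, dJ/drho and the misfit value; all names, the learning rate and the stopping threshold are illustrative.

import torch

def invert(net, lam, v_ini, rho_ini, compute_fwi_gradient,
           n_iter=100, lr=1e-3, tol=1e-6):
    # Deep-learning-optimized FWI: the CNN parametrises the model, the external
    # high-performance code supplies the gradient, Adam corrects the network weights.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        dv, drho = net(lam)
        v_model, rho_model = v_ini + dv, rho_ini + drho
        grad_v, grad_rho, misfit = compute_fwi_gradient(      # steps B-E, external
            v_model.detach().numpy(), rho_model.detach().numpy())
        torch.autograd.backward(
            [v_model, rho_model],
            [torch.as_tensor(grad_v, dtype=v_model.dtype),
             torch.as_tensor(grad_rho, dtype=rho_model.dtype)])
        opt.step()
        if misfit < tol:                                      # accuracy requirement met
            break
    with torch.no_grad():
        dv, drho = net(lam)
    return v_ini + dv, rho_ini + drho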
I. The dominant frequencies corresponding to different scales are different, so several dominant-frequency parameters are set, and the different scale features correspond to the gradients of the different dominant-frequency parameters. Within the frequency band of each dominant-frequency parameter, the gradient corresponding to the current dominant frequency is obtained with the convolution-type objective function, and steps E to H are repeated for the gradient of each dominant frequency, thereby realizing full waveform inversion of features at different scales. Meanwhile, to ensure effective network learning, the model parameter variation is characterized independently within each frequency band, which avoids the difficulty of accurately capturing the features of certain scales when several frequency bands are learned consecutively.
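Step I then wraps the loop above in an outer sweep over dominant frequencies, from low to high; the output of each band becomes the initial model of the next, so the model variation of each band is characterized independently. The 8/10/12 Hz schedule mirrors the test in this embodiment; whether the network is re-initialised for each band is an implementation choice left open here.

def multiscale_invert(net, lam, v_ini, rho_ini, make_band_gradient,
                      dominant_freqs=(8.0, 10.0, 12.0), n_iter_per_band=100):
    # invert(...) is the step-H loop from the previous sketch; make_band_gradient(f0)
    # returns a gradient callback whose convolution-type objective uses a wavelet
    # with dominant frequency f0 (band-limited data, steps B-E).
    v_cur, rho_cur = v_ini, rho_ini
    for f0 in dominant_freqs:
        # optionally re-initialise net here so each band learns its own perturbation
        v_cur, rho_cur = invert(net, lam, v_cur, rho_cur,
                                make_band_gradient(f0), n_iter=n_iter_per_band)
    return v_cur, rho_cur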
Verification by numerical test:
A numerical test is used to compare single-scale inversion under deep learning optimization, multi-scale inversion under deep learning optimization, and the inversion of the multi-scale deep learning optimization of the present invention:
the method of the invention adopts a network architecture as shown in fig. 1, and designs an inversion framework according to fig. 2. The deep convolutional neural network model of the present invention is used to represent the mapping from random variables to model parameters, and the network output is directly added with the initial model to output model parameters. By means of external high-performance gradient calculation, multi-scale and network parameterized deep learning optimization inversion is realized on the basis of a convolution type objective function. The scheme is mainly used for comparing the inversion effect of single-scale and multi-scale deep learning optimization with the inversion effect of multi-scale deep learning optimization. The true velocity and density models tested are shown in fig. 3 (a) and 3 (b), and the velocity and density of the initial model used for inversion are shown in fig. 4 (a) and 4 (b).
The dominant frequency of the single-scale inversion under deep learning optimization is set to 10 Hz with 300 iterations; the other two inversion strategies adopt the same multi-scale schedule, in which the inversion is divided into 3 frequency bands with dominant frequencies of 8, 10 and 12 Hz and each band is iterated 100 times. The multi-scale strategy is implemented by selecting several wavelets with ascending dominant frequencies and performing the corresponding adaptive filtering based on the convolution-type objective function. FIG. 5 (a) and FIG. 5 (b) show the velocity and density results of single-scale inversion under deep learning optimization, FIG. 6 (a) and FIG. 6 (b) show the velocity and density results of multi-scale inversion under deep learning optimization, and FIG. 7 (a) and FIG. 7 (b) show the inversion results of the multi-scale deep learning optimization of the present invention.
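The ascending dominant-frequency schedule above can be produced by generating one source wavelet per band; a standard Ricker wavelet is sketched below as an assumed wavelet type (the embodiment does not name it), with an illustrative sampling interval and trace length.

import numpy as np

def ricker(f0, dt, n_t):
    # Ricker wavelet with dominant frequency f0 (Hz), centred in the trace.
    t = (np.arange(n_t) - n_t // 2) * dt
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# wavelets for the three bands used in this test: 8, 10 and 12 Hz
wavelets = {f0: ricker(f0, dt=1e-3, n_t=2001) for f0 in (8.0, 10.0, 12.0)}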
Comparing the inversion results of the three schemes shows clearly that the single-scale inversion result under deep learning optimization lacks resolution. The multi-scale inversion under deep learning optimization introduces the multi-scale strategy conventionally used to improve inversion accuracy into deep-learning-optimized inversion, but its multi-scale result remains poor and the inversion accuracy is not significantly improved. The multi-scale deep-learning-optimized inversion of the present invention, which combines the convolution-type objective function with the two-dimensional first-order variable-density acoustic wave equation, clearly improves the resolution of the inversion result. Specifically, the velocity distribution of the inventive result is more uniform, the low-velocity layer is more evident, the structure of the density model is more pronounced, and the wavenumber information is richer; in contrast, the inversion results of the other two methods show large errors in the deep part and lack low-wavenumber information at some interface structures. The inversion results therefore show that the proposed method can effectively recover reliable, high-resolution model parameters from the background model, which also explains the advantages of the inventive inversion result.
The foregoing is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make various modifications and adaptations without departing from the principles of the present invention, and such modifications and adaptations are also intended to fall within the scope of the invention.

Claims (8)

1. A full waveform inversion method for multi-scale deep learning optimization is characterized by comprising the following specific steps:
A. firstly determining a region needing inversion, and then acquiring observation data by adopting an observation system;
B. constructing a forward wavefield equation, wherein the forward wavefield equation is a two-dimensional first-order variable-density acoustic wave equation;
C. constructing a convolution-type objective function, wherein the convolution-type objective function has good robustness;
D. determining a back-propagated wavefield equation of the convolution-type objective function based on the adjoint theory, according to the forward wavefield equation of step B and the convolution-type objective function of step C;
E. constructing gradients of the model parameters in the convolution-type objective function by using the forward wavefield of step B and the back-propagated wavefield of step D, wherein the gradients consist of a velocity gradient and a density gradient;
F. establishing a deep convolutional neural network model, wherein the deep convolutional neural network model comprises velocity model parameters and density model parameters and is used for mapping a random feature variable to a plurality of physical parameters;
G. inputting the gradients of the model parameters from step E into the gradient of the last layer of the deep convolutional neural network model of step F;
H. performing model parameter optimization on the deep convolutional neural network model of step G with a deep learning optimization method, and back-propagating the gradients of the model parameters into corrections of the network parameters by training the deep convolutional neural network model to realize full waveform inversion, finally obtaining an inverted velocity result and an inverted density result;
I. setting a plurality of dominant-frequency parameters, obtaining, within the frequency band of each dominant-frequency parameter, the gradient corresponding to the current dominant frequency by using the convolution-type objective function, and repeating steps E to H for the gradient of each dominant frequency, thereby realizing full waveform inversion of features at different scales.
2. The full waveform inversion method of multi-scale deep learning optimization according to claim 1, wherein the two-dimensional first-order variable density acoustic wave equation in the step B is specifically:
$$\frac{\partial \psi}{\partial t}=\mathbf{A}\psi+s,\qquad \mathbf{A}=\begin{bmatrix}0 & \rho v^{2}\partial_{x} & \rho v^{2}\partial_{z}\\ \tfrac{1}{\rho}\partial_{x} & 0 & 0\\ \tfrac{1}{\rho}\partial_{z} & 0 & 0\end{bmatrix}$$
where $\psi=[\sigma,\mathbf{v}]^{T}$ is the forward wavefield variable, $\sigma$ is the particle vibration stress of the forward wavefield, $\mathbf{v}=[v_{x},v_{z}]$ denotes the particle velocities along the x and z directions, $s$ is the source term, $\partial/\partial t$ denotes the partial derivative with respect to time, $T$ denotes the matrix transpose, $\mathbf{A}$ is the coupling matrix, $\rho$ is the density of the medium, $v$ is the velocity of the medium, and $\partial_{x}$ and $\partial_{z}$ denote the partial derivatives with respect to x and z.
3. The full waveform inversion method of multi-scale deep learning optimization according to claim 2, wherein the convolution objective function in step C is specifically:
$$J(\mathbf{m})=\frac{1}{2}\sum_{x_{r}}\int\left[d(x_{r},t)\ast v(x_{ref},t;\mathbf{m})-v(x_{r},t;\mathbf{m})\ast d(x_{ref},t)\right]^{2}dt$$
where $d$ is the observed data, $v$ is the synthetic wavefield data, $x_{r}$ and $x_{ref}$ denote the receiver (detection point) position and the reference-trace position respectively, $\ast$ denotes the convolution operation, the space coordinate is $\mathbf{x}=[x,z]$, and the model parameter is $\mathbf{m}=[v,\rho]^{T}$.
4. The full waveform inversion method of multi-scale deep learning optimization of claim 3, wherein the back-propagated wavefield equation in step D is specifically:
$$-\frac{\partial \tilde{\psi}}{\partial t}=\mathbf{A}^{\dagger}\tilde{\psi}+\tilde{s}$$
where the back-propagated wavefield is $\tilde{\psi}=[\tilde{\sigma},\tilde{\mathbf{v}}]^{T}$, $\tilde{\sigma}$ is the particle vibration stress of the back-propagated wavefield, $\tilde{v}_{x}$ and $\tilde{v}_{z}$ denote the particle vibration velocities along the x and z directions, $\mathbf{A}^{\dagger}$ is the adjoint of the coupling matrix $\mathbf{A}$, $\tilde{s}$ is the back-propagated (adjoint) source term constructed from the observed data and the residual of the convolution-type objective function, $\ast$ denotes the convolution operation, and $\otimes$ denotes the cross-correlation operation.
5. The full waveform inversion method of multi-scale deep learning optimization of claim 4 wherein the gradient of model parameters in step E is specifically:
the velocity gradient ∂J/∂v and the density gradient ∂J/∂ρ, both accumulated from the forward wavefield of step B and the back-propagated wavefield of step D.
6. The full waveform inversion method of multi-scale deep learning optimization according to claim 5, wherein the deep convolutional neural network model in step F is specifically:
$$\gamma=N_{\theta}^{\gamma}(\lambda),\qquad N_{\theta}^{\gamma}=g_{L}\circ g_{L-1}\circ\cdots\circ g_{1},\qquad g_{l}(\cdot)=a_{l}\!\left(W_{l}\,\cdot\,+b_{l}\right)$$
where the model parameter $\gamma=v$ or $\rho$, $N_{\theta}^{\gamma}$ is the deep convolutional neural network model representing the model parameter $\gamma$, $\theta=\{W_{l},b_{l}\}_{l=1}^{L}$ denotes the network parameters, $L$ is the total number of layers of the neural network, $W_{l}$ and $b_{l}$ are the weights and biases of the convolutional or fully-connected layers, $a_{l}$ is the activation of layer $l$, the subscript $l\in[1,L]$, $l\in\mathbb{Z}$, and $\lambda$ is the random feature variable input to the network.
7. The full waveform inversion method of multi-scale deep learning optimization of claim 6, wherein step G specifically comprises: assigning the gradients of the model parameters from step E to the gradient of the last layer of the network, and specifying the initial velocity, the initial density and the gradients, with the specific formula:
$$v=v_{ini}+N_{\theta}^{v}(\lambda),\qquad \rho=\rho_{ini}+N_{\theta}^{\rho}(\lambda),\qquad \frac{\partial J}{\partial N_{\theta}^{v}(\lambda)}\leftarrow\frac{\partial J}{\partial v},\qquad \frac{\partial J}{\partial N_{\theta}^{\rho}(\lambda)}\leftarrow\frac{\partial J}{\partial \rho}$$
where $v_{ini}$ and $\rho_{ini}$ denote the initial velocity and density, respectively.
8. The full waveform inversion method of multi-scale deep learning optimization according to claim 7, wherein the specific process of training the deep convolutional neural network model to realize full waveform inversion in step H is as follows: first, the observation data are taken as the random feature variable λ and the deep convolutional neural network model is used to represent the model parameter γ; the externally computed gradient ∂J/∂γ of the model parameter is passed in and an optimization iteration is performed with the deep learning optimization method, correcting the parameters θ_γ of the deep convolutional neural network model and thereby representing an updated model parameter γ; the gradient ∂J/∂γ of the model parameter is then obtained again by high-performance calculation, and the network parameters θ_γ are optimized and corrected once more; this is repeated until the number of iterations reaches the set number or the objective function value meets the accuracy requirement, and the characterization result output by the deep convolutional network model is the final inverted model parameter γ.
CN202311183250.9A 2023-09-14 2023-09-14 Full waveform inversion method for multi-scale deep learning optimization Pending CN117192604A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311183250.9A CN117192604A (en) 2023-09-14 2023-09-14 Full waveform inversion method for multi-scale deep learning optimization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311183250.9A CN117192604A (en) 2023-09-14 2023-09-14 Full waveform inversion method for multi-scale deep learning optimization

Publications (1)

Publication Number Publication Date
CN117192604A true CN117192604A (en) 2023-12-08

Family

ID=88984690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311183250.9A Pending CN117192604A (en) 2023-09-14 2023-09-14 Full waveform inversion method for multi-scale deep learning optimization

Country Status (1)

Country Link
CN (1) CN117192604A (en)

Similar Documents

Publication Publication Date Title
CN112083482B (en) Seismic super-resolution inversion method based on model-driven depth learning
CN106405651B (en) Full waveform inversion initial velocity model construction method based on logging matching
CN111596366B (en) Wave impedance inversion method based on seismic signal optimization processing
KR20130060231A (en) Artifact reduction in method of iterative inversion of geophysical data
CN110954945B (en) Full waveform inversion method based on dynamic random seismic source coding
CN109738952B (en) Passive source direct offset imaging method based on full waveform inversion driving
US11828894B2 (en) Multi-scale unsupervised seismic velocity inversion method based on autoencoder for observation data
CN105319581A (en) Efficient time domain full waveform inversion method
CN107894618B (en) A kind of full waveform inversion gradient preprocess method based on model smoothing algorithm
CN114839673B (en) Separation method, separation system and computer equipment for multi-seismic-source efficient acquisition wave field
CN102901985A (en) Depth domain layer speed correcting method suitable for undulating surface
CN111722283B (en) Stratum velocity model building method
CN109507726A (en) The inversion method and system of time-domain elastic wave multi-parameter Full wave shape
CN115598697A (en) Thin-layer structure high-resolution seismic inversion method, device, medium and equipment
CN109541691B (en) Seismic velocity inversion method
CN116011338A (en) Full waveform inversion method based on self-encoder and deep neural network
CN112130199A (en) Optimized least square reverse time migration imaging method
CN117192604A (en) Full waveform inversion method for multi-scale deep learning optimization
CN111273349A (en) Transverse wave velocity extraction method and processing terminal for seabed shallow sediment layer
Rusmanugroho et al. 3D velocity model building based upon hybrid neural network
CN116819602B (en) Full waveform inversion method of variable density acoustic wave equation for deep learning optimization
CN110376642B (en) Three-dimensional seismic velocity inversion method based on conical surface waves
CN110161561A (en) A kind of controllable layer position sublevel interbed multiple analogy method in oil and gas reservoir
CN108680957A (en) Local cross-correlation time-frequency domain Phase-retrieval method based on weighting
CN114185090B (en) Lithology and elastic parameter synchronous inversion method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination