CN115659773A - Full waveform inversion acceleration method based on depth network and related device - Google Patents


Info

Publication number
CN115659773A
CN115659773A (application CN202210796150.2A)
Authority
CN
China
Prior art keywords
data
model
iteration
fwi
velocity
Prior art date
Legal status
Pending
Application number
CN202210796150.2A
Other languages
Chinese (zh)
Inventor
陆文凯 (Lu Wenkai)
王永浩 (Wang Yonghao)
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority: CN202210796150.2A
Publication: CN115659773A
Legal status: Pending

Landscapes

  • Geophysics And Detection Of Objects (AREA)

Abstract

The application provides a full waveform inversion acceleration method based on a depth network, and a related device, relating to the technical field of seismic exploration. The method comprises the following steps: acquiring first offset (migration) data of a first velocity model; inputting the first velocity model and the first offset data into a pre-trained deep learning network, and outputting a second velocity model, where the deep learning network is trained on iterative data and offset data: the iterative data is generated during full waveform inversion (FWI), the forward data is obtained by forward modeling the iterative data, and the offset data is obtained by migrating the forward data based on the iterative data; and performing FWI based on the second velocity model to obtain a target model. Performing FWI iterations starting from the second velocity model obtained in this manner accelerates convergence of the FWI iteration process and mitigates its divergence problem.

Description

Full waveform inversion acceleration method based on depth network and related device
Technical Field
The application relates to the technical field of seismic exploration, in particular to a full waveform inversion acceleration method based on a depth network and a related device.
Background
Full Waveform Inversion (FWI) is a high-resolution seismic exploration imaging method that iteratively inverts the parameters of the subsurface medium from the full-waveform information of seismic data. It is currently widely applied in oil and gas exploration and in the study of regional-, plate-, and global-scale seismology.
Mathematically, FWI is a strongly nonlinear, ill-conditioned inverse problem. For computational efficiency it is solved by local gradient iteration, which makes it vulnerable to the phenomenon of "cycle skipping": when the observed and predicted signals are misaligned by more than half a cycle, FWI is prone to fall into local minima. In addition, the high dimensionality of the problem makes FWI computationally expensive.
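Cycle skipping can be illustrated numerically. The sketch below (a synthetic 5 Hz signal with hypothetical parameters, not from the patent) computes the L2 misfit between an observed wavelet and time-shifted predictions: beyond a half-period shift, the misfit curve develops a spurious minimum at the full-period shift, so a gradient method started there converges to the wrong alignment.

```python
import numpy as np

# L2 misfit between an observed 5 Hz wavelet and time-shifted predictions.
t = np.linspace(0.0, 1.0, 1000)
freq = 5.0                               # Hz -> period 0.2 s, half period 0.1 s
observed = np.sin(2 * np.pi * freq * t)

shifts = np.linspace(0.0, 0.4, 81)
misfit = np.array([np.sum((observed - np.sin(2 * np.pi * freq * (t - s))) ** 2)
                   for s in shifts])

i_half = np.argmin(np.abs(shifts - 0.1))  # half-period shift: misfit is largest
i_full = np.argmin(np.abs(shifts - 0.2))  # full-period shift: misfit back near zero
print(misfit[i_full] < misfit[i_half])    # spurious minimum at a full period -> True
```

The model that is exactly one period wrong scores a near-zero misfit, which is why a local gradient method cannot distinguish it from the correct model.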
As a result, the existing FWI method is prone to diverge during iteration, and FWI iterations are time-consuming with slow convergence.
Disclosure of Invention
The application provides a full waveform inversion acceleration method based on a depth network, and a related device, to address the problems that the existing FWI method is prone to diverge during iteration and that FWI iterations are time-consuming with slow convergence.
According to a first aspect of the application, a full waveform inversion acceleration method based on a depth network is provided, which comprises the following steps:
acquiring first offset data of a first velocity model; inputting the first velocity model and the first offset data into a pre-trained deep learning network, and outputting a second velocity model, where the deep learning network is trained on iterative data and offset data: the iterative data is generated during full waveform inversion (FWI), the forward data is obtained by forward modeling the iterative data, and the offset data is obtained by migrating the forward data based on the iterative data; and performing FWI based on the second velocity model to obtain a target model.
Optionally, before obtaining the first offset data of the first velocity model, the method further includes: obtaining a training set comprising M sample velocity models; forward modeling the M sample velocity models respectively to obtain M sets of pre-stack seismic data; performing full waveform inversion on the M sets of pre-stack seismic data to obtain M × N iterative velocity models, where N iterative velocity models are obtained when full waveform inversion is performed on any one set of pre-stack seismic data; for any iterative velocity model, performing depth migration on its corresponding pre-stack seismic data to obtain the offset data of that iterative velocity model; and taking the M × N iterative velocity models and their offset data as the two-channel input of the deep network to be trained, and training the deep network to obtain the deep learning network; where M and N are both natural numbers.
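The training-data generation above (steps S301–S304 of the detailed description) can be sketched end to end. The helper operators below are illustrative stand-ins of my own: a real pipeline would use wave-equation forward modeling, an FWI solver, and reverse-time migration.

```python
import numpy as np

# Toy stand-ins for the heavy geophysics operators (illustrative only).
def forward_model(velocity):
    return np.cumsum(velocity, axis=0)          # stand-in for forward modeling

def fwi_step(v_prev, seismic):
    return v_prev + 0.1 * (seismic - forward_model(v_prev))  # stand-in FWI update

def depth_migrate(velocity, seismic):
    return seismic - forward_model(velocity)    # stand-in for depth migration

def build_training_pairs(sample_models, n_iters):
    """M sample models x N FWI iterations -> M*N (two-channel input, target) pairs."""
    pairs = []
    for v_true in sample_models:                      # training set of sample models
        seismic = forward_model(v_true)               # pre-stack seismic data
        v = np.full_like(v_true, v_true.mean())       # initial model V_0
        for _ in range(n_iters):                      # N FWI iterations
            v = fwi_step(v, seismic)                  # iterative velocity model V_i
            p = depth_migrate(v, seismic)             # offset data P_i of V_i
            pairs.append((np.stack([v, p]), v_true))  # two-channel network input
    return pairs

rng = np.random.default_rng(0)
models = [rng.uniform(1.5, 4.5, (16, 16)) for _ in range(4)]
pairs = build_training_pairs(models, n_iters=3)
print(len(pairs))  # M*N = 4*3 = 12
```

Each pair stacks the iterative model and its offset data as the two input channels, with the true sample model as the target, matching the two-channel training described in the claim.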
Optionally, when FWI is performed on any one of the M pre-stack seismic data, N iterations are performed in sequence; in the N iterations, when the ith iteration is performed, model updating is performed by using the difference between any pre-stack seismic data and forward data corresponding to the iterative velocity model generated by the (i-1) th iteration to obtain the iterative velocity model generated by the ith iteration.
Optionally, the loss function of the deep network to be trained satisfies the following formula:
L = Σ_j Σ_i ‖ G(V_i^j(x, z), P_i^j(x, z)) − V_T^j(x, z) ‖₂²
where G represents the deep learning network (the deep network to be trained), V_i represents the iterative velocity model, P_i represents the offset data of the iterative velocity model, V_T represents the sample velocity model, j indexes the jth sample velocity model, x represents the surface space coordinate, and z represents the imaging depth.
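A minimal array-based sketch of this loss (assuming G is a callable that maps an iterative model and its offset data to a predicted model; the helper name is my own):

```python
import numpy as np

def fwi_net_loss(G, iter_models, offset_data, true_models):
    """Squared error between G(V_i, P_i) and the true model V_T, summed over
    all (iteration i, sample j) pairs and over the (x, z) grid."""
    total = 0.0
    for v_i, p_i, v_t in zip(iter_models, offset_data, true_models):
        total += np.sum((G(v_i, p_i) - v_t) ** 2)
    return total

# With an identity-like "network" that ignores the offset channel, the loss
# reduces to the summed squared model error.
identity_net = lambda v, p: v
v_true = np.ones((8, 8))
loss = fwi_net_loss(identity_net, [v_true + 0.1], [np.zeros((8, 8))], [v_true])
print(round(loss, 6))  # 64 * 0.1**2 = 0.64
```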
Optionally, before performing FWI based on the second velocity model to obtain the target model, the method further includes: judging whether the difference between the first velocity model and the data of the training set is greater than a difference threshold. Inputting the first velocity model and the first offset data into the pre-trained deep learning network comprises: when the difference between the first velocity model and the data of the training set is less than or equal to the difference threshold, inputting the first velocity model and the first offset data into the pre-trained deep learning network.
Optionally, performing FWI based on the second velocity model to obtain the target model includes: when the difference between the first velocity model and the data of the training set is greater than the difference threshold, taking the imaging result of the first velocity model as the guide image and the imaging result of the second velocity model as the input image, and performing guided filtering to obtain a filtered velocity model; calculating the residual between the second velocity model and the filtered velocity model; superposing the residual on the first velocity model to obtain a third velocity model; and performing FWI on the third velocity model to obtain the target model.
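The out-of-distribution branch above can be sketched as follows. The guided filter here is the classic box-filter formulation (He et al.); the function names are illustrative and not from the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-4):
    """Classic guided filter: output follows `src` while inheriting the edge
    structure of `guide`; box means are taken over a (2*radius+1) window."""
    size = 2 * radius + 1
    mean = lambda a: uniform_filter(a, size=size)
    mg, ms = mean(guide), mean(src)
    cov = mean(guide * src) - mg * ms       # covariance of guide and source
    var = mean(guide * guide) - mg * mg     # variance of the guide
    a = cov / (var + eps)
    b = ms - a * mg
    return mean(a) * guide + mean(b)

def third_velocity_model(v1, v2, image1, image2):
    """Guided-filter V2's imaging result with V1's imaging result as the guide,
    take V2's residual against the filtered model, and superpose it on V1."""
    v_filtered = guided_filter(image1, image2)  # guide: imaging result of V1
    residual = v2 - v_filtered                  # residual of the second model
    return v1 + residual                        # third velocity model V3

# Sanity check: on constant images the filter is value-preserving.
img = np.ones((16, 16))
out = guided_filter(img, 2.0 * img)
print(np.allclose(out, 2.0))  # True
```

The residual carries the network's update while the guided filter suppresses any structure of the second model that is inconsistent with the first model's image, which is what makes the correction safe outside the training distribution.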
According to a second aspect of the present application, there is provided a full waveform inversion acceleration apparatus based on a depth network, comprising:
an obtaining module for obtaining first offset data of a first velocity model;
the input and output module is used for inputting the first velocity model and the first offset data into a pre-trained deep learning network and outputting a second velocity model, where the deep learning network is trained on iterative data and offset data: the iterative data is generated during full waveform inversion (FWI), the forward data is obtained by forward modeling the iterative data, and the offset data is obtained by migrating the forward data based on the iterative data;
and the inversion module is used for carrying out FWI on the basis of the second velocity model to obtain a target model.
Optionally, the obtaining module is further configured to obtain a training set comprising M sample velocity models, and to forward model the M sample velocity models respectively to obtain M sets of pre-stack seismic data; the inversion module is further configured to perform full waveform inversion on the M sets of pre-stack seismic data respectively to obtain M × N iterative velocity models, where N iterative velocity models are obtained when full waveform inversion is performed on any one set of pre-stack seismic data, and, for any iterative velocity model, to perform depth migration on its corresponding pre-stack seismic data to obtain the offset data of that iterative velocity model; the input and output module is further configured to take the M × N iterative velocity models and their offset data as the two-channel input of the deep network to be trained, and to train the deep network to obtain the deep learning network; where M and N are both natural numbers.
Optionally, the inversion module is further configured to perform N iterations in sequence when performing FWI on any one of the M pre-stack seismic data; in the N iterations, when the ith iteration is performed, model updating is performed by using the difference between any pre-stack seismic data and forward data corresponding to the iterative velocity model generated by the (i-1) th iteration to obtain the iterative velocity model generated by the ith iteration.
Optionally, the loss function of the deep network to be trained satisfies the following formula:
L = Σ_j Σ_i ‖ G(V_i^j(x, z), P_i^j(x, z)) − V_T^j(x, z) ‖₂²
where G represents the deep learning network (the deep network to be trained), V_i represents the iterative velocity model, P_i represents the offset data of the iterative velocity model, V_T represents the sample velocity model, j indexes the jth sample velocity model, x represents the surface space coordinate, and z represents the imaging depth.
Optionally, the input/output module is further configured to determine whether the difference between the first velocity model and the data of the training set is greater than a difference threshold; inputting the first velocity model and the first offset data into the pre-trained deep learning network comprises: when the difference is less than or equal to the difference threshold, inputting the first velocity model and the first offset data into the pre-trained deep learning network.
Optionally, the input/output module is further configured to, when the difference between the first velocity model and the data of the training set is greater than the difference threshold, take the imaging result of the first velocity model as the guide image and the imaging result of the second velocity model as the input image, and perform guided filtering to obtain a filtered velocity model; calculate the residual between the second velocity model and the filtered velocity model; and superpose the residual on the first velocity model to obtain a third velocity model; the inversion module is further configured to perform FWI on the third velocity model to obtain the target model.
According to a third aspect of the present application, there is provided an electronic device comprising:
at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the memory stored computer-executable instructions causes the at least one processor to perform the method for full waveform inversion acceleration based on a depth network as described above in the first aspect.
According to a fourth aspect of the present application, there is provided a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the full waveform inversion acceleration method based on a depth network according to the first aspect.
According to a fifth aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the full waveform inversion acceleration method based on a depth network according to the first aspect.
According to the full waveform inversion acceleration method based on the deep learning network and the related device, an iterative velocity model obtained by FWI iteration and its corresponding offset data are used as the input of the deep learning network to train it. When a velocity model and its offset data are input into the trained deep learning network, the network outputs a new velocity model closer to the true velocity model; using this new velocity model for FWI accelerates convergence of the subsequent FWI iterations and mitigates the divergence problem in the FWI iteration process.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic diagram of an application scenario related to an embodiment of the present application;
fig. 2 is a schematic flowchart of a full waveform inversion acceleration method based on a depth network according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of training a deep learning network according to an embodiment of the present disclosure;
FIG. 4 is a schematic process diagram of a guided filter migration-based algorithm according to an embodiment of the present application;
fig. 5 is a schematic process diagram of training a deep learning network according to an embodiment of the present application;
FIG. 6 is a diagram of a velocity model applied to certain FWI iterative convergence according to an embodiment of the present application;
FIG. 7 is a diagram of a velocity model applied to a FWI iteration that does not converge according to an embodiment of the present application;
fig. 8 is a schematic graph of a change of the PCC and MAE with iteration steps corresponding to the speed model provided in the embodiment of the present application;
fig. 9 is a schematic diagram of local PCC and MAE corresponding to a certain FWI iterative convergence speed model provided in an embodiment of the present application;
fig. 10 is a schematic diagram of local PCC and MAE corresponding to a certain FWI iteration unconverged speed model provided in the embodiment of the present application;
fig. 11 is a comparison graph of average PCC and MAE indexes of three deep network models provided in the embodiment of the present application;
FIG. 12 is a diagram illustrating the updated result of the initial velocity model of a certain FWI iterative converged velocity model provided by an embodiment of the present application;
fig. 13 is a schematic diagram of PCC and MAE corresponding to an initial velocity model update of a certain FWI iterative converged velocity model provided in an embodiment of the present application;
FIG. 14 is a diagram illustrating the updated result of the initial velocity model of a certain FWI iteration non-converged velocity model provided by an embodiment of the present application;
fig. 15 is a schematic diagram of PCC and MAE corresponding to an initial velocity model update of a certain FWI iteration unconverged velocity model provided in the embodiment of the present application;
FIG. 16 is a diagram illustrating the results of an embodiment of the present application applied to a Marmousi model;
FIG. 17 is a diagram illustrating the processing results of the guided filter migration algorithm applied to the Marmousi initial velocity model;
fig. 18 is a schematic diagram of PCC and MAE corresponding to FWI iteration after the application is applied to a Marmousi initial velocity model in the embodiment of the present application;
fig. 19 is a schematic structural diagram of a full waveform inversion accelerating device based on a depth network according to an embodiment of the present application;
fig. 20 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. The drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the concepts of the application by those skilled in the art with reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application.
With the development of computer technology and the improvement of computer performance, FWI has been successfully applied in practice to marine seismic data; in 2008, BP used FWI in the Valhall oil field in Norway to obtain high-precision imaging results. For land seismic data, however, practical application remains very difficult owing to problems such as the lack of low-frequency information.
Mora (1989) pointed out that FWI is equivalent to tomography plus depth migration: the tomography updates the background velocity, the depth migration repositions the reflectors, and finally the velocity model matches the reflection horizons on the seismic depth-migrated section. However, how to acquire seismic horizon information and integrate it seamlessly into the FWI process to guide the velocity update remains a challenging problem. At present, deep learning techniques are widely applied to seismic inversion and velocity modeling.
Das et al. first proposed in 2018 to learn the inverse mapping from seismic data to wave impedance using a two-layer Convolutional Neural Network (CNN), with the training data set generated by sequential Gaussian simulation. Alfarraj proposed a semi-supervised learning model that models the forward and inverse processes simultaneously, effectively increasing the number of training samples. Guo exploited the memory of recurrent networks and combined a CNN with a bidirectional LSTM for one-dimensional wave-impedance inversion; the lateral continuity of the inversion result is better than that of a 1D CNN, but a large amount of labeled data is needed and the ability to invert high-frequency information is weak. Ge combined geostatistics with a closed-loop network structure. Li et al. proposed using a Generative Adversarial Network (GAN) to generate a distribution space consistent with well-log data as training data for subsequent applications. Wang et al. performed velocity inversion on one-dimensional seismic data with a one-dimensional Cycle-GAN, exploiting the ability of closed-loop networks such as Cycle-GAN to model the forward and inverse processes simultaneously to expand the diversity of the training sample set; in subsequent work, Wang added a bilateral filtering constraint on top of the one-dimensional closed-loop network, further improving the inversion effect. Later, Q. Wang et al. used a 2D CNN for inversion and added geological constraints to the inversion process, improving lateral continuity. In velocity modeling, Han et al. used reflection waveform data combined with velocity spectra as the input of a fully convolutional neural network to map seismic data to a velocity model.
Mao et al. of Jilin University proposed a velocity-modeling convolutional neural network that takes self-excited, self-received seismic data as input and outputs the velocity distribution of the target region.
Applying deep learning to full waveform inversion is also a current research hotspot. Sun proposed modeling the inversion process with a Recurrent Neural Network (RNN), realizing unsupervised full waveform inversion, and Liu Sitong of Harbin Institute of Technology proved that automatic differentiation in deep learning is equivalent to the adjoint-state method commonly used in full waveform inversion. However, under an L2-norm objective functional, when an inaccurate background velocity causes the time difference between the observed and predicted signals to exceed half a cycle ("cycle skipping"), FWI is still prone to local minima, and this problem remains unsolved.
At present, the mainstream schemes for overcoming "cycle skipping" and helping FWI converge fall into three categories: objective-function optimization, model-space extension, and low-frequency signal reconstruction. These schemes are essentially error-functional estimation under prior-information constraints. In practical applications, however, the prior information used in these approaches may not match the complex and variable field conditions, and the inversion results are then prone to inaccuracy.
For ease of understanding, an application scenario of the embodiment of the present application is first described.
Fig. 1 is a schematic diagram of an application scenario related to an embodiment of the present application. As shown in fig. 1, an application scenario of the embodiment of the present application relates to a shot point 101, a wave detector 102, and a server 103.
A blast is set off at the shot point 101 to simulate an earthquake, the geophones 102 measure the actually observed seismic data, and the server performs full waveform inversion with the actually observed seismic data and an initial velocity model to obtain a subsurface velocity model. The initial velocity model is usually obtained by ray tomography, migration velocity analysis, or interpolation of acoustic logging data.
However, since FWI is a highly nonlinear and ill-posed problem, it may be affected by "cycle skipping" or by the initial velocity model, and many forward seismic-wave simulations are required in the iterative process. Mitigating divergence and improving computational efficiency in the FWI iteration process are therefore two major challenges of FWI.
On this basis, the embodiment of the application provides a full waveform inversion acceleration method based on a depth network, to address the divergence and low computational efficiency of the FWI iterative process. In the embodiment of the application, an iterative velocity model obtained by FWI iteration and its corresponding offset data are used as the input of a deep learning network to train the network. When a velocity model and its offset data are input into the trained deep learning network, the network outputs a new velocity model closer to the true velocity model; using this new velocity model for FWI accelerates convergence of the subsequent FWI iterations and mitigates the divergence problem in the FWI iteration process.
The following describes the technical solution of the present application and how to solve the above technical problem with specific examples. The following specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a schematic flowchart of a full waveform inversion acceleration method based on a depth network according to an embodiment of the present disclosure. As shown in fig. 2, the method according to the embodiment of the present application, in which the execution subject is a server, includes the following steps:
s201, first offset data of the first speed model are obtained.
In the embodiment of the application, the first velocity model may be any iterative velocity model in the FWI iteration process and may be denoted V_I(x, z), where x represents the surface space coordinate, z represents the imaging depth, and I represents the iteration step number.
The first offset data is the migration data corresponding to the first velocity model and may be denoted P_I(x, z). In the embodiment of the present application, the pre-stack seismic data may be migrated based on the first velocity model using a Depth Migration (DM) technique to obtain the first offset data. This process can be represented by the following formula:
P_I(x, z) = DM(V_I(x, z), S(x_s, x_r, t))    (1)
where P represents the offset data; S(x_s, x_r, t) represents the pre-stack seismic data, also called the actually observed seismic data, which in practice may be measured directly by the geophones; x_s represents the surface space coordinate of the source; x_r represents the surface space coordinate of the receiver; t represents time; and DM represents the depth migration operator. In this embodiment, the first offset data may be obtained using the Reverse Time Migration (RTM) technique among the depth migration techniques.
In a possible implementation, the server obtains the first offset data corresponding to the first velocity model using the depth migration technique.
S202, inputting the first velocity model and the first offset data into a pre-trained deep learning network, and outputting a second velocity model, where the deep learning network is trained on iterative data and offset data: the iterative data is generated during full waveform inversion (FWI), the forward data is obtained by forward modeling the iterative data, and the offset data is obtained by migrating the forward data based on the iterative data.
In the embodiment of the present application, the deep learning network may be a Generative Adversarial Network (GAN) or a Convolutional Neural Network (CNN). The second velocity model, obtained by inputting the first velocity model and the first offset data into the pre-trained deep learning network, may be denoted V̂_I(x, z). Compared with the first velocity model, the quality of the second velocity model is significantly improved, and it is closer to the true velocity model. Taking a GAN as the deep learning network, the process of obtaining the second velocity model can be represented by the following formula:
V̂_I(x, z) = G(V_I(x, z), P_I(x, z))    (2)
the deep learning network is obtained by training based on the iterative data and the offset data. The training process of the deep learning network can refer to the description in fig. 3, and is not described herein.
In a possible implementation, the server inputs the first velocity model and the first offset data into the pre-trained deep learning network, which outputs a second velocity model closer to the true velocity model.
S203, FWI is carried out based on the second speed model to obtain a target model.
In the embodiment of the application, the target model is the velocity model obtained through full waveform inversion. In practical applications, because the true velocity model of the subsurface structure is unknown, existing methods mainly obtain it, i.e., the target model, by the FWI method.
In a possible implementation, the server replaces the first velocity model with the second velocity model and performs the next FWI iterations based on the second velocity model to obtain the target model.
Thus, in the embodiment of the application, the first velocity model obtained during the FWI iteration and its corresponding first offset data are input into the pre-trained deep learning network to obtain a second velocity model closer to the true velocity model, and the second velocity model replaces the first velocity model in the subsequent FWI iterations; this accelerates convergence of the subsequent FWI iterations and mitigates the divergence problem in the FWI iteration process.
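The inference-time flow of steps S201–S203 can be sketched as below. The operators are illustrative stand-ins of my own: a real system would use reverse-time migration for S201, the trained GAN/CNN for S202, and a wave-equation FWI solver for S203.

```python
import numpy as np

def forward(v):
    return np.cumsum(v, axis=0)          # stand-in forward modeling operator

def depth_migrate(v, seismic):           # S201: P_I = DM(V_I, S)
    return seismic - forward(v)

def network(v, p):                       # S202: toy surrogate for the learned mapping
    return v + 0.05 * p

def fwi_step(v, seismic):                # one FWI model update
    return v + 0.1 * (seismic - forward(v))

def accelerate_fwi(v1, seismic, n_more=10):
    p1 = depth_migrate(v1, seismic)      # S201: first offset data
    v = network(v1, p1)                  # S202: second velocity model
    for _ in range(n_more):              # S203: resume FWI from the second model
        v = fwi_step(v, seismic)
    return v                             # target model

v_init = np.full((16, 16), 2.0)
observed = forward(np.full((16, 16), 2.5))   # "observed" data of a 2.5 km/s model
target = accelerate_fwi(v_init, observed)
print(target.shape)
```

The key design point is that the network call replaces many early FWI iterations, so the subsequent gradient iterations start from a model that is already close to the true one.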
On the basis of the above embodiments, the embodiment of the present application further provides a training method for a deep learning network. As shown in fig. 3, the method comprises the steps of:
s301, a training set is obtained, wherein the training set comprises M sample speed models.
In the embodiment of the application, the sample velocity models in the training set may be generated directly by the server. The training set comprises M sample velocity models for training the deep learning network. A velocity model, which may also be referred to as a true velocity model, describes the velocity parameters of the subsurface structure and may be denoted V_T^j(x, z), where the subscript T (for "truth") indicates the true velocity model and j indexes the jth velocity model.
S302, forward modeling is carried out on the M sample velocity models respectively to obtain M pre-stack seismic data.
In the embodiment of the present application, forward modeling means: given the velocity parameters of the subsurface structure, the seismic waveform is obtained by simulating wave propagation.
In one possible implementation, the M sample velocity models are each forward modeled in the server to obtain the corresponding M sets of pre-stack seismic data. For example, forward modeling the jth sample velocity model V_T^j(x, z) yields the corresponding pre-stack seismic data S_j(x_s, x_r, t).
S303, performing full waveform inversion on the M sets of pre-stack seismic data respectively to obtain M × N iterative velocity models; when full waveform inversion is performed on any one set of pre-stack seismic data, N iterative velocity models are obtained. M and N are both natural numbers.
In the N iterations, the ith iteration updates the model using the difference between the pre-stack seismic data and the forward data corresponding to the iterative velocity model generated by the (i−1)th iteration, yielding the iterative velocity model of the ith iteration.
In the embodiment of the present application, the iterative velocity model refers to: any velocity model obtained during the iterative process when full waveform inversion is performed on the pre-stack seismic data obtained by forward modeling of a velocity model in the training set. The iterative velocity model can be denoted V_j^i(x, z). Each iteration yields one iterative velocity model, so N iterations yield N iterative velocity models.
In the FWI iteration process, the ith FWI iteration updates the model using the difference between any pre-stack seismic data and the forward data corresponding to the iterative velocity model generated by the (i-1)th iteration. Taking the jth velocity model V_j^T(x, z) as an example, its corresponding pre-stack seismic data is S_j(x_s, x_r, t); full waveform inversion using S_j(x_s, x_r, t) yields V_j^i(x, z), where V_j^i(x, z) is obtained from V_j^{i-1}(x, z) and S_j(x_s, x_r, t). This process can be represented by the following formula:

V_j^i(x, z) = FWI(V_j^{i-1}(x, z), S_j(x_s, x_r, t))

wherein FWI is the full waveform inversion operator.

It is understood that when i = 1, the forward data used is that corresponding to V_j^0(x, z), where V_j^0(x, z) is the initial velocity model.
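The iterative update described above can be sketched with a toy example. The snippet below is a minimal sketch only: a hypothetical linear operator A stands in for wave-equation forward modeling, so just the "update the model from the data residual" structure of the FWI iteration is visible; the operator, array sizes, and step size are all illustrative assumptions, not the patent's actual solver.

```python
import numpy as np

# Toy sketch of V^i = FWI(V^{i-1}, S): a hypothetical linear operator A
# replaces wave-equation forward modelling (assumption for illustration).
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 30))           # stand-in forward operator
m_true = rng.normal(size=30)            # stand-in "true velocity model" V^T
d_obs = A @ m_true                      # stand-in pre-stack data S_j

m = np.zeros(30)                        # initial model V^0
step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe gradient step size
for i in range(1, 301):                 # N = 300 iterations, as in S503
    residual = A @ m - d_obs            # forward data of V^{i-1} minus observed data
    m = m - step * (A.T @ residual)     # gradient step produces V^i

final_misfit = np.linalg.norm(A @ m - d_obs)
```

Each pass of the loop plays the role of one FWI iteration; storing `m.copy()` every 10 passes would mimic how the iterative velocity models are harvested for the training set.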
S304, for any iterative velocity model, depth migration is performed on the pre-stack seismic data corresponding to that iterative velocity model to obtain the migration data of that iterative velocity model.
The detailed description of the depth migration method may refer to the description of step S201. For example, for the ith iterative velocity model of FWI, V_j^i(x, z), the corresponding pre-stack seismic data S_j(x_s, x_r, t) is migrated to obtain the migration data P_j^i(x, z).
S305, the M × N iterative velocity models and the migration data of the M × N iterative velocity models are used as the dual-channel input of the deep network to be trained, and the deep network to be trained is trained to obtain the deep learning network.
Wherein, the loss function of the deep network to be trained satisfies the following formula:

L = || G(V_j^i(x, z), P_j^i(x, z)) - V_j^T(x, z) ||_1

wherein G represents the deep network being trained.
In one possible implementation, the M × N iterative velocity models and their migration data are used as the dual-channel input of the deep network to be trained, and the deep network is trained. For example, in the case where the deep learning network is a GAN, data pairs (V_j^i(x, z), P_j^i(x, z)) are constructed as the dual-channel input of the deep network, and the deep learning network GAN is trained so that its output approaches the corresponding true velocity model V_j^T(x, z). In the training process, an L1 loss function can be used to measure the difference between the output of the deep learning network and V_j^T(x, z), and the parameters of the deep learning network are adjusted accordingly to obtain the trained deep learning network.

Therefore, the trained deep learning network can output, from the input data, a target model that satisfies a preset relation, namely an approximation relation, with the real velocity model.
On the basis of the above embodiment, the embodiment of the present application further provides a transfer learning method based on guided filtering. As shown in fig. 4, before FWI is performed based on the second velocity model to obtain the target model, the method includes the following steps:
S401, the server judges whether the difference between the first velocity model and the data of the training set is larger than a difference threshold.
In the embodiment of the present application, a difference between the first velocity model and the data of the training set may mean that the two velocity structures differ significantly or that the velocity ranges differ significantly.
S402, inputting the first velocity model and the first offset data into the pre-trained deep learning network comprises: when the difference between the first velocity model and the data of the training set is less than or equal to the difference threshold, the first velocity model and the first offset data are input into the pre-trained deep learning network.
In one possible implementation, when the server determines that the difference between the first velocity model and the data of the training set is less than or equal to the difference threshold, the first velocity model and the first offset data are input into the pre-trained deep learning network to obtain the second velocity model. The target model may be obtained by performing FWI based on the second velocity model.
S403, when the difference between the first velocity model and the data of the training set is greater than the difference threshold, the imaging result of the first velocity model is used as the guide image and the imaging result of the second velocity model as the input image, and guided filtering is performed to obtain a filtered velocity model. The residual between the second velocity model and the filtered velocity model is calculated. The residual is superposed on the first velocity model to obtain a third velocity model. FWI is then performed on the third velocity model to obtain the target model.
In one possible implementation, V_new(x, z) denotes a first velocity model whose difference from the training-set data exceeds the difference threshold, and the corresponding migration data may be denoted P_new(x, z). V_new(x, z) and P_new(x, z) are input into the pre-trained deep learning network to obtain the second velocity model V̂_new(x, z). With V_new(x, z) as the guide image and the velocity imaging result V̂_new(x, z) as the input image, guided filtering is performed to obtain V_Guided(x, z). The residual between V̂_new(x, z) and V_Guided(x, z) is computed, used as horizon information, and added to V_new(x, z) to obtain Ṽ_new(x, z); Ṽ_new(x, z) then replaces V_new(x, z) in subsequent FWI iterations.

This process can be represented by the following formulas:

V_Guided(x, z) = GF(V_new(x, z), V̂_new(x, z))
V_res(x, z) = V̂_new(x, z) - V_Guided(x, z)
Ṽ_new(x, z) = V_new(x, z) + V_res(x, z)

where GF represents the guided filtering operator. The filter kernel diameter of the guided filtering may be 11 and the regularization parameter ε may be set to 0.01.
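The guided-filtering transfer step can be sketched as follows, using the stated parameters (kernel diameter 11, i.e. radius 5, and ε = 0.01). The guided filter here follows the standard single-channel formulation; the `box_mean` helper, the synthetic arrays, and their sizes are illustrative assumptions rather than the patent's actual implementation.

```python
import numpy as np

def box_mean(img, r):
    """Mean filter with a (2r+1) x (2r+1) window, edge-padded (separable)."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    def smooth(axis, a):
        pad = [(0, 0), (0, 0)]
        pad[axis] = (r, r)
        a = np.pad(a, pad, mode='edge')
        return np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), axis, a)
    return smooth(1, smooth(0, img))

def guided_filter(guide, src, r=5, eps=0.01):
    """Gray-scale guided filter GF(guide, src); kernel diameter 2r+1 = 11."""
    mI, mp = box_mean(guide, r), box_mean(src, r)
    var_I = box_mean(guide * guide, r) - mI * mI
    cov_Ip = box_mean(guide * src, r) - mI * mp
    a = cov_Ip / (var_I + eps)
    b = mp - a * mI
    return box_mean(a, r) * guide + box_mean(b, r)

# Transfer step of S403 on synthetic stand-ins for the velocity models.
rng = np.random.default_rng(1)
V_new = rng.normal(3.0, 0.5, size=(64, 64))          # first velocity model (guide image)
V_hat = V_new + rng.normal(0.0, 0.1, size=(64, 64))  # network output (input image)

V_guided = guided_filter(V_new, V_hat, r=5, eps=0.01)  # V_Guided = GF(V_new, V_hat)
V_res = V_hat - V_guided                               # residual (horizon information)
V_third = V_new + V_res                                # third velocity model
```

The residual keeps only the detail of the network output that the guide image cannot explain, which is why it acts as horizon information when added back to the first velocity model.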
Therefore, when the difference between the first velocity model and the velocity models of the training set is larger than the difference threshold, processing with the guided-filtering transfer algorithm can mitigate the inaccurate output of the deep learning network caused by the excessive difference between the two velocity models, and can thus improve the subsequent FWI iteration process.
On the basis of the above embodiments, the embodiment of the present application further provides a more specific full waveform inversion acceleration method based on a depth network, which includes the following steps:
S501, the server randomly generates 190 velocity models V_j^T(x, z).
S502, for each velocity model V_j^T(x, z), shot data is generated using a Ricker wavelet with a dominant frequency of 20 Hz, with lateral and longitudinal sample intervals of dx = dz = 10.0 m, 788 lateral and 266 longitudinal sample points, and shots starting at coordinate 6 with an interval of 8, for a total of 98 shots.
Among them, the Ricker wavelet is one of the commonly used seismic wavelets, a short impulse-like pulse. The shot data refers to the actually observed seismic data; in practical applications, the shot data is the data acquired by the detectors during blasting, and the reflected data is the stratum data at the coordinates corresponding to the detectors and the shot points. The shot point refers to the location of the shot.
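The 20 Hz source wavelet used in S502 can be generated directly from the standard Ricker formula; the time axis and sampling interval below are illustrative assumptions, as S502 does not state them.

```python
import numpy as np

def ricker(f0, t):
    """Ricker wavelet with dominant frequency f0 (Hz) at times t (s):
    w(t) = (1 - 2*(pi*f0*t)^2) * exp(-(pi*f0*t)^2)."""
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 0.001                             # assumed 1 ms sampling interval
t = np.arange(-0.1, 0.1 + dt / 2, dt)  # short, zero-centred pulse
w = ricker(20.0, t)                    # 20 Hz dominant frequency, as in S502
```

The wavelet peaks at 1 at t = 0 and is symmetric, which is why it is described above as a short impulse.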
S503, for each velocity model V_j^T(x, z), 300 FWI iterations are performed, with the corresponding one-dimensional initial velocity model in the FWI iteration process being V_j^0(x, z). Every 10 iterations, reverse time migration is performed according to the obtained velocity model V_j^i(x, z) to obtain the migrated seismic data P_j^i(x, z).

Wherein, the data pairs (V_j^i(x, z), P_j^i(x, z)) corresponding to the first 150 velocity models are selected as the training set, the data corresponding to 20 velocity models as the validation set, and the data corresponding to the remaining 20 velocity models as the test set. All velocity models and RTM seismic data are of size 266 × 788.
S504, the data pairs (V_j^i(x, z), P_j^i(x, z)) are used as the dual-channel input of the deep neural network, and the deep neural network is trained; the process of training the deep neural network may be as shown in fig. 5.
Wherein, the deep neural network may use a Pix2Pix network model. The Pix2Pix network model is a generative adversarial network model having a generator (G) and a discriminator (D), and the Pix2Pix network model in the embodiment of the present application uses a U-Net network structure.
In one possible implementation, the velocity model V_j^i(x, z) and the corresponding depth-migrated seismic data P_j^i(x, z) are normalized separately and then merged into the two-channel input. Since onshore geological velocities typically range between 1000 m/s and 8000 m/s while the amplitudes of the depth-migrated seismic data are typically between -0.2 and 0.2, this large difference in scale may make the model values unstable. The velocity model V_j^i(x, z) and the corresponding depth-migrated seismic data P_j^i(x, z) therefore need to be normalized separately.
The normalization process is as follows: V_j^i(x, z) is de-meaned and divided by its standard deviation, transforming it into data with a mean of 0 and a variance of 1; P_j^i(x, z) is linearly transformed into the range (-1, 1), for example according to the min-max formula:

P̃_j^i(x, z) = 2 · (P_j^i(x, z) - min P_j^i) / (max P_j^i - min P_j^i) - 1
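The two normalizations and the two-channel merge can be sketched as follows. The min-max form of the linear map to (-1, 1) is an assumption (the patent image of the formula is not recoverable), and the synthetic arrays stand in for a real velocity model and RTM image of the stated 266 × 788 size.

```python
import numpy as np

rng = np.random.default_rng(2)
V = rng.uniform(1000.0, 8000.0, size=(266, 788))  # velocity model, m/s
P = rng.uniform(-0.2, 0.2, size=(266, 788))       # depth-migrated (RTM) image

# Velocity: remove the mean and divide by the standard deviation.
V_std = (V - V.mean()) / V.std()

# Seismic image: linear map to [-1, 1] (min-max form assumed here).
P_std = 2.0 * (P - P.min()) / (P.max() - P.min()) - 1.0

# Merge into the two-channel network input of shape (2, H, W).
x = np.stack([V_std, P_std], axis=0)
```

Putting both channels on comparable scales is what prevents the ~1000x amplitude gap between velocities and image amplitudes from destabilizing training.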
S505, the generator G outputs the updated velocity image V̂_j^i(x, z), which is passed to the discriminator D to judge the degree of similarity between the generated velocity model and the true velocity model V_j^T(x, z).
Consistent with the standard Pix2Pix objective, the training can be written as:

V̂_j^i(x, z) = G(V_j^i(x, z), P_j^i(x, z), n)
L_cGAN(G, D) = E[log D(V_j^i, P_j^i, V_j^T)] + E[log(1 - D(V_j^i, P_j^i, V̂_j^i))]
L_L1(G) = E[|| V_j^T - V̂_j^i ||_1]
G* = arg min_G max_D L_cGAN(G, D) + λ L_L1(G)

wherein n represents noise, realized by a Dropout layer to avoid model overfitting, and G* represents the overall GAN objective.
In the embodiment of the application, when the deep learning network is trained, the input data are randomly cropped into 256 × 256 regions, the initial learning rate is 0.0002, and 200 epochs are trained using an Adam optimizer.
The Pix2Pix model using the U-Net network structure is based on a fully convolutional network and can therefore be applied directly to updating a velocity model of any size; when the pre-trained deep learning network is used, the first velocity model and the corresponding RTM seismic data are input directly into the network to obtain the second velocity model.
When the quality of the velocity model output by the pre-trained deep learning network is checked, it may be evaluated using the Pearson Correlation Coefficient (PCC) and the Mean Absolute Error (MAE).
Wherein, PCC is used to measure the similarity between two variables. For two images X and Y of height H and width W, the PCC index can be calculated as follows:

PCC(X, Y) = Σ_{h,w} (X_{h,w} - X̄)(Y_{h,w} - Ȳ) / sqrt( Σ_{h,w} (X_{h,w} - X̄)² · Σ_{h,w} (Y_{h,w} - Ȳ)² )

where X̄ and Ȳ are the means of X and Y.
MAE is the mean of the absolute errors and is calculated as follows:

MAE(X, Y) = (1 / (H · W)) Σ_{h=1}^{H} Σ_{w=1}^{W} | X_{h,w} - Y_{h,w} |
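The two quality metrics can be computed directly; the sketch below mirrors the PCC and MAE formulas above for two images (the test arrays are illustrative).

```python
import numpy as np

def pcc(X, Y):
    """Pearson correlation coefficient between two images."""
    dx, dy = X - X.mean(), Y - Y.mean()
    return float((dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum()))

def mae(X, Y):
    """Mean absolute error between two images."""
    return float(np.abs(X - Y).mean())

rng = np.random.default_rng(3)
X = rng.normal(size=(266, 788))                     # stand-in velocity model
Y = 0.5 * X + rng.normal(scale=0.1, size=X.shape)   # correlated stand-in output
```

PCC is scale-invariant (it only measures linear similarity), while MAE is scale-sensitive, which is why the document reports both.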
the embodiment of the application can be realized based on Pythroch, and can be deployed and operated on Intel (R) Core (TM) i9-10900X CPU @3.70GHz,64GB, RTX 2080Ti machine.
FIG. 6 is a schematic diagram of applying an embodiment of the present application to a velocity model V_c(x, z) for which the FWI iteration converges, where c denotes the velocity model number. FIG. 6 (a) is the velocity model V_c^50(x, z) obtained at the 50th FWI iteration; FIG. 6 (b) is the RTM migrated seismic data P_c^50(x, z) obtained using (a); FIG. 6 (c) is the updated velocity model V̂_c^50(x, z) obtained using V_c^50(x, z) and P_c^50(x, z); FIG. 6 (d) is the true velocity model V_c^T(x, z); FIG. 6 (e) shows the MAE and PCC of V_c^50(x, z) and V̂_c^50(x, z).
As can be seen from fig. 6, in the case where the FWI iteration converges, the method provided in the embodiment of the present application can input the velocity model obtained by the FWI iteration and the corresponding migration data into the pre-trained deep learning network to obtain an updated velocity model that approximates the true velocity model more closely; this updated model can be used in subsequent FWI iterations to improve the FWI output.
FIG. 7 is a schematic diagram of applying an embodiment of the present application to a velocity model V_n(x, z) for which the FWI iteration does not converge, where n denotes the velocity model number. FIG. 7 (a) is the velocity model V_n^50(x, z) obtained at the 50th FWI iteration; FIG. 7 (b) is the RTM imaged seismic data P_n^50(x, z) obtained using (a); FIG. 7 (c) is the updated velocity model V̂_n^50(x, z) obtained using V_n^50(x, z) and P_n^50(x, z); FIG. 7 (d) is the true velocity model V_n^T(x, z); FIG. 7 (e) shows the MAE and PCC of V_n^50(x, z) and V̂_n^50(x, z).
As can be seen from fig. 7, even in the case where the FWI iteration does not converge, the method provided in the embodiment of the present application can input the velocity model obtained by the FWI iteration and the corresponding migration data into the pre-trained deep learning network to obtain an updated velocity model that approximates the true velocity model more closely, which can be used in subsequent FWI iterations to improve the FWI output.
Fig. 8 is a schematic graph of the change of PCC and MAE with the number of iteration steps for the velocity models provided in the embodiment of the present application. For the velocity models V_c(x, z) and V_n(x, z), 300 rounds of FWI iteration are performed using one-dimensional initial velocity models; starting from the 10th round and every 10 rounds thereafter, the obtained velocity models V_c^i(x, z) and V_n^i(x, z) are updated using the method provided in the embodiment of the present application to obtain V̂_c^i(x, z) and V̂_n^i(x, z), where the iteration round number i = 10, 20, ..., 290, 300.
FIG. 8 (a) shows the PCC and MAE of V_c^i(x, z) and V̂_c^i(x, z) as functions of the iteration number i; FIG. 8 (b) shows the PCC and MAE of V_n^i(x, z) and V̂_n^i(x, z) as functions of the iteration number i. It can be seen from fig. 8 (a) that when the FWI iteration converges, the method provided in the embodiment of the present application can improve the quality of the velocity model output by the FWI, and subsequent FWI iterations can improve the FWI output. As can be seen from fig. 8 (b), when the FWI iteration diverges, the method provided in the embodiment of the present application can correct the divergent velocity model result, which is beneficial for subsequent FWI iterations.
Fig. 9 is a schematic diagram of the local PCC and MAE corresponding to a velocity model for which the FWI iteration converges, as provided in an embodiment of the present application. In fig. 9, 5 × 5 sliding windows are used to compute the local PCC and MAE indices of the velocity models V_c^50(x, z) and V̂_c^50(x, z), respectively. FIG. 9 (a) is the velocity model V_c^50(x, z); FIG. 9 (b) is its local PCC index; FIG. 9 (c) is its local MAE index; FIG. 9 (d) is the velocity model V̂_c^50(x, z); FIG. 9 (e) is its local PCC index; FIG. 9 (f) is its local MAE index. As can be seen from fig. 9, the method provided by the embodiment of the present application can effectively improve locally occurring error portions in the velocity model obtained by the FWI iteration, and can effectively correct obviously inaccurate regions in the deep part of the model.
Fig. 10 is a schematic diagram of the local PCC and MAE corresponding to a velocity model for which the FWI iteration does not converge, as provided in an embodiment of the present application. In fig. 10, 5 × 5 sliding windows are used to compute the local PCC and MAE indices of the velocity models V_n^50(x, z) and V̂_n^50(x, z), respectively. FIG. 10 (a) is the velocity model V_n^50(x, z); FIG. 10 (b) is its local PCC index; FIG. 10 (c) is its local MAE index; FIG. 10 (d) is the velocity model V̂_n^50(x, z); FIG. 10 (e) is its local PCC index; FIG. 10 (f) is its local MAE index. As can be seen from fig. 10, in the case where the FWI iteration diverges, the method provided in the embodiment of the present application can effectively correct large-area error portions occurring in the velocity model.
Fig. 11 is a comparison graph of the average PCC and MAE indices of three deep network models provided in the embodiment of the present application: Pix2Pix, U-Net, and M-RUDSR. Fig. 11 compares the average PCC and MAE indices of the three deep network models on the test set provided in step S503; it can be seen from fig. 11 that the Pix2Pix model achieves a comparatively better result.
FIG. 12 is a schematic diagram of the result of updating the one-dimensional initial velocity model V_c^0(x, z) of a velocity model V_c(x, z) for which the FWI iteration converges, as provided by an embodiment of the present application. FIG. 12 (a) is the one-dimensional initial velocity model V_c^0(x, z); FIG. 12 (b) is the RTM seismic data imaging result obtained using V_c^0(x, z); FIG. 12 (c) is the true velocity model V_c^T(x, z); FIG. 12 (d) is the updated velocity model V̂_c^0(x, z); FIG. 12 (e) is the PCC and MAE index comparison before and after updating. As can be seen from fig. 12, the PCC and MAE of the initial velocity model updated by the method provided in the embodiment of the present application are greatly improved, which is beneficial to subsequent FWI iterations.
Fig. 13 is a schematic diagram of the PCC and MAE corresponding to the initial velocity model update of a velocity model for which the FWI iteration converges, as provided in the embodiment of the present application. Using the updated velocity model V̂_c^0(x, z) shown in fig. 12 and the original one-dimensional velocity model V_c^0(x, z) as initial velocities, FWI iterations are performed respectively, and the PCC and MAE indices of the velocity model obtained at each step are compared. As can be seen from fig. 13, compared with FWI using the original one-dimensional velocity model, FWI using the initial velocity model updated by the method provided in the embodiment of the present application converges faster and achieves a better result.
FIG. 14 is a schematic diagram of the result of updating the one-dimensional initial velocity model V_n^0(x, z) of a velocity model V_n(x, z) for which the FWI iteration does not converge, as provided by the embodiments of the present application. FIG. 14 (a) is the one-dimensional initial velocity model V_n^0(x, z); FIG. 14 (b) is the RTM seismic data imaging result obtained using V_n^0(x, z); FIG. 14 (c) is the true velocity model V_n^T(x, z); FIG. 14 (d) is the updated velocity model V̂_n^0(x, z); FIG. 14 (e) is the PCC and MAE index comparison before and after updating. As can be seen from fig. 14, the quality of the initial velocity model updated by the method provided in the embodiment of the present application is improved, which is beneficial to subsequent FWI iterations.
Fig. 15 is a schematic diagram of the PCC and MAE corresponding to the initial velocity model update of a velocity model for which the FWI iteration does not converge, as provided in the embodiment of the present application. FWI is performed using the updated velocity model V̂_n^0(x, z) shown in fig. 14 as the initial velocity. FIG. 15 (a) compares the PCC and MAE indices of the velocity model obtained at each step when FWI iterations are performed with V_n^0(x, z) and V̂_n^0(x, z) as initial velocities, respectively. FIG. 15 (b) shows the PCC and MAE index results obtained at each step when FWI iteration is performed with V̂_n^0(x, z) as the initial velocity, the velocity model obtained at step 10 is updated again using the method provided by the embodiment of the present application, and the FWI iteration then continues. As can be seen from fig. 15 (a), in the case where FWI performed with V_n^0(x, z) does not converge, performing FWI with the initial velocity updated by the method provided in the embodiment of the present application can effectively correct the divergent iterative process. As can be seen from fig. 15 (b), when V̂_n^0(x, z) is used, reusing the velocity model updated by the method provided by the embodiment of the present application after a certain number of steps can further improve the subsequent FWI iteration and adjust the convergence direction.
Fig. 16 is a schematic diagram of the result of applying the embodiment to the Marmousi model. FIG. 16 (a) is the Marmousi initial velocity model V_marm(x, z); FIG. 16 (b) is the RTM seismic data P_marm(x, z) obtained using V_marm(x, z); FIG. 16 (c) is the Marmousi true velocity model V_marm^T(x, z); FIG. 16 (d) is the velocity model V̂_marm(x, z) obtained by directly using the network update; FIG. 16 (e) is the residual V_res(x, z) between V̂_marm(x, z) and the guided filtering result; FIG. 16 (f) is the result Ṽ_marm(x, z) obtained using the guided-filtering transfer algorithm.
Fig. 17 is a schematic diagram of the processing result of applying the guided-filtering transfer algorithm to the Marmousi initial velocity model.
As can be seen from fig. 16 and 17, the PCC and MAE indices of the velocity model obtained by the guided-filtering transfer algorithm are significantly improved compared with the velocity model obtained by directly applying the deep learning network update. Although the PCC and MAE indices of the velocity model obtained by the guided-filtering transfer algorithm are not significantly improved compared with the initial velocity, it can be seen from (f) of fig. 16 that this velocity model carries richer horizon structure information, which is beneficial to subsequent FWI iterations.
Fig. 18 is a schematic diagram of the PCC and MAE corresponding to the FWI iterations after applying the embodiment to the Marmousi initial velocity model. The dominant frequency of the FWI used here is 10 Hz. FIG. 18 (a) shows the curves of PCC and MAE versus iteration step when FWI iterations are performed with V_marm(x, z) and Ṽ_marm(x, z) as initial velocities, respectively. FIG. 18 (b) shows the PCC and MAE of the subsequent FWI iterations when 5 rounds of FWI iteration are performed with Ṽ_marm(x, z) as the initial velocity, the result is then updated again using the deep learning network and the transfer algorithm, and the FWI iteration continues. As can be seen from fig. 18 (a), in the case where FWI iterations with the initial velocity model V_marm(x, z) do not converge, performing FWI iterations with the updated result Ṽ_marm(x, z) can effectively alleviate the divergence of the FWI iteration. As can be seen from fig. 18 (b), performing FWI iterations with Ṽ_marm(x, z) as the initial velocity model and reusing the velocity model updated by the method provided by the embodiment of the present application after a small number of iteration steps can further improve the subsequent FWI iteration.
Fig. 19 is a schematic structural diagram of a full waveform inversion acceleration apparatus 500 based on a depth network according to an embodiment of the present application. The apparatus of the embodiments of the present application may be in the form of software and/or hardware. As shown in fig. 19, an embodiment of the present application provides a full waveform inversion acceleration apparatus based on a depth network, including: an acquisition module 501, an input-output module 502 and an inversion module 503. Wherein:
an obtaining module 501 is configured to obtain first offset data of a first velocity model.
An input/output module 502, configured to input the first velocity model and the first offset data into the pre-trained deep learning network and output the second velocity model. The deep learning network is trained based on iterative data and offset data: the iterative data is generated when full waveform inversion (FWI) is performed, the forward data is obtained by performing forward modeling on the iterative data, and the offset data is obtained by migrating the forward data based on the iterative data.
An inversion module 503, configured to perform FWI based on the second velocity model to obtain a target model.
Optionally, the obtaining module 501 is further configured to obtain a training set, where the training set includes M sample velocity models, and to perform forward modeling on the M sample velocity models respectively to obtain M pre-stack seismic data. The inversion module 503 is further configured to perform full waveform inversion on the M pre-stack seismic data respectively to obtain M × N iterative velocity models, where N iterative velocity models are obtained when full waveform inversion is performed on any pre-stack seismic data, and, for any iterative velocity model, to perform depth migration on the pre-stack seismic data corresponding to that iterative velocity model to obtain the migration data of that iterative velocity model. The input/output module 502 is further configured to train the deep network to be trained by using the M × N iterative velocity models and their migration data as the dual-channel input of the deep network to be trained, so as to obtain the deep learning network. Wherein M and N are both natural numbers.
Optionally, the inversion module 503 is further configured to perform N iterations in sequence when performing FWI on any pre-stack seismic data of the M pre-stack seismic data; in N iterations, when the ith iteration is performed, model updating is performed by using the difference between any pre-stack seismic data and forward data corresponding to the iterative velocity model generated by the (i-1) th iteration to obtain the iterative velocity model generated by the ith iteration.
Optionally, the loss function of the deep network to be trained satisfies the following formula:

L = || G(V_i(x, z), P_i(x, z)) - V_j^T(x, z) ||_1

wherein G represents the deep learning network obtained from the deep network to be trained, V_i represents the iterative velocity model, P_i represents the migration data of the iterative velocity model, V^T represents the sample velocity model, j represents the jth sample velocity model, x represents the surface space coordinate, and z represents the imaging depth.
Optionally, the input/output module 502 is further configured to determine whether the difference between the first velocity model and the data of the training set is greater than a difference threshold; inputting the first velocity model and the first offset data into the pre-trained deep learning network comprises: when the difference between the first velocity model and the data of the training set is less than or equal to the difference threshold, inputting the first velocity model and the first offset data into the pre-trained deep learning network.
Optionally, the input/output module 502 is further configured to, when the difference between the first velocity model and the data of the training set is greater than the difference threshold, take the imaging result of the first velocity model as the guide image and the imaging result of the second velocity model as the input image, and perform guided filtering to obtain a filtered velocity model; calculate the residual between the second velocity model and the filtered velocity model; and superpose the residual on the first velocity model to obtain a third velocity model. The inversion module 503 is further configured to perform FWI on the third velocity model to obtain the target model.
The full waveform inversion accelerating device based on the depth network provided in the embodiment of the present application can be used for executing the full waveform inversion accelerating method based on the depth network provided in any method embodiment, and the implementation principle and the technical effect are similar, which are not described herein again.
In the technical scheme of the application, the collection, storage, use, processing, transmission, provision, disclosure and other processing of the personal information of the related user all conform to the regulations of related laws and regulations and do not violate the good custom of the public order.
Fig. 20 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 20, the electronic device 600 may include: at least one processor 601 and memory 602.
The memory 602 is used for storing programs. In particular, the program may include program code comprising computer operating instructions.
The memory 602 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The processor 601 is configured to execute the computer-executable instructions stored in the memory 602 to implement the depth-network-based full waveform inversion acceleration method described in the foregoing method embodiments. The processor 601 may be a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application. In a specific implementation, the electronic device may be, for example, a device with a processing function, such as a terminal or a server.
Optionally, the electronic device 600 may further include a communication interface 603. In a specific implementation, if the communication interface 603, the memory 602, and the processor 601 are implemented independently, they may be connected to one another through a bus and communicate with one another. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on, but this does not mean that there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the communication interface 603, the memory 602, and the processor 601 are integrated into a chip, the communication interface 603, the memory 602, and the processor 601 may complete communication through an internal interface.
The embodiments of the present application further provide a computer-readable storage medium storing computer instructions; when a processor executes the computer instructions, the steps of the method in the foregoing embodiments are implemented.
The embodiments of the present application further provide a computer program product including computer instructions; when executed by a processor, the computer instructions implement the steps of the method in the foregoing embodiments.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the disclosure of the present application may be executed in parallel, may be executed sequentially, or may be executed in different orders, as long as the desired result of the technical solution disclosed in the present application can be achieved, which is not limited herein.
The above-described embodiments are not intended to limit the scope of the present disclosure. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions are possible, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A full waveform inversion acceleration method based on a depth network, the method comprising:
acquiring first offset data of a first velocity model;
inputting the first velocity model and the first offset data into a pre-trained deep learning network, and outputting a second velocity model; wherein the deep learning network is trained on iteration velocity models and the offset data of the iteration velocity models: the iteration velocity models are generated when full waveform inversion (FWI) is performed on forward data, the forward data are obtained by forward modeling sample velocity models, and the offset data are obtained by migrating the forward data based on the iteration velocity models;
and performing FWI based on the second velocity model to obtain a target model.
2. The method of claim 1, wherein prior to obtaining the first offset data for the first velocity model, the method further comprises:
acquiring a training set, wherein the training set comprises M sample speed models;
carrying out forward modeling on the M sample velocity models respectively to obtain M pre-stack seismic data;
performing full waveform inversion on the M pre-stack seismic data to obtain M × N iteration velocity models, wherein N iteration velocity models are obtained when full waveform inversion is performed on any one of the pre-stack seismic data;
for any iteration velocity model, performing depth migration on the pre-stack seismic data corresponding to the iteration velocity model to obtain the offset data of the iteration velocity model;
and taking the M × N iteration velocity models and the offset data of the M × N iteration velocity models as the two-channel input of a deep network to be trained, and training the deep network to be trained to obtain the deep learning network; wherein M and N are both natural numbers.
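The two-channel construction of claim 2 pairs each iteration velocity model with its offset (migration) data as network input. A minimal Python/NumPy sketch of that pairing (function names, array shapes, and the toy values are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def build_training_inputs(iter_models, migration_images):
    """Stack each iteration velocity model with its migration image into a
    two-channel sample, mirroring the M*N input pairs described in claim 2."""
    assert len(iter_models) == len(migration_images)
    samples = [np.stack([v, p], axis=0)  # channel 0: velocity, channel 1: migration
               for v, p in zip(iter_models, migration_images)]
    return np.array(samples)             # shape: (M*N, 2, nz, nx)

# Toy example: three model/image pairs on a 4 x 5 grid.
models = [np.full((4, 5), 1500.0 + 100.0 * k) for k in range(3)]
images = [np.random.rand(4, 5) for _ in range(3)]
batch = build_training_inputs(models, images)  # batch.shape == (3, 2, 4, 5)
```

The training target for each two-channel sample is the corresponding sample velocity model, so the network learns to map (iteration model, migration image) pairs toward the true model.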
3. The method according to claim 2, wherein when FWI is performed on any one of the M pre-stack seismic data, N iterations are performed in sequence;
in the ith of the N iterations, the model is updated using the difference between the pre-stack seismic data and the forward data corresponding to the iteration velocity model generated by the (i-1)th iteration, to obtain the iteration velocity model generated by the ith iteration.
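The iteration described in claim 3 is the standard misfit-driven FWI update: the residual between observed data and the forward data of the previous iterate drives the model update. A toy sketch in which a linear operator stands in for wave-equation forward modeling (the `forward`/`adjoint` callables, step size, and iteration count are illustrative assumptions):

```python
import numpy as np

def toy_fwi(v0, d_obs, forward, adjoint, n_iter=500, step=0.2):
    """Gradient-descent FWI loop: at iteration i the model is updated with
    the residual between the observed data and the forward data of the
    (i-1)th iterate, as in claim 3."""
    v = v0.copy()
    history = [v.copy()]
    for _ in range(n_iter):
        residual = forward(v) - d_obs      # data misfit of the previous iterate
        v = v - step * adjoint(residual)   # model update for this iteration
        history.append(v.copy())
    return v, history

# Toy linear "physics": d = A v, with A^T as the adjoint.
rng = np.random.default_rng(0)
A = 0.3 * rng.standard_normal((8, 4))
v_true = np.array([1.0, 2.0, 3.0, 4.0])
d_obs = A @ v_true
v_est, hist = toy_fwi(np.zeros(4), d_obs,
                      forward=lambda v: A @ v,
                      adjoint=lambda r: A.T @ r)
```

In the patent's setting, the `history` list corresponds to the N iteration velocity models collected per shot gather for training.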
4. The method of claim 3, wherein the loss function of the deep network to be trained satisfies the following formula:
$$\mathrm{Loss} = \sum_{j} \sum_{x,z} \left\| G\left(V_i, P_i\right)(x,z) - V_T^{j}(x,z) \right\|^{2}$$
wherein G represents the deep network to be trained, V_i represents the iteration velocity model, P_i represents the offset data of the iteration velocity model, V_T^j represents the jth sample velocity model, x represents the surface space coordinate, and z represents the imaging depth.
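A plausible reading of the claim-4 loss, assumed here to be a squared-error sum over samples j and grid points (x, z) between the network output G(V_i, P_i) and the sample velocity model V_T (the exact form in the original equation image is an assumption):

```python
import numpy as np

def training_loss(preds, targets):
    """Assumed L2 form of the claim-4 loss: sum over samples j and grid
    points (x, z) of the squared difference between the network output
    G(V_i, P_i) and the sample velocity model V_T."""
    return sum(float(np.sum((p - t) ** 2)) for p, t in zip(preds, targets))

preds = [np.ones((2, 3)), np.zeros((2, 3))]
targets = [np.zeros((2, 3)), np.zeros((2, 3))]
loss = training_loss(preds, targets)  # six unit squared differences -> 6.0
```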
5. The method according to any of claims 1-4, wherein before performing FWI based on the second velocity model to obtain a target model, the method further comprises:
determining whether a difference between the first velocity model and the data of the training set is greater than a difference threshold;
the inputting the first velocity model and the first offset data into a pre-trained deep learning network comprises: when the difference between the first velocity model and the data of the training set is less than or equal to the difference threshold, inputting the first velocity model and the first offset data into the pre-trained deep learning network.
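Claim 5 gates the network on how far the first velocity model lies from the training distribution. The patent does not specify the distance measure; the sketch below assumes the minimum RMS difference to any training-set model:

```python
import numpy as np

def within_training_distribution(v_first, train_models, threshold):
    """Return True when the first velocity model is close enough to the
    training set for the network prediction to be trusted (claim 5).
    The minimum-RMS distance used here is an assumption."""
    dists = [np.sqrt(np.mean((v_first - v) ** 2)) for v in train_models]
    return min(dists) <= threshold

train = [np.full((3, 3), 1500.0), np.full((3, 3), 2000.0)]
near = np.full((3, 3), 1520.0)  # 20 m/s RMS from the first training model
far = np.full((3, 3), 4000.0)   # 2000 m/s RMS from the nearest training model
```

When the check fails, the method falls back to the guided-filtering correction path of claim 6 instead of trusting the network output directly.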
6. The method of claim 5, wherein performing FWI based on the second velocity model to obtain a target model comprises:
when the difference between the first velocity model and the data of the training set is greater than the difference threshold, taking the imaging result of the first velocity model as a guide image and the imaging result of the second velocity model as an input image, and performing guided filtering to obtain a filtered velocity model;
calculating a residual between the second velocity model and the filtered velocity model;
superposing the residual on the first velocity model to obtain a third velocity model;
and performing FWI on the third velocity model to obtain the target model.
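The correction path of claim 6 can be sketched as follows. The translation leaves the guided-filtering step ambiguous, so this sketch uses a degenerate single-window guided filter (a practical implementation would use local box windows) and filters the second model against the first model's imaging result; all names and the filtering choice are illustrative assumptions:

```python
import numpy as np

def guided_filter_global(guide, src, eps=1e-6):
    """Single-window guided filter sketch: fit src ~ a*guide + b by least
    squares over the whole image and return the fitted a*guide + b."""
    g, s = guide.ravel(), src.ravel()
    a = np.mean((g - g.mean()) * (s - s.mean())) / (np.var(g) + eps)
    b = s.mean() - a * g.mean()
    return a * guide + b

def third_velocity_model(v_first, v_second, guide_image, eps=1e-6):
    """One reading of claim 6: filter the second model with the first
    model's imaging result as guide, keep the residual the filter removed,
    and superpose it on the first model to obtain the third model."""
    filtered = guided_filter_global(guide_image, v_second, eps)
    residual = v_second - filtered  # detail attributed to the second model alone
    return v_first + residual       # third velocity model, then run FWI on it

v_second = np.array([[1.0, 2.0], [3.0, 4.0]])
guide = 2.0 * v_second              # perfectly correlated guide: residual ~ 0
v_first = np.full((2, 2), 10.0)
v_third = third_velocity_model(v_first, v_second, guide)
```

The residual isolates detail in the second model that the guide cannot explain; superposing it on the first model keeps the reliable background while injecting the network's update, and FWI then starts from this third model.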
7. A full waveform inversion acceleration apparatus based on a depth network, comprising:
an obtaining module for obtaining first offset data of a first velocity model;
the input and output module is configured to input the first velocity model and the first offset data into a pre-trained deep learning network and output a second velocity model; wherein the deep learning network is trained on iteration velocity models and the offset data of the iteration velocity models: the iteration velocity models are generated when full waveform inversion (FWI) is performed on forward data, the forward data are obtained by forward modeling sample velocity models, and the offset data are obtained by migrating the forward data based on the iteration velocity models;
and the inversion module is used for carrying out FWI based on the second velocity model to obtain a target model.
8. An electronic device, comprising: at least one processor and memory;
the memory stores computer execution instructions;
the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the depth-network-based full waveform inversion acceleration method according to any one of claims 1-6.
9. A computer-readable storage medium having stored therein computer-executable instructions for implementing the method for accelerating depth-network-based full waveform inversion according to any one of claims 1 to 6 when executed by a processor.
10. A computer program product comprising a computer program that when executed by a processor implements the method for full waveform inversion acceleration based on a depth network of any one of claims 1-6.
CN202210796150.2A 2022-07-07 2022-07-07 Full waveform inversion acceleration method based on depth network and related device Pending CN115659773A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210796150.2A CN115659773A (en) 2022-07-07 2022-07-07 Full waveform inversion acceleration method based on depth network and related device


Publications (1)

Publication Number Publication Date
CN115659773A true CN115659773A (en) 2023-01-31

Family

ID=85023507

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210796150.2A Pending CN115659773A (en) 2022-07-07 2022-07-07 Full waveform inversion acceleration method based on depth network and related device

Country Status (1)

Country Link
CN (1) CN115659773A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117492079A (en) * 2024-01-03 2024-02-02 中国海洋大学 Seismic velocity model reconstruction method, medium and device based on TDS-Unet network
CN117492079B (en) * 2024-01-03 2024-04-09 中国海洋大学 Seismic velocity model reconstruction method, medium and device based on TDS-Unet network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination