WO2023087451A1 - Multi-scale unsupervised seismic wave velocity inversion method based on self-encoding of observation data - Google Patents

Multi-scale unsupervised seismic wave velocity inversion method based on self-encoding of observation data

Info

Publication number
WO2023087451A1
Authority
WO
WIPO (PCT)
Prior art keywords
observation data
seismic
network
wave velocity
inversion
Prior art date
Application number
PCT/CN2021/137890
Other languages
English (en)
French (fr)
Inventor
刘斌
任玉晓
蒋鹏
杨森林
王清扬
许新骥
李铎
Original Assignee
山东大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 山东大学 filed Critical 山东大学
Priority to US18/031,289 (US11828894B2)
Publication of WO2023087451A1


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V1/00Seismology; Seismic or acoustic prospecting or detecting
    • G01V1/28Processing seismic data, e.g. for interpretation or for event detection
    • G01V1/30Analysis
    • G01V1/303Analysis for determining velocity profiles or travel times
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V2210/00Details of seismic processing or analysis
    • G01V2210/60Analysis
    • G01V2210/62Physical property of subsurface
    • G01V2210/622Velocity, density or impedance
    • G01V2210/6222Velocity; travel time
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V2210/00Details of seismic processing or analysis
    • G01V2210/60Analysis
    • G01V2210/66Subsurface modeling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00Details relating to CAD techniques
    • G06F2111/10Numerical modelling

Definitions

  • the invention belongs to the technical field of geophysical exploration, and in particular relates to a multi-scale unsupervised seismic wave velocity inversion method based on observation data self-encoding.
  • Seismic exploration technology has played an important role in production practices such as oil and gas resource exploration, coal field exploration, and tunnel unfavorable geological detection.
  • the basic working principle of reflection seismic exploration is to excite seismic waves with artificial sources; the waves are reflected when they encounter rock interfaces or geological structures, and the reflected signals, which carry geological information, are received by geophones arranged on the surface and used in subsequent processing and interpretation to determine the location and geological condition of subsurface structures.
  • full waveform inversion (FWI), a well-recognized wave velocity inversion method in seismic exploration, uses all of the waveform information in seismic records to iteratively optimize model parameters; it is essentially a local optimization algorithm for solving a seismic data fitting problem, and because the mapping between seismic data and seismic wave velocity is strongly nonlinear, the result of full waveform inversion depends heavily on the initial wave velocity model.
  • the initial model is generally established from human experience, and an initial wave velocity model that differs greatly from the actual situation can easily cause the inversion to fall into a local minimum, seriously affecting the accuracy of the result.
  • multi-scale seismic full waveform inversion extracts low-frequency information from the seismic data, first inverts the large-scale structure of the corresponding velocity model, and then gradually uses higher-frequency information to invert the small-scale fine structure, thereby removing the need for an initial model.
  • for example, Yuqing Chen and Erdinc Saygin introduced the autoencoder structure from deep learning to extract information from the observation data, replacing the filtering step used to extract observation data information in traditional multi-scale full waveform inversion, and likewise freed FWI from its dependence on an initial model.
  • seismic wave velocity inversion networks based on fully connected networks, convolutional neural networks, adversarial networks and other deep neural networks have achieved effective inversion of fairly complex wave velocity models in numerical experiments, showing better wave velocity inversion performance than traditional FWI.
  • the present invention proposes a multi-scale unsupervised seismic velocity inversion method based on self-encoding of observation data.
  • the present invention extracts large-scale information from the data by self-encoding the observation data, and uses this information to guide the inversion network in recovering features of different scales in the velocity model, reducing the nonlinearity of the inversion.
  • the encoding structure of the trained observation data autoencoder is embedded at the front end of the inversion network to effectively extract seismic observation data information, enabling the inversion network to better analyze the information contained in the seismic data and better establish the mapping between the seismic data and the velocity model.
  • a position code is added to the input data to help the network perceive the layout of the observation system, which facilitates practical engineering applications; the method achieves fairly accurate inversion of the seismic velocity model without a real geological model as the network training label.
  • the present invention adopts the following technical solutions:
  • a multi-scale unsupervised seismic wave velocity inversion method based on self-encoding of observation data comprising the following steps:
  • calculate the residual between the predicted observation data and the simulated seismic observation data; input the predicted observation data and the simulated seismic observation data into the encoding part of each trained observation data autoencoder to obtain the encoded low-dimensional vectors, and calculate the residual of each low-dimensional vector; calculate the residual between the linear gradient velocity model and the predicted wave velocity model output by the inversion network;
  • the three residuals are summed with weights that vary with the number of training rounds to form a multi-scale unsupervised loss function, which guides the network to recover information of different scales in the velocity model at different training stages; the gradient of the loss function is back-propagated to update the parameters of the convolution-fully connected network;
  • the inversion results are obtained by processing the field observation data with the convolution-fully connected network after its parameters have been updated.
  • wave field simulation is carried out for each geological wave velocity model with fixed source positions, geophone positions and observation time, and the wave field data are recorded at the geophone positions to obtain the actual seismic data corresponding to the geological wave velocity model.
  • the wave equation is used to calculate and obtain the corresponding simulated seismic observation data.
  • the actual observation data are self-encoded by a plurality of regular autoencoders; the output of the encoder part of each regular autoencoder is a vector with fewer parameters than the seismic observation data, and this vector contains the global key information of the seismic observation data, corresponding to the large-scale information in the velocity model.
  • a trigonometric position feature code is added to each seismic trace of the actual observation data; the code consists of two values obtained by feeding the shot point and geophone position of the trace into a formula composed of sine and cosine functions, and can calibrate any source and receiver position.
  • the convolution-fully-connected network includes a feature encoder, a feature generator, and a feature decoder.
  • the encoding structure of each observation data self-encoder and other network structures together form a feature encoder.
  • the network is used to establish the mapping from seismic observation data to the velocity model.
  • the feature encoder includes a global feature encoder and a neighborhood information encoder; the observation data input to the network are fed into both parts, and their outputs are concatenated and input into the feature generator; the global feature encoder is the encoding structure of each observation data autoencoder, and the neighborhood information encoder is composed of three sequentially cascaded convolutional layers.
  • the feature generator includes 5 fully connected layers, the input of the feature generator is the output of the encoder, and the output of the feature generator is the input of the feature decoder.
  • the feature decoder includes a 6-layer sequentially cascaded convolutional structure, wherein the fourth layer of convolutional structure is 4 parallel convolutional layers.
  • a wave equation forward modeling network is constructed to convert the predicted wave velocity model into the corresponding seismic observation data: the network is built based on a deep neural network, and seismic wave field forward modeling is carried out on the final output of the convolution-fully connected network to obtain the seismic observation data corresponding to the predicted wave velocity model.
  • the specific process of constructing the wave equation forward modeling network based on a deep neural network includes: in the time-space domain, the constant-density acoustic wave equation is discretized, and the propagation of the seismic wave field over time is an iteration of the forward operator of the discretized equation; the wave field propagation at each time step is taken as one layer of the deep neural network, the seismic wave velocity model is taken as the trainable parameter of the network, and the convolution operations and simple element-wise matrix operations involved in wave field propagation are taken as the internal operations of the network, realizing the construction of the wave equation forward modeling network.
  • each network layer of the wave equation forward modeling network takes the seismic wave fields at the two preceding time instants as input, and obtains the wave field at the next instant and the corresponding observation data by introducing the source wave field at the current instant.
  • weight coefficients that vary with the number of training rounds are added to the three terms that make up the loss function, namely the observation data residual, the low-dimensional vector residual and the linear model residual; the three residuals respectively contain small-scale information, large-scale information of different degrees, and basic prior information.
  • a gradually increasing weight coefficient is added to the observation data residual, and gradually decreasing weight coefficients are added to the low-dimensional vector residual and the linear model residual, so that the early stage of network training targets the large-scale information of the inverted model while the middle and late stages target its fine structure.
  • a multi-scale unsupervised seismic wave velocity inversion system based on self-encoding of observation data including:
  • the inversion database construction module is configured to construct corresponding geological wave velocity models according to actual geological conditions, calculate the corresponding seismic observation data, and form an unsupervised seismic wave velocity inversion database based on the seismic observation data and geological wave velocity models;
  • the seismic observation data self-encoding module is configured to train multiple autoencoders with the simulated seismic observation data; different autoencoders encode the global key information of the seismic observation data into low-dimensional vectors of different lengths, all of which have fewer parameters than the seismic observation data, and a vector with fewer parameters corresponds to larger-scale information in the velocity model;
  • the predicted wave velocity model construction module is configured to add a position feature code to each seismic trace of the actual observation data, which is used to determine the position information of each trace of observation data; to construct a convolution-fully connected network and embed the trained encoding parts of the observation data autoencoders at the front end of the network; and to input the position-encoded seismic observation data into the convolution-fully connected network and output the predicted wave velocity model corresponding to the seismic observation data;
  • the conversion module is configured to construct a wave equation forward modeling network to convert the predicted wave velocity model into corresponding seismic observation data
  • the parameter update module is configured to calculate the residual between the predicted observation data and the simulated seismic observation data; to input the predicted and simulated data into the encoding parts of the trained observation data autoencoders, obtain the encoded low-dimensional vectors, and calculate the residual of each low-dimensional vector; to calculate the residual between the linear gradient velocity model and the predicted wave velocity model output by the inversion network; and to sum the three residuals with weights that vary with the number of training rounds to form a multi-scale unsupervised loss function that guides the network to recover information of different scales in the velocity model at different training stages, back-propagating the gradient of the loss function to update the parameters of the convolution-fully connected network;
  • the wave velocity inversion module is configured to use the convolution-full connection network after updating the parameters to process the field observation data and obtain the inversion results.
  • a computer-readable storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor of a terminal device and executing the steps of the above-mentioned method.
  • a terminal device including a processor and a computer-readable storage medium, the processor being used to implement the instructions and the computer-readable storage medium being used to store a plurality of instructions suitable for being loaded by the processor to execute the steps of the above method.
  • aiming at the problem that the mapping from observation data to the velocity model in deep learning seismic wave velocity inversion is strongly nonlinear, and that the difficulty of the inversion task makes it hard for the algorithm to dispense with label data (the real wave velocity model) and thus hard to become unsupervised, the present invention proposes an unsupervised inversion strategy based on self-encoding of observation data: multiple pre-trained observation data autoencoders extract the global key information of the data into a low-dimensional vector space, and this information is used to recover the large-scale structural features of the geological model, reducing the nonlinearity and difficulty of the inversion task and creating the conditions for a completely unsupervised algorithm.
  • the present invention directly embeds the partial structure of the self-encoder into the inversion network, thereby effectively improving the performance of the inversion network structure itself.
  • the present invention constructs a feature encoder, a feature generator and a feature decoder from fully connected and convolutional neural networks to form the inversion network, and embeds the encoding structure of each observation data autoencoder into the feature encoder, so that the inversion network can effectively extract the global key information of the observation data at its seismic data input, making it easier for the neural network to learn the mapping between the observation data and the velocity model.
  • aiming at the problem that network training in existing deep learning seismic wave velocity inversion methods relies on real wave velocity data or fairly accurate prior wave velocity information, which is difficult or complicated to obtain in actual engineering, the present invention forms a multi-scale unsupervised seismic wave velocity inversion method based on self-encoding of observation data, freeing the training of the seismic wave velocity inversion network from the need for a real wave velocity model or accurate prior wave velocity information.
  • aiming at the problem that credible inversion results can hardly be obtained when the seismic observation data residual derived from the physical laws of the wave equation is used alone to guide a deep learning network to directly invert the wave velocity model, the global key information extracted from the observation data by self-encoding is combined with the observation data obtained directly from the wave equation, forming a multi-scale unsupervised loss function based on the physical laws of the wave equation.
  • this loss function combines the large-scale information extracted by the autoencoders with the small-scale information already contained in the observation data, which matches the inherent characteristic of deep neural networks when learning mappings between images, namely that the mapping of large-scale structure is established before that of small-scale structure, so that the loss function and the neural network cooperate efficiently.
  • a wave velocity unsupervised learning inversion scheme completely guided by the propagation law of the seismic wave field has been formed, which provides a feasible means for the application of seismic wave velocity deep learning inversion in actual data.
  • aiming at the problem that deep neural networks cannot perceive the relationships and position information of the traces in seismic data, so that general neural network seismic inversion methods can only accept input data from a fixed observation geometry and are hard to apply in practice, the invention realizes a trigonometric encoding method for the observation system, adding a trigonometric position feature code to each seismic trace of the actual observation data.
  • this allows the network structure to accept input observations from different observation systems, so that in practical applications the training set of the unsupervised method can be assembled from different projects using different observation geometries, greatly relaxing the restrictions on the training set and helping the unsupervised method obtain sufficient training data and thus achieve good results in practical engineering applications.
  • Fig. 1 is the method flowchart of the present embodiment
  • Fig. 2 is the structural representation of the observation data self-encoder of the present embodiment
  • Fig. 3 is a schematic diagram of the convolution-fully connected network of the present embodiment
  • FIG. 4 is a schematic diagram of multi-scale unsupervised seismic wave velocity inversion network training based on self-encoding of observation data in this embodiment
  • Fig. 5 (a) is the schematic diagram of the geological velocity model used in the present embodiment, (b) is the seismic observation data corresponding to (a),
  • Fig. 6 is the inversion result of seismic unsupervised learning wave velocity in this embodiment.
  • the invention provides a multi-scale unsupervised seismic wave velocity inversion method based on self-encoding of observation data, addressing the problems that the mapping from observation data to the velocity model in seismic wave velocity inversion is strongly nonlinear, that the inversion task is difficult, and that it is hard to make the algorithm unsupervised.
  • the method extracts the global key information of the seismic data with multiple observation data autoencoders and embeds the trained encoding structures into the inversion network to effectively improve the performance of the inversion network itself; for the problem that existing deep learning seismic wave velocity inversion methods rely on real wave velocity data or fairly accurate prior wave velocity information and are therefore hard to apply in engineering, the global key information extracted by the autoencoders is further combined with the idea of physics-driven learning.
  • the resulting multi-scale unsupervised loss function, which matches the inherent characteristics of neural networks, is used to recover features of different scales in the geological model, from large-scale structure to fine structure, making the algorithm completely unsupervised; for the problem that current deep neural networks cannot perceive the relationships and position information of the traces in seismic data, so that the input seismic data must come from a fixed observation geometry, position codes are added to the observation data input to the network to help the network perceive the layout of the observation system, which facilitates practical engineering applications; without a real geological model as a network training label, a fairly accurate inversion of the seismic velocity model is obtained.
  • a multi-scale unsupervised seismic wave velocity inversion method based on self-encoding of observation data comprising the following steps:
  • the seismic observation data is input into the convolution-full connection network, and the predicted wave velocity model corresponding to the seismic observation data is output;
  • calculate the residual between the predicted observation data and the simulated seismic observation data; input the predicted observation data and the simulated seismic observation data into the encoding part of each trained observation data autoencoder to obtain the encoded low-dimensional vectors, and calculate the residual of each low-dimensional vector;
  • in addition, to provide basic wave velocity information for the inversion, a linear gradient background wave velocity model is supplied to constrain the network inversion result, and the residual between this linear gradient velocity model and the predicted wave velocity model output by the inversion network is calculated;
  • the above three residuals respectively contain small-scale information, large-scale information of different degrees, and basic prior information.
  • the above three residuals are summed with weights that vary with the number of training rounds to form a multi-scale unsupervised loss function, which guides the network to recover information of different scales in the velocity model at different training stages; the gradient of the loss function is back-propagated to update the parameters of the convolution-fully connected network;
  • the inversion results are obtained by processing the field observation data by using the convolutional-fully connected network with updated parameters.
  • wave field simulations are performed for each geological wave velocity model with fixed source positions, geophone positions and observation time, and the wave field data are recorded at the geophone positions to obtain the actual seismic data corresponding to the geological wave velocity model.
  • the actual observation data are self-encoded by a plurality of regular autoencoders; the output of the encoder part of each regular autoencoder is a vector with fewer parameters than the seismic observation data, which contains the global key information of the seismic observation data and corresponds to the large-scale information in the velocity model.
  • with reference to the position encoding method of the Transformer model, a trigonometric position feature code is added to each seismic trace of the actual observation data; the two values obtained from a formula composed of sine and cosine functions, using the shot point and geophone position of the trace, can calibrate any source and receiver position.
  • each network layer of the wave equation forward modeling network takes the seismic wave fields at the two preceding time instants as input, and obtains the wave field at the next instant and the corresponding observation data by introducing the source wave field at the current instant.
  • in view of the inherent characteristic of deep learning networks, namely that they tend to first learn the large-scale structural information of an image and then gradually restore its small-scale fine structure, a gradually increasing weight coefficient is added to the observation data residual and gradually decreasing weight coefficients are added to the low-dimensional vector residual and the linear model residual, so that the early stage of network training targets the large-scale information of the inverted model and the middle and late stages target its fine structure.
  • the parameters of the convolutional-fully connected network are updated using the gradient of the loss function.
  • the method provided in this embodiment includes the following steps:
  • Step S1: obtain geological wave velocity models by extracting two-dimensional slices from the three-dimensional SEG/EAGE overthrust model released by the Society of Exploration Geophysicists (SEG) and the European Association of Geoscientists and Engineers (EAGE), and obtain the corresponding seismic observation data database through computer numerical simulation;
  • the overthrust model slices extracted in this embodiment are 1600 m × 5000 m in size, with horizontal and vertical grid spacings of 25 m.
  • a sponge absorbing boundary of 50 grid points is set around the model.
  • the seismic wave velocity models all contain a water layer 9 grid points deep, with a seismic wave velocity of 1800 m/s.
  • the geological structures below the water layer mainly include folds, faults, etc.
  • the wave velocities are set according to the original SEG/EAGE overthrust model, and the wave velocity range of the models is 1800 m/s to 5500 m/s.
  • this embodiment uses a surface observation geometry, with 21 source points spaced 250 m apart and 201 geophones spaced 25 m apart, evenly distributed on the first row of grid points of the wave velocity model.
  • a 6 Hz Ricker wavelet source is used for excitation, the geophone recording time step is 1 ms, and the total recording time is 2 s.
  • the finite difference method is used to perform forward modeling of the extracted geological wave velocity models according to the constant-density acoustic wave equation to obtain the seismic observation data.
  • wave field simulation is performed for each geological wave velocity model with a fixed seismic source, geophone position and observation time, and wave field data is recorded at the geophone position to obtain seismic data corresponding to the geological wave velocity model.
  • a geological wave velocity model from the database of this embodiment is shown in Fig. 5(a), and the corresponding seismic observation data are shown in Fig. 5(b).
  • the geological model database of this embodiment includes 2000 sets of geological wave velocity models, and forward modeling is performed to obtain observation data.
  • the 2000 groups are divided into training, validation and test sets of 1200, 400 and 400 groups in a 3:1:1 ratio; all velocity model wave velocities are normalized to the range [0, 1], and the amplitudes of the observed data are normalized to the range [-1, 1].
  • Step S2 constructs multiple seismic observation data autoencoders to complete the encoding of the global key information in the seismic observation data.
  • a regular self-encoder is used to encode seismic observation data.
  • the regular autoencoder includes an encoder and a decoder, both composed of multiple sequentially cascaded convolutional layers and connected through a fully connected layer; different regular autoencoders have fully connected layers of different sizes, all smaller than the seismic observation data, and the overall structure is symmetric; the output of the middle fully connected layer is the global key information extracted from the seismic observation data.
  • the regularized autoencoder takes the simulated seismic observation data as input and output at the same time, and uses all the seismic observation data in the above database to train the network parameters.
  • trigonometric function position feature information coding is added to each seismic trace of the actual observation data.
  • the position feature code is obtained through the following formula:
  • n is the position of the shot point or receiver point of the seismic trace
  • d is the dimension of the vector, which must be divisible by two. In this embodiment, it is set to 2, and k is 0 or 1.
  • Step S3 constructs a convolution-fully connected network for seismic unsupervised inversion
  • the convolution-fully connected neural network consists of three parts: a feature encoder, a feature generator, and a feature decoder;
  • the feature encoder is composed of two parts: the global feature encoder and the neighborhood information encoder.
  • the global feature encoder is the encoding part structure of each regular autoencoder trained in step S2.
  • the observation data input into the network are respectively input into the above two parts , the output of the two parts is concatenated and fed into the feature generator.
  • the two parts extract, through convolution over a single-shot single-trace seismic record and its adjacent traces, the neighborhood information of that trace, and through convolution over the single-shot seismic record, the global information of that gather.
  • it is worth noting that the network parameters of the global feature encoder do not change with the input data.
  • the entire feature encoder can effectively extract from the observation data the large-scale information reflecting geological structure (such as structure type and stratification) as well as detailed structure.
  • the global feature encoder is the encoding part of each data self-encoder.
  • the neighborhood information encoder consists of 3 layers of sequentially cascaded convolutional structures.
  • the feature generator consists of 5 fully connected layers, which can map the enhanced vector from the encoder network to a high-dimensional feature space, and then connect with the decoder to complete the prediction of the real wave velocity model.
  • the feature decoder is composed of 6 layers of sequentially cascaded convolutional structures, and the fourth layer of convolutional structures is 4 parallel convolutional layers.
  • the output of the feature generator is the final output of the entire convolutional-fully connected network, which is the predicted wave velocity model.
  • Step S4 constructing a wave equation forward modeling network based on the deep neural network, performing seismic wave field forward modeling on the final output of the convolution-fully connected network, and obtaining seismic observation data corresponding to the predicted wave velocity model.
  • t and z represent time and depth, respectively, u represents the sound wave field, and v represents the sound wave velocity.
  • the discretization of the acoustic wave equation can be expressed as:
  • u^{n+1} = G u^n - u^{n-1} + s^{n+1}
  • u represents the discretized acoustic wave field
  • G represents the forward operator
  • s represents the discretized source wave field
  • n represents a certain moment.
  • the forward modeling process can be decomposed into simple operations such as computing the Laplacian of the seismic wave field and element-wise addition, subtraction, multiplication and division of matrices; the Laplacian of the wave field is computed through the convolution operation commonly used in deep neural networks.
  • the seismic wave field propagation at each time step is taken as one layer of the deep neural network
  • the seismic wave velocity model is taken as the trainable parameter of the deep neural network
  • the convolution operations and simple element-wise matrix operations of the wave field propagation process serve as the internal operations of the network, realizing the construction of the wave equation forward modeling network.
  • the loss function of the unsupervised seismic wave velocity inversion network is composed of the observation data loss function L_d, the low-dimensional vector loss function L_l and the linear gradient wave velocity model loss function L_m.
  • L_d is defined as the mean squared error (MSE) over the observation data:
  • d_syn and d_obs respectively represent the simulated observation data of the predicted model output by the unsupervised inversion network and the real observation data
  • nt, nr, ns represent the number of time steps, geophones and sources of the observation data, respectively.
  • the low-dimensional vector loss function can be defined as:
  • E_i(d_syn) represents the low-dimensional vector output when the simulated observation data of the predicted model are fed into the encoder part of the respective autoencoder
  • E_i(d_obs) represents the low-dimensional vector output when the actual observation data of the real geological model are fed into the encoder part of the respective autoencoder; l is the size of the low-dimensional vector
  • σ_i is a varying coefficient used to control when each low-dimensional vector takes effect: the smaller the vector's parameter count, the larger σ_i is in the early stage of network training and the smaller it is in the later stage.
  • the loss function of the linear gradient wave velocity model can be defined as:
  • m_est represents the wave velocity of the predicted model output by the unsupervised inversion network
  • m_0 represents the linear gradient wave velocity model
  • nx and nz represent the horizontal and vertical size of the wave velocity model.
  • the overall objective function can be expressed as:
  • m_0 provides prior knowledge of the background wave velocity and plays a certain guiding role in the early stage of training; but as network training progresses and the predictions output by the network gradually fit the large-scale information in the velocity model, this role becomes a negative one, at which point the low-dimensional vector loss function plays the key guiding role.
  • in the late stage of network training, the predictions output by the network gradually fit the fine structure of the velocity model, and the observation data loss function then plays the dominant role.
  • accordingly, of the coefficients of the three loss terms, the coefficient of the linear gradient model term is largest in the early stage of network training, the coefficient of the low-dimensional vector term is largest in the middle stage, and the coefficient of the observation data term is largest in the late stage.
  • Step S5 trains a convolutional-fully connected network.
  • the network training of this embodiment uses the Adam optimizer, and a total of 100 rounds of training are carried out; the learning rate decays exponentially from 5×10^-5 towards 0, the batch size is set to 8, and for each wave velocity model in a batch, sources at 5 randomly selected positions are taken as the input of the wave equation forward modeling network and used to compute the observation data loss function; the seismic wave field forward modeling based on the wave equation forward modeling network uses a finite difference scheme of second order in time and tenth order in space.
  • in the first 40 rounds of network training, the loss function based on the background wave velocity m_0 is used, with λ set to decrease linearly from 1 to 0.
  • the computation of this experiment is based on 4 NVIDIA TITAN RTX graphics cards with 24 GB of memory each, each card containing 4608 stream processors; the network parameters corresponding to the lowest observation data loss on the validation set are saved for subsequent experiments on the test set.
  • Step S6: the inversion performance is tested on the test set with the trained convolution-fully connected network; some results on the test set are shown in Figure 6. The test results show that the unsupervised deep learning method based on seismic wave field forward modeling can effectively train the wave velocity inversion network without a real wave velocity model as the label, and can fairly accurately reflect the wave velocity distribution of the subsurface medium; for both the position and shape of geological structures and the variation trend of the wave velocity distribution, the method fits the real wave velocity model well.
  • a multi-scale unsupervised seismic wave velocity inversion system based on self-encoding of observation data characterized by: including:
  • the inversion database construction module is configured to construct corresponding geological wave velocity models according to actual geological conditions, calculate the corresponding seismic observation data, and form an unsupervised seismic wave velocity inversion database based on the seismic observation data and geological wave velocity models;
  • the earthquake observation data self-encoding module is configured to use the simulated earthquake observation data to train multiple self-encoders, and different self-encoders encode the global key information in the earthquake observation data into low-dimensional vectors of different lengths;
  • the predicted wave velocity model construction module is configured to add a position feature code to each seismic trace of the actual observation data, which is used to determine the position information of each trace of observation data; to construct a convolution-fully connected network and embed the trained encoding parts of the observation data autoencoders at the front end of the network; and to input the position-encoded seismic observation data into the convolution-fully connected network and output the predicted wave velocity model corresponding to the seismic observation data;
  • the conversion module is configured to construct a wave equation forward modeling network to convert the predicted wave velocity model into corresponding seismic observation data
  • the parameter update module is configured to calculate the residual between the predicted observation data and the simulated seismic observation data; to input the predicted and simulated data into the encoding parts of the trained observation data autoencoders, obtain the encoded low-dimensional vectors, and calculate the residual of each low-dimensional vector; to calculate the residual between the linear gradient velocity model and the predicted wave velocity model output by the inversion network; and to sum the three residuals with weights that vary with the number of training rounds to form a multi-scale unsupervised loss function that guides the network to recover information of different scales in the velocity model at different training stages, back-propagating the gradient of the loss function to update the parameters of the convolution-fully connected network;
  • the wave velocity inversion module is configured to use the convolution-full connection network after updating the parameters to process the field observation data and obtain the inversion results.
  • the embodiments of the present invention may be provided as methods, systems, or computer program products. Accordingly, the present invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Environmental & Geological Engineering (AREA)
  • Geology (AREA)
  • Acoustics & Sound (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geophysics (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • Medical Informatics (AREA)
  • Geophysics And Detection Of Objects (AREA)

Abstract

A multi-scale unsupervised seismic wave velocity inversion method based on self-encoding of observation data. Large-scale information is extracted from the data by self-encoding the observation data, and this information guides the inversion network in recovering features of different scales in the velocity model, reducing the nonlinearity of the inversion. On this basis, the encoding structure of the trained observation data autoencoder is embedded into the inversion network so that the front end of the inversion network effectively extracts the information in the seismic observation data, enabling the inversion network to better analyze the information contained in the seismic data and better establish the mapping between the seismic data and the velocity model, making the inversion method completely unsupervised. A position code is added to the observation data input to the network to help the network perceive the layout of the observation system, which facilitates practical engineering applications. The method achieves fairly accurate inversion of the seismic velocity model without a real geological model as the network training label.

Description

Multi-scale unsupervised seismic wave velocity inversion method based on self-encoding of observation data
Technical Field
The present invention belongs to the technical field of geophysical exploration, and in particular relates to a multi-scale unsupervised seismic wave velocity inversion method based on self-encoding of observation data.
Background Art
The statements in this section merely provide background information related to the present invention and do not necessarily constitute prior art.
Seismic exploration technology plays an important role in production practice such as oil and gas exploration, coalfield exploration, and the detection of unfavorable geology for tunnels. The basic principle of reflection seismic exploration is to excite seismic waves with artificial sources; the waves are reflected when they encounter rock interfaces or geological structures, and the reflected signals, which carry geological information, are received by geophones deployed on the surface and used in subsequent processing and interpretation to determine the location and geological condition of subsurface structures. To achieve high-precision, high-resolution imaging of complex geological structures, full waveform inversion (FWI), a well-recognized wave velocity inversion method in seismic exploration, iteratively optimizes model parameters using all of the waveform information in the seismic records. It is essentially a local optimization algorithm for solving a seismic data fitting problem, and since the mapping between seismic data and seismic wave velocity is strongly nonlinear, the result of full waveform inversion depends heavily on the initial wave velocity model. The initial model is generally established from human experience, and an initial wave velocity model that differs greatly from the actual situation can easily cause the inversion to fall into a local minimum, seriously affecting the accuracy of the result.
Some researchers have proposed multi-scale inversion to reduce the nonlinearity of the inversion and free FWI from its dependence on an initial model. Multi-scale seismic full waveform inversion extracts the low-frequency information in the seismic data, first inverts the corresponding large-scale structure of the velocity model, and then gradually uses higher-frequency information to invert the small-scale fine structure, removing the need for an initial model. For example, Yuqing Chen and Erdinc Saygin introduced the autoencoder structure from deep learning, using it to extract the information in the observation data and to replace the filtering step that extracts observation data information in traditional multi-scale full waveform inversion, likewise freeing FWI from its dependence on an initial model. Beyond improving traditional FWI with deep learning algorithms, and given the demonstrated ability of deep learning to model strongly nonlinear mappings with high accuracy, seismic wave velocity inversion networks based on fully connected networks, convolutional neural networks, adversarial networks and other deep neural networks have achieved effective inversion of fairly complex wave velocity models in numerical experiments, showing better wave velocity inversion performance than traditional FWI.
However, these deep learning seismic wave velocity inversion methods still have problems. Most current deep-learning-based direct seismic wave velocity inversion methods are data-driven and belong to supervised or semi-supervised learning; only a few unsupervised methods exist. Training a deep neural network with supervised or semi-supervised learning relies entirely or partly on real wave velocity models as labels for the inversion of seismic observation data, and the labels guide the inversion process. In practice, obtaining the actual subsurface wave velocity distribution and building a corresponding training data set is very difficult. There is an existing unsupervised inversion method based on background wave velocity, which reduces the nonlinearity of the inversion mapping by providing the network with a prior large-scale background wave velocity model. This method therefore requires a large-scale background wave velocity model that corresponds to the actual observation data and is fairly close to the real velocity model; obtaining such a model is complicated, so the method depends to some extent on fairly accurate prior information, without which the mapping from observation data to velocity model cannot be established directly. In addition, current deep learning seismic inversion methods impose strict requirements on the observation system layout of the input data: because current seismic inversion networks cannot perceive the shot point and geophone position information of the observation data, the data must be acquired with the same observation system, which is a restrictive condition in practice. Practical application of existing deep learning seismic inversion methods therefore remains very difficult.
Realizing an unsupervised learning seismic wave velocity inversion method that can be applied in practical engineering still faces the following three difficulties:
First, how to deal with the high nonlinearity and difficulty of deep learning seismic wave velocity inversion under unsupervised conditions. Obtaining the actual subsurface wave velocity model directly from observation data is a strongly nonlinear, ill-posed inversion task with a very complicated mapping; moreover, a deep learning seismic inversion method must invert seismic data acquired in different regions with the same network structure and parameters. Without wave velocity information from real models, directly establishing the mapping from observation data to the subsurface velocity model is extremely difficult.
Second, if fairly accurate prior wave velocity information is not relied upon, what information and what measures can be used to realize an unsupervised seismic wave velocity inversion method. Existing supervised and semi-supervised deep learning methods all require real subsurface wave velocity models as label data, which guide these methods but are hard to obtain in practice; the existing unsupervised method, under the premise of a prior background wave velocity model, drives network training with the residual between the observation data obtained by wave-equation forward modeling of the predicted model and the actual data. In theory, when label data and fairly accurate prior wave velocity knowledge are completely abandoned, it is hard to obtain credible inversion results if the observation data residual derived from the physical laws of the wave equation is used alone to guide a deep learning network to directly invert the wave velocity model; how to guide an unsupervised method thus becomes a problem that must be solved.
Third, for deep-learning-based seismic inversion methods, how to make the deep neural network match seismic data from different observation geometries. For seismic records acquired in different periods of the same project, or in different projects with different observation geometries, a deep neural network cannot perceive the relationships and position information of the traces in the seismic data. The input data of current deep learning seismic inversion methods must come from a single observation geometry, which makes their application in practical engineering difficult.
Summary of the Invention
To solve the above problems, the present invention proposes a multi-scale unsupervised seismic wave velocity inversion method based on self-encoding of observation data. The invention extracts large-scale information from the data by self-encoding the observation data and uses this information to guide the inversion network in recovering features of different scales in the velocity model, reducing the nonlinearity of the inversion. On this basis, the encoding structure of the trained observation data autoencoder is embedded into the inversion network so that the front end of the inversion network effectively extracts the information in the seismic observation data, enabling the inversion network to better analyze the information contained in the seismic data and better establish the mapping between the seismic data and the velocity model, thereby making the inversion method completely unsupervised; a position code is added to the observation data input to the network to help the network perceive the layout of the observation system, which facilitates practical engineering applications. The method achieves fairly accurate inversion of the seismic velocity model without a real geological model as the network training label.
According to some embodiments, the present invention adopts the following technical solutions:
A multi-scale unsupervised seismic wave velocity inversion method based on self-encoding of observation data, comprising the following steps:
constructing corresponding geological wave velocity models according to actual geological conditions, computing the corresponding simulated seismic observation data by numerical simulation, and forming an unsupervised seismic wave velocity inversion database from the seismic observation data and geological wave velocity models;
training multiple autoencoders with the simulated seismic observation data, different autoencoders encoding the global key information of the seismic observation data into low-dimensional vectors of different lengths, all of which have fewer parameters than the seismic observation data, a vector with fewer parameters corresponding to larger-scale information in the velocity model;
adding a position feature code to each seismic trace of the actual observation data, the code being used to determine the position information (source position and geophone position) of each trace of observation data;
constructing a convolution-fully connected network and embedding the trained encoding parts of the observation data autoencoders at the front end of the network, so that the inversion network can effectively extract the global information of the observation data at the seismic data input; inputting the position-encoded seismic observation data into the convolution-fully connected network and outputting the predicted wave velocity model corresponding to the seismic observation data;
constructing a wave equation forward modeling network and converting the predicted wave velocity model into the corresponding predicted observation data;
calculating the residual between the predicted observation data and the simulated seismic observation data; inputting the predicted observation data and the simulated seismic observation data into the encoding part of each trained observation data autoencoder to obtain the encoded low-dimensional vectors, and calculating the residual of each low-dimensional vector; calculating the residual between the linear gradient velocity model and the predicted wave velocity model output by the inversion network;
summing the three residuals with weights that vary with the number of training rounds to form a multi-scale unsupervised loss function, which guides the network to recover information of different scales in the velocity model at different training stages, and back-propagating the gradient of the loss function to update the parameters of the convolution-fully connected network;
processing the field observation data with the convolution-fully connected network after the parameters have been updated, to obtain the inversion result.
In an optional embodiment, when constructing the geological wave velocity models according to actual geological conditions, wave field simulation is carried out for each geological wave velocity model with fixed source positions, geophone positions and observation time, and the wave field data are recorded at the geophone positions to obtain the actual seismic data corresponding to the geological wave velocity model.
In an optional embodiment, when computing the corresponding seismic observation data, the wave equation is used to compute the corresponding simulated seismic observation data.
In an optional embodiment, the actual observation data are self-encoded by a plurality of regular autoencoders; the output of the encoder part of each regular autoencoder is a vector with fewer parameters than the seismic observation data, and this vector contains the global key information of the seismic observation data, corresponding to the large-scale information in the velocity model.
In an optional embodiment, a trigonometric position feature code is added to each seismic trace of the actual observation data; the code consists of two values obtained by feeding the shot point and geophone position of the trace into a formula composed of sine and cosine functions, and can calibrate any source and geophone position.
In an optional embodiment, the convolution-fully connected network includes a feature encoder, a feature generator and a feature decoder; the encoding structures of the observation data autoencoders together with the other network structures form the feature encoder, and the network is used to establish the mapping from seismic observation data to the velocity model.
In a further defined embodiment, the feature encoder includes a global feature encoder and a neighborhood information encoder; the observation data input to the network are fed into both parts, and their outputs are concatenated and input into the feature generator; the global feature encoder is the encoding structure of each observation data autoencoder, and the neighborhood information encoder consists of 3 sequentially cascaded convolutional layers.
In a further defined embodiment, the feature generator includes 5 fully connected layers; the input of the feature generator is the output of the feature encoder, and the output of the feature generator serves as the input of the feature decoder.
In a further defined embodiment, the feature decoder includes 6 sequentially cascaded convolutional layers, of which the fourth layer consists of 4 parallel convolutional layers.
In an optional embodiment, in the specific process of constructing the wave equation forward modeling network to convert the predicted wave velocity model into the corresponding seismic observation data: the wave equation forward modeling network is built based on a deep neural network, and seismic wave field forward modeling is carried out on the final output of the convolution-fully connected network to obtain the seismic observation data corresponding to the predicted wave velocity model.
In a further defined embodiment, the specific process of constructing the wave equation forward modeling network based on a deep neural network includes: in the time-space domain, discretizing the constant-density acoustic wave equation, the propagation of the seismic wave field over time being an iteration of the forward operator of the discretized equation; taking the wave field propagation at each time step as one layer of the deep neural network, taking the seismic wave velocity model as the trainable parameter of the network, and taking the convolution operations and simple element-wise matrix operations involved in wave field propagation as the internal operations of the network, thereby constructing the wave equation forward modeling network. Each layer of the wave equation forward modeling network takes the seismic wave fields at the two preceding time instants as input and, by introducing the source wave field at the current instant, obtains the wave field at the next instant and the corresponding observation data.
In an optional embodiment, weight coefficients that vary with the number of training rounds are added to the three terms composing the loss function, namely the observation data residual, the low-dimensional vector residual and the linear model residual; the three residuals respectively contain small-scale information, large-scale information of different degrees, and basic prior information.
As a further limitation, a gradually increasing weight coefficient is added to the observation data residual, and gradually decreasing weight coefficients are added to the low-dimensional vector residual and the linear model residual, so that the early stage of network training targets the large-scale information of the inverted model while the middle and late stages target its fine structure.
A multi-scale unsupervised seismic wave velocity inversion system based on self-encoding of observation data, comprising:
an inversion database construction module, configured to construct corresponding geological wave velocity models according to actual geological conditions, compute the corresponding seismic observation data, and form an unsupervised seismic wave velocity inversion database from the seismic observation data and geological wave velocity models;
a seismic observation data self-encoding module, configured to train multiple autoencoders with the simulated seismic observation data, different autoencoders encoding the global key information of the seismic observation data into low-dimensional vectors of different lengths, all of which have fewer parameters than the seismic observation data, a vector with fewer parameters corresponding to larger-scale information in the velocity model;
a predicted wave velocity model construction module, configured to add a position feature code to each seismic trace of the actual observation data, the code being used to determine the position information of each trace of observation data; to construct a convolution-fully connected network and embed the trained encoding parts of the observation data autoencoders at the front end of the network; and to input the position-encoded seismic observation data into the convolution-fully connected network and output the predicted wave velocity model corresponding to the seismic observation data;
a conversion module, configured to construct a wave equation forward modeling network to convert the predicted wave velocity model into the corresponding seismic observation data;
a parameter update module, configured to calculate the residual between the predicted observation data and the simulated seismic observation data; to input the predicted observation data and the simulated seismic observation data into the encoding parts of the trained observation data autoencoders, obtain the encoded low-dimensional vectors, and calculate the residual of each low-dimensional vector; to calculate the residual between the linear gradient velocity model and the predicted wave velocity model output by the inversion network; and to sum the three residuals with weights that vary with the number of training rounds to form a multi-scale unsupervised loss function that guides the network to recover information of different scales in the velocity model at different training stages, back-propagating the gradient of the loss function to update the parameters of the convolution-fully connected network;
a wave velocity inversion module, configured to process the field observation data with the convolution-fully connected network after the parameters have been updated, to obtain the inversion result.
A computer-readable storage medium in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor of a terminal device and to execute the steps of the above method.
A terminal device comprising a processor and a computer-readable storage medium, the processor being used to implement the instructions, and the computer-readable storage medium being used to store a plurality of instructions adapted to be loaded by the processor and to execute the steps of the above method.
Compared with the prior art, the beneficial effects of the present invention are as follows:
Aiming at the problems that the mapping from observation data to the velocity model in deep learning seismic wave velocity inversion is strongly nonlinear, and that the difficulty of the inversion task makes it hard for the algorithm to dispense with label data, i.e. the real wave velocity model, and thus hard to become unsupervised, the present invention proposes an unsupervised inversion strategy based on self-encoding of observation data: based on multiple pre-trained observation data autoencoders, the global key information in the data is extracted into a low-dimensional vector space, and this information is used to recover the large-scale structural features of the geological model, reducing the nonlinearity and difficulty of the inversion task and creating the conditions for a completely unsupervised algorithm.
Building on the trained observation data autoencoders, the present invention embeds part of the autoencoder structure directly into the inversion network, effectively improving the performance of the inversion network itself. The invention uses fully connected and convolutional neural networks to construct a feature encoder, a feature generator and a feature decoder, which form the inversion network, and embeds the encoding structures of the observation data autoencoders into the feature encoder, so that the inversion network can effectively extract the global key information of the observation data at its seismic data input, making it easier for the neural network to learn the mapping from observation data to velocity model.
Aiming at the problem that network training in existing deep learning seismic wave velocity inversion methods relies on real wave velocity data or fairly accurate prior wave velocity information, which is difficult or complicated to obtain in actual engineering, the present invention forms a multi-scale unsupervised seismic wave velocity inversion method based on self-encoding of observation data, freeing the training of the seismic wave velocity inversion network from the need for a real wave velocity model or fairly accurate prior wave velocity information.
Aiming at the problem that credible inversion results can hardly be obtained when the seismic observation data residual derived from the physical laws of the wave equation is used alone to guide a deep learning network to directly invert the wave velocity model, the present invention combines the global key information extracted from the observation data by self-encoding with the seismic observation data obtained directly from the physical laws of the wave equation, forming a multi-scale unsupervised loss function based on the physical laws of the wave equation. This loss function combines the large-scale information extracted by the autoencoders with the small-scale information already contained in the observation data, matching the inherent characteristic of deep neural networks when learning mappings between images, namely that the mapping of large-scale structure is established before that of small-scale structure, so that the loss function and the neural network cooperate efficiently. A wave velocity unsupervised learning inversion scheme guided entirely by the propagation laws of the seismic wave field is thus formed, providing a feasible means for applying deep learning seismic wave velocity inversion to real data.
Aiming at the problem that deep neural networks cannot perceive the relationships and position information of the traces in seismic data, so that general neural network seismic inversion methods can only accept input data from a fixed observation geometry and are hard to apply in practice, the present invention realizes a trigonometric encoding method for the observation system, adding a trigonometric position feature code to each seismic trace of the actual observation data. This allows the network structure to adapt to input observation data from different observation systems, so that in practical applications the observation data in the training set of the unsupervised method can come from different projects using different observation geometries, greatly relaxing the restrictions on the training set, helping the unsupervised method obtain sufficient training data and thus achieve good results in practical engineering applications.
To make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings, which form a part of the present invention, are provided for further understanding of the invention; the exemplary embodiments of the invention and their descriptions are used to explain the invention and do not constitute an undue limitation of the invention.
Fig. 1 is a flowchart of the method of this embodiment;
Fig. 2 is a schematic diagram of the structure of the observation data autoencoder of this embodiment;
Fig. 3 is a schematic diagram of the convolution-fully connected network of this embodiment;
Fig. 4 is a schematic diagram of the training of the multi-scale unsupervised seismic wave velocity inversion network based on self-encoding of observation data in this embodiment;
Fig. 5(a) is a schematic diagram of the geological velocity model used in this embodiment, and (b) shows the seismic observation data corresponding to (a);
Fig. 6 shows the seismic unsupervised learning wave velocity inversion results of this embodiment.
Detailed Description of Embodiments:
The present invention is further described below with reference to the accompanying drawings and embodiments.
It should be pointed out that the following detailed description is exemplary and intended to provide further explanation of the present invention. Unless otherwise indicated, all technical and scientific terms used herein have the same meanings as commonly understood by those of ordinary skill in the art to which the present invention belongs.
It should be noted that the terms used here are only for describing specific embodiments and are not intended to limit the exemplary embodiments according to the present invention. As used herein, unless the context clearly indicates otherwise, the singular forms are also intended to include the plural forms; in addition, it should be understood that when the terms "comprise" and/or "include" are used in this specification, they indicate the presence of features, steps, operations, devices, components and/or combinations thereof.
The present invention provides a multi-scale unsupervised seismic wave velocity inversion method based on self-encoding of observation data, addressing the problems that the mapping from observation data to the velocity model in seismic wave velocity inversion is strongly nonlinear, that the inversion task is difficult, and that it is hard to make the algorithm unsupervised.
The method extracts the global key information of the seismic data with multiple observation data autoencoders and embeds the trained encoding structures into the inversion network to effectively improve the performance of the inversion network itself. For the problem that existing deep learning seismic wave velocity inversion methods rely on real wave velocity data or fairly accurate prior wave velocity information and are therefore hard to apply in engineering, the global key information extracted from the seismic data by the autoencoders is further combined with the idea of physics-driven learning, forming a multi-scale unsupervised loss function that matches the inherent characteristics of neural networks and recovers features of different scales in the geological model, from large-scale structure to fine structure, making the algorithm completely unsupervised. For the problem that current deep neural networks cannot perceive the relationships and position information of the traces in seismic data, so that the seismic data input to the network must come from a fixed observation geometry, position codes are added to the observation data input to the network to help the network perceive the layout of the observation system, which facilitates practical engineering applications. The method achieves fairly accurate inversion of the seismic velocity model without accurate prior information and without a real geological model as the network training label.
The main content of the present invention is described below:
A multi-scale unsupervised seismic wave velocity inversion method based on self-encoding of observation data comprises the following steps:
constructing corresponding geological wave velocity models according to actual geological conditions, computing the corresponding simulated seismic observation data by numerical simulation, and forming an unsupervised seismic wave velocity inversion database from the seismic observation data and geological wave velocity models;
training multiple autoencoders with the simulated seismic observation data, different autoencoders encoding the global key information of the seismic observation data into low-dimensional vectors of different lengths, all of which have fewer parameters than the seismic observation data, a vector with fewer parameters corresponding to larger-scale information in the velocity model;
adding a position feature code to each seismic trace of the actual observation data, the code being used to determine the position information (source position and geophone position) of each trace of observation data;
constructing a convolution-fully connected network and embedding the trained encoding parts of the observation data autoencoders at the front end of the network, so that the inversion network can effectively extract the global information of the observation data at the seismic data input; inputting the position-encoded seismic observation data into the convolution-fully connected network and outputting the predicted wave velocity model corresponding to the seismic observation data;
constructing a wave equation forward modeling network and converting the predicted wave velocity model into the corresponding predicted observation data;
calculating the residual between the predicted observation data and the simulated seismic observation data; inputting the predicted observation data and the simulated seismic observation data into the encoding part of each trained observation data autoencoder to obtain the encoded low-dimensional vectors, and calculating the residual of each low-dimensional vector; in addition, to provide basic wave velocity information for the inversion, supplying a linear gradient background wave velocity model to constrain the network inversion result, and calculating the residual between the linear gradient velocity model and the predicted wave velocity model output by the inversion network;
the three residuals respectively containing small-scale information, large-scale information of different degrees, and basic prior information, summing the three residuals with weights that vary with the number of training rounds to form a multi-scale unsupervised loss function, which guides the network to recover information of different scales in the velocity model at different training stages, and back-propagating the gradient of the loss function to update the parameters of the convolution-fully connected network;
processing the field observation data with the convolution-fully connected network after the parameters have been updated, to obtain the inversion result.
In some embodiments, when constructing the geological wave velocity models according to actual geological conditions, wave field simulation is carried out for each geological wave velocity model with fixed source positions, geophone positions and observation time, and the wave field data are recorded at the geophone positions to obtain the actual seismic data corresponding to the geological wave velocity model.
In some embodiments, the actual observation data are self-encoded by a plurality of regular autoencoders; the output of the encoder part of each regular autoencoder is a vector with fewer parameters than the seismic observation data, and this vector contains the global key information of the seismic observation data, corresponding to the large-scale information in the velocity model.
In some embodiments, the position encoding method of the Transformer model can be referred to, and a trigonometric position feature code is added to each seismic trace of the actual observation data; the code consists of two values obtained by feeding the shot point and geophone position of the trace into a formula composed of sine and cosine functions, and can calibrate any source and geophone position.
Of course, in other embodiments, other position encoding methods can also be used to add the position feature code.
In some embodiments, each layer of the wave equation forward modeling network takes the seismic wave fields at the two preceding time instants as input and, by introducing the source wave field at the current instant, obtains the wave field at the next instant and the corresponding observation data.
In some embodiments, in view of the inherent characteristic of deep learning networks, namely that they tend to first learn the large-scale structural information of an image and then gradually restore its small-scale fine structure, a gradually increasing weight coefficient is added to the observation data residual and gradually decreasing weight coefficients are added to the low-dimensional vector residual and the linear model residual, so that the early stage of network training targets the large-scale information of the inverted model and the middle and late stages target its fine structure. The gradient of this loss function is back-propagated to update the parameters of the convolution-fully connected network.
Embodiment 1:
Specifically, the method provided in this embodiment, as shown in Fig. 1, includes the following steps:
Step S1: geological wave velocity models are obtained by extracting two-dimensional slices from the three-dimensional SEG/EAGE overthrust model released by the Society of Exploration Geophysicists (SEG) and the European Association of Geoscientists and Engineers (EAGE), and the corresponding seismic observation data database is obtained by computer numerical simulation;
The overthrust model slices extracted in this embodiment are 1600 m × 5000 m in size, with horizontal and vertical grid spacings of 25 m. A sponge absorbing boundary of 50 grid points is set around the model. Each seismic wave velocity model contains a water layer 9 grid points deep with a seismic wave velocity of 1800 m/s. The geological structures below the water layer mainly include folds and faults, with wave velocities set according to the original SEG/EAGE overthrust model; the wave velocity range of the models is 1800 m/s to 5500 m/s. For the seismic observation data, this embodiment uses a surface observation geometry with 21 source points spaced 250 m apart and 201 geophones spaced 25 m apart, evenly distributed on the first row of grid points of the wave velocity model. A 6 Hz Ricker wavelet source is used for excitation, the geophone recording time step is 1 ms, and the total recording time is 2 s. The finite difference method is used to carry out forward modeling of the extracted geological wave velocity models according to the constant-density acoustic wave equation to obtain the seismic observation data.
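For reference, a minimal sketch of a 6 Hz Ricker wavelet source sampled at 1 ms over 2 s, as described above, is given below; the function name, the array layout and the peak-time shift are illustrative assumptions and not part of the patent.

```python
import numpy as np

def ricker_wavelet(f0=6.0, dt=1e-3, total_t=2.0, t0=None):
    """Ricker (Mexican-hat) wavelet with peak frequency f0 [Hz].

    dt is the time step [s], total_t the record length [s]; t0 shifts the
    peak so the wavelet starts near zero amplitude (1/f0 by default).
    """
    if t0 is None:
        t0 = 1.0 / f0
    t = np.arange(0.0, total_t, dt) - t0
    arg = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * arg) * np.exp(-arg)

src = ricker_wavelet()   # shape (2000,), one sample per 1 ms time step
```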
Of course, in other embodiments, the geological model database may be built from other data, or the parameters selected during its construction may differ from those given in the above embodiment and may be varied.
In other embodiments, wave field simulation is carried out for each geological wave velocity model with fixed source positions, geophone positions and observation time, and the wave field data are recorded at the geophone positions to obtain the seismic data corresponding to the geological wave velocity model; together with the geological model database, this constitutes the unsupervised inversion database.
One geological wave velocity model from the database of this embodiment is shown in Fig. 5(a), and the corresponding seismic observation data are shown in Fig. 5(b).
The geological model database of this embodiment contains 2000 sets of geological wave velocity models, and forward modeling is carried out for each to obtain the observation data. The training, validation and test sets of this embodiment divide these 2000 sets into 1200, 400 and 400 sets in a 3:1:1 ratio. All velocity model wave velocities are normalized to the range [0, 1], and the amplitudes of the observation data are normalized to the range [-1, 1].
Likewise, the above parameters or ratios may differ in other embodiments.
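A minimal sketch of the 3:1:1 split and the normalization described in this step is given below, assuming the velocity models and gathers are held as NumPy arrays; the per-gather amplitude scaling is one plausible reading of the normalization, and all names are illustrative.

```python
import numpy as np

# models: (2000, nz, nx) wave velocity models; data: (2000, ns, nt, nr) shot gathers
def normalize_and_split(models, data, v_min=1800.0, v_max=5500.0, seed=0):
    # Velocities -> [0, 1] using the known velocity range of this embodiment.
    models_n = (models - v_min) / (v_max - v_min)
    # Amplitudes -> [-1, 1], here by scaling each gather with its own peak amplitude.
    data_n = data / np.max(np.abs(data), axis=(1, 2, 3), keepdims=True)
    # 3 : 1 : 1 split into 1200 / 400 / 400 groups.
    idx = np.random.default_rng(seed).permutation(len(models_n))
    train, val, test = idx[:1200], idx[1200:1600], idx[1600:]
    return ((models_n[train], data_n[train]),
            (models_n[val], data_n[val]),
            (models_n[test], data_n[test]))
```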
Step S2: as shown in Fig. 2, multiple seismic observation data autoencoders are constructed to encode the global key information of the seismic observation data. In this embodiment, regular autoencoders are used to encode the seismic observation data. A regular autoencoder includes an encoder and a decoder, both composed of multiple sequentially cascaded convolutional layers and connected through a fully connected layer; different regular autoencoders have fully connected layers of different sizes, all smaller than the seismic observation data, and the overall structure is symmetric. The output of the middle fully connected layer is the global key information extracted from the seismic observation data. The regular autoencoder takes the simulated seismic observation data as both input and output, and all seismic observation data in the above database are used to train the network parameters.
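A minimal PyTorch sketch of one such regular autoencoder follows: a convolutional encoder and decoder joined by a small fully connected bottleneck whose size differs between autoencoders. The layer counts, channel widths and pooling sizes are illustrative assumptions rather than the exact configuration of this embodiment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObsDataAutoencoder(nn.Module):
    """Regular autoencoder for one shot gather (time x receivers); the fully
    connected bottleneck of size `code_dim` holds the global key information."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(                      # cascaded conv layers
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((25, 25)), nn.Flatten(),
            nn.Linear(32 * 25 * 25, code_dim),             # low-dimensional vector
        )
        self.decoder = nn.Sequential(                      # mirror of the encoder
            nn.Linear(code_dim, 32 * 25 * 25), nn.ReLU(),
            nn.Unflatten(1, (32, 25, 25)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):                                  # x: (B, 1, nt, nr)
        code = self.encoder(x)
        recon = self.decoder(code)
        # Interpolate back to the exact input size lost to pooling/striding.
        recon = F.interpolate(recon, size=x.shape[-2:], mode="bilinear", align_corners=False)
        return recon, code

# Training uses the same gather as input and target (self-encoding):
# loss = F.mse_loss(recon, x); several autoencoders with different code_dim are trained.
```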
In this embodiment, a trigonometric position feature code is added to each seismic trace of the actual observation data. With reference to the position encoding method of the Transformer model, the position feature code is obtained through the following formula:
Figure PCTCN2021137890-appb-000001
where n is the position of the shot point or geophone point of the seismic trace, d is the dimension of the vector, which must be divisible by two and is set to 2 in this embodiment, and k takes the value 0 or 1.
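The sketch below assumes the standard Transformer sinusoidal formula with d = 2, consistent with the symbols defined above; the helper name is hypothetical and the exact constants of the patented formula may differ.

```python
import numpy as np

def trace_position_code(n, d=2):
    """Sinusoidal position code for a shot-point or geophone index n.

    Assumes the standard Transformer form: for k = 0 .. d/2 - 1,
    PE(n, 2k)   = sin(n / 10000**(2k / d))
    PE(n, 2k+1) = cos(n / 10000**(2k / d)).
    With d = 2 this yields the two values sin(n) and cos(n).
    """
    k = np.arange(d // 2)
    angle = n / (10000.0 ** (2 * k / d))
    code = np.empty(d)
    code[0::2] = np.sin(angle)
    code[1::2] = np.cos(angle)
    return code

# One code per trace: e.g. compute the shot-point code and the geophone code
# and attach them to that trace before it enters the inversion network.
shot_code, rec_code = trace_position_code(5), trace_position_code(120)
```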
Step S3: as shown in Fig. 3, a convolution-fully connected network for unsupervised seismic inversion is constructed; the convolution-fully connected neural network consists of three parts: a feature encoder, a feature generator and a feature decoder;
The feature encoder consists of two parts, a global feature encoder and a neighborhood information encoder, where the global feature encoder is the encoding part of each regular autoencoder trained in step S2. The observation data input to the network are fed into both parts, and their outputs are concatenated and input into the feature generator. The two parts extract, through convolution over a single-shot single-trace seismic record and its adjacent traces, the neighborhood information of that trace, and through convolution over the single-shot seismic record, the global information of that gather.
It is worth noting that the network parameters of the global feature encoder do not change with the input data. The entire feature encoder can effectively extract from the observation data the large-scale information reflecting geological structure (such as structure type and stratification) as well as detailed structure. As described above, the global feature encoder is the encoding part of each data autoencoder. The neighborhood information encoder consists of 3 sequentially cascaded convolutional layers.
The feature generator consists of 5 fully connected layers, which map the enhanced vector from the encoder network into a high-dimensional feature space and connect to the decoder to complete the prediction of the real wave velocity model. The feature decoder consists of 6 sequentially cascaded convolutional layers, of which the fourth layer consists of 4 parallel convolutional layers. The output of the feature generator is the final output of the entire convolution-fully connected network, i.e. the predicted wave velocity model.
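A compact sketch of how the three parts of step S3 could be assembled is given below, with the pre-trained autoencoder encoders embedded (and frozen) as the global feature encoder. The decoder here is simplified (it does not reproduce the four parallel convolutions of the fourth layer), all layer sizes are assumptions, and the default 64 × 200 output simply matches a 1600 m × 5000 m model on a 25 m grid.

```python
import torch
import torch.nn as nn

class InversionNet(nn.Module):
    """Feature encoder (global + neighborhood) -> feature generator -> feature decoder."""
    def __init__(self, pretrained_encoders, nz=64, nx=200):
        super().__init__()
        # Global feature encoder: encoding parts of the trained autoencoders, frozen here,
        # assuming each is an nn.Sequential ending in a Linear bottleneck layer.
        self.global_encoders = nn.ModuleList(pretrained_encoders)
        for enc in self.global_encoders:
            for p in enc.parameters():
                p.requires_grad = False
        # Neighborhood information encoder: 3 cascaded conv layers.
        self.neighborhood = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
        )
        feat_dim = 32 * 8 * 8 + sum(list(e.children())[-1].out_features
                                    for e in self.global_encoders)
        # Feature generator: 5 fully connected layers.
        self.generator = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, 32 * (nz // 4) * (nx // 4)),
        )
        # Feature decoder: cascaded (transposed) conv layers producing the velocity model.
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (32, nz // 4, nx // 4)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),   # velocities normalized to [0, 1]
        )

    def forward(self, d_obs):                              # d_obs: (B, 1, nt, nr)
        global_feat = torch.cat([enc(d_obs) for enc in self.global_encoders], dim=1)
        local_feat = self.neighborhood(d_obs)
        feat = torch.cat([global_feat, local_feat], dim=1)
        return self.decoder(self.generator(feat))          # (B, 1, nz, nx)
```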
Step S4: a wave equation forward modeling network is constructed based on a deep neural network, and seismic wave field forward modeling is carried out on the final output of the convolution-fully connected network to obtain the seismic observation data corresponding to the predicted wave velocity model.
In the time-space domain, the one-dimensional constant-density acoustic wave equation is:
∂²u/∂t² = v² · ∂²u/∂z²
where t and z denote time and depth respectively, u denotes the acoustic wave field, and v denotes the acoustic wave velocity. After discretization, the acoustic wave equation can be expressed as:
u^{n+1} = G u^n - u^{n-1} + s^{n+1}
where u denotes the discretized acoustic wave field, G the forward operator, s the discretized source wave field, and n a given time instant. The forward modeling process can be decomposed into simple operations such as computing the Laplacian of the seismic wave field and element-wise addition, subtraction, multiplication and division of matrices; the Laplacian of the wave field is computed with the convolution operation commonly used in deep neural networks. The wave field propagation at each time step is taken as one layer of the deep neural network, the seismic wave velocity model is taken as the trainable parameter of the network, and the convolution operations and simple element-wise matrix operations of the wave field propagation process serve as the internal operations of the network, realizing the construction of the wave equation forward modeling network.
All of the above operations are naturally parallel; the parallel computation of the forward modeling process is implemented on the deep learning platform PyTorch, which greatly accelerates seismic forward modeling.
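A minimal sketch of one time step of this forward network is shown below, using a five-point (second-order) Laplacian stencil instead of the tenth-order spatial scheme used later in this embodiment; the velocity grid is the differentiable quantity and every operation is an ordinary PyTorch tensor operation, so gradients flow back to the predicted model.

```python
import torch
import torch.nn.functional as F

# Five-point Laplacian stencil applied as a convolution (second order in space;
# the embodiment actually uses a tenth-order stencil, omitted here for brevity).
LAPLACE_KERNEL = torch.tensor([[0., 1., 0.],
                               [1., -4., 1.],
                               [0., 1., 0.]]).view(1, 1, 3, 3)

def wave_step(u_curr, u_prev, v, src, dt, dx):
    """One layer of the forward network: advance the acoustic wave field one step.

    u_curr, u_prev: wave fields at times n and n-1, shape (B, 1, nz, nx)
    v:              wave velocity model (trainable tensor), shape (nz, nx)
    src:            source wave field injected at time n+1, shape (B, 1, nz, nx)
    Implements u^{n+1} = 2 u^n - u^{n-1} + (v dt)^2 Laplacian(u^n)/dx^2 + s^{n+1}.
    """
    lap = F.conv2d(u_curr, LAPLACE_KERNEL.to(u_curr), padding=1) / dx ** 2
    return 2.0 * u_curr - u_prev + (v * dt) ** 2 * lap + src

# Looping wave_step over nt time steps and sampling the wave field at the geophone
# positions yields the predicted observation data d_syn for the current model v.
```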
The loss function of the unsupervised seismic wave velocity inversion network consists of the observation data loss function L_d, the low-dimensional vector loss function L_l and the linear gradient wave velocity model loss function L_m. L_d is defined as the mean squared error (MSE) over the observation data:
L_d = (1 / (ns · nr · nt)) Σ (d_syn - d_obs)²
where d_syn and d_obs denote, respectively, the simulated observation data of the predicted model output by the unsupervised inversion network and the real observation data, and nt, nr, ns denote the number of time steps, geophones and sources of the observation data. The low-dimensional vector loss function can be defined as:
Figure PCTCN2021137890-appb-000004
where E_i(d_syn) denotes the low-dimensional vector output when the simulated observation data of the predicted model are fed into the encoder part of the i-th autoencoder, E_i(d_obs) denotes the low-dimensional vector output when the actual observation data of the real geological model are fed into the encoder part of the i-th autoencoder, l is the size of the low-dimensional vector, and σ_i is a varying coefficient used to control when each low-dimensional vector takes effect; in general, the smaller the vector's parameter count, the larger the value of σ_i in the early stage of network training and the smaller it is in the later stage. The linear gradient wave velocity model loss function can be defined as:
L_m = (1 / (nx · nz)) Σ (m_est - m_0)²
where m_est denotes the wave velocity of the predicted model output by the unsupervised inversion network, m_0 denotes the linear gradient wave velocity model, and nx and nz denote the horizontal and vertical size of the wave velocity model. The total objective function can be expressed as:
L = α L_d + β L_l + γ L_m
m_0 provides prior knowledge of the background wave velocity and plays a certain guiding role early in training; but as network training progresses and the predictions output by the network gradually fit the large-scale information in the velocity model, this role becomes a negative one, and at that point the low-dimensional vector loss function plays the key guiding role. In the late stage of network training, the predictions output by the network gradually fit the fine structure of the velocity model, and the observation data loss function then plays the dominant role. Accordingly, of the coefficients of the three loss terms, the coefficient of the linear gradient model term is largest in the early stage of network training, the coefficient of the low-dimensional vector term is largest in the middle stage, and the coefficient of the observation data term is largest in the late stage.
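A sketch of the multi-scale unsupervised loss is given below. The patent does not publish the exact schedules for α, β, γ and σ_i, so the simple schedules here (background term active early, low-dimensional vector term peaking mid-training, observation data term growing late) are assumptions that follow the qualitative description above.

```python
import torch
import torch.nn.functional as F

def multiscale_loss(d_syn, d_obs, encoders, m_est, m0, epoch, n_epochs=100):
    """L = alpha*L_d + beta*L_l + gamma*L_m with epoch-varying weights."""
    t = epoch / n_epochs
    # Observation data term (small-scale information).
    L_d = F.mse_loss(d_syn, d_obs)
    # Low-dimensional vector term (large-scale information from the autoencoders);
    # sigma_i is larger early in training and larger for the shorter (coarser) codes.
    L_l = 0.0
    for i, enc in enumerate(encoders):                    # encoders sorted coarse -> fine
        sigma_i = max(0.0, 1.0 - t) * (len(encoders) - i) / len(encoders)
        L_l = L_l + sigma_i * F.mse_loss(enc(d_syn), enc(d_obs))
    # Linear gradient background model term (basic prior information).
    L_m = F.mse_loss(m_est, m0)
    # One plausible schedule: gamma only active in the early epochs (cf. the first
    # 40 rounds of this embodiment), beta peaks mid-training, alpha grows late.
    alpha = t
    beta = 1.0 - abs(2.0 * t - 1.0)
    gamma = max(0.0, 1.0 - 2.5 * t)
    return alpha * L_d + beta * L_l + gamma * L_m
```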
Step S5: as shown in Fig. 4, the convolution-fully connected network is trained. Network training in this embodiment uses the Adam optimizer, and 100 rounds of training are carried out in total. The learning rate decays exponentially from 5×10^-5 towards 0, the batch size is set to 8, and for each wave velocity model in a batch, sources at 5 randomly selected positions are taken as the input of the wave equation forward modeling network and used to compute the observation data loss function. The seismic wave field forward modeling based on the wave equation forward modeling network uses a finite difference scheme of second order in time and tenth order in space. In addition, in the first 40 rounds of network training, the loss function based on the background wave velocity m_0 is used, with λ set to decrease linearly from 1 to 0. The computation of this experiment is based on 4 NVIDIA TITAN RTX graphics cards with 24 GB of memory each, each card containing 4608 stream processors. The network parameters corresponding to the lowest observation data loss function on the validation set are saved for subsequent experiments on the test set.
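A training skeleton for step S5 follows, assuming the components sketched earlier (the inversion network, wave_step and multiscale_loss); forward_model is a user-supplied, hypothetical wrapper that loops wave_step over the time steps for the randomly chosen sources and samples the geophone positions, and the exponential decay factor is illustrative.

```python
import torch

def train(net, forward_model, loader, encoders, m0, n_epochs=100):
    """Training skeleton for step S5; forward_model(m_est) must return the
    predicted observation data for the randomly selected sources."""
    opt = torch.optim.Adam(net.parameters(), lr=5e-5)
    # Exponential decay of the learning rate towards zero over the 100 epochs.
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9)
    for epoch in range(n_epochs):
        for d_obs in loader:                      # batches of 8 position-encoded gathers
            m_est = net(d_obs)                    # predicted (normalized) velocity model
            d_syn = forward_model(m_est)          # differentiable forward modeling
            loss = multiscale_loss(d_syn, d_obs, encoders, m_est, m0, epoch, n_epochs)
            opt.zero_grad()
            loss.backward()                       # gradients flow through the forward network
            opt.step()
        sched.step()
        # After each epoch: evaluate L_d on the validation set and keep the
        # parameters with the lowest validation observation-data loss.
```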
Step S6: the trained convolution-fully connected network is tested on the test set. Some of the results on the test set are shown in Fig. 6. The test results show that, without a real wave velocity model as a label, the unsupervised deep learning method based on seismic wave field forward modeling can effectively train the wave velocity inversion network and can fairly accurately reflect the wave velocity distribution of the subsurface medium; for both the position and shape of geological structures and the variation trend of the wave velocity distribution, the method fits the real wave velocity model well.
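For completeness, a small usage sketch of step S6 is given below: the parameters saved at the lowest validation loss are restored and the network is applied to test or field gathers; the checkpoint file name and the helper name are illustrative.

```python
import torch

def invert(net, d_obs, v_min=1800.0, v_max=5500.0, ckpt="best_params.pt"):
    """Apply the trained inversion network to test or field gathers (step S6)."""
    net.load_state_dict(torch.load(ckpt))
    net.eval()
    with torch.no_grad():
        m_pred = net(d_obs)                      # (B, 1, nz, nx), values in [0, 1]
    return v_min + m_pred * (v_max - v_min)      # back to physical velocities [m/s]
```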
Embodiment 2:
A multi-scale unsupervised seismic velocity inversion system based on autoencoding of observation data, characterized by comprising:
an inversion database construction module configured to construct corresponding geological velocity models according to actual geological conditions, compute the corresponding seismic observation data, and form an unsupervised seismic velocity inversion database based on the seismic observation data and the geological velocity models;
a seismic observation data autoencoding module configured to train multiple autoencoders with the simulated seismic observation data, different autoencoders encoding the global key information of the seismic observation data into low-dimensional vectors of different lengths;
a predicted velocity model construction module configured to add a position feature encoding to each seismic trace of the actual observation data, the encoding being used to determine the position information of each trace of observation data; to construct a convolutional-fully-connected network and embed the trained encoder parts of the observation data autoencoders at the front end of the network structure; and to input the position-encoded seismic observation data into the convolutional-fully-connected network and output the predicted velocity model corresponding to the seismic observation data;
a conversion module configured to construct a wave-equation forward modeling network that converts the predicted velocity model into the corresponding seismic observation data;
a parameter update module configured to compute the residual between the predicted observation data and the simulated seismic observation data; to input the predicted observation data and the simulated seismic observation data separately into the trained encoder parts of the observation data autoencoders, obtain the encoded low-dimensional vectors and compute the residuals between them; to compute the residual between the linear gradient velocity model and the predicted velocity model output by the inversion network; to sum these three residuals with proportions that vary with the training epoch to form a multi-scale unsupervised loss function that guides the network to recover information of different scales in the velocity model at different training stages; and to update the parameters of the convolutional-fully-connected network by back-propagating the gradients of this loss function; and
a velocity inversion module configured to process field observation data with the convolutional-fully-connected network with updated parameters to obtain the inversion result.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the specific embodiments of the present invention have been described above with reference to the accompanying drawings, they do not limit the scope of protection of the present invention. Those skilled in the art should understand that, on the basis of the technical solution of the present invention, various modifications or variations that can be made by those skilled in the art without creative effort still fall within the scope of protection of the present invention.

Claims (14)

  1. A multi-scale unsupervised seismic velocity inversion method based on autoencoding of observation data, characterized by comprising the following steps:
    constructing corresponding geological velocity models according to actual geological conditions, computing the corresponding simulated seismic observation data through numerical simulation, and forming an unsupervised seismic velocity inversion database based on the seismic observation data and the geological velocity models;
    training multiple autoencoders with the simulated seismic observation data, different autoencoders encoding the global key information of the seismic observation data into low-dimensional vectors of different lengths;
    adding a position feature encoding to each seismic trace of the actual observation data, the encoding being used to determine the position information of each trace of seismic observation data;
    constructing a convolutional-fully-connected network and embedding the trained encoder parts of the observation data autoencoders at the front end of the network structure, so that the inversion network can effectively extract the global information of the observation data at the seismic data input end; inputting the position-feature-encoded seismic observation data into the convolutional-fully-connected network and outputting the predicted velocity model corresponding to the seismic observation data;
    constructing a wave-equation forward modeling network and converting the predicted velocity model into the corresponding predicted observation data;
    computing the residual between the predicted observation data and the simulated seismic observation data; inputting the predicted observation data and the simulated seismic observation data separately into the trained encoder parts of the observation data autoencoders, obtaining the encoded low-dimensional vectors and computing the residuals between the low-dimensional vectors; computing the residual between the linear gradient velocity model and the predicted velocity model output by the inversion network;
    summing the three residuals with proportions that vary with the training epoch to form a multi-scale unsupervised loss function, guiding the network to recover information of different scales in the velocity model at different training stages, and updating the parameters of the convolutional-fully-connected network by back-propagating the gradients of this loss function; and
    processing field observation data with the convolutional-fully-connected network with updated parameters to obtain the inversion result.
  2. The multi-scale unsupervised seismic velocity inversion method based on autoencoding of observation data according to claim 1, characterized in that, when constructing corresponding geological velocity models according to actual geological conditions, wavefield simulation is performed for each geological velocity model with fixed source positions, receiver positions and observation time, and the wavefield data are recorded at the receiver positions to obtain the actual seismic data corresponding to the geological velocity model.
  3. The multi-scale unsupervised seismic velocity inversion method based on autoencoding of observation data according to claim 1, characterized in that the low-dimensional vectors all have fewer parameters than the seismic observation data, and a vector with fewer parameters corresponds to larger-scale information in the velocity model.
  4. The multi-scale unsupervised seismic velocity inversion method based on autoencoding of observation data according to claim 1, characterized in that the actual observation data are autoencoded by multiple regularized autoencoders, the output of the encoder part of each regularized autoencoder consisting of a vector with fewer parameters than the seismic observation data, the vector containing the global key information of the seismic observation data and corresponding to the large-scale information in the velocity model.
  5. The multi-scale unsupervised seismic velocity inversion method based on autoencoding of observation data according to claim 1, characterized in that a trigonometric position feature encoding is added to each seismic trace of the actual observation data, the position feature encoding being two values obtained by inputting the shot point position and the receiver position of that seismic trace into a formula composed of sine and cosine functions, so that arbitrary source and receiver positions can be labeled.
  6. The multi-scale unsupervised seismic velocity inversion method based on autoencoding of observation data according to claim 1, characterized in that the convolutional-fully-connected network comprises a feature encoder, a feature generator and a feature decoder, the encoding structures of the observation data autoencoders and other network structures together forming the feature encoder, and the network being used to establish the mapping from the seismic observation data to the velocity model.
  7. The multi-scale unsupervised seismic velocity inversion method based on autoencoding of observation data according to claim 6, characterized in that the feature encoder comprises a global feature encoder and a neighborhood information encoder, the observation data input to the network being fed into both of these parts, and the outputs of the two parts being concatenated and then input into the feature generator; the global feature encoder is the encoding structure of each observation data autoencoder, and the neighborhood information encoder consists of 3 cascaded convolutional layers;
    or, the feature generator comprises 5 fully connected layers, the input of the feature generator is the output of the encoder, and the output of the feature generator serves as the input of the feature decoder;
    or, the feature decoder comprises 6 cascaded convolutional layers, the 4th of which comprises 4 parallel convolutional layers.
  8. The multi-scale unsupervised seismic velocity inversion method based on autoencoding of observation data according to claim 1, characterized in that the specific process of constructing a wave-equation forward modeling network to convert the predicted velocity model into the corresponding seismic observation data comprises: constructing the wave-equation forward modeling network on the basis of a deep neural network, and performing seismic wavefield forward modeling on the final output of the convolutional-fully-connected network to obtain the seismic observation data corresponding to the predicted velocity model.
  9. The multi-scale unsupervised seismic velocity inversion method based on autoencoding of observation data according to claim 1 or 8, characterized in that the specific process of constructing the wave-equation forward modeling network on the basis of a deep neural network comprises: in the time-space domain, discretizing the constant-density acoustic wave equation, the propagation of the seismic wavefield over time being based on iterating the forward operator of the discretized equation; and constructing the wave-equation forward modeling network by treating the seismic wavefield propagation at each time step as one layer of a deep neural network, taking the seismic velocity model as the trainable parameters of that deep neural network, and taking the convolutions and the element-wise matrix operations of the wavefield propagation as the internal operations of the network.
  10. The multi-scale unsupervised seismic velocity inversion method based on autoencoding of observation data according to claim 1, characterized in that weight coefficients varying with the training epoch are added respectively to the three terms composing the loss function, namely the observation data residual, the low-dimensional vector residual and the linear model residual; the three residuals respectively contain small-scale information, large-scale information at different levels, and basic prior information.
  11. The multi-scale unsupervised seismic velocity inversion method based on autoencoding of observation data according to claim 10, characterized in that a gradually increasing weight coefficient is added to the observation data residual, and gradually decreasing weight coefficients are added to the low-dimensional vector residual and the linear model residual, so as to guide the network to target the large-scale information of the inversion model in the early stage of training and the fine structures of the inversion model in the middle and late stages of training.
  12. A multi-scale unsupervised seismic velocity inversion system based on autoencoding of observation data, characterized by comprising:
    an inversion database construction module configured to construct corresponding geological velocity models according to actual geological conditions, compute the corresponding seismic observation data, and form an unsupervised seismic velocity inversion database based on the seismic observation data and the geological velocity models;
    a seismic observation data autoencoding module configured to train multiple autoencoders with the simulated seismic observation data, different autoencoders encoding the global key information of the seismic observation data into low-dimensional vectors of different lengths;
    a predicted velocity model construction module configured to add a position feature encoding to each seismic trace of the actual observation data, the encoding being used to determine the position information of each trace of observation data; to construct a convolutional-fully-connected network and embed the trained encoder parts of the observation data autoencoders at the front end of the network structure; and to input the position-encoded seismic observation data into the convolutional-fully-connected network and output the predicted velocity model corresponding to the seismic observation data;
    a conversion module configured to construct a wave-equation forward modeling network that converts the predicted velocity model into the corresponding seismic observation data;
    a parameter update module configured to compute the residual between the predicted observation data and the simulated seismic observation data; to input the predicted observation data and the simulated seismic observation data separately into the trained encoder parts of the observation data autoencoders, obtain the encoded low-dimensional vectors and compute the residuals between the low-dimensional vectors; to compute the residual between the linear gradient velocity model and the predicted velocity model output by the inversion network; to sum these three residuals with proportions that vary with the training epoch to form a multi-scale unsupervised loss function that guides the network to recover information of different scales in the velocity model at different training stages; and to update the parameters of the convolutional-fully-connected network by back-propagating the gradients of this loss function; and
    a velocity inversion module configured to process field observation data with the convolutional-fully-connected network with updated parameters to obtain the inversion result.
  13. A computer-readable storage medium, characterized in that a plurality of instructions are stored therein, the instructions being adapted to be loaded by a processor of a terminal device and to execute the steps of the method according to any one of claims 1-11.
  14. A terminal device, characterized by comprising a processor and a computer-readable storage medium, the processor being configured to implement the instructions, and the computer-readable storage medium being configured to store a plurality of instructions, the instructions being adapted to be loaded by the processor and to execute the steps of the method according to any one of claims 1-11.
PCT/CN2021/137890 2021-11-19 2021-12-14 Multi-scale unsupervised seismic velocity inversion method based on autoencoding of observation data WO2023087451A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/031,289 US11828894B2 (en) 2021-11-19 2021-12-14 Multi-scale unsupervised seismic velocity inversion method based on autoencoder for observation data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111400849.4 2021-11-19
CN202111400849.4A CN114117906B (zh) Multi-scale unsupervised seismic velocity inversion method based on autoencoding of observation data

Publications (1)

Publication Number Publication Date
WO2023087451A1 true WO2023087451A1 (zh) 2023-05-25

Family

ID=80440831

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/137890 WO2023087451A1 (zh) 2021-11-19 2021-12-14 Multi-scale unsupervised seismic velocity inversion method based on autoencoding of observation data

Country Status (3)

Country Link
US (1) US11828894B2 (zh)
CN (1) CN114117906B (zh)
WO (1) WO2023087451A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115060769B (zh) * 2022-06-07 2024-04-02 Shenzhen University Method and system for detecting fractures and loosening in tunnel surrounding rock based on intelligent inversion
CN115061200B (zh) * 2022-06-08 2024-05-24 Peking University Method for suppressing interbed multiples based on the virtual-event method and an unsupervised neural network
CN115422703B (zh) * 2022-07-19 2023-09-19 Nanjing University of Aeronautics and Astronautics Surface thermal-infrared emissivity inversion method based on MODIS data and a Transformer network
CN115619950B (zh) * 2022-10-13 2024-01-19 China University of Geosciences (Wuhan) Three-dimensional geological modeling method based on deep learning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8688616B2 (en) * 2010-06-14 2014-04-01 Blue Prism Technologies Pte. Ltd. High-dimensional data analysis
US10572800B2 (en) * 2016-02-05 2020-02-25 Nec Corporation Accelerating deep neural network training with inconsistent stochastic gradient descent
US10527699B1 (en) * 2018-08-01 2020-01-07 The Board Of Trustees Of The Leland Stanford Junior University Unsupervised deep learning for multi-channel MRI model estimation
WO2021026545A1 (en) * 2019-08-06 2021-02-11 Exxonmobil Upstream Research Company Petrophysical inversion with machine learning-based geologic priors
CN112764110A (zh) * 2020-07-09 2021-05-07 五季数据科技(北京)有限公司 Clustering seismic facies analysis method based on restricted Boltzmann machine feature encoding

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200183041A1 (en) * 2018-12-11 2020-06-11 Exxonmobil Research And Engineering Company Machine learning-augmented geophysical inversion
CN112444850A (zh) * 2019-08-29 2021-03-05 China Petroleum & Chemical Corporation Seismic data velocity modeling method, storage medium and computing device
CN111562611A (zh) * 2020-04-08 2020-08-21 Shandong University Wave-equation-driven semi-supervised deep learning seismic data inversion method
CN111723329A (zh) * 2020-06-19 2020-09-29 Nanjing University Waveform inversion method with seismic phase feature recognition based on a fully convolutional neural network
CN113176607A (zh) * 2021-04-23 2021-07-27 Xi'an Jiaotong University Sparse autoencoder seismic inversion method and system incorporating physical laws

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116660992A (zh) * 2023-06-05 2023-08-29 Beijing Institute of Petrochemical Technology Seismic signal processing method based on multi-feature fusion
CN116736372A (zh) * 2023-06-05 2023-09-12 Chengdu University of Technology Seismic interpolation method and system based on a spectral normalization generative adversarial network
CN116736372B (zh) * 2023-06-05 2024-01-26 Chengdu University of Technology Seismic interpolation method and system based on a spectral normalization generative adversarial network
CN116660992B (zh) * 2023-06-05 2024-03-05 Beijing Institute of Petrochemical Technology Seismic signal processing method based on multi-feature fusion
CN116610937A (zh) * 2023-07-18 2023-08-18 Ocean University of China Method, apparatus and electronic device for low-frequency information extrapolation in latent space
CN116610937B (zh) * 2023-07-18 2023-09-22 Ocean University of China Method, apparatus and electronic device for low-frequency information extrapolation in latent space
CN117407712A (zh) * 2023-10-17 2024-01-16 China United Coalbed Methane Co., Ltd. Low-frequency seismic data compensation method based on multi-task learning

Also Published As

Publication number Publication date
US20230305177A1 (en) 2023-09-28
US11828894B2 (en) 2023-11-28
CN114117906B (zh) 2024-05-10
CN114117906A (zh) 2022-03-01

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21964596

Country of ref document: EP

Kind code of ref document: A1