CN113625336B - Seismic wave impedance thin layer inversion method based on full convolution neural network - Google Patents

Seismic wave impedance thin layer inversion method based on full convolution neural network

Info

Publication number
CN113625336B
CN113625336B (application CN202110809060.8A; also published as CN113625336A)
Authority
CN
China
Prior art keywords
wave impedance
seismic
inversion
neural network
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110809060.8A
Other languages
Chinese (zh)
Other versions
CN113625336A (en)
Inventor
Xu Huiqun (许辉群)
Wang Zefeng (王泽峰)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangtze University
Original Assignee
Yangtze University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangtze University filed Critical Yangtze University
Priority to CN202110809060.8A
Publication of CN113625336A
Application granted
Publication of CN113625336B
Legal status: Active (granted)

Classifications

    • G01V 1/282 Application of seismic models, synthetic seismograms (Geophysics; Seismology, seismic or acoustic prospecting or detecting; Processing seismic data, e.g. for interpretation or for event detection)
    • G01V 1/306 Analysis for determining physical properties of the subsurface, e.g. impedance, porosity or attenuation profiles
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches (Pattern recognition)
    • G06N 3/045 Combinations of networks (Neural networks; architecture, e.g. interconnection topology)
    • G06N 3/048 Activation functions (Neural networks)
    • G06N 3/08 Learning methods (Neural networks)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Geophysics (AREA)
  • Geology (AREA)
  • Environmental & Geological Engineering (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Geophysics And Detection Of Objects (AREA)

Abstract

The invention relates to the technical field of reservoir prediction in seismic exploration, and in particular to a seismic wave impedance thin layer inversion method based on a full convolution neural network. In the method, a model is obtained by training on seismic samples, seismic data to be predicted are input into the model, and a wave impedance inversion result is obtained by prediction. The full convolution neural network models the known seismic data nonlinearly, and its zero padding keeps every output layer the same size as the input layer as information is passed forward, which improves resolution and provides a new, intelligent method for seismic wave impedance thin layer prediction.

Description

Seismic wave impedance thin layer inversion method based on full convolution neural network
Technical Field
The invention relates to the technical field of reservoir prediction in seismic exploration, in particular to a seismic wave impedance thin layer inversion method based on a full convolution neural network.
Background
The "wave impedance inversion is the final expression form of high-resolution seismic data processing" shows that the seismic wave impedance inversion is particularly important in the seismic exploration technology. Wave impedance is a complex parameter closely related to formation velocity and density, and also closely related to formation lithology, and has good correspondence with oil-bearing reservoirs. Wave impedance inversion is one of the seismic lithology inversion, and the inversion method is a process of modeling by using existing knowledge for seismic inversion, so that the seismic wave impedance inversion is one of extremely important and effective means for reservoir prediction.
Seismic wave impedance inversion falls into two broad categories, narrowband inversion and broadband inversion. The former is the early, conventional approach: the wave impedance is inverted directly from the seismic records, typically by first obtaining the reflection coefficient series through inverse filtering and then calculating the wave impedance from it. Because seismic records are band-limited (i.e., narrowband), the wave impedance inversion result is limited by the bandwidth of the seismic record, so such methods are called narrowband inversion. Model-based seismic wave impedance inversion methods have been developed in recent years. Because the model can be set freely, its bandwidth is not constrained by the frequency band of the seismic record; the seismic records are used only to fit the forward theoretical records of the model, so this type of inversion method is called broadband inversion. Model-based inversion is, however, prone to non-uniqueness: forward theoretical records of quite different models can all fit the actual seismic records well. To suppress this non-uniqueness, various constraint conditions are usually imposed, so broadband inversion in practice is constrained broadband inversion, which is also the dominant inversion technology in industry today.
As formations become more complicated, thin layers appear more and more often, and conventional wave impedance inversion cannot fully meet production requirements for thin layer prediction because of low processing efficiency, high processing cost, low precision and other factors. Researchers in the field have therefore studied seismic wave impedance inversion in greater depth, and new methods and technologies are continually being applied to it, promoting the further development of seismic inversion and yielding a succession of results. A full convolution neural network performs nonlinear modeling and optimization using known knowledge, a process very similar to that of an inversion method, and it shows many advantages in seismic wave impedance thin layer inversion. On this basis, a seismic wave impedance thin layer inversion method based on a full convolution neural network is proposed and tested on theoretical model seismic data obtained by forward modeling. A high-precision inversion result is finally obtained, achieving the goals of predicting thin layers and improving reservoir prediction, which demonstrates the feasibility and effectiveness of the method and is very important for thin layer prediction in seismic exploration.
Disclosure of Invention
In view of the above problems, the invention provides a seismic wave impedance thin layer inversion method based on a full convolution neural network, which is used to predict thin layers and improve the effect of reservoir prediction.
In order to achieve the above purpose, the invention adopts the following technical scheme:
the seismic wave impedance thin layer inversion method based on the full convolution neural network is characterized by comprising the following steps of:
s1: extracting seismic model data and corresponding wave impedance from forward data to obtain a part serving as a sample pair;
s2: building a full convolution neural network and giving a set of hyperparameters;
the full convolution neural network comprises three repetitions of a convolution layer, a dropout layer, an activation layer, a pooling layer and a batch normalization layer, followed at the end by a deconvolution layer and an activation layer;
s3: the sample pairs are preprocessed by normalization and random shuffling, and then input into the full convolution neural network;
s4: the full convolution neural network is run for a number of epochs to obtain the training time, the loss value and a trained inversion model;
s5: the seismic data requiring wave impedance inversion are input into the trained inversion model, and the predicted wave impedance inversion result is output; the predicted wave impedance is compared with the original wave impedance while the loss value and training time of the inversion model are observed, which together reflect the quality of the obtained inversion model; if the quality is good, the inversion model is saved; if it is poor, the method returns to step S2 to adjust the parameters and retrain.
Further, the hyperparameters include the number of training epochs, the batch sample number, the learning rate, the dropout rate, the weight decay coefficient, the optimizer momentum and the convolution kernel size.
Further, among the above hyperparameters, the learning rate range is [1e-3, 1e-2]; the dropout rate adjustment range is [0.1, 0.5]; and the weight decay coefficient adjustment range is [0, 1e-4].
Further, in step S3, the normalization formula is:
x* = (x - min) / (max - min)
wherein: x* is the normalized value; x is the original data; max is the maximum value in the original data; and min is the minimum value in the original data.
Further, in step S4, the loss function used is the MSE loss function, with the formula:
L = (1/n) Σ (y_i - ŷ_i)^2
wherein: L denotes the loss value, n is the number of samples, y_i is the predicted value, and ŷ_i is the label value.
Further, in step S5, the specific prediction process is as follows:
first, the trained inversion model is loaded, the seismic data of a line are converted into a three-dimensional tensor and input into the trained inversion model, the wave impedance data of the three-dimensional tensor are obtained by prediction, and the wave impedance data are drawn as a graph to obtain a section of predicted wave impedance;
then, the predicted wave impedance is compared with the original wave impedance; if the formation thickness change is reflected well and the accuracy is high, the model and loss value are saved; if the formation thickness change is not reflected well or the accuracy is not high, the method returns to parameter adjustment and retraining to obtain a new model.
Further, the depth of the full convolution neural network is related to the size of the input seismic sample, and the network structure can be changed according to that size. The sample size is determined by the number of sampling points, the sampling interval and the number of traces; when the data are large, additional network layers such as convolution layers and deconvolution layers can be added.
The beneficial effects of the invention are as follows:
the known seismic data nonlinear modeling is utilized through the full convolution neural network, and zero padding is utilized by the full convolution neural network, so that each output layer can keep the same size as the input layer to be continuously transmitted, the effect of improving the resolution is achieved, and an intelligent new method is provided for the seismic wave impedance thin layer prediction.
Drawings
FIG. 1 is a schematic diagram of the seismic wave impedance thin layer inversion workflow based on the full convolution neural network.
Fig. 2 is a schematic diagram of a fully convolutional neural network structure.
FIG. 3a is the input sample pair for traces 10-30 in an embodiment of the present invention.
FIG. 3b is the input sample pair for traces 50-70 in an embodiment of the present invention.
FIG. 3c is the input sample pair for traces 90-110 in an embodiment of the present invention.
FIG. 3d is the input sample pair for traces 130-150 in an embodiment of the present invention.
Fig. 4 is a schematic diagram of an actual network training structure in an embodiment of the present invention.
Fig. 5 is the loss value curve during training in an embodiment of the present invention.
FIG. 6 is a preliminary test result in an embodiment of the present invention.
Fig. 7 is a cross-section of a tag wave impedance in an embodiment of the invention.
FIG. 8 is a predicted seismic data profile in an embodiment of the invention.
FIG. 9 shows the high-precision wave impedance inversion result in an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to model data obtained by forward modeling; these examples are illustrative only and are not intended to limit the invention. They also make the advantages of the invention clearer and easier to understand.
As shown in fig. 1, a method for seismic wave impedance thin layer inversion based on a full convolution neural network in this embodiment includes the following steps:
s1: extracting seismic model data and corresponding wave impedance from forward data to obtain a part serving as a sample pair;
Since the sections obtained by forward modeling are stored as arrays, the whole forward-modeled seismic model data and the corresponding wave impedance are extracted in python by slicing to serve as sample pairs.
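As a minimal illustration of this slicing step (a sketch only: the file names are hypothetical, and the 20-trace windows simply mirror the sample pairs shown in figs. 3a-3d), the sample pairs might be cut out of the forward-modeled arrays as follows:

```python
import numpy as np

# Assumed inputs: the forward-modeled seismic section and its wave impedance
# counterpart, stored as (n_traces, n_samples) arrays, e.g. (201, 66).
seismic = np.load("forward_seismic.npy")      # hypothetical file name
impedance = np.load("forward_impedance.npy")  # hypothetical file name

# Cut contiguous groups of traces out of the sections by slicing to form
# sample pairs, e.g. traces 10-30, 50-70, 90-110 and 130-150 (figs. 3a-3d).
windows = [(10, 30), (50, 70), (90, 110), (130, 150)]
sample_pairs = [(seismic[a:b, :], impedance[a:b, :]) for a, b in windows]
```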
S2: building a full convolution neural network and giving a set of hyperparameters;
As shown in fig. 2, the full convolution neural network is built according to the network structure diagram: a convolution layer, a dropout layer, an activation layer, a pooling layer and a BN (batch normalization) layer are stacked in sequence three times, followed by a final deconvolution layer and an activation layer.
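The sketch below shows one way such a network could be written in PyTorch. The channel counts, kernel sizes, pooling factors and the interpolation used to snap the output back to the input size are assumptions made only so the example runs; the patent itself fixes only the layer ordering (three convolution/dropout/activation/pooling/batch-normalization blocks followed by a deconvolution layer and an activation layer).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch, p_drop=0.3):
    """One repetition: convolution, dropout, activation, pooling, batch normalization."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),  # zero padding keeps H and W
        nn.Dropout(p_drop),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2),
        nn.BatchNorm2d(out_ch),
    )

class FCNImpedance(nn.Module):
    """Three conv blocks, then a final deconvolution layer and activation layer."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.Sequential(
            conv_block(1, 16),
            conv_block(16, 32),
            conv_block(32, 64),
        )
        self.deconv = nn.ConvTranspose2d(64, 1, kernel_size=8, stride=8)
        self.act = nn.ReLU()

    def forward(self, x):                    # x: (batch, 1, n_traces, n_samples)
        h, w = x.shape[-2:]
        y = self.deconv(self.blocks(x))      # transposed convolution upsamples
        # Snap back to the exact input size so the output section matches the input.
        y = F.interpolate(y, size=(h, w), mode="bilinear", align_corners=False)
        return self.act(y)
```

Because no fully connected layer is used, the same network accepts the 20-trace training windows and a full 201-trace line without modification.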
The hyperparameters comprise the number of training epochs, the batch sample number, the learning rate, the dropout rate, the weight decay coefficient, the optimizer momentum and the convolution kernel size.
As the number of epochs increases, the weights in the neural network are updated more times and the fit moves from underfitting to fitting; but the number cannot be too large, otherwise it moves from fitting to overfitting. A batch sample number that is too large or too small tends to reduce the effective capacity of the model, and it must be chosen according to the hardware capacity. A learning rate that is too high or too low reduces the effective capacity of the model because optimization fails; it generally has a recommended range, taken here as [1e-3, 1e-2]. A smaller dropout rate means that more parameters are retained, so the interdependence between parameters and the capacity of the model increase, but the effective capacity of the model does not necessarily increase; the adjustment range is generally [0.1, 0.5]. The weight decay coefficient effectively limits parameter variation and plays a certain regularizing role; its adjustment range is generally [0, 1e-4]. The optimizer momentum is used to speed up training and to avoid getting stuck in locally optimal solutions. A larger convolution kernel increases the model capacity. Common convolution kernel sizes are 7×7, 5×5, 3×3, 1×1, 7×1 and 1×7.
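For concreteness, one possible hyperparameter configuration consistent with the ranges above might look as follows; every specific value is an assumption for illustration, since the description fixes only the ranges:

```python
hyperparams = {
    "epochs": 500,           # number of training epochs (assumed value)
    "batch_size": 4,         # batch sample number, chosen to suit the hardware
    "learning_rate": 5e-3,   # within the recommended range [1e-3, 1e-2]
    "dropout": 0.3,          # within the adjustment range [0.1, 0.5]
    "weight_decay": 1e-5,    # within the adjustment range [0, 1e-4]
    "momentum": 0.9,         # optimizer momentum to speed up training
    "kernel_size": 3,        # common sizes: 7x7, 5x5, 3x3, 1x1, 7x1, 1x7
}
```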
S3: the sample data are preprocessed by normalization and random shuffling, and then input into the full convolution neural network;
Normalizing the samples allows the network to fit better and faster; shuffling by random sampling reduces random error, so that each sample is drawn with as equal a probability as possible.
The specific normalization formula is:
x* = (x - min) / (max - min)
wherein: x* is the normalized value; x is the original data; max is the maximum value in the original data; and min is the minimum value in the original data.
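Continuing the slicing sketch above, this preprocessing could be written as follows (min-max normalization followed by a random shuffle of the pairs; the text does not say whether normalization is applied per sample or globally, so the per-sample choice here is an assumption):

```python
import numpy as np

def min_max_normalize(x):
    """x* = (x - min) / (max - min), mapping the data into [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

# Normalize the seismic data and the wave impedance labels, then shuffle the
# pairs so that each sample is drawn with (approximately) equal probability.
seismic_n = [min_max_normalize(s) for s, _ in sample_pairs]
impedance_n = [min_max_normalize(z) for _, z in sample_pairs]
order = np.random.permutation(len(sample_pairs))
train_x = [seismic_n[i] for i in order]
train_y = [impedance_n[i] for i in order]
```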
S4: the full convolution neural network is run for a number of epochs to obtain the training time, the loss value and a trained inversion model.
The samples pass through the built network layers in turn, as shown in fig. 4, with the output of one layer serving as the input of the next until all layers have been executed. The loss value is calculated from the error produced during training and validation (the difference between the true and predicted wave impedance values), and the time required for each epoch is obtained as the difference between consecutive timestamps. The loss function used here is the MSE loss function:
L = (1/n) Σ (y_i - ŷ_i)^2
wherein: L denotes the loss value, n is the number of samples, y_i is the predicted value, and ŷ_i is the label value. As the loss value decreases, the gradient and the inversion mapping result gradually stabilize and finally converge.
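A compact training loop matching this description might look as follows, continuing the sketches above; the choice of SGD as the optimizer and all numeric settings are assumptions, since the text names only the MSE loss and the presence of momentum and weight decay:

```python
import time
import numpy as np
import torch

model = FCNImpedance()                       # network from the sketch above
criterion = torch.nn.MSELoss()               # L = (1/n) * sum((y_i - y_hat_i)^2)
optimizer = torch.optim.SGD(model.parameters(),
                            lr=hyperparams["learning_rate"],
                            momentum=hyperparams["momentum"],
                            weight_decay=hyperparams["weight_decay"])

# Each sample pair (20 traces x 66 samples) becomes one training example.
x = torch.tensor(np.stack(train_x), dtype=torch.float32).unsqueeze(1)  # (4, 1, 20, 66)
y = torch.tensor(np.stack(train_y), dtype=torch.float32).unsqueeze(1)

for epoch in range(hyperparams["epochs"]):
    t0 = time.time()
    optimizer.zero_grad()
    loss = criterion(model(x), y)            # error between predicted and label impedance
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.6f}, time={time.time() - t0:.3f} s")

torch.save(model.state_dict(), "inversion_model.pt")   # keep the trained inversion model
```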
S5: the seismic data requiring wave impedance inversion are input into the trained inversion model, and the predicted wave impedance inversion result is output. The predicted wave impedance is compared with the original wave impedance while the loss value and training time of the inversion model are observed, which together reflect the quality of the obtained inversion model. If the quality is good, the inversion model is saved; if it is poor, the method returns to step S2 to adjust the parameters and retrain. The parameters are regulated according to whether the fluctuation of the loss value is gentle and whether the inversion result accurately reflects formation thickness variation, so as to meet expectations; specific values are set within the hyperparameter ranges given in step S2.
In step S5, the specific prediction process is as follows:
Let x be the seismic data of one line, of size 201 traces with 66 sampling points per trace at a 2 ms sampling interval. First, the trained model is loaded, x is converted into a three-dimensional tensor of shape (1, 201, 66) and input into the model, wave impedance data of shape (1, 201, 66) are obtained by prediction, and these data are drawn as a graph to obtain a section of predicted wave impedance. Then the predicted wave impedance is compared with the original wave impedance; if the formation thickness change is reflected well and the accuracy is high, the model and loss value are saved; if the formation thickness change is not reflected well or the accuracy is not high, the method returns to parameter adjustment and retraining to obtain a new model.
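A sketch of this prediction step follows; the extra channel axis (making the tensor 4-D for the 2-D convolutions of the sketch network) and the file and colormap choices are assumptions of the sketch, while the (1, 201, 66) shape and the section plot come from the description above:

```python
import numpy as np
import torch
import matplotlib.pyplot as plt

model = FCNImpedance()
model.load_state_dict(torch.load("inversion_model.pt"))   # load the trained inversion model
model.eval()

line = np.load("line_seismic.npy")            # hypothetical file: 201 traces x 66 samples
x = torch.tensor(min_max_normalize(line), dtype=torch.float32)
x = x.view(1, 1, 201, 66)                     # the (1, 201, 66) tensor plus a channel axis

with torch.no_grad():
    impedance_pred = model(x).squeeze().numpy()   # predicted wave impedance, 201 x 66

# Draw the predicted wave impedance as a section for comparison with the label section.
plt.imshow(impedance_pred.T, aspect="auto", cmap="jet")
plt.xlabel("trace")
plt.ylabel("sample (2 ms interval)")
plt.show()
```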
In step S5, the quality control criterion is that the following two conditions are met simultaneously: first, the hyperparameters (such as the learning rate and the other hyperparameter values listed above) are set appropriately; second, the input data are faithful (the data sources are all true and reliable).
The depth of the network used in this example is also related to the size of the input seismic sample, and the network structure can be altered according to that size. If the input seismic sample data are large, the network structure can be deepened; if they are not large, an overly deep network structure is not required.
The network structure is modified according to the size of the input seismic samples, where the sample size is determined by the number of sampling points, the sampling interval and the number of traces; when the data are large, more network layers such as convolution layers and deconvolution layers can be added. In this case, the input sample data comprise only 201 traces with 66 sampling points per trace, a portion of the theoretical model data sampled at 2 ms, so the network training model built according to the network training structure diagram of fig. 4 is sufficient.
The full convolution neural network (FCN) classifies input samples at the pixel level and thus solves image segmentation at the semantic level. Unlike the classical convolutional neural network, which uses fully connected layers at the end to classify fixed-length feature vectors, the full convolution neural network can accept an input image of any size. A deconvolution layer upsamples the feature map of the last convolution layer, restoring it to the same size as the input and improving resolution, so that a prediction can be made for every pixel while the spatial information of the original input image is preserved and accuracy is improved; pixel-by-pixel classification is then performed on the upsampled feature map. In summary, the full convolution neural network uses the correspondence between the output result and the input sample to give the classification of the corresponding area of the input sample directly, dispensing with the sliding-window candidate boxes of traditional object detection.
A structural schematic of the full convolution neural network for seismic wave impedance thin layer inversion, drawn with reference to the network structure of the full convolution neural network for semantic segmentation, is shown in fig. 2. The schematic shows the input sample passing through each network layer in turn; the final layer is a deconvolution layer that upsamples, restoring the output wave impedance to the same size as the input seismic data and improving resolution, and the network carries out forward inference and backward learning.
In existing deep learning algorithms the data sets are often too narrow, developed mostly for specific data sets such as faces or handwriting font libraries. More and more researchers are applying deep learning to seismic wave impedance inversion, but in seismic exploration the quality of seismic data sources varies, and raw seismic data generally contain defects and are highly susceptible to contamination by various factors, so preprocessing the seismic data before inputting them and the corresponding wave impedance labels into the neural network is extremely important. On the one hand, preprocessing ensures the validity of the seismic data; on the other hand, adjusting the data format makes the data conform better to the input format of the full convolution neural network. Preprocessed seismic data allow the network to converge faster and better during training.
The input samples are preprocessed by normalization and random shuffling; with the same hyperparameters, an inversion model trained on preprocessed seismic data performs better. The method therefore has relatively few restrictions in practical scenarios and is better suited to seismic data inversion under different acquisition conditions.
Because the full convolution neural network accepts sample input of any size and classifies at the pixel level, improving resolution, it is well suited to seismic wave impedance thin layer inversion. The size of the input seismic sample is not fixed; practical seismic data are generally huge, so the required sample input can be constructed from the original seismic data, and the network structure can be changed according to the different sizes of the input seismic samples. If the input data are large, the network can be made correspondingly deeper, and residual blocks and the like can be added to reduce the effects of the deep network. Because the forward model data used here are not very large, no particularly deep network is required. The sample pairs used as input are constructed artificially; four pairs of samples are taken as input, each pair consisting of 20 traces with 66 sampling points per trace at a 2 ms sampling interval. The input sample pairs are shown in figures 3a-3d.
With the network structure schematic established and the sample pairs constructed, a network training flow for obtaining the inversion model from the training data is built. In this flow, the seismic sample pairs are taken as input and pass three times through a convolution layer, dropout layer, ReLU activation layer, pooling layer and batch normalization layer, followed by a final deconvolution layer, after which a trained inversion model is obtained. The actual network training flow is shown in fig. 4.
Every neural network that is built has a best matching hyperparameter combination, including the convolution kernel size, learning rate, batch_size, loss-function hyperparameters and dropout rate. The best hyperparameter combination generally yields a relatively small loss value and a better trained inversion model. Of course, for each neural network there is no direct method of determining the optimal hyperparameter combination; it is found by trial and error. The implementation of the seismic wave impedance thin layer inversion method based on the full convolution neural network consists mainly of three steps: first, preprocessing the data; second, comparing the loss curve obtained from training and validation (fig. 5) and the preliminary inversion result (fig. 6) with the label wave impedance (fig. 7), and repeatedly tuning and retraining the network; third, inputting the seismic data for which wave impedance is required (fig. 8) into the saved inversion model for prediction, finally obtaining a high-precision wave impedance inversion result (fig. 9).
What is not described in detail in this specification is prior art known to those skilled in the art.

Claims (6)

1. The seismic wave impedance thin layer inversion method based on the full convolution neural network is characterized by comprising the following steps of:
s1: extracting seismic model data and corresponding wave impedance from forward data to obtain a part serving as a sample pair;
s2: building a full convolution neural network and giving a set of hyperparameters;
the full convolution neural network comprises three repetitions of a convolution layer, a dropout layer, an activation layer, a pooling layer and a batch normalization layer, followed at the end by a deconvolution layer and an activation layer;
s3: the sample pairs are preprocessed by normalization and random shuffling, and then input into the full convolution neural network;
s4: the full convolution neural network is run for a number of epochs to obtain the training time, the loss value and a trained inversion model;
s5: the seismic data requiring wave impedance inversion are input into the trained inversion model, and the predicted wave impedance inversion result is output; the predicted wave impedance is compared with the original wave impedance while the loss value and training time of the inversion model are observed, which together reflect the quality of the obtained inversion model; if the quality is good, the inversion model is saved; if it is poor, the method returns to step S2 to adjust the parameters and retrain;
in the step S5, the specific process of prediction is as follows:
the trained inversion model is loaded, the seismic data of a line are converted into a three-dimensional tensor and input into the trained inversion model, the wave impedance data of the three-dimensional tensor are obtained by prediction, and the wave impedance data are drawn as a graph to obtain a section of predicted wave impedance;
in step S5, the quality control criterion is that the following two conditions are met simultaneously: first, the hyperparameters are set appropriately; and second, the input data are faithful.
2. The seismic wave impedance thin layer inversion method based on a full convolution neural network according to claim 1, wherein the hyperparameters comprise the number of training epochs, the batch sample number, the learning rate, the dropout rate, the weight decay coefficient, the optimizer momentum and the convolution kernel size.
3. The seismic wave impedance thin layer inversion method based on a full convolution neural network according to claim 2, wherein the learning rate range is [1e-3, 1e-2]; the dropout rate adjustment range is [0.1, 0.5]; and the weight decay coefficient adjustment range is [0, 1e-4].
4. The seismic wave impedance thin layer inversion method based on a full convolution neural network according to claim 1, wherein in step S3 the normalization formula is: x* = (x - min) / (max - min), wherein: x* is the normalized value; x is the original data; max is the maximum value in the original data; and min is the minimum value in the original data.
5. The seismic wave impedance thin layer inversion method based on a full convolution neural network according to claim 1, wherein in step S4 the loss function used is the MSE loss function, with the formula: L = (1/n) Σ (y_i - ŷ_i)^2, wherein: L denotes the loss value, n is the number of samples, y_i is the predicted value, and ŷ_i is the label value.
6. The seismic wave impedance thin layer inversion method based on a full convolution neural network according to claim 1, wherein the depth of the full convolution neural network is related to the size of the input seismic sample, and the network structure is changed according to that size; the sample size is determined by the number of sampling points, the sampling interval and the number of traces, and when the data are large, more convolution layers and deconvolution layers are added.
CN202110809060.8A 2021-07-16 2021-07-16 Seismic wave impedance thin layer inversion method based on full convolution neural network Active CN113625336B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110809060.8A CN113625336B (en) 2021-07-16 2021-07-16 Seismic wave impedance thin layer inversion method based on full convolution neural network

Publications (2)

Publication Number Publication Date
CN113625336A CN113625336A (en) 2021-11-09
CN113625336B (en) 2024-03-26

Family

ID=78380015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110809060.8A Active CN113625336B (en) 2021-07-16 2021-07-16 Seismic wave impedance thin layer inversion method based on full convolution neural network

Country Status (1)

Country Link
CN (1) CN113625336B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114063169B (en) * 2021-11-10 2023-03-14 中国石油大学(北京) Wave impedance inversion method, system, equipment and storage medium
CN115795994B (en) * 2022-09-29 2023-10-20 西安石油大学 Method for inverting logging data of azimuth electromagnetic wave while drilling based on Unet convolutional neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111580162A (en) * 2020-05-21 2020-08-25 长江大学 Seismic data random noise suppression method based on residual convolutional neural network
CN111723329A (en) * 2020-06-19 2020-09-29 南京大学 Seismic phase feature recognition waveform inversion method based on full convolution neural network
CN112925012A (en) * 2021-01-26 2021-06-08 中国矿业大学(北京) Seismic full-waveform inversion method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3070479C (en) * 2017-08-25 2023-01-17 Exxonmobil Upstream Research Company Automated seismic interpretation using fully convolutional neural networks
US10996372B2 (en) * 2017-08-25 2021-05-04 Exxonmobil Upstream Research Company Geophysical inversion with convolutional neural networks
WO2019055565A1 (en) * 2017-09-12 2019-03-21 Schlumberger Technology Corporation Seismic image data interpretation system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111580162A (en) * 2020-05-21 2020-08-25 长江大学 Seismic data random noise suppression method based on residual convolutional neural network
CN111723329A (en) * 2020-06-19 2020-09-29 南京大学 Seismic phase feature recognition waveform inversion method based on full convolution neural network
CN112925012A (en) * 2021-01-26 2021-06-08 中国矿业大学(北京) Seismic full-waveform inversion method and device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Convolutional neural network for seismic impedance inversion; V. Das et al.; Geophysics, Vol. 84, No. 6, pp. R869-R880 *
Seismic Impedance Inversion Using Fully Convolutional Residual Network and Transfer Learning; Bangyu Wu et al.; IEEE Geoscience and Remote Sensing Letters, Vol. 17, No. 12, pp. 2140-2142 *
Automatic extraction of building roofs based on fully convolutional neural networks; Liu Wentao et al.; Journal of Geo-Information Science, Vol. 20, No. 11, pp. 1564-1567 *
Seismic wave impedance inversion method based on temporal convolutional neural networks; Wang Zefeng et al.; Proceedings of the 4th Oil and Gas Geophysics Academic Annual Conference, pp. 1-4 *
Seismic wave impedance prediction method based on a deep fully convolutional neural network; Wang Zefeng et al.; Chinese Journal of Engineering Geophysics, Vol. 19, No. 3, pp. 386-392 *

Also Published As

Publication number Publication date
CN113625336A (en) 2021-11-09

Similar Documents

Publication Publication Date Title
US11320551B2 (en) Training machine learning systems for seismic interpretation
CN113625336B (en) Seismic wave impedance thin layer inversion method based on full convolution neural network
CN108648191B (en) Pest image recognition method based on Bayesian width residual error neural network
CN112989708B (en) Well logging lithology identification method and system based on LSTM neural network
CN108182259A (en) A kind of method classified based on depth shot and long term Memory Neural Networks to Multivariate Time Series
CN111832432B (en) Cutter wear real-time prediction method based on wavelet packet decomposition and deep learning
CN112836802A (en) Semi-supervised learning method, lithology prediction method and storage medium
CN115758212A (en) Mechanical equipment fault diagnosis method based on parallel network and transfer learning
CN115393656B (en) Automatic classification method for stratum classification of logging-while-drilling image
CN115618987A (en) Production well production data prediction method, device, equipment and storage medium
CN114203184A (en) Multi-state voiceprint feature identification method and device
CN108596044B (en) Pedestrian detection method based on deep convolutional neural network
CN117292225A (en) Seismic data first arrival pickup method based on SimpleNet network
CN117217915A (en) Stock price prediction method based on deep migration learning
CN113325480A (en) Seismic lithology identification method based on integrated deep learning
CN117409316A (en) TransUNet-based seismic data karst characteristic intelligent identification positioning method
CN116912600A (en) Image classification method based on variable step length ADMM algorithm extreme learning machine
CN116934603A (en) Logging curve missing segment completion method and device, storage medium and electronic equipment
CN113642232B (en) Intelligent inversion exploration method for surface waves, storage medium and terminal equipment
CN117893896A (en) Reservoir classification analysis method and device
CN116449415A (en) Waveform processing method and device for seismic data and related equipment
CN114299330A (en) Seismic facies classification method
CN113740903B (en) Data and intelligent optimization dual-drive deep learning seismic wave impedance inversion method
CN117492079B (en) Seismic velocity model reconstruction method, medium and device based on TDS-Unet network
CN113762497B (en) Low-bit reasoning optimization method for convolutional neural network model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant